This is my first real dive into hosting a server beyond a few Docker containers on my NAS. I’ve been learning a lot over the past five days; the first thing I learned is that Proxmox isn’t for me:

https://sh.itjust.works/post/49441546 https://sh.itjust.works/post/49272492 https://sh.itjust.works/post/49264890

So now I’m running headless Ubuntu and having a much better time! I migrated all of my Docker stuff to the new server, keeping my media on the NAS. I originally set up an NFS share (NAS → server) so my Jellyfin container could reach the data. It worked at first, then quickly crumbled without warning, and HWA may or may not be working.

Enter the Jellyfin issue: transcoded playback (and direct, it doesn’t matter) either gives a “fatal player error” or **extremely** slow, stuttery playback (basically unusable). Many Discord exchanges later, I added an SMB share (same source folder, same destination folder) to troubleshoot, to no avail, and Jellyfin-specific problems have been ruled out.

After about 12 hours of `sudo nano /etc/fstab` and `dd if=/path/to/nfs_mount/testfile of=/dev/null bs=1M count=4096 status=progress`, I’ve found some weird results from transferring the same 65GB file between different drives:

- NAS’s HDD (designated media drive) to NAS’s SSD: 160MB/s
- NAS’s SSD to Ubuntu’s SSD: 160MB/s
- NAS’s HDD to Ubuntu’s SSD: 0.5MB/s
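
Before blaming the mount options, it may help to separate raw network throughput from the NFS path itself. A minimal sketch, assuming iperf3 can be run on the NAS somehow (e.g. in a container there) and the share is still mounted at /mnt/hermes:

    # Raw LAN throughput, no disks or NFS involved
    # (the NAS side runs `iperf3 -s`; this is run from the Ubuntu box)
    iperf3 -c 192.168.0.4

    # Drop the page cache so repeat reads aren't served from RAM,
    # then time a sequential read over the NFS mount
    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
    dd if=/mnt/hermes/testfile of=/dev/null bs=1M count=4096 status=progress

If iperf3 is fast but the dd read over the mount is slow, the problem is in the NFS/SMB layer rather than the cable or NIC.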

Both machines are connected with Cat 7a Ethernet straight to the router. I built the cables myself, tested them many times (including yesterday), and my cable tester says all cables involved are perfectly fine. I’ve rebooted both machines probably fifty times by now.

NAS (Synology DS923+):
- 32GB RAM
- Seagate EXOS X24
- Samsung SSD 990 EVO

Ubuntu:
- Intel i5-13500
- Crucial DDR5-4800 2x32GB
- WD SN850X NVMe

If you were tasked with troubleshooting a slow network mount between these two machines, what would you do to improve the transfer speeds? Please note that I cannot SSH into the NAS; I just opened a ticket with Synology about it.

Here’s the current /etc/fstab after extensive Q&A with different online communities:

NFS mount: `192.168.0.4:/volume1/data /mnt/hermes nfs4 rw,nosuid,relatime,vers=4.1,rsize=13>`

SMB mount: `//192.168.0.4/data /mnt/hermes cifs username=_____,password=_______,vers=3.>`
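
Both lines above are cut off, so the exact options aren’t visible here. For iterating on options without editing fstab each time, a one-off mount works; the option values below are illustrative assumptions, not a reconstruction of the real entries:

    # Unmount, then remount with explicit (assumed) options for testing
    sudo umount /mnt/hermes
    sudo mount -t nfs4 -o rw,nosuid,relatime,vers=4.1,rsize=1048576,wsize=1048576 \
        192.168.0.4:/volume1/data /mnt/hermes

    # Show the options the client actually negotiated with the NAS
    nfsstat -m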

            • weewkron@lemmy.world · 6 days ago

              Can you run

              sudo ethtool <interface>
              

              Should tell you what the NIC is physically seeing on your Ubuntu machine. Also, maybe just do a generic speed test from your Ubuntu machine to see if it’s everything on the NIC or just lateral traffic being impacted.
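
              For reference, something like the following (the enp2s0 interface name is taken from the ifconfig output further down the thread; substitute your own):

                  # negotiated link speed/duplex and whether a link is detected at all
                  sudo ethtool enp2s0 | grep -E 'Speed|Duplex|Link detected'

                  # error/drop counters can hint at a bad cable or port
                  sudo ethtool -S enp2s0 | grep -iE 'err|drop' | head

                  # generic LAN throughput test, assuming iperf3 on both ends
                  # (run `iperf3 -s` on the other machine first)
                  iperf3 -c 192.168.0.4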

              • LazerDickMcCheese@sh.itjust.works (OP) · 5 days ago

                “-bash: syntax error near unexpected token `newline’” I’m not familiar with ethtool, but I looked up some commands related to it. Unfortunately, everything I tried gives me “bad command line argument(s)”.

                  • LazerDickMcCheese@sh.itjust.works (OP) · 2 days ago

                    br-04577e8d1ec8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.27.0.1 netmask 255.255.0.0 broadcast 172.27.255.255 inet6 fe80::f43a:6cff:fe6e:6f74 prefixlen 64 scopeid 0x20<link> ether f6:3a:6c:6e:6f:74 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-059b78f628b4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.25.0.1 netmask 255.255.0.0 broadcast 172.25.255.255 inet6 fe80::18:abff:fee0:3eb3 prefixlen 64 scopeid 0x20<link> ether 02:18:ab:e0:3e:b3 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-0a5f3a65b300: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.19.0.1 netmask 255.255.0.0 broadcast 172.19.255.255 inet6 fe80::e00e:50ff:fe65:836 prefixlen 64 scopeid 0x20<link> ether e2:0e:50:65:08:36 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-1945efd955e7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.26.0.1 netmask 255.255.0.0 broadcast 172.26.255.255 inet6 fe80::8c68:a5ff:fe3a:9873 prefixlen 64 scopeid 0x20<link> ether 8e:68:a5:3a:98:73 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-3d620c7c2cae: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.22.0.1 netmask 255.255.0.0 broadcast 172.22.255.255 inet6 fe80::c2b:66ff:fe94:2b49 prefixlen 64 scopeid 0x20<link> ether 0e:2b:66:94:2b:49 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-460d6535b2c5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.24.0.1 netmask 255.255.0.0 broadcast 172.24.255.255 inet6 fe80::642c:cfff:fe44:dbdc prefixlen 64 scopeid 0x20<link> ether 66:2c:cf:44:db:dc txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-475a728d1c35: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.23.0.1 netmask 255.255.0.0 broadcast 172.23.255.255 inet6 fe80::ccd2:f8ff:fe28:3421 prefixlen 64 scopeid 0x20<link> ether ce:d2:f8:28:34:21 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-4f0e4b158e77: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 172.20.0.1 netmask 255.255.0.0 broadcast 172.20.255.255 ether 6a:b9:50:03:81:49 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-523dfe276b24: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.29.0.1 netmask 255.255.0.0 broadcast 172.29.255.255 inet6 fe80::c489:10ff:fe7d:c60b prefixlen 64 scopeid 0x20<link> ether c6:89:10:7d:c6:0b txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-57763f5382b6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.28.0.1 netmask 255.255.0.0 broadcast 172.28.255.255 inet6 fe80::74a5:7ff:fe65:c6ef prefixlen 64 scopeid 0x20<link> ether 76:a5:07:65:c6:ef txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-598a0f745a98: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255 inet6 fe80::c66:3aff:feb9:911e prefixlen 64 scopeid 0x20<link> ether 0e:66:3a:b9:91:1e txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-ab783b77c95c: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 172.31.0.1 netmask 255.255.0.0 broadcast 172.31.255.255 inet6 fe80::649f:6bff:fe13:2fe8 prefixlen 64 scopeid 0x20<link> ether 66:9f:6b:13:2f:e8 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-bef45e98255d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.21.0.1 netmask 255.255.0.0 broadcast 172.21.255.255 inet6 fe80::cc5f:6bff:fe87:b447 prefixlen 64 scopeid 0x20<link> ether ce:5f:6b:87:b4:47 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    br-f48ae7f54dbb: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.30.0.1 netmask 255.255.0.0 broadcast 172.30.255.255 inet6 fe80::d437:84ff:feb2:ca4a prefixlen 64 scopeid 0x20<link> ether d6:37:84:b2:ca:4a txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 inet6 fe80::cc6:caff:fe43:79a9 prefixlen 64 scopeid 0x20<link> ether 0e:c6:ca:43:79:a9 txqueuelen 0 (Ethernet) RX packets 1783 bytes 1910011 (1.9 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1922 bytes 351712 (351.7 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.44 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::9e6b:ff:fea5:51f prefixlen 64 scopeid 0x20<link> ether 9c:6b:00:a5:05:1f txqueuelen 1000 (Ethernet) RX packets 4387465737 bytes 6336735875164 (6.3 TB) RX errors 0 dropped 8 overruns 0 frame 0 TX packets 754588388 bytes 573935751223 (573.9 GB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 127840 bytes 10957792 (10.9 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 127840 bytes 10957792 (10.9 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    veth0775369: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::cc3c:2cff:fe9c:5db0 prefixlen 64 scopeid 0x20<link> ether ce:3c:2c:9c:5d:b0 txqueuelen 0 (Ethernet) RX packets 221480 bytes 212832018 (212.8 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 254661 bytes 202198400 (202.1 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    veth0c0ea06: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::38e3:cfff:fe9d:bb11 prefixlen 64 scopeid 0x20<link> ether 3a:e3:cf:9d:bb:11 txqueuelen 0 (Ethernet) RX packets 194122 bytes 19377179 (19.3 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 354068 bytes 582336025 (582.3 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    veth10feba1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::ecf5:74ff:fe18:8241 prefixlen 64 scopeid 0x20<link> ether ee:f5:74:18:82:41 txqueuelen 0 (Ethernet) RX packets 481334 bytes 63464919 (63.4 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 455170 bytes 820601446 (820.6 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    veth1d28ecf: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::bca2:b6ff:fec1:86f1 prefixlen 64 scopeid 0x20<link> ether be:a2:b6:c1:86:f1 txqueuelen 0 (Ethernet) RX packets 75387 bytes 11145936 (11.1 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 40041 bytes 255176942 (255.1 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    veth1e42990: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::2052:25ff:fe39:703 prefixlen 64 scopeid 0x20<link> ether 22:52:25:39:07:03 txqueuelen 0 (Ethernet) RX packets 6333109 bytes 68605366213 (68.6 GB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 7502722 bytes 1336724524 (1.3 GB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                    veth42cfbe1: fl

            • ryannathans@aussie.zone · 6 days ago

              Could be anything from a shit cable, to failing network equipment, to a bad driver. Please tell me it’s hardwired and not on wifi.

    • just_another_person@lemmy.world · 6 days ago

      That doesn’t look right. What are the IPs of the two machines on your network?

      Edit: you must be using containers or something. Don’t use bridge networking if you’re unsure of the performance issues there.
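
      For reference, one way to take Docker’s bridge out of the equation for the Jellyfin container is host networking; a rough sketch (the volume paths here are assumptions, the image is Jellyfin’s official one):

          docker run -d --name jellyfin \
            --network host \
            -v /srv/jellyfin/config:/config \
            -v /mnt/hermes:/media \
            jellyfin/jellyfin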

      • LazerDickMcCheese@sh.itjust.works (OP) · 6 days ago

        192.168.0.4 and 192.168.0.44 for NAS and server, respectively. Currently just an idle Jellyfin container. I’m not sure what bridge networking is without looking it up, so I’m assuming that’s not happening here

          • LazerDickMcCheese@sh.itjust.works (OP) · 6 days ago

            Oh yeah, Tailscale. I’ll run iperf without it to compare, but I’ve never had an issue with my tailnet before.

            Still not great. And I think `sudo tailscale up --accept-routes` broke my shit. Now SSH is failing. I’m calling it a night; I’ll report back tomorrow.

            • just_another_person@lemmy.world · 6 days ago

              Well a 6-7X improvement is something, but you still see the Tailnet running there.

              Honestly, if you don’t know networking and routing, don’t mess with Tailscale. It breaks shit like this for people who don’t know what they’re doing and install it on all their local machines without any idea of how it’s used or its purpose, and it’s clearly your problem right here, because both you and your tailnet are confused.

              I know for a fact your containers are ALSO running Tailscale or something equally not good, because you’ve definitely got a polluted routing table from local route loops, and you’re confused as to what that is, how to prevent it, and why it’s broken.

              1. Shut it down EVERYWHERE ON YOUR LOCAL NETWORK.
              2. Make sure your default routes only point to LOCAL ADDRESSES (see the sketch after this list).
              3. Recheck your transfer speeds, which should be 100MB/s or more.
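
              A sketch of what checking that could look like on the Ubuntu box (the gateway address is an assumption):

                  # take Tailscale out of the picture
                  sudo tailscale down

                  # the default route should point at the home router (e.g. 192.168.0.1),
                  # not at a Tailscale interface
                  ip route show default

                  # leftover Tailscale policy routing, if any
                  # (tailscaled normally installs its routes in table 52)
                  ip rule show
                  ip route show table 52
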
              • LazerDickMcCheese@sh.itjust.works (OP) · 6 days ago

                Interesting. I’ve been using Tailscale for years, and this is the first I’ve heard of it causing LAN networking problems. I thought the purpose of Tailscale was to establish a low-maintenance VPN for people who won’t/can’t set up a reverse proxy, especially beginners like myself. Later today I’ll try to clear it out and report back.

                • just_another_person@lemmy.world · 6 days ago

                  Tailscale is for point-to-point connections between locations, so yes, a VPN. That doesn’t mean two machines on a local network should be using it to talk to each other. Let me explain a bit:

                  Say you have two machines on two different networks 100 miles apart. You put those two on Tailscale, that virtual interface sends traffic through their servers and figures out the routing, and then they can talk to each other…cool.

                  Now move those two machines to the same network and what happens? Tailscale sends their traffic out of that same virtual interface and THEN brings it back into the network. Sure they can still talk to each other sort of, but you’re just skipping using your local network. Doesn’t make any sense.

                  This is because of “default routes”. Whenever you plug a machine into a network with a router, that router sends along information on where the machine needs to send its traffic to get routed properly, usually whatever your home router is. This is the default route.

                  Once you bring up the Tailscale interface without proper routing for your local networks taken into account, it sets your default route for Tailscale endpoints, meaning all of your traffic first goes out through Tailscale, and you get what you’re seeing here.
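
                  One quick way to see which path traffic to the NAS is actually taking (names and addresses from earlier in the thread; the sample output is only illustrative):

                      ip route get 192.168.0.4
                      # healthy output looks roughly like:
                      #   192.168.0.4 dev enp2s0 src 192.168.0.44
                      # if it shows "dev tailscale0" instead, the traffic is being
                      # hairpinned through Tailscale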

                  Regardless of what you read around and on Reddit, Tailscale is not as simple as it seems, especially if you don’t know networking basics. It’s meant to be used with exit node endpoints that route to a larger number of machines to prevent issues like this, NOT as a client in every single machine you want to talk to each other. I see A LOT of foolish comments around here where people say they install it on all of their local machines, and they don’t know what they are doing.

                  To my point: read this issue to see someone with similar problems, but make sure to read through the dupe issue linked for a longer discussion over the past number of years.

                  Extra thread here explaining some things.

                  This blog goes deeper into a possible solution for your setup.

                  The simplest solution for Linux is usually just making sure to NOT run Tailscaled as root, just as your local user. This should prevent the global override of your machines default routes in most cases, but not all.

                  The proper and more permanent solution is running Tailscale on your router and letting that single device act as an exit node and handle advertising the proper routes to clients through the DERP server translation.

                  Also, see the netcheck docs as it can help quickly debug if things are working properly or not.
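
                  A rough sketch of that setup, assuming the LAN is 192.168.0.0/24 (the addresses in this thread) and that the router, or whichever single box takes that role, is the only machine running tailscaled:

                      # on the one designated device only
                      # (the advertised route still has to be approved in the Tailscale admin console)
                      sudo tailscale up --advertise-routes=192.168.0.0/24 --advertise-exit-node

                      # sanity-check connectivity / DERP reachability from any node
                      tailscale netcheck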

                  • LazerDickMcCheese@sh.itjust.works (OP) · 5 days ago

                    Great answer, thank you. To your point, I tried to disable the Tailscale service on my Ubuntu machine, and the consequences were bad enough that I’m going to try to avoid Tailscale as much as possible. Disabling it also shut down OpenSSH, so I had to go to the machine with a keyboard and monitor (gross). Re-ran iperf3… while it’s still a bit lower than I’d expect, I don’t think I have any room to complain, all things considered.