Date: Tue, 31 Jan 2023 14:31:49 -0500
From: Paul Mather <paul@gromit.dlib.vt.edu>
To: 2yt@gmx.com
Cc: freebsd-stable@freebsd.org
Subject: Re: Slow WAN traffic to FreeBSD hosts but not to Linux hosts---how to debug/fix?
Message-ID: <55B6F3B3-1676-476D-9784-63164930199B@gromit.dlib.vt.edu>
In-Reply-To: <tr9g44$d3l$1@ciao.gmane.io>
References: <95EDCFCA-7E3F-458F-85A6-856D606B9D98@gromit.dlib.vt.edu> <tr9g44$d3l$1@ciao.gmane.io>
On Jan 30, 2023, at 5:25 PM, 2yt@gmx.com wrote:

> On 1/30/23 14:17, Paul Mather wrote:
>> TL;DR: When working from home, I can max out my residential 200 Mbit network connection when downloading from remote Linux hosts at $JOB, but only manage about 20% of my maximum residential connection speed when downloading from remote FreeBSD hosts at $JOB. When at $JOB, both FreeBSD and Linux hosts have no problem saturating their GbE connections transferring between each other. Why is this, and how can I debug and fix it?
>>
>> I have a 200 Mbit residential cable connection (Xfinity, 200 Mbit down/~10 Mbit up). I've noticed recently that I can easily get 10--20 MB/s download speeds when transferring data from Linux hosts at work, but when I try to download that same data from the FreeBSD hosts I use, the speed usually tops out at 3--4 MB/s. These are Linux and FreeBSD hosts that are on the same subnet at work. Transfers from the FreeBSD hosts at work (within-subnet and within-site) are fine and match those of the Linux hosts---often 112 MB/s. So, it appears to be only the traffic over the WAN to my home that is affected. The WAN path from home to this subnet is typically 15 hops, with a typical average ping latency of about 23 ms.
>>
>> The FreeBSD hosts are a mixture of -CURRENT, 13-STABLE, and 13.1-RELEASE. I had done some TCP tuning based upon the calomel.org tuning document (https://calomel.org/freebsd_network_tuning.html), but removed those tuning settings when I noticed the problem, yet the problem still persists. The only remaining customisation is that the 13-STABLE host has "net.inet.tcp.cc.algorithm=cubic". (I notice that -CURRENT now has this as the default, so I wanted to try that on 13-STABLE, too.) The FreeBSD systems are using either igb or em NICs. The Linux systems are using similar hardware. None has a problem maintaining local GbE transfer speeds---it's only the slower/longer WAN connections that are a problem for the FreeBSD hosts.
>>
>> It seems that Linux hosts cope with the WAN path to my home better than the FreeBSD systems. Has anyone else noticed this? Does anyone have any idea as to what is going wrong here and how I might debug/fix the FreeBSD hosts to yield faster speeds? My workaround at the moment is to favour using the remote Linux hosts for bulk data transfers. (I don't like this workaround.)
>>
>> Any help/insight is gratefully appreciated.
>>
>> Cheers,
>>
>> Paul.
>
> sysctl net.inet.tcp.cc.algorithm=htcp
>
> I would set "htcp" on the server and home computer to improve throughput in your type of situation.

I did not mention this explicitly, but part of the "some TCP tuning based upon the calomel.org tuning document" I mention having done (and then removed) was to use the "htcp" congestion control algorithm. I restored the use of "htcp" at your suggestion and notice it does improve matters slightly, but I still get nowhere near maxing out my download pipe as I can when downloading from Linux hosts at $JOB. Switching back to "htcp" on the FreeBSD servers improves matters from 3--4 MB/s for bulk downloads to 5--6 MB/s (with some variability), based upon several test downloads.
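For reference, this is roughly how I'm switching and persisting the congestion control algorithm on the FreeBSD hosts. The module and sysctl names are stock FreeBSD; treat the exact lines as a sketch rather than a copy of my actual configs:

  # see which congestion control modules the running kernel offers
  sysctl net.inet.tcp.cc.available

  # load H-TCP if it isn't already available
  kldload cc_htcp

  # switch the default algorithm used for new connections
  sysctl net.inet.tcp.cc.algorithm=htcp

  # persist across reboots
  echo 'cc_htcp_load="YES"' >> /boot/loader.conf
  echo 'net.inet.tcp.cc.algorithm=htcp' >> /etc/sysctl.conf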
The clients at home are a mixture but typically are macOS and FreeBSD. My home setup uses OPNsense 23.1 as a gateway, using NAT for IPv4 and Hurricane Electric for IPv6. (I'm using "htcp" CC on OPNsense. I'm also using the Traffic Shaper on OPNsense and have an FQ_CoDel setup defined that yields an A/A+ result on BufferBloat tests.) The remote servers at $JOB (both Linux and FreeBSD) are on the same subnet as each other and not behind a NAT. I have been doing the download tests over IPv4 ("curl -v -4 -o /dev/null ...").
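As a sanity check before digging further: the bandwidth-delay product for this path is roughly 200 Mbit/s x 23 ms = 4.6 Mbit, i.e. about 575 KB, so a single stream needs the sending socket buffer/window to grow to at least that to fill the pipe, whereas the 3--4 MB/s I was seeing corresponds to a window of only about 70--90 KB at 23 ms RTT. Since the slow direction is server-to-home, it's the send-side autotuning on the FreeBSD servers that matters. These are the sysctls and counters I'm checking (standard FreeBSD names; the limits are whatever each release defaults to):

  # send-side buffer autotuning and its ceilings on the FreeBSD servers
  sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.sendbuf_max
  sysctl kern.ipc.maxsockbuf

  # cumulative retransmission counters, sampled before and after a test download
  netstat -sp tcp | grep -i retrans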
Cheers,

Paul.