Date:      Thu, 23 Apr 2015 05:14:33 -0700
From:      Navdeep Parhar <nparhar@gmail.com>
To:        Scott Larson <stl@wiredrive.com>
Cc:        freebsd-net@freebsd.org
Subject:   Re: net.inet.ip.forwarding impact on throughput
Message-ID:  <20150423121433.GA15890@ox>
In-Reply-To: <CAFt8naGoDDN+64snnCtwWfRMN5BkFJ0tc+Bytifk-7u5_FgCsQ@mail.gmail.com>
References:  <CAFt8naGoDDN+64snnCtwWfRMN5BkFJ0tc+Bytifk-7u5_FgCsQ@mail.gmail.com>

On Tue, Apr 21, 2015 at 12:47:45PM -0700, Scott Larson wrote:
>      We're in the process of migrating our network into the future with 40G
> at the core, including our firewall/traffic routers with 40G interfaces. An
> issue this exposed, which threw me for a week, turns out to be directly
> related to net.inet.ip.forwarding, and I'm looking to get some insight
> into what exactly is occurring as a result of using it.

Enabling forwarding disables LRO and TSO, and that probably accounts for
a large part of the difference in throughput you've observed.  The
number of packets passing through the stack (not the amount of data
passing through) is the dominant bottleneck.
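You can confirm this on your own boxes by checking the interface offload
flags before and after flipping the forwarding knob.  A rough sketch
(the interface name "cxl0" is an assumption -- a typical port name for a
Chelsio T5 card under the cxgbe(4) driver; substitute your own):

```shell
# Show the offload options currently active on the port.
ifconfig cxl0 | grep -i options

# Toggle the offloads by hand for comparison:
ifconfig cxl0 -lro -tso    # disable LRO and TSO
ifconfig cxl0 lro tso      # re-enable them (safe when not forwarding)
```

With LRO/TSO off, an iperf run to the host itself should drop toward the
forwarding-enabled numbers, which would confirm the offloads are where
the difference comes from.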

fastforwarding _should_ make a difference, but only if packets actually
take the fast-forward path.  Check the counters available via netstat:
# netstat -sp ip | grep forwarded
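A rough sketch of checking both knobs together and reading the counters
(sysctl names as on FreeBSD 10.x; on later releases the fast path was
folded into the default forwarding code and the fastforwarding knob was
removed, so treat this as release-dependent):

```shell
# Enable forwarding and request the fast-forward path.
sysctl net.inet.ip.forwarding=1
sysctl net.inet.ip.fastforwarding=1

# Watch which path packets actually take; output looks roughly like
#   N packets forwarded (M packets fast forwarded)
netstat -sp ip | grep forwarded
```

If the fast-forwarded count stays at zero while the forwarded count
climbs, something (firewall rules, options requiring the slow path) is
keeping packets off the fast path.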

Regards,
Navdeep

>      What I am seeing is when that knob is set to 0, an identical pair of
> what will be PF/relayd servers with direct DAC links between each other
> using Chelsio T580s can sustain around 38Gb/s on iperf runs. However the
> moment I set that knob to 1, that throughput collapses down into the 3 to
> 5Gb/s range. As the old gear this is replacing is all GigE I'd never
> witnessed this. Twiddling net.inet.ip.fastforwarding has no apparent effect.
>      I've not found any docs going into depth on what deeper changes
> enabling forwarding makes to the network stack. Does it ultimately
> deprioritize traffic where the server acting as the packet router is the
> final endpoint, in exchange for having more resources available to route
> traffic across interfaces, as would generally be the case?
> 
> 
> Scott Larson
> Lead Systems Administrator, Wiredrive <https://www.wiredrive.com/>
> T 310 823 8238 x1106  |  M 310 904 8818
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"


