Date: Wed, 06 May 2015 17:03:25 +0200
From: Mark Schouten <mark@tuxis.nl>
To: freebsd-net@freebsd.org
Subject: Re: Frequent hickups on the networking layer
Message-ID: <554A2D3D.3060408@tuxis.nl>
In-Reply-To: <21824.58754.452182.195043@hergotha.csail.mit.edu>
References: <137094161.27589033.1430255162390.JavaMail.root@uoguelph.ca> <5540889A.5030904@tuxis.nl> <21824.58754.452182.195043@hergotha.csail.mit.edu>
Hi,

On 04/29/2015 04:06 PM, Garrett Wollman wrote:
> If you're using one of the drivers that has this problem, then yes,
> keeping your layer-2 MTU/MRU below 4096 will probably cause it to use
> 4k (page-sized) clusters instead, which are perfectly safe.
>
> As a side note, at least on the hardware I have to support, Infiniband
> is limited to 4k MTU -- so I have one "jumbo" network with 4k frames
> (that's bridged to IB) and one with 9k frames (that everything else
> uses).

So I was thinking: a customer of mine runs mostly the same setup and has no issues at all. The only difference is the MTU, 1500 vs 9000.

I also created a graph in munin, charting the number of mbuf_jumbo requests and failures. I find that when lots of writes occur at the iSCSI layer, the number of failed requests grows, and so does the number of errors on the Ethernet interface. See the attached images.

My customer is also not suffering from crashing ctld daemons, which crash every other minute in my setup. So tonight I'm going to switch to an MTU of 1500; I'll let you know if that helped.

Regards,

Mark Schouten
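[Editorial sketch, not part of the original message: the jumbo-cluster counters graphed above can be read on FreeBSD from `vmstat -z` (per-UMA-zone request/failure counts for `mbuf_jumbo_9k` etc.) or `netstat -m`. The snippet below parses a sample `vmstat -z` line — the numbers are illustrative, not real measurements — the way a munin plugin might, pulling out the request and failure columns.]

```shell
# On a FreeBSD host one would typically do:
#   vmstat -z | grep mbuf_jumbo_9k
# Zone line format: NAME: SIZE, LIMIT, USED, FREE, REQUESTS, FAILURES, SLEEPS
# Here we parse a hard-coded sample line so the sketch is self-contained.
sample='mbuf_jumbo_9k: 9216, 492308, 9106, 272, 1514092327, 368, 0'

# Split on commas; field 5 is cumulative requests, field 6 is failures.
echo "$sample" | awk -F', *' '{printf "requests=%s failures=%s\n", $5, $6}'
```

A rising failure count here, correlated with interface errors, is the symptom described in this thread: the 9k-cluster zone cannot satisfy allocations under memory fragmentation, which an MTU of 1500 (or below 4096, per Garrett's note) sidesteps by using smaller clusters.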
