Date: Tue, 22 Apr 2008 06:35:38 -0700
From: Chris Pratt <eagletree@hughes.net>
To: gnn@freebsd.org
Cc: Robert Watson <rwatson@freebsd.org>, net@freebsd.org
Subject: Re: zonelimit issues...
Message-ID: <F2373438-DA8B-4B6D-8E5E-D52520C4AEC7@hughes.net>
In-Reply-To: <m2prsj4pqx.wl%gnn@neville-neil.com>
References: <m2hcdztsx2.wl%gnn@neville-neil.com> <48087C98.8060600@delphij.net> <382258DB-13B8-4108-B8F4-157F247A7E4B@hughes.net> <20080420103258.D67663@fledge.watson.org> <33AC96BF-B9AC-4303-9597-80BC341B7309@hughes.net> <m2prsj4pqx.wl%gnn@neville-neil.com>
On Apr 21, 2008, at 12:43 AM, gnn@freebsd.org wrote:

> ...snip
>
> Well there are plenty of us motivated to get at these issues. Can you
> do me a favor and characterize your traffic a bit? Is it mostly TCP,

The traffic that seems to take us out is TCP port 80. I'll hazard a general guess, but the pattern does seem to hold. Our two dramatically heavy-use days for our industry are Sunday and Monday evenings, and the hang will actually occur on the Monday or Tuesday following those days if sufficient traffic hits us. It has not always followed this pattern, but it does most frequently. There is also always a high level of high-frequency attacks of various sorts, for example referrer-spam posts, which hit us hard on our busy evenings. So it is TCP, and I presume that many useless sessions get established, which, coupled with our real traffic peaks, could push us up against limits and cause exhaustion.

This thread has given me several things to try, and I'm adjusting settings (e.g., nmbclusters) upward to see what happens.

I should also mention that this system has natural limitations on its traffic ceiling: two T1s on two NICs, plus a third LAN NIC fielding continuous round-robin MySQL replication and rsync-style mirroring. It uses two bge interfaces and one server-class em interface. It has always troubled me that, in what I've read, the zonelimit issues have been associated with higher-volume circuits. But since our issue is very directly related to traffic levels and seems to occur at times when my monitors show us heavily overcommitted on the two outward-facing T1s, I'm still going to proceed with the adjustments and see whether they increase our survivability.

Thanks for your time on this.

> or heavily UDP or some sort of mix? The issues I see are UDP-based,
> which is less surprising as UDP has no backpressure and it is easy to
> overcommit the system by upping the socket buffer space allocated
> without upping the number of clusters to compensate.
>
> Best,
> George
>
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
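The thread itself contains no code, but for anyone following along, below is a minimal sketch of reading the two limits discussed above (the mbuf cluster ceiling being tuned, and the per-socket buffer ceiling George mentions) on FreeBSD via sysctlbyname(3). It is only illustrative of the tuning described here; the same values can be read from the command line with sysctl(8), and live mbuf/cluster usage with "netstat -m".

	#include <sys/types.h>
	#include <sys/sysctl.h>

	#include <stdio.h>

	int
	main(void)
	{
		int nmbclusters;
		unsigned long maxsockbuf;
		size_t len;

		/* Total mbuf clusters the kernel may allocate for network buffers. */
		len = sizeof(nmbclusters);
		if (sysctlbyname("kern.ipc.nmbclusters", &nmbclusters, &len,
		    NULL, 0) == -1) {
			perror("kern.ipc.nmbclusters");
			return (1);
		}
		printf("kern.ipc.nmbclusters = %d\n", nmbclusters);

		/*
		 * Ceiling on a single socket's buffer space.  Raising socket
		 * buffers without also raising nmbclusters is the kind of
		 * overcommit described in the quoted reply above.
		 */
		len = sizeof(maxsockbuf);
		if (sysctlbyname("kern.ipc.maxsockbuf", &maxsockbuf, &len,
		    NULL, 0) == -1) {
			perror("kern.ipc.maxsockbuf");
			return (1);
		}
		printf("kern.ipc.maxsockbuf  = %lu\n", maxsockbuf);

		return (0);
	}

The "adjusting nmbclusters upward" step mentioned above is typically done by setting kern.ipc.nmbclusters in /boot/loader.conf so the larger limit takes effect at boot; the exact value to use depends on the workload and is not specified in the thread.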