From owner-svn-src-all@freebsd.org Tue Feb  2 21:41:14 2016
Date: Wed, 3 Feb 2016 00:41:12 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Alfred Perlstein
Cc: Xin LI, svn-src-head@freebsd.org, svn-src-all@freebsd.org,
 src-committers@freebsd.org, John Baldwin
Subject: Re: svn commit: r295136 - in head: sys/kern sys/netinet sys/sys usr.bin/netstat
Message-ID: <20160202214112.GR88527@zxy.spb.ru>
In-Reply-To: <56B11DF0.3060401@freebsd.org>

On Tue, Feb 02, 2016 at 01:21:52PM -0800, Alfred Perlstein wrote:
> >>> I would second John's comment on the necessity of the change,
> >>> though; if one already has 32K of *backlogged* connections, it's
> >>> probably not very useful to allow more to come in.  It sounds like
> >>> the application itself is seriously broken, and unless expanding
> >>> the field has some performance benefit, I don't think it should
> >>> stay.
> >> Imagine a hugely busy image board like 2ch.net: if there is a
> >> single hiccup, it's very possible to start dropping connections.
> > In reality connections start dropping in any case: nobody will wait
> > forever on accept (the user closes the browser and goes away, etc.).
> >
> > Also, if you have more than 4K backlogged connections, you have a
> > problem: you can't process all the connection requests, so in the
> > next second you will have 8K, after the next second 12K, and so on.
>
> In our case the user would not really know if our "page" didn't load,
> because we were just an invisible gif.
>
> So back to the example, let's scale that out to today's numbers.
>
> 100mbps -> 10gigE, so that would be 1500 conn/sec -> 150,000
> conn/sec.  So basically at 0.20 of a second of any sort of latency I
> will be overflowing the listen queue and dropping connections.

OK, you are talking about a very special case -- extremely short
connections, about one data packet.  Yes, in that case you get this
behavior.  I think the case of 2ch is different.

> Now, when you still have CPU to spare, because connections *are*
> precious the model makes sense: slightly over-provision the servers
> to allow for some backlog to be processed.
>
> So, in today's day and age, it really does make sense to allow for
> buffering more than 32k connections, particularly if the developer
> knows what he is doing.
>
> Does this help explain the reasoning?

Yes, some special cases may exist.