From owner-freebsd-hackers@FreeBSD.ORG Sun Mar 29 15:59:04 2015
Date: Sun, 29 Mar 2015 18:58:55 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Adrian Chadd
Cc: "freebsd-hackers@freebsd.org"
Subject: Re: irq cpu binding
Message-ID: <20150329155855.GO23643@zxy.spb.ru>
References: <20150328224634.GH23643@zxy.spb.ru> <20150328230533.GI23643@zxy.spb.ru> <20150328234116.GJ23643@zxy.spb.ru> <20150329003354.GK23643@zxy.spb.ru> <20150329081902.GN23643@zxy.spb.ru>

On Sun, Mar 29, 2015 at 08:20:25AM -0700, Adrian Chadd wrote:

> >> The other half of the network stack - the sending side - also needs to
> >> be either on the same or a nearby CPU, or you still end up with lock
> >> contention and cache thrashing.
> >
> > For incoming connections this will be automatic -- sending will happen
> > on the CPU bound to the receiving queue.
> >
> > Outgoing connections are a more complex case, yes.
> > We need to transfer the FD (with re-binding) and signal (from kernel to
> > application) the preferred CPU. The preferred CPU is the one that
> > handles the SYN-ACK. And this needs assistance from the application.
> > But I can't currently think of an application serving massive numbers
> > of outgoing connections.
>
> Or you realise you need to rewrite your userland application so it
> doesn't have to do this, and instead uses an IOCP/libdispatch-style IO
> API to register for IO events and get IO completions to occur in any
> given completion thread.

nginx is a multi-process application, not multi-threaded, for example.

> Then it doesn't have to care about moving descriptors around - it just
> creates an outbound socket, and then the IO completion callbacks will
> happen wherever they need to happen. If that needs to shuffle around
> due to RSS rebalancing then it'll "just happen".
>
> And yeah, I know of plenty of applications doing massive outbound
> connections - anything being an intermediary HTTP proxy. :)

Hmm, yes and no :)
Yes, a proxy makes outbound connections, but a proxy also crosses over
inbound and outbound connections, and in general those connections end
up pinned to different CPUs. Is there still a performance gain?..
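P.S. For the archives, here is a minimal sketch of the completion-thread
model Adrian describes: one worker per CPU, each pinned with
cpuset_setaffinity(2) and running its own kqueue(2) loop, so the kqueue
plays the role of an IOCP completion port. The worker struct and the
worker_loop/register_socket/handle_io names are purely illustrative, not
any real API, and the RSS-aware part (deciding which CPU a given flow
should land on) is deliberately left out since that kernel interface was
still under discussion in this thread.

/*
 * Illustrative sketch (not a real API): per-CPU completion workers.
 * Each worker pins itself to one CPU and services its own kqueue.
 */
#include <sys/param.h>
#include <sys/cpuset.h>
#include <sys/event.h>
#include <err.h>
#include <pthread.h>
#include <unistd.h>

struct worker {
        int             kq;     /* per-worker kqueue: the "completion port" */
        int             cpu;    /* CPU this worker is pinned to */
        pthread_t       tid;
};

/* Application callback; a real server would read/write the socket here. */
static void
handle_io(int fd, struct kevent *ev)
{
        (void)fd;
        (void)ev;
}

static void *
worker_loop(void *arg)
{
        struct worker *w = arg;
        cpuset_t mask;

        /* Pin this thread to its CPU so socket processing stays local. */
        CPU_ZERO(&mask);
        CPU_SET(w->cpu, &mask);
        if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1,
            sizeof(mask), &mask) != 0)
                err(1, "cpuset_setaffinity");

        for (;;) {
                struct kevent ev;
                int n = kevent(w->kq, NULL, 0, &ev, 1, NULL);

                /*
                 * "Completion": the socket became readable; it is handled
                 * on whichever CPU this worker owns, no FD passing needed.
                 */
                if (n > 0)
                        handle_io((int)ev.ident, &ev);
        }
        return (NULL);
}

static void
worker_start(struct worker *w, int cpu)
{
        int rc;

        w->cpu = cpu;
        if ((w->kq = kqueue()) == -1)
                err(1, "kqueue");
        if ((rc = pthread_create(&w->tid, NULL, worker_loop, w)) != 0)
                errc(1, rc, "pthread_create");
}

/* Register an (outbound) socket with the worker chosen for its flow. */
static void
register_socket(struct worker *w, int fd)
{
        struct kevent ev;

        EV_SET(&ev, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
        if (kevent(w->kq, &ev, 1, NULL, 0, NULL) != 0)
                err(1, "kevent: register fd");
}

The point of the design is that "moving" a connection after an RSS
rebalance is just an EV_DELETE on one worker's kqueue and an EV_ADD on
another's; the application never hands descriptors between processes,
which is exactly what a multi-process design like nginx cannot do as
easily.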