From owner-freebsd-net@FreeBSD.ORG Tue Jun 7 05:24:16 2011
Date: Tue, 7 Jun 2011 07:24:00 +0200
From: Luigi Rizzo <luigi@onelab2.iet.unipi.it>
To: grarpamp
Cc: freebsd-hackers@freebsd.org, freebsd-net@freebsd.org
Subject: Re: FreeBSD I/OAT (QuickData now?) driver
Message-ID: <20110607052400.GC4840@onelab2.iet.unipi.it>

On Mon, Jun 06, 2011 at 10:13:51PM -0400, grarpamp wrote:
> Is this work part of what's needed to enable the FreeBSD
> equivalent of TNAPI?
>
> I know we've got polling. And probably MSI-X in a couple of drivers.
> Pretty sure there is still one CPU doing the interrupt work?
> And none of the multiple-queue thread-spreading tech exists?

I have heard of some GSoC work that addresses the problem for cards
that have a single queue, but drivers for cards with native multiqueue
support (e.g. the ixgbe and e1000 drivers) already seem able to use
one CPU per queue.

I'd argue that for many types of applications (basically all of those
that PF_RING/TNAPI were designed for), spreading work across cores is
a second-order problem; you should first avoid doing useless work.
Please have a look at http://info.iet.unipi.it/~luigi/netmap/ which
addresses both issues.

cheers
luigi

> http://www.ntop.org/blog
> http://www.ntop.org/TNAPI.html
> TNAPI attempts to solve the following problems:
> * Distribute the traffic across cores (i.e. the more cores, the more
>   scalable your networking application is) to improve scalability.
> * Poll packets simultaneously from each RX queue (contrary to
>   sequential NAPI polling) to fetch packets as fast as possible and
>   hence improve performance.
> * Through PF_RING, expose the RX queues to userland so that the
>   application can spawn one thread per queue and hence avoid using
>   semaphores at all.
> TNAPI achieves all this by starting one thread per RX queue. Received
> packets are then pushed to PF_RING (if available) or through the
> standard Linux stack. However, in order to fully exploit this
> technology it is necessary to use PF_RING, as it provides a straight
> packet path from kernel to userland. Furthermore, it allows creating
> a virtual ethernet card per RX queue.
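
As an illustration of the thread-per-queue model in the quoted TNAPI
description, a minimal pthreads sketch for FreeBSD follows. The queue
count and the one-to-one queue-to-core binding are assumptions made
for the example, and rx_loop() is a stub where a real worker would
attach to its RX queue (through PF_RING, netmap, or similar):

/*
 * One worker thread per RX queue, each pinned to its own core.
 * NQUEUES is a hypothetical queue count for illustration.
 */
#include <pthread.h>
#include <pthread_np.h>         /* FreeBSD: pthread_setaffinity_np() */
#include <sys/param.h>
#include <sys/cpuset.h>
#include <stdint.h>
#include <stdio.h>

#define NQUEUES 4               /* assumed number of NIC RX queues */

static void *
rx_loop(void *arg)
{
        int q = (int)(intptr_t)arg;

        /*
         * A real worker would open RX queue q here and poll it
         * forever.  Each thread owns exactly one queue, so no
         * locking is needed between workers.
         */
        printf("worker for queue %d running\n", q);
        return (NULL);
}

int
main(void)
{
        pthread_t tid[NQUEUES];
        cpuset_t set;
        int q;

        for (q = 0; q < NQUEUES; q++) {
                pthread_create(&tid[q], NULL, rx_loop,
                    (void *)(intptr_t)q);
                CPU_ZERO(&set);
                CPU_SET(q, &set);       /* bind queue q to core q */
                pthread_setaffinity_np(tid[q], sizeof(set), &set);
        }
        for (q = 0; q < NQUEUES; q++)
                pthread_join(tid[q], NULL);
        return (0);
}

Because each queue has exactly one owner thread, the receive path
needs no semaphores at all, which is precisely the property the
quoted TNAPI text claims.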
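
Since the netmap link above is given without context, here is a rough
sketch of what the userspace side of a netmap receiver looks like,
reconstructed from the public netmap papers rather than from this
thread. The interface name "em0" is a placeholder, and the nmreq
fields and the avail/cur ring accounting follow the early API; later
netmap releases renamed several of these (e.g. the rings moved to a
head/cur/tail scheme), so treat this as illustrative only:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <net/netmap.h>
#include <net/netmap_user.h>

int
main(void)
{
        struct nmreq req;
        struct netmap_if *nifp;
        struct netmap_ring *ring;
        struct pollfd pfd;
        char *mem;
        int fd;

        fd = open("/dev/netmap", O_RDWR);
        if (fd < 0)
                return (1);

        /* attach to the NIC; "em0" is a placeholder name */
        memset(&req, 0, sizeof(req));
        strncpy(req.nr_name, "em0", sizeof(req.nr_name));
        req.nr_version = NETMAP_API;    /* field added in later versions */
        if (ioctl(fd, NIOCREGIF, &req) < 0)
                return (1);

        /* map the rings and packet buffers shared with the kernel */
        mem = mmap(NULL, req.nr_memsize, PROT_READ | PROT_WRITE,
            MAP_SHARED, fd, 0);
        nifp = NETMAP_IF(mem, req.nr_offset);
        ring = NETMAP_RXRING(nifp, 0);  /* first hardware RX ring */

        pfd.fd = fd;
        pfd.events = POLLIN;
        for (;;) {
                poll(&pfd, 1, -1);      /* block until packets arrive */
                while (ring->avail > 0) {
                        struct netmap_slot *slot = &ring->slot[ring->cur];
                        char *buf = NETMAP_BUF(ring, slot->buf_idx);

                        /* slot->len bytes of packet data are at 'buf' */
                        (void)buf;

                        ring->cur = NETMAP_RING_NEXT(ring, ring->cur);
                        ring->avail--;
                }
        }
}

The two properties Luigi alludes to are visible here: one poll() call
covers a whole batch of packets, so no cycles are burned busy-waiting
and there is no per-packet system call, and NETMAP_BUF() hands the
application the kernel's own packet buffer, so nothing is copied on
the receive path.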