From: "K. Macy" <kmacybsd@gmail.com>
To: "K. Macy"
Cc: "freebsd-hackers@freebsd.org", grarpamp, "freebsd-net@freebsd.org"
Date: Sat, 11 Jun 2011 16:16:34 +0200
Subject: Re: FreeBSD I/OAT (QuickData now?) driver
List-Id: Technical Discussions relating to FreeBSD

Oops, second 10 GigE should obviously be 1GigE

On Tuesday, June 7, 2011, K. Macy wrote:
> All 10GigE NICs and some newer 10 GigE NICs have multiple hardware
> queues with a separate MSI-X vector per queue, where each vector is
> directed to a different CPU. The current operating model is to have a
> separate interrupt thread per vector. This obviously gets bogged down
> if one has multiple cards, as the interrupt threads end up requiring
> the scheduler to distribute work fairly between cards when multiple
> threads end up running on the same CPUs. Nokia had a reasonable
> interface for coping with this, reminiscent of NAPI, whereby
> cooperative sharing between interfaces was provided by having a single
> taskqueue thread per core; the cards would queue tasks (which would
> be re-queued if more than a certain amount of work were required) as
> interrupts were delivered. There has been talk off and on of porting
> this "net_task" interface to FreeBSD.
>
> None of this addresses PF_RING's facility for pushing packets into
> userland - but presumably Rizzo's netmap work addresses those in need
> of that sufficiently.
>
> Cheers,
> Kip
>
> On Tue, Jun 7, 2011 at 4:13 AM, grarpamp wrote:
>> Is this work part of what's needed to enable the FreeBSD
>> equivalent of TNAPI?
>>
>> I know we've got polling. And probably MSI-X in a couple of drivers.
>> Pretty sure there is still one CPU doing the interrupt work?
>> And none of the multiple-queue thread-spreading tech exists?
>>
>> http://www.ntop.org/blog
>> http://www.ntop.org/TNAPI.html
>> TNAPI attempts to solve the following problems:
>>    * Distribute the traffic across cores (i.e. the more cores, the
>> more scalable your networking application is) for improved scalability.
>>    * Poll packets simultaneously from each RX queue (contrary to
>> sequential NAPI polling) for fetching packets as fast as possible,
>> hence improving performance.
>>    * Through PF_RING, expose the RX queues to the userland so that
>> the application can spawn one thread per queue, hence avoiding
>> the use of semaphores at all.
>> TNAPI achieves all this by starting one thread per RX queue. Received
>> packets are then pushed to PF_RING (if available) or through the
>> standard Linux stack. However, in order to fully exploit this
>> technology it is necessary to use PF_RING, as it provides a straight
>> packet path from kernel to userland. Furthermore, it allows creating a
>> virtual ethernet card per RX queue.
>> _______________________________________________
>> freebsd-net@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-net
>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
>>
>