From owner-svn-src-all@FreeBSD.ORG Tue May  5 09:38:23 2015
From: John Baldwin
To: Gleb Smirnoff
Cc: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: Re: svn commit: r282280 - in head/sys/dev: e1000 ixgbe ixl
Date: Tue, 05 May 2015 05:32:11 -0400
Message-ID: <1850166.lcXaWhCA6D@ralph.baldwin.cx>
In-Reply-To: <20150505064556.GM34544@FreeBSD.org>
References: <201504301823.t3UINd74073186@svn.freebsd.org> <2463555.FfYUgqxYi8@ralph.baldwin.cx> <20150505064556.GM34544@FreeBSD.org>

On Tuesday, May 05, 2015 09:45:56 AM Gleb Smirnoff wrote:
> John,
>
> On Mon, May 04, 2015 at 04:01:28PM -0400, John Baldwin wrote:
> J> > Your answer seems quite orthogonal to my question. I reread it a
> J> > couple of times, but still can't figure out how exactly you prefer
> J> > to fetch per-queue stats. Can you please explain in more detail?
> J>
> J> struct if_queue {
> J> 	struct ifnet *ifq_parent;
> J> 	void (*ifq_get_counter)(struct if_queue *, ift_counter);
> J> 	...
> J> };
> J>
> J> (Pretend that if_queue is a new object type and that each RX or TX
> J> queue on a NIC has one.)
>
> This looks like a driver with 1024 queues would carry an extra 1024
> function pointers per ifnet. Is it really worth it? Could it be that
> queue #0 differs from queue #1? Even if rare cases where queue #N differs
> from queue #M do exist, they can still share the pointer, and the
> differentiating logic would live in the function itself.

Drivers with 1024 queues already have several pointers. However, you could
have a "class" pointer (something ifnet doesn't have) like 'cdevsw', where a
single structure contains the ops and each queue holds a pointer to that
structure instead of N duplicated function pointers. OTOH, you could
probably keep this as an ifnet op but still accept a queue pointer as the
argument (that would give a similar effect). If you really want to trim
function pointers, though, fix ifnet to not duplicate them and use a shared
ifnetsw among instances. :)

> Right now, in the projects/ifnet branch, I'm developing in quite the
> opposite direction - many instances of the same driver share the set of
> interface options. This is done to shrink struct ifnet.
>
> What's wrong with a KPI where the queue number is a parameter to an ifop?
> This KPI would also hide the queue pointers from the stack, which are
> quite driver-specific.

I think at some point we will want queue awareness in the stack for more
than just stats.
For example, the whole buf_ring/if_transmit arrangement has been non-ideal:
it requires drivers to duplicate a lot of code that was previously hidden
from them, making the drivers more complex (and fragile). Several of us
would like to push knowledge of the software ring (which is per-TX queue)
back up out of the drivers, but that will require some per-queue state
stored outside of them. You could certainly do that with parallel arrays
instead, but I'm not sure that is better than having a structure (at least,
I'm not sure it is as easy to reason about when you are working on the
stack).

-- 
John Baldwin