From: "Petri Helenius" <pete@he.iki.fi>
To: "Terry Lambert"
Subject: Re: mbuf cache
Date: Mon, 17 Mar 2003 09:44:16 +0200

> You can get to this same point in -CURRENT, if you are using up to
> date sources, by enabling direct dispatch, which disables NETISR.
> This will help somewhat more than polling, since it will remove the
> normal timer latency between receipt of a packet and processing of
> the packet through the network stack. This should reduce overall
> pool retention time for individual mbufs that don't end up on a
> socket so_rcv queue. Because interrupts on the card are not
> acknowledged until the code runs to completion, this also tends to
> regulate interrupt load.

My source seems to be a few days older than when this stuff went in;
I will update and try it out.

> This also has the desirable side effect that stack processing will
> occur on the same CPU as the interrupt processing occurred. This
> avoids inter-CPU memory bus arbitration cycles, and ensures that
> you won't engage in a lot of unnecessary L1 cache busting. Hence
> I prefer this method to polling.

Is there anywhere I could read up on the associated overhead, and on
how the whole thing works out in the worst case, where data is DMAd
into memory, read up to CPU1, then to CPU2, and then discarded? And
are there any roads that can be taken to optimize this?

> > You will get much better load capacity scaling out of two cheaper
> > boxes, if you implement correctly, IMO.

Synchronization of the unformatted data can probably never get as
good as it gets when you optimize the system for your specific case.
But I agree it should be better than it is now; however, it does not
really seem to be getting any better. (Unless you consider the EV7
and Opteron approaches better than the current Intel approach.)

Pete
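For reference, the direct dispatch behavior discussed above was exposed
as a sysctl knob in the -CURRENT of this period; a minimal sketch of
trying it out, assuming the net.isr.enable name (the knob was renamed
in later FreeBSD versions, so check the sources you actually built):

```shell
# Enable direct dispatch: the interrupt handler runs each received
# packet all the way through the network stack itself, instead of
# queueing it for the NETISR software-interrupt thread.
# (Sysctl name is an assumption for -CURRENT of this era; newer
# trees spell it differently, e.g. net.isr.dispatch.)
sysctl net.isr.enable=1

# To keep the setting across reboots, append it to /etc/sysctl.conf:
echo 'net.isr.enable=1' >> /etc/sysctl.conf
```

Whether this wins over polling will depend on the card, the driver,
and the interrupt load, so it is worth measuring both configurations.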