Subject: RE: packet generator
From: "Don Bowman" <don@sandvine.com>
To: "Andrew Gallatin" <gallatin@cs.duke.edu>
Cc: freebsd-net@freebsd.org
Date: Tue, 14 Sep 2004 22:27:52 -0400
List-Id: Networking and TCP/IP with FreeBSD

From: Andrew Gallatin [mailto:gallatin@cs.duke.edu]
> Andrew Gallatin writes:
>
> > xmit routine was called 683441 times. This means that the queue was
> > only a little over two packets deep on average, and vmstat shows idle
> > time. I've tried piping additional packets to nghook mx0:orphans
> > input, but that does not seem to increase the queue depth.
>
> The problem here seems to be that rather than just slapping the
> packets onto the driver's queue, ng_source passes the mbuf down
> through more of netgraph, where there is at least one spinlock,
> and the driver's ifq lock is taken and released a zillion times
> by ether_output_frame(), etc.
>
> A quick hack (appended) to just slap the mbufs onto the if_snd queue
> gets me from ~410Kpps to 1020Kpps. I also see very deep queues
> with this (because I'm slamming 4K pkts onto the queue at once).
>
> This is nearly identical to the Linux pktgen figure on the same
> hardware, which makes me feel comfortable that there is a lot of
> headroom in the driver/firmware API and that I'm not botching
> something in the FreeBSD driver.
>
> BTW, did you see your 800Kpps on 4.x or 5.x? If it was 4.x, what do
> you see on 5.x if you still have the same setup handy?
>
> Thanks,

The 800Kpps was on 4.7, on a dual 2.8GHz Xeon with 100MHz PCI-X, using em.
I will try 5.3.

--don
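
[The hack Andrew refers to as "(appended)" is not reproduced in this
archive copy. Purely as a sketch of the approach he describes -- bypassing
ether_output_frame() and placing mbufs directly on the driver's if_snd
queue, so the queue lock is taken once per burst instead of once per
packet -- something along the following lines; this is not Andrew's actual
patch, it assumes FreeBSD 5.x-era ifqueue macros (IF_LOCK, _IF_ENQUEUE),
and the helper name ng_source_burst() is hypothetical.]

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>

/*
 * Enqueue a burst of pre-built packets directly onto the interface's
 * software send queue, skipping the per-packet netgraph/ether_output
 * path.  The ifq lock is taken and released once per burst rather
 * than once per packet.
 */
static void
ng_source_burst(struct ifnet *ifp, struct mbuf **pkts, int npkts)
{
	int i;

	IF_LOCK(&ifp->if_snd);
	for (i = 0; i < npkts; i++) {
		if (_IF_QFULL(&ifp->if_snd)) {
			/* Queue overflow: drop rather than block. */
			m_freem(pkts[i]);
			continue;
		}
		_IF_ENQUEUE(&ifp->if_snd, pkts[i]);
	}
	IF_UNLOCK(&ifp->if_snd);

	/* Kick the driver once for the whole burst. */
	if ((ifp->if_flags & IFF_OACTIVE) == 0)
		(*ifp->if_start)(ifp);
}

[The speedup Andrew measured would come from amortizing the ifq lock and
skipping the per-packet netgraph locking; a production version would also
have to cope with ALTQ and with the flag/locking changes in later FreeBSD
releases.]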