From owner-freebsd-hackers Thu Feb 4 08:56:10 1999
Return-Path:
Received: (from majordom@localhost) by hub.freebsd.org (8.8.8/8.8.8) id IAA15109 for freebsd-hackers-outgoing; Thu, 4 Feb 1999 08:56:10 -0800 (PST) (envelope-from owner-freebsd-hackers@FreeBSD.ORG)
Received: from bright.fx.genx.net (bright.fx.genx.net [206.64.4.154]) by hub.freebsd.org (8.8.8/8.8.8) with ESMTP id IAA15015 for ; Thu, 4 Feb 1999 08:56:00 -0800 (PST) (envelope-from bright@hotjobs.com)
Received: from localhost (bright@localhost) by bright.fx.genx.net (8.9.1/8.9.1) with ESMTP id MAA10659; Thu, 4 Feb 1999 12:00:25 -0500 (EST) (envelope-from bright@hotjobs.com)
X-Authentication-Warning: bright.fx.genx.net: bright owned process doing -bs
Date: Thu, 4 Feb 1999 12:00:25 -0500 (EST)
From: Alfred Perlstein
X-Sender: bright@bright.fx.genx.net
To: "Dirk-Willem van Gulik (vaio)"
cc: freebsd-hackers@FreeBSD.ORG
Subject: Re: Irratic Curve
In-Reply-To: <36B97B38.74FF0B68@webweaving.org>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-freebsd-hackers@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

On Thu, 4 Feb 1999, Dirk-Willem van Gulik (vaio) wrote:

> Whilst playing with a small, but fast, Berkeley DB based transaction
> server which sits on a TCP/IP socket connection, I ran into sometimes
> unpredictable reply times. One of the major problems was solved by
> increasing MSIZE to 256 (the 103-byte-plus delayed-ack problem).
>
> Recently I came across:
>
> http://www.scl.ameslab.gov/Projects/Gigabit/performance/prelim.html
>
> Could anyone explain to me WHY FreeBSD appears so unpredictable?
> i.e., not a nice S-curve? Is it the way of measuring? Some other
> artifact, or real? I think it is real, as I get the same sort of
> holes in my graphs for the transaction server.
>
> Any chance of an exposé...

Hmm, I looked at the charts.
The only thing I can say is that the initial drop-off at packet sizes just above 100 bytes is because an mbuf's data area is about 108 bytes, so payloads split on 108-byte boundaries; once you barely exceed that boundary, FreeBSD suddenly has to switch to a "high throughput" mode. It levels off because, after a bit, the extra time taken for larger data clusters pays off, at what seems to be around 200 bytes.

It is scary how oddly it behaves when the packet size is extremely large. Perhaps the driver isn't coded properly?

You should also consider that they are using a FreeBSD from two years ago; there are probably major efficiency issues that have been worked on since then.

-Alfred

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message