Date:      Mon, 2 Feb 2004 14:06:04 -0800
From:      John-Mark Gurney <gurney_j@efn.org>
To:        Dan Nelson <dnelson@allantgroup.com>
Cc:        current@freebsd.org
Subject:   Re: Is BUFSIZ too small ?
Message-ID:  <20040202220604.GE572@funkthat.com>
In-Reply-To: <20040122180918.GA94901@dan.emsphone.com>
References:  <98907.1074546817@critter.freebsd.dk> <E1AjcbI-00050I-00@hetzner.co.za> <20040122180918.GA94901@dan.emsphone.com>

Dan Nelson wrote this message on Thu, Jan 22, 2004 at 12:09 -0600:
> > > I share many of your doubts, but I would still like to see some
> > > benchmarks :-)
> > 
> > Perhaps ftp is one of those things that uses BUFSIZ for the actual
> > I/O ops.  All of its reads and writes, if you truss it, are 1024 bytes,
> > which impacts its performance (here at least).
> 
> Yeah, it's not so much stdio's use of BUFSIZ, it's other applications
> using it for their preferred I/O size.  I upped the buffer size in ftpd
> locally because of this.  There are a lot of references to BUFSIZ in
> the base system's code, but they're mainly just for reading in a config
> file, for example, or misused as sizing a filename buffer.  ftpd and
> lpr jumped out as really wanting larger I/O sizes.

I had a k5/90 or so that was seriously limited by lpr's 1k buffer size..

I did some experiments with dd on obtaining an ideal block size, and I
came up with about 16kb last time I ran it..  This is the standard
dd if=/dev/zero of=/dev/null bs=xK style testing...

gen,ttyp4,~,505$time dd if=/dev/zero of=/dev/null bs=2k count=500000
1024000000 bytes transferred in 1.778601 secs (575733345 bytes/sec)
        1.78 real         0.13 user         1.64 sys
hydrogen,ttyp4,~,507$time dd if=/dev/zero of=/dev/null bs=8k count=125000
1024000000 bytes transferred in 0.673048 secs (1521437302 bytes/sec)
        0.67 real         0.02 user         0.65 sys
hydrogen,ttyp4,~,508$time dd if=/dev/zero of=/dev/null bs=16k count=62500
1024000000 bytes transferred in 0.498403 secs (2054562956 bytes/sec)
        0.50 real         0.00 user         0.49 sys
hydrogen,ttyp4,~,509$time dd if=/dev/zero of=/dev/null bs=32k count=31250
1024000000 bytes transferred in 0.408496 secs (2506757087 bytes/sec)
        0.41 real         0.00 user         0.40 sys
hydrogen,ttyp4,~,510$time dd if=/dev/zero of=/dev/null bs=64k count=$((31250/2))
1024000000 bytes transferred in 0.425369 secs (2407321506 bytes/sec)
        0.42 real         0.00 user         0.41 sys
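The sweep above can be scripted; here's a sketch of the same test (the block sizes and the 1 GB total are taken from the runs above, the loop itself is mine):

```shell
#!/bin/sh
# Sweep dd block sizes while holding the total at 1 GB, as in the runs
# above; print dd's throughput summary line for each block size.
total=1024000000
for bs in 2048 8192 16384 32768 65536; do
    count=$((total / bs))
    printf 'bs=%-6s ' "$bs"
    dd if=/dev/zero of=/dev/null bs="$bs" count="$count" 2>&1 | tail -1
done
```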

Looks like it's now 32k in size..  Sure, this isn't very scientific, but
it does show how the overhead of syscalls affects performance...  This
was done on a:
CPU: AMD Duron(tm) Processor (1211.96-MHz 686-class CPU)

so, this might just be the l1/l2 cache being stressed, but it does show
that reads/writes with smaller block sizes significantly impact
performance...
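To separate the syscall-overhead effect from cache effects, one can move the same number of bytes with a tiny and a large block size and compare the 'sys' times (a sketch; the 16 MB total is my arbitrary choice, not a figure from the tests above):

```shell
#!/bin/sh
# Move the same 16 MB twice; the extra 'sys' time in the first run is
# almost entirely the cost of making 32768 read()/write() pairs vs. 256.
time dd if=/dev/zero of=/dev/null bs=512 count=32768
time dd if=/dev/zero of=/dev/null bs=65536 count=256
```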

It's hard to choose a value, since larger values increase memory usage
for apps that may not benefit as much, but in these days of cheap
memory, that isn't such a bad thing...  I'd say 16k is a pretty good
number, but it is very arbitrary..

-- 
  John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."


