Date:      Sat, 10 Jun 2006 10:06:51 +1000 (EST)
From:      Bruce Evans <bde@zeta.org.au>
To:        Scott Long <scottl@samsco.org>
Cc:        Mikhail Teterin <mi+mx@aldan.algebra.com>, fs@freebsd.org
Subject:   Re: heavy NFS writes lead to corrupt summary in superblock
Message-ID:  <20060610091109.R14403@delplex.bde.org>
In-Reply-To: <4489A8CC.8030307@samsco.org>
References:  <20060609065656.31225.qmail@web30313.mail.mud.yahoo.com> <200606091253.37446.mi+mx@aldan.algebra.com> <4489A8CC.8030307@samsco.org>

On Fri, 9 Jun 2006, Scott Long wrote:

> Mikhail Teterin wrote:
>> When I tried to use the FS as a scratch for an unrelated thing, though, I
>> noticed some processes hanging in the nbufkv state.  Googling led me to:
>> 
>> 	http://lists.freebsd.org/pipermail/freebsd-current/2003-June/004702.html
>> 
>> Is this 3-year-old advice *still* true? I rebuilt the kernel with BKVASIZE

Probably.
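
(For the archives, a minimal sketch of what such a bump looks like, assuming
a custom kernel config copied from GENERIC; the config name MYKERNEL is just
a placeholder, and 65536 simply matches the 64K blocks discussed here, up
from the default BKVASIZE of 16384.)

	cd /usr/src/sys/i386/conf
	cp GENERIC MYKERNEL
	# append the option; config(8) picks it up from the config file
	echo 'options         BKVASIZE=65536' >> MYKERNEL
	cd /usr/src
	make buildkernel KERNCONF=MYKERNEL
	make installkernel KERNCONF=MYKERNEL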

>> bumped to 64K (the block size on the FS in question) and am running another 
>> batch of dumps right now. When it is over, I'll check the df/du...
>
> Can you actually measure a performance difference when using the -b 65536
> option on newfs?  All of the I/O is buffered anyway, and contiguous data is
> already going to be written in 64K blocks.

I can measure a performance loss with larger block sizes, but mainly
with small files, the default BKVASIZE, and larger fragment sizes too.
On a WDC ATA drive which is quite slow for small block sizes (4K, 8K
and 16K transfer at 26MB/s and 32K+ at 49MB/s), block/frag sizes of
32K/4K give much the same throughput for copying /usr/src as 16K/2K
does, but 32K/32K gives only about half as much.  I stopped
benchmarking block sizes of 64K because old benchmarks showed that
they gave only performance losses for /usr/src.  With only large
files, the fragment size shouldn't matter, but the block size
shouldn't matter either once it is not too small, since files should
be laid out contiguously and small blocks should be clustered into
large ones efficiently.  However, contiguous layout and clustering
don't work perfectly or very efficiently, and using large block sizes
like the default of 16K is an easy way to increase contiguity and
reduce the overhead of clustering.  Fragmentation (discontiguity, not
the fragmentation reported by fsck) tends to be very large on old,
active file systems and typically reduces the efficiency of trees
like /home/ncvs by a factor of 5-10.
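
(For reference, the block/frag combinations above correspond to newfs
invocations roughly like these; the device name is only a placeholder,
-b and -f take byte counts, and the fragment size must be a power of 2
between 1/8 of the block size and the block size itself.)

	newfs -b 16384 -f  2048 /dev/ad0s1e   # 16K/2K, the current defaults
	newfs -b 32768 -f  4096 /dev/ad0s1e   # 32K/4K
	newfs -b 32768 -f 32768 /dev/ad0s1e   # 32K/32K (fragment == block)
	newfs -b 65536 -f  8192 /dev/ad0s1e   # 64K/8K; wants BKVASIZE bumped as above

One crude way to get a feel for a drive's per-transfer-size throughput
(not how the numbers above were measured, but enough to show the shape
of the curve) is to read the raw device with dd at various block sizes;
dd reports the transfer rate when it completes:

	dd if=/dev/ad0 of=/dev/null bs=16k count=8192   # ~128MB in 16K transfers
	dd if=/dev/ad0 of=/dev/null bs=32k count=4096   # ~128MB in 32K transfers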

Bruce


