Date: Wed, 13 Nov 2002 23:59:00 +1100 (EST)
From: Bruce Evans <bde@zeta.org.au>
To: David Schultz <dschultz@uclink.Berkeley.EDU>
Cc: Terry Lambert <tlambert2@mindspring.com>, Tomas Pluskal <plusik@pohoda.cz>, <freebsd-fs@FreeBSD.ORG>
Subject: Re: seeking help to rewrite the msdos filesystem
Message-ID: <20021113232517.P381-100000@gamplex.bde.org>
In-Reply-To: <20021113002807.GA4711@HAL9000.homeunix.com>
On Tue, 12 Nov 2002, David Schultz wrote:

> Thus spake Terry Lambert <tlambert2@mindspring.com>:
> > This has more to do with sequential access.  Technically, you can
> > read a FAT cluster at a time instead of an FS block at a time, and
> > you will achieve some multiplier on sequential access, but you will
> > find that under load, that the fault rate for blocks will go up.

FAT clusters _are_ FS blocks in msdosfs.

> > Also, even if you read 64K at a time, you will end up LRU'ing out
> > the data that you don't access.
> >
> > The issue is that UNIX files are accessed by offset, and FAT files
> > are accessed by offset by chaining clusters from the start to the
> > cluster of interest, and then reading blocks.
>
> Few people use FAT filesystems under heavy load as they do UFS.
> Basically, I think what he wants to do is speed up sequential
> reads for a single process doing, say, digital video editing.

I think so too.

> On a FAT FS that is relatively free of fragmentation, naïve
> read-ahead is likely to improve performance for this type of load,
> even though the next logical block in the file might not be the
> next physical block on the disk.  IIRC, SMARTDRV does this.  This
> approach is optimizing for the single-user case, but if you have
> several people using a single FAT FS at a time, you have much
> bigger problems.

Strangely enough, msdosfs already does naive read-ahead.  It uses
essentially the old read-ahead code from the version of ffs that it was
cloned from (approx. @(#)ufs_vnops.c 7.64 (Berkeley) 5/16/91 ("Net/2")).
It doesn't do clustering, but clustering is relatively unimportant in
many cases, including (apparently) the one here.  The problem here seems
to be just that some drives don't have any significant buffering and/or
have huge command overheads, so even the ffs default block size of 16K
is too small.  The msdosfs default block size of 2K for ZIP drives is
far too small.
Clustering increases the effective block size to 64K, which is large enough
for most purposes, but msdosfs is missing the few lines of code needed to
implement clustering, and read-ahead doesn't help since it is done in units
of the too-small block size.  This is an old problem, but it mostly finished
going away about 7 years ago, when adequate buffering and/or firmware to
manage it became normal in all ordinary disk drives.

Bruce