Date: Thu, 7 Jan 2010 07:01:28 +1100 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: Ivan Voras <ivoras@freebsd.org>
Cc: svn-src-head@freebsd.org, Alexander Motin <mav@freebsd.org>, src-committers@freebsd.org, svn-src-all@freebsd.org
Subject: Re: svn commit: r201658 - head/sbin/geom/class/stripe
Message-ID: <20100107065127.U55530@delplex.bde.org>
In-Reply-To: <9bbcef731001061103u33fd289q727179454b21ce18@mail.gmail.com>
References: <201001061712.o06HCICF087127@svn.freebsd.org> <9bbcef731001060938k2b0014a2m15eef911b9922b2c@mail.gmail.com> <4B44D8FA.2000608@FreeBSD.org> <9bbcef731001061103u33fd289q727179454b21ce18@mail.gmail.com>
On Wed, 6 Jan 2010, Ivan Voras wrote:

> 2010/1/6 Alexander Motin <mav@freebsd.org>:
>> Ivan Voras wrote:
>>
>>> I think there was one more reason - though I'm not sure if it is still
>>> valid because of your current and future work - the MAXPHYS
>>> limitation. If MAXPHYS is 128k, with 64k stripes data was only to be
>>> read from maximum of 2 drives. With 4k stripes it would have been read
>>> from 128/4=32 drives, though I agree 4k is too low in any case
>>> nowadays. I usually choose 16k or 32k for my setups.
>>
>> While you are right about MAXPHYS influence, and I hope we can rise it
>> not so far, IMHO it is file system business to manage deep enough
>> read-ahead/write-back to make all drives busy, independently from
>> MAXPHYS value. With small MAXPHYS value FS should just generate more
>> requests in advance. Except some RAID3/5/6 cases, where short writes
>> ineffective, MAXPHYS value should only affect processing overhead.
>
> Yes, my experience which lead to the post was mostly on UFS which,
> while AFAIK it does read-ahead, it still does it serially (I think
> this is implied by your experiments with NCQ and ZFS vs UFS) - so in
> any case only 2 drives are hit with 64k stripe size at any moment in
> time.

ffs has no significant knowledge of read-ahead. Normally it uses vfs
read clustering, which under the most favourable circumstances reduces
to read-ahead of a maximum of MAXPHYS (less the initial size). If read
clustering is disabled, then ffs does old-style read-ahead of a whole
block (16K). Most file systems in FreeBSD are similar or worse (some
support a block size of 512, and reading ahead by that amount gives
interestingly slow behaviour).

Bruce
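[The 128/4 = 32 arithmetic above can be sketched as a tiny helper. This is
purely illustrative, not FreeBSD code: the function name is made up, and it
assumes a single contiguous, stripe-aligned request of at most MAXPHYS bytes.]

```python
def drives_touched(maxphys: int, stripe: int, ndrives: int) -> int:
    """Number of drives a single stripe-aligned contiguous request of
    maxphys bytes can keep busy, given the stripe size and drive count.

    A request covers maxphys // stripe full stripes; it cannot touch
    more drives than exist in the array, nor fewer than one.
    """
    return min(ndrives, max(1, maxphys // stripe))

# With MAXPHYS = 128k: a 64k stripe spreads one request over only
# 2 drives, while a 4k stripe spreads it over up to 32 drives.
print(drives_touched(128 * 1024, 64 * 1024, 8))   # -> 2
print(drives_touched(128 * 1024, 4 * 1024, 32))   # -> 32
```

[This also illustrates Motin's point: if MAXPHYS is small, per-request
parallelism is capped, so the file system must issue more requests in
advance to keep all drives busy.]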