Date: Wed, 26 Sep 2007 18:37:18 +1000 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: "Rick C. Petty" <rick-freebsd@kiwi-computer.com>
Cc: freebsd-fs@FreeBSD.org
Subject: Re: Writing contigiously to UFS2?
Message-ID: <20070926175943.H58990@delplex.bde.org>
In-Reply-To: <20070926031219.GB34186@keira.kiwi-computer.com>
References: <46F3A64C.4090507@fluffles.net> <46F3B4B0.40606@freebsd.org> <fd0em7$8hn$1@sea.gmane.org> <20070921131919.GA46759@in-addr.com> <fd0gk8$f0d$2@sea.gmane.org> <20070921133127.GB46759@in-addr.com> <20070922022524.X43853@delplex.bde.org> <20070926031219.GB34186@keira.kiwi-computer.com>
On Tue, 25 Sep 2007, Rick C. Petty wrote:

> On Sat, Sep 22, 2007 at 04:10:19AM +1000, Bruce Evans wrote:
>>
>> of disk can be mapped.  I get 180MB in practice, with an inode bitmap
>> size of only 3K, so there is not much to be gained by tuning -i but
>
> I disagree.  There is much to be gained by tuning -i: 224.50 MB per CG
> vs. 183.77 MB.  That's a 22% difference.

That's a 22% reduction in seeks, where the cost of seeking every 187 MB
is a few ms every second.  Say the disk speed is 61 MB/s and the seek
cost is 15 ms.  Then we waste 15 ms every 3 seconds with 183 MB cg's,
or 2%.  After saving 22%, we waste only 1.8%.  These estimates are
consistent with numbers I gave in previous mail.

With the broken default of -e 2048 for 16K blocks for ffs1, there was
an unnecessary seek or 2 after only every 32 MB.  The disk speed was
52 MB/s (disk manufacturer's MB = 10^6 B).  -e 2048 gave 50 MB/s and
-e 8192 gave 51.5 MB/s.  (52 MB/s was measured on the raw disk using
dd.  The raw disk tends to actually be slower than the file system due
to not streaming.)  Seeking after every 32 MB (real MB) gives a seek
every 645 ms, so if 2 seeks take 15 ms each, the wastage was 4.7%, so
it was not surprising to get a speedup of 3% using -e 8192.  Since I
got to within 1% of the raw disk speed, there is little more to be
gained in speed here.  (The OP's problem was not speed.)  (All this is
for the benchmark "dd if=/dev/zero of=zz bs=1m count=N" where N = 200
or 1000.)

>> more to be gained by tuning -b and -f (several doublings are
>> reasonable).
>
> I completely agree with this.  It's unfortunate that newfs doesn't
> scale the defaults here based on the device size.  Before someone
> dives in and commits any adjustments, I hope they do sufficient
> testing and post their results on this mailing list.

Testing shows that only one doubling of -b and -f is reasonable for
/usr/src, but it makes little difference, so nothing should be changed.
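The -e arithmetic above can be checked directly.  A minimal sketch (not
from the original mail; Python, with the assumed figures taken from the
discussion: ~15 ms per seek, 52 MB/s raw streaming speed in
manufacturer's megabytes, and an extra seek or two forced after every
32 real MB by the old -e 2048 default):

```python
# Sketch of the seek-overhead estimate for the old -e 2048 default.
# Assumed parameters, taken from the figures quoted in the mail above.
SEEK_MS = 15.0              # assumed cost of one seek, in ms
DISK_BPS = 52e6             # 52 MB/s, manufacturer's MB = 10^6 bytes
STRIDE_BYTES = 32 * 2**20   # 32 real MB streamed between forced seeks

def stride_ms() -> float:
    """Time to stream one 32 MB stride at the raw disk speed, in ms."""
    return STRIDE_BYTES / DISK_BPS * 1000.0

def seek_overhead(n_seeks: int) -> float:
    """Fraction of time wasted if n_seeks seeks occur per stride."""
    return n_seeks * SEEK_MS / stride_ms()

print(f"time per 32 MB stride: {stride_ms():.0f} ms")   # ~645 ms
print(f"overhead with 2 seeks: {seek_overhead(2):.1%}")  # ~4.6%
```

This reproduces the "seek every 645 ms" figure, and with 2 seeks per
stride gives roughly the 4.7% wastage claimed (to rounding), which is
why a ~3% speedup from -e 8192 is plausible.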
I'm still trying to make halving -b and -f back to 512/512 work right,
so that it has the same disk speed as any/any, using contiguous layout
and clustering so that physical disk i/o sizes are independent of the
fs block sizes unless small i/o sizes are sufficient.  Clustering
already almost does this for data blocks, provided the allocator
manages to do a contiguous layout.  Clustering wastes a lot of CPU
doing this by brute force, but CPU is relatively free.

Bruce