From owner-freebsd-fs Mon Nov 24 11:29:55 1997
Return-Path:
Received: (from root@localhost)
	by hub.freebsd.org (8.8.7/8.8.7) id LAA26634
	for fs-outgoing; Mon, 24 Nov 1997 11:29:55 -0800 (PST)
	(envelope-from owner-freebsd-fs)
Received: from godzilla.zeta.org.au (godzilla.zeta.org.au [203.2.228.19])
	by hub.freebsd.org (8.8.7/8.8.7) with ESMTP id LAA26625
	for ; Mon, 24 Nov 1997 11:29:51 -0800 (PST)
	(envelope-from bde@zeta.org.au)
Received: (from bde@localhost)
	by godzilla.zeta.org.au (8.8.7/8.6.9) id GAA27999;
	Tue, 25 Nov 1997 06:24:35 +1100
Date: Tue, 25 Nov 1997 06:24:35 +1100
From: Bruce Evans
Message-Id: <199711241924.GAA27999@godzilla.zeta.org.au>
To: bde@zeta.org.au, tlambert@primenet.com
Subject: Re: ufs slowness
Cc: fs@FreeBSD.ORG
Sender: owner-freebsd-fs@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

>>        ext2fs   ffs
>> seeks     372   144
>> xfers     372   145
>> blks     2751  1395
>> msps      0.5   4.1

>o Is this a ZBR disk?  If not, are you using FreeBSD's
> default settings, which pessimize geometry optimizations
> for these disks?

Of course it's ZBR.  FreeBSD's default settings haven't done any
significant geometry optimizations for several years.

>o Was the FFS optimizing for space or time when writing?

Time of course.  The ufs disk wasn't very full (53% actually).  The
ext2fs disk was 93% full.

>o Did you set a reasonable reserve so the hash-to-disk was
> efficient on the FFS writes, or did you take FreeBSD's
> politically motivated defaults (Hey!  I'm "wasting" almost
> 1G of my 9G disk!).

Irrelevant, since it wasn't very full.

>o ext2fs is extent based, so it's probably not dealing with
> indirect blocks.

Not important for small files.  The speed is about the same for large
(> memory size) sequential files.

>o You are engaging in an atypical usage pattern by doing
> a "tar" as your test.  First, there is zero locality of

This is not atypical for me.
> reference, and second, the way tar traverses means that
> on a tree that large, you've effectively disabled the name
> cache for FFS (you've damaged it for ext2fs as well, but
> not to the same degree of fairness, given the relative
> costs of directory operations and ext2fs's use of extent
> based files for storing directory data).

I used a large enough directory to damage the (data) cache on purpose.
There are 3739 files.  This is apparently enough to also damage
directory caches.

>I'd say the *vast* majority of time spent is in directory operations,
>rather than actual file data reading (ie: I think the hit from going
>to indirect blocks in FFS is small).

I agree.  Perhaps it's just ext2fs hanging on to directory blocks
better.

>I'm also betting that you created the ext2fs by tarring up the
>FFS and untarring it onto the ext2fs.  Do the same to recreate an

I actually used `cp -pR' from ext2fs to ufs.

Bruce
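[Archive note: the per-transfer figures implied by the table at the top
can be checked with a quick sketch.  This assumes a reading of the
columns that the mail itself does not spell out: "blks" as total blocks
moved, "xfers" as the number of transfers, and "msps" as milliseconds
per seek.]

```python
# Derived averages from the benchmark table in the mail above.
# Column meanings ("blks" = total blocks moved, "msps" = ms per seek)
# are an assumption, not stated in the original message.
stats = {
    "ext2fs": {"seeks": 372, "xfers": 372, "blks": 2751, "msps": 0.5},
    "ffs":    {"seeks": 144, "xfers": 145, "blks": 1395, "msps": 4.1},
}

for fs, s in stats.items():
    blks_per_xfer = s["blks"] / s["xfers"]   # average blocks per transfer
    seek_ms = s["seeks"] * s["msps"]         # estimated total seek time
    print(f"{fs}: {blks_per_xfer:.1f} blks/xfer, ~{seek_ms:.0f} ms seeking")
```

Under that reading, ffs moves slightly more blocks per transfer (about
9.6 vs 7.4) but spends roughly three times as long seeking (about
590 ms vs 186 ms), which is consistent with the slowness being in seek
patterns rather than transfer sizes.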