Date: Wed, 2 Aug 2000 23:11:37 +0200 (CEST)
From: Marius Bendiksen <mbendiks@eunet.no>
To: Zhihui Zhang <zzhang@cs.binghamton.edu>
Cc: Steve Carlson <stevec@nbci.com>, freebsd-fs@FreeBSD.ORG
Subject: Re: FFS performance for large directories?
Message-ID: <Pine.BSF.4.05.10008022306370.34912-100000@login-1.eunet.no>
In-Reply-To: <Pine.SOL.4.21.0007311534090.1395-100000@sol.cs.binghamton.edu>
> you want. Other solutions exist, such as B*-Tree or Hash table. They will
> speed up directories look up time.

On a side note, as to B-trees or such, I think this could be done as a
hack to UFS, but I'm not very intrigued by the prospect of looking into
it, nor the idea of hacking it up even further.

> Having said this, you can try to put all directory file into the
> memory. This is the idea of matt's VMIO directory. You can definitely
> find discussions on this in the mailing list archive.

This will improve efficiency for situations where you reuse names, or at
least access them in a somewhat non-random manner. You may not want to
use this in cases where the expected lifetime of a cache entry is low.
It does not, of course, help the seek issue. Striping your RAID array
correctly, so as to keep the disks from tending to be bound to each
other, would probably alleviate some of the problem, but not much, I
think.

> A third thing is that FFS performs poor accessing /usr/ports. This has

Actually, it performs poorly on any complex directory hierarchy,
especially when traversed depth-first.

> something to do with how FFS layout directory inode (not file inode). The

It (IIRC) tries to spread the directory inodes evenly across the
cylinder groups, while trying to keep the file inodes in the same
cylinder group as their directory's inode.

Marius

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-fs" in the body of the message