Date: Mon, 10 Nov 2008 17:19:49 +1030
From: Ian <no-spam@people.net.au>
To: Matthew Seaman <m.seaman@infracaninophile.co.uk>
Cc: Jeremy Chadwick <koitsu@freebsd.org>, freebsd-questions@freebsd.org
Subject: Re: UFS2 limits
Message-ID: <200811101719.56495.no-spam@people.net.au>
In-Reply-To: <4916D492.5040406@infracaninophile.co.uk>
References: <50261.1226194851@people.net.au> <20081109024046.GB27423@icarus.home.lan> <4916D492.5040406@infracaninophile.co.uk>
[-- Attachment #1 --]

On Sun, 9 Nov 2008 22:46:18 Matthew Seaman wrote:
> Jeremy Chadwick wrote:
> > I don't want to change the topic of discussion, but I *highly* recommend
> > you ***stop*** whatever it is you're doing that is creating such a
> > directory structure.  Software which has to iterate through that
> > directory using opendir() and readdir() will get slower and slower as
> > time goes on.
>
> With the implementation of UFS_DIRHASH the practical limit on the
> size of directories is now a great deal larger.  In particular,
> the slow-down caused by linear search through the contents has been
> eliminated.  See ffs(7).  10,000 files or sub-directories, whilst
> not a particularly elegant setup, is actually not unworkable
> nowadays.

Well, that's certainly been my experience so far.  Still, I now know
we will run into problems when we hit the 32,767 link limit (LINK_MAX,
since UFS stores the link count in a signed 16-bit field), so I'll
start designing something better.

Cheers,

-- 
Ian
gpg key: http://home.swiftdsl.com.au/~imoore/no-spam.asc

[-- Attachment #2 --]

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.9 (GNU/Linux)

iEYEABECAAYFAkkX2ZQACgkQPUlnmbKkJ6DoXgCfW1Wsj7a1bpjAqLAZlrhyRjyB
/pEAoIx/xe8LNh1pj1SKUg6ukVMOU6zI
=Q4kt
-----END PGP SIGNATURE-----
