Date: Sat, 4 May 2002 00:19:36 -0400
From: utsl@quic.net
To: Terry Lambert <tlambert2@mindspring.com>
Cc: Bakul Shah <bakul@bitblocks.com>, Scott Hess <scott@avantgo.com>, "Vladimir B. Grebenschikov" <vova@sw.ru>, fs@FreeBSD.ORG
Subject: Re: Filesystem
Message-ID: <20020504041936.GA19646@quic.net>
In-Reply-To: <3CD32F43.327CDA46@mindspring.com>
References: <200205040019.UAA13780@illustrious.cnchost.com> <3CD32F43.327CDA46@mindspring.com>
On Fri, May 03, 2002 at 05:45:55PM -0700, Terry Lambert wrote:
> > If you build scalable solutions people will use them. If
> > enough Unix variants provide fast dir search, others will
> > have to pick it up.
>
> Fast dir search won't be picked up until important applications
> start to rely on it. And important applications won't rely on
> it until it's generally available. So the only real way to break
> the log-jam is to come up with a killer app, which relies on some
> feature you want to proselytize.
>
> The main enemy of new features like this is that there is always
> more than one way to solve a problem. 8-).

In this particular case, most sane people try to rewrite the
application to avoid this kind of situation in the first place. Most
people add some directory hierarchy, like squid does, or abuse a
database. There are also a few masochistic types who roll their own
filesystem in userspace and use raw disk.

OTOH, I've seen a very large application (it ran on a Sun E10K) that
did absolutely nothing about it. It was designed to put some ~1-2k
files into a spool directory, and rotate every day. Unfortunately,
the application never got redesigned to handle the scale it was being
used for, so by the time I dealt with it, the filesystem held 800,000
to 1M files in 15-16 directories. (The count varied from day to day.)

I found out about it when I was asked to figure out why the
incremental backups for that filesystem never completed. They would
run for ~35-40 hours and then crash. If I remember right, the backup
program was running out of address space. 8-)

Even if the filesystem had used btrees, the backup program would
still have crashed: it was trying to build an in-memory list of all
the files it needed to back up, and it never actually wrote anything
to tape. I don't know whether all backup software does incrementals
that way, but I'd bet most of it does.

So there can be other disadvantages to having lots of files in a
directory besides slow directory lookups.
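The "add some directory hierarchy, like squid does" approach can be
sketched roughly like this: hash the filename and use a couple of hash
bytes as intermediate directory names, so no single directory ever
holds more than a small fraction of the files. (This is only an
illustrative sketch of the general technique, not squid's actual code;
the names spool_path, root, levels, and width are made up here.)

```python
import hashlib
import os

def spool_path(root, name, levels=2, width=16):
    """Map a flat filename into a small multi-level directory tree.

    Hashing the name and using successive hash bytes (mod 'width') as
    subdirectory names spreads ~1M files across width**levels
    directories, so each directory stays small enough for linear
    lookups and per-directory scans to remain cheap.
    """
    h = hashlib.md5(name.encode()).hexdigest()
    # Take one byte of the hash per level, reduced to 'width' buckets.
    parts = [format(int(h[i * 2:i * 2 + 2], 16) % width, "02x")
             for i in range(levels)]
    return os.path.join(root, *parts, name)
```

With levels=2 and width=16 that is 256 buckets, so a million spool
files average under 4,000 entries per directory instead of ~65,000 in
each of 15-16 flat ones.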
---Nathan