From owner-freebsd-questions Tue Jul 23 15:12:55 1996
Return-Path: owner-questions
Received: (from root@localhost) by freefall.freebsd.org (8.7.5/8.7.3)
	id PAA19024 for questions-outgoing; Tue, 23 Jul 1996 15:12:55 -0700 (PDT)
Received: from phs.k12.ar.us (garman@phs.k12.ar.us [165.29.117.2])
	by freefall.freebsd.org (8.7.5/8.7.3) with SMTP id PAA19013
	for ; Tue, 23 Jul 1996 15:12:52 -0700 (PDT)
Received: from localhost (garman@localhost) by phs.k12.ar.us (8.6.12/8.6.12)
	with SMTP id RAA20314; Tue, 23 Jul 1996 17:12:32 -0500
Date: Tue, 23 Jul 1996 17:12:32 -0500 (CDT)
From: Jason Garman
To: "G. Jin"
cc: questions@freebsd.org
Subject: Re: one large file or many small files
In-Reply-To: <31F5227B.652@swanlake.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-questions@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

On Tue, 23 Jul 1996, G. Jin wrote:

> Can anybody tell me which way is better?
>
> I have the following considerations so far:
>
> 1. Can Linux/FreeBSD support 100K files?

I don't see why not...

> b. In the case of many individual files, I will assign each user's
>    data file a name like "f12345.dat".  With 100K files in place,
>    will accessing a certain file "f12345.dat" cause too slow a
>    directory search to find its address?

Why don't you use a hash directory structure instead of stuffing
100,000 files into one directory?  It will be much faster than a
linear lookup through a single huge directory.  Look at any decent
http proxy-cache for an example of how such a hash scheme works (a
rough sketch follows in the P.S. below).

Enjoy,
--
Jason Garman                        http://www.nesc.k12.ar.us/~garman/
Student, Eleanor Roosevelt High School           garman@phs.k12.ar.us
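
P.S.  In case a concrete example helps, here is a rough sketch in C of
one way such a hashed layout could work.  The "data/" prefix, the
16x16 fan-out, and the hash function here are arbitrary choices of
mine for illustration, not anything FreeBSD- or proxy-cache-specific;
any decent hash and fan-out will do:

    #include <stdio.h>

    /* Trivial string hash (djb2-style); any reasonable hash works. */
    static unsigned int hash_name(const char *s)
    {
        unsigned int h = 5381;

        while (*s != '\0')
            h = h * 33 + (unsigned char)*s++;
        return h;
    }

    int main(void)
    {
        const char *name = "f12345.dat"; /* file name from the question */
        unsigned int h = hash_name(name);
        char path[128];

        /*
         * Two levels of 16 buckets each gives 256 leaf directories,
         * so 100,000 files average under 400 entries per directory,
         * which keeps the kernel's linear directory scan cheap.
         */
        sprintf(path, "data/%x/%x/%s", (h >> 4) & 0xf, h & 0xf, name);
        printf("%s\n", path);            /* e.g. data/9/4/f12345.dat */
        return 0;
    }

With paths computed this way, every open(2) walks three small
directories instead of searching one directory with 100,000 entries.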