Date: Wed, 13 Aug 2008 16:44:05 +0200
From: Laszlo Nagy <gandalf@shopzeus.com>
To: Bill Moran <wmoran@potentialtech.com>, freebsd-questions@freebsd.org
Subject: Re: Max. number of opened files, efficiency
Message-ID: <48A2F335.6010206@shopzeus.com>
In-Reply-To: <20080813103244.d9c76715.wmoran@potentialtech.com>
References: <48A2EBD7.9000903@shopzeus.com> <20080813103244.d9c76715.wmoran@potentialtech.com>
> Directories generally start to perform poorly when you put too many files
> in them (i.e. the time required to add a new directory entry or find
> an existing name in the entry goes up).
>
> If you're going to be making 10s of 1000s of files, I'd recommend making
> a tree of directories. I.e., make directories 1 - 10, then put files
> 0-999 in directory 1 and files 1000-1999 in directory 2, etc.

In fact I do not need any name associated with the file. I just need a
temporary file object that I can access in read-write mode and then throw
away. For some reason, this kind of temporary file is implemented this way
(at least in Python):

1. create a file name with mkstemp
2. create a file object with that name
3. save the file handle number
4. unlink the file name (remove the directory entry)
5. return the file handle (which can be closed later)

This is executed each time I create a temporary file (a sketch of the
pattern follows below). As you can see, the number of entries in the tmp
directory won't increase at all. (If it were possible, I would create a
file without a name in the first place.) When I close the file handle, the
OS will hopefully deallocate the disk space, because from that point on
nothing references the file.

Another interesting (off-topic) question: I could not open 10 000 files
under Windows XP. The error was "too many open files". How can I overcome
this?

Thanks,

Laszlo
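
For the directory-tree scheme Bill describes above, a minimal sketch in
Python (the function name and the 1000-files-per-directory split are just
illustrative assumptions, not anything from the thread):

    import os

    def bucketed_path(root, n, per_dir=1000):
        # Map file number n to a subdirectory so that no single
        # directory collects more than per_dir entries: files
        # 0-999 go to directory "1", 1000-1999 to "2", and so on.
        bucket = os.path.join(root, str(n // per_dir + 1))
        if not os.path.isdir(bucket):
            os.makedirs(bucket)
        return os.path.join(bucket, str(n))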
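
The five-step temporary-file pattern above, spelled out (mkstemp returns
the handle number directly, so step 3 is just keeping the fd around; the
helper name is mine):

    import os
    import tempfile

    def anonymous_tempfile():
        fd, path = tempfile.mkstemp()  # steps 1-3: named file, saved fd
        os.unlink(path)                # step 4: drop the directory entry
        return os.fdopen(fd, 'w+b')    # step 5: wrap the fd in a file object

On POSIX systems, tempfile.TemporaryFile() already does exactly this, so
the helper is only needed if you want the individual steps visible.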
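
On the Windows XP question, one likely culprit (an assumption on my part,
not something verified here) is that Python's file objects sit on the
Microsoft C runtime's stdio layer, which defaults to 512 open streams;
the CRT's _setmaxstdio() can raise that cap, to a documented maximum of
2048:

    import ctypes

    # Raise the MSVC runtime's stdio stream limit from its default
    # of 512; 2048 is the documented maximum for _setmaxstdio.
    ctypes.cdll.msvcrt._setmaxstdio(2048)

This is only a sketch; whether 10 000 simultaneously open file objects
are reachable at all through that runtime is a separate question.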