From owner-freebsd-questions@FreeBSD.ORG Wed Aug 13 14:27:47 2008
From: Laszlo Nagy <gandalf@shopzeus.com>
Date: Wed, 13 Aug 2008 16:12:39 +0200
To: freebsd-questions@freebsd.org
Subject: Max. number of opened files, efficiency

How many files can I open under FreeBSD at the same time?

Problem: I'm building a pivot table, and when I drill down into the facts, I would like to create a new temporary file for each possible dimension value. In most cases there will be fewer than 1000 dimension values. I tried opening 1000 temporary files and could do so within one second. But how efficient is that? What happens when I open 1000 temporary files and write data into them randomly, 10 million times
(avg. 10,000 write operations per file)? Will the OS handle this efficiently? Is efficiency affected by the underlying filesystem? I also tried to create 10,000 temporary files, but performance dropped. Example in Python:

    import tempfile
    import time

    N = 10000
    start = time.time()
    files = [tempfile.TemporaryFile() for i in range(N)]
    stop = time.time()
    print "created %s files/second" % (int(N / (stop - start)))

On my computer this program prints "3814 files/second" for N=1000, and "1561 files/second" for N=10000.

Thanks,

Laszlo
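As a side note on the limits the question is asking about: the per-process cap on open file descriptors can be read from within Python itself via the standard resource module (this is a sketch, not part of the original message; the benchmark above is Python 2, while this snippet uses Python 3 print syntax). On FreeBSD the system-wide ceiling is separately controlled by the kern.maxfiles sysctl.

```python
import resource

# Soft and hard per-process limits on open file descriptors.
# The soft limit is what open()/TemporaryFile() actually runs into;
# the hard limit is the most the soft limit can be raised to
# without privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("per-process fd limit: soft=%d, hard=%d" % (soft, hard))

# The system-wide ceiling is a kernel tunable, readable from the
# shell (no root needed):
#   $ sysctl kern.maxfiles kern.openfiles
```

Opening 1000 files stays well under typical default soft limits; 10,000 may approach or exceed them depending on login-class settings, which could contribute to the slowdown observed above.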