Date:      Mon, 28 May 2012 12:13:00 -0500 (CDT)
From:      Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
To:        Mark Felder <feld@feld.me>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Millions of small files: best filesystem / best options
Message-ID:  <alpine.GSO.2.01.1205281207440.21691@freddy.simplesystems.org>
In-Reply-To: <op.we00kvjq34t2sn@tech304>
References:  <2134924725.5040.1338211317460.JavaMail.root@zimbra.interconnessioni.it> <op.we00kvjq34t2sn@tech304>

On Mon, 28 May 2012, Mark Felder wrote:

> ZFS is heavy, but if you have the resources it could possibly fit your needs 
> if tuned correctly. You can change the blocksize for any ZFS filesystem which 
> might help. It also deals with filesystems that have lots of files quite well 
> -- we have some customer backups that sprawl to 20 million+ files and ZFS 
> doesn't seem to care.

ZFS will work, but its metadata size requirements will likely be about 
twice the amount required by the actual file data.  It is not 
necessary to change the ZFS blocksize in order for it to work.
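For reference, the per-dataset "blocksize" Mark mentions is the ZFS `recordsize` property. A minimal sketch of how it is tuned and inspected (the pool/dataset name `tank/smallfiles` is hypothetical, not from this thread):

```shell
# Hypothetical dataset name; recordsize is the standard ZFS property
# for the maximum block size of files in a dataset (default 128K).
zfs set recordsize=8K tank/smallfiles
zfs get recordsize tank/smallfiles
```

As noted above, this tuning is optional for a many-small-files workload; ZFS already stores files smaller than the recordsize in a single appropriately sized block.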

I have a ZFS directory containing 1 million files (not broken out 
into any hierarchy) that I use for testing application software.  ZFS 
does not mind a million files in one directory, but applications can 
take quite a long time to obtain a file listing (the fault of the 
application, not ZFS), especially if they want to sort that listing.
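The listing slowdown described above is on the application side: tools like `ls` read every directory entry into memory and sort it before printing anything. A minimal Python sketch of the streaming alternative (the demo directory and file count are made up for illustration):

```python
import os
import tempfile

def stream_entries(path):
    """Yield entries one at a time; memory stays flat even for
    millions of files, and no sort is performed."""
    with os.scandir(path) as it:
        for entry in it:
            yield entry.name

def sorted_listing(path):
    """Materialize and sort the whole listing, as ls does by
    default; cost grows O(n log n) in the entry count."""
    return sorted(os.listdir(path))

# Hypothetical demo directory standing in for the million-file case:
demo = tempfile.mkdtemp()
for i in range(1000):
    open(os.path.join(demo, f"file{i:04d}.dat"), "w").close()

names = list(stream_entries(demo))  # unsorted, streamed
first = sorted_listing(demo)[0]     # "file0000.dat"
```

The streaming form is why the filesystem itself is not the bottleneck: each `readdir`-style call returns quickly, and it is only the application's decision to buffer and sort everything that makes a million-entry directory feel slow.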

Bob
-- 
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


