Date:      Tue, 29 May 2012 03:45:18 -0700
From:      Doug Barton <dougb@FreeBSD.org>
To:        Alessio Focardi <alessiof@gmail.com>
Cc:        freebsd-fs@FreeBSD.org
Subject:   Re: Millions of small files: best filesystem / best options
Message-ID:  <4FC4A8BE.2000507@FreeBSD.org>
In-Reply-To: <49722655.1520.1338282954302.JavaMail.root@zimbra.interconnessioni.it>
References:  <49722655.1520.1338282954302.JavaMail.root@zimbra.interconnessioni.it>

On 5/29/2012 2:15 AM, Alessio Focardi wrote:
>>> I ran a Usenet server this way for quite a while with fairly
>>> good results, though the average file size was a bit bigger,
>>> about 2K or so. I found that if I didn't use "-o space",
>>> space optimization wouldn't kick in soon enough and I'd tend
>>> to run out of the full blocks needed for larger files.
> 
> Fragmentation is not a problem for me; mostly I will have a
> write-once, read-many situation. It is still not clear to me
> whether "-o space" works within the constraints of the
> block/fragment ratio, which in my case would still mean using a
> 512-byte fragment for every 200-byte file.

TMK you can't store more than one file per fragment, but once you
account for metadata you're not wasting as much space as it sounds.
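To put rough numbers on it (a sketch assuming UFS2 defaults: 512-byte
fragments and 256-byte on-disk inodes, not measured overhead):

    per 200-byte file:
      data             200 bytes
      fragment slack   312 bytes  (512 - 200)
      inode            256 bytes  (UFS2 dinode)
      ----------------------------
      on-disk total    768 bytes

The inode alone costs more than the payload, so the fragment slack is
less than half of the total per-file overhead.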

If your data is truly WORM, I'd definitely give -o space a try. You'll
probably want to benchmark various combinations anyway.
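As a starting point for those benchmarks, something like the following
(the device name is just a placeholder; substitute your own):

    # smallest legal block/fragment pair, dense inodes, space optimization
    newfs -U -b 4096 -f 512 -i 1024 -o space /dev/da0p1

    # or switch an existing (unmounted) filesystem over:
    tunefs -o space /dev/da0p1

The -i 1024 is there because millions of tiny files will exhaust the
default inode allocation long before the data blocks run out.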

Doug

-- 

    This .signature sanitized for your protection


