Date:      Tue, 29 May 2012 11:15:54 +0200 (CEST)
From:      Alessio Focardi <alessiof@gmail.com>
To:        freebsd-fs@FreeBSD.org
Subject:   Re: Millions of small files: best filesystem / best options
Message-ID:  <49722655.1520.1338282954302.JavaMail.root@zimbra.interconnessioni.it>
In-Reply-To: <4FC486BC.3050808@FreeBSD.org>

> > I ran a Usenet server this way for quite a while with fairly good
> > results, though the average file size was a bit bigger, about 2K or so.
> > I found that if I didn't use "-o space" that space optimization wouldn't
> > kick in soon enough and I'd tend to run out of full blocks that would be
> > needed for larger files.

Fragmentation is not a problem for me; mostly I will have a write-once, read-many workload. What is still not clear to me is whether "-o space" works within the constraints of the block/fragment ratio, which in my case would still mean using a 512-byte sub-block (fragment) for every 200-byte file.
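
To make the overhead concrete, here is a rough back-of-the-envelope sketch in Python (purely illustrative): the 512-byte minimum fragment and the 200-byte average file size come from the discussion above, while the file count of 10 million is a hypothetical figure I picked for the example.

# Space overhead for small files under block/fragment allocation,
# assuming a 512-byte minimum fragment and 200-byte average files;
# the file count is hypothetical.
FRAG_SIZE = 512                 # smallest allocatable unit (bytes)
AVG_FILE_SIZE = 200             # average payload per file (bytes)
NUM_FILES = 10 * 1000 * 1000    # hypothetical "millions of small files"

def frags_needed(size, frag=FRAG_SIZE):
    # fragments a file of `size` bytes occupies, rounded up (ceiling division)
    return -(-size // frag)

allocated = NUM_FILES * frags_needed(AVG_FILE_SIZE) * FRAG_SIZE
payload = NUM_FILES * AVG_FILE_SIZE
waste = allocated - payload

print("allocated: %.2f GiB" % (allocated / 2.0**30))
print("payload:   %.2f GiB" % (payload / 2.0**30))
print("wasted:    %.2f GiB (%.0f%%)" % (waste / 2.0**30, 100.0 * waste / allocated))

With these numbers the allocation works out to roughly 4.8 GiB for about 1.9 GiB of payload, i.e. around 60% of the space lost to fragment padding, regardless of whether "-o space" is used.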

PS

Thank you very much for all of your help!



Alessio Focardi
------------------




