Date:      Fri, 1 Jun 2012 10:59:59 +0200 (CEST)
From:      Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>
To:        Oscar Hodgson <oscar.hodgson@gmail.com>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: Anyone using freebsd ZFS for large storage servers?
Message-ID:  <alpine.BSF.2.00.1206011048010.2497@wojtek.tensor.gdynia.pl>
In-Reply-To: <CACxnZKM__Lt9LMabyUC_HOCg2zsMT=3bpqwVrGj16py1A=qffg@mail.gmail.com>
References:  <CACxnZKM__Lt9LMabyUC_HOCg2zsMT=3bpqwVrGj16py1A=qffg@mail.gmail.com>

> 48TB each, roughly.  There would be a couple of units.  The pizza
> boxes would be used for computational tasks, and nominally would have
> 8 cores and 96G+ RAM.
>
> Obvious questions are hardware compatibility and stability.  I've set
> up small FreeBSD 9 machines with ZFS roots and simple mirrors for
> other tasks here, and those have been successful so far.
>
> Observations would be appreciated.
>
Your idea of using the disks JBOD-style (no "hardware" RAID) is good, but the 
idea of using ZFS is bad.


I would recommend doing some real performance testing of ZFS on any config 
under real load (a workload that doesn't fit in cache, with many different 
things being done by many users/programs) and comparing it to a PROPERLY 
configured UFS setup on the same hardware (with the help of gmirror/gstripe).

If you get a better result with ZFS, you certainly didn't configure the latter 
case (UFS, gmirror, gstripe) properly :)
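For a first rough number, something like the sketch below can be run once on a 
ZFS dataset and once on a UFS filesystem. It only measures sequential writes, 
so it is nowhere near the mixed multi-user load I mean above; the TARGET path 
and the 64 MB size are placeholders, and a real test should write far more 
data than fits in RAM so the cache can't hide the disks:

```shell
# Crude sequential-write check -- a sketch only, not a real benchmark.
# Point TARGET at a file on the filesystem under test.
TARGET="${1:-/tmp/fsbench.dat}"
SIZE_MB=64   # placeholder; use much more than RAM in a real test

# dd prints its throughput summary on stderr; keep only the last line.
dd if=/dev/zero of="$TARGET" bs=1048576 count="$SIZE_MB" 2>&1 | tail -1
rm -f "$TARGET"
```

(`bs=1048576` is spelled out because BSD dd accepts `1m` while GNU dd wants 
`1M`; the byte count works everywhere.)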

In spite of the large-scale hype and promotion of this free software (which by 
itself should be a red alert for you), I strongly recommend staying away from it.

And definitely do not use it if you will not have regular backups of all your 
data, because in case of failure (yes, they do happen) you will simply have no 
chance to repair it.

There is NO fsck_zfs! ZFS is promoted as if it "doesn't need" one.

Assuming that a filesystem doesn't need an offline filesystem check utility 
because it "never crashes" is funny.

On the other hand, I have never heard of a UFS filesystem failure that was not 
the result of a physical disk failure and that caused serious damage. In the 
worst case, some files or one or a few subdirectories landed in lost+found, 
and some recently (minutes at most) written data was gone.


If you still want to use it, do not forget that it uses many times more CPU 
power than UFS for filesystem handling, leaving less for the computation you 
want to do.

As for memory, you can limit its memory (ab)usage by adding the proper 
statements to loader.conf, but it still uses an enormous amount of it.
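The statements I mean look like this in /boot/loader.conf; the 4G cap is only 
an example figure, pick a value that suits your workload:

```conf
# /boot/loader.conf -- cap ZFS memory use (example values, tune to taste)
vfs.zfs.arc_max="4G"    # upper bound on the ARC
vfs.zfs.arc_min="1G"    # optional lower bound
```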

With 96 GB it may not be a problem for you, or it may be; it depends how much 
memory you need for your computations.



If you need help properly configuring large storage with UFS and the 
gmirror/gstripe tools, feel free to ask.
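For what it's worth, a minimal sketch of the kind of setup I mean follows. The 
device names (da0-da3) and the mount point are placeholders; these commands 
need root on FreeBSD and will destroy any data on those disks, so treat this 
as illustration only:

```shell
# Load the GEOM classes (or set geom_mirror_load="YES" etc. in loader.conf)
kldload geom_mirror geom_stripe

# Build two mirrors...
gmirror label -v gm0 /dev/da0 /dev/da1
gmirror label -v gm1 /dev/da2 /dev/da3

# ...stripe them together (RAID 10), then put UFS with soft updates on top
gstripe label -v st0 /dev/mirror/gm0 /dev/mirror/gm1
newfs -U /dev/stripe/st0
mount /dev/stripe/st0 /storage
```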


