Date: Fri, 1 Jun 2012 17:16:03 +0200 (CEST)
From: Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>
To: Daniel Feenberg <feenberg@nber.org>
Cc: Kaya Saman <kayasaman@gmail.com>, freebsd-questions@freebsd.org, Oscar Hodgson <oscar.hodgson@gmail.com>
Subject: Re: Anyone using freebsd ZFS for large storage servers?
Message-ID: <alpine.BSF.2.00.1206011708310.3457@wojtek.tensor.gdynia.pl>
In-Reply-To: <alpine.DEB.2.00.1206010952050.9474@sas1.nber.org>
References: <CACxnZKM__Lt9LMabyUC_HOCg2zsMT=3bpqwVrGj16py1A=qffg@mail.gmail.com> <alpine.BSF.2.00.1206011048010.2497@wojtek.tensor.gdynia.pl> <CAPj0R5%2BLcKUGijT17W6RXBz_KQxz5nZYP0vfPY3HNxNEyw0Eaw@mail.gmail.com> <alpine.BSF.2.00.1206011435430.20357@wojtek.tensor.gdynia.pl> <alpine.DEB.2.00.1206010952050.9474@sas1.nber.org>
> As for ZFS being dangerous, we have a score of drive-years with no loss of
> data. The lack of fsck is considered in this intelligently written piece

You are just lucky. Before I start using anything new in a part as important as the filesystem, I run extreme tests: simulating hardware faults, random overwrites, and so on. I did this for ZFS more than once, and it failed miserably, ending with an unrecoverable filesystem that, at best, had lost the data in some subdirectory and, at worst, crashed at mount and was inaccessible forever.

Under FFS the worst I can get is loss of the overwritten data only. Overwritten inode - lost file. Overwritten data blocks - overwritten files. Nothing more!

What I haven't even mentioned is ZFS performance, which is just terribly bad, except for a few special cases where it is slightly faster than UFS+softupdates. It is even worse with a RAID-5 style layout, which ZFS supposedly does "better" with RAID-Z. "Better" = the random read performance of a single drive, because every RAID-Z block is spread across all the disks in the vdev, so every random read has to touch all of them.
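By "random overwrites" I mean something along the lines of the sketch below: scribble garbage over a handful of random blocks of a file-backed test image, then try to mount and repair the filesystem built on it and see what survives. The image path, block size and block count here are only example values, not part of any particular test I ran.

    # corrupt a few random blocks of a disk image to simulate silent
    # on-disk damage, then mount/fsck the filesystem on it afterwards.
    import os
    import random

    IMAGE = "/tmp/testfs.img"   # file-backed image (e.g. attached via mdconfig)
    BLOCK_SIZE = 4096           # size of each overwrite
    CORRUPT_BLOCKS = 64         # how many random blocks to destroy

    size = os.path.getsize(IMAGE)
    blocks = size // BLOCK_SIZE

    with open(IMAGE, "r+b") as f:
        for _ in range(CORRUPT_BLOCKS):
            n = random.randrange(blocks)
            f.seek(n * BLOCK_SIZE)
            f.write(os.urandom(BLOCK_SIZE))  # overwrite one block with noise

Run that against an image holding a UFS filesystem and against one holding a ZFS pool, then compare how much data each one gives back.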