Date:      Thu, 24 Jan 2013 15:54:32 +0100
From:      Adam Nowacki <nowakpl@platinum.linux.pl>
To:        Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>
Cc:        freebsd-fs@freebsd.org, freebsd-hackers@freebsd.org
Subject:   Re: ZFS regimen: scrub, scrub, scrub and scrub again.
Message-ID:  <51014B28.8070404@platinum.linux.pl>
In-Reply-To: <alpine.BSF.2.00.1301241523570.5666@wojtek.tensor.gdynia.pl>
References:  <CACpH0Mf6sNb8JOsTzC+WSfQRB62+Zn7VtzEnihEKmEV2aO2p+w@mail.gmail.com> <alpine.BSF.2.00.1301211201570.9447@wojtek.tensor.gdynia.pl> <20130122073641.GH30633@server.rulingia.com> <alpine.BSF.2.00.1301232121430.1659@wojtek.tensor.gdynia.pl> <51013345.8010701@platinum.linux.pl> <alpine.BSF.2.00.1301241523570.5666@wojtek.tensor.gdynia.pl>

On 2013-01-24 15:24, Wojciech Puchar wrote:
>> For me the reliability ZFS offers is far more important than pure
>> performance.
> Except it is on paper reliability.

This "on paper" reliability in practice saved a 20TB pool. See one of my 
previous emails. Any other filesystem or hardware/software raid without 
per-disk checksums would have failed. Silent corruption of non-important 
files would be the best case, complete filesystem death by important 
metadata corruption as the worst case.
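
To make that concrete, here's a toy Python sketch (purely illustrative, 
nothing like the real code) of the mechanism: ZFS stores a checksum of 
every block in its parent block pointer, so on a mirror it can tell which 
copy is good and rewrite the bad one, while a plain RAID1 only sees two 
copies that disagree.

import hashlib

def read_block(copies, expected_checksum):
    # Try each mirror copy; return the first one whose checksum
    # matches the checksum stored in the parent block pointer.
    for data in copies:
        if hashlib.sha256(data).hexdigest() == expected_checksum:
            # Self-heal: rewrite any copy that fails the check.
            for j in range(len(copies)):
                if hashlib.sha256(copies[j]).hexdigest() != expected_checksum:
                    copies[j] = data
            return data
    raise IOError("all copies corrupt - unrecoverable")

# One copy silently corrupted on disk (a flipped byte):
good = b"important metadata"
checksum = hashlib.sha256(good).hexdigest()
copies = [b"important metadaXa", good]
assert read_block(copies, checksum) == good
assert copies[0] == good  # bad copy was repaired from the good one

A checksum-less mirror would return whichever copy the array happened to 
read, so the corruption stays silent until it bites.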

I've been using ZFS for 3 years across many systems. The biggest one has 
44 disks and 4 ZFS pools - it has survived SAS expander disconnects, a 
few kernel panics and countless power failures (the UPS only holds for a 
few hours).

So far I've not lost a single ZFS pool or any of the data stored on them.



