Date:      Tue, 17 May 2016 08:24:22 -0500 (CDT)
From:      Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
To:        Ben RUBSON <ben.rubson@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Best practice for high availability ZFS pool
Message-ID:  <alpine.GSO.2.20.1605170819220.7756@freddy.simplesystems.org>
In-Reply-To: <AB71607F-7048-404E-AFE3-D448823BB768@gmail.com>
References:  <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org> <alpine.GSO.2.20.1605162034170.7756@freddy.simplesystems.org> <AB71607F-7048-404E-AFE3-D448823BB768@gmail.com>

On Tue, 17 May 2016, Ben RUBSON wrote:
>>
>> Without completely isolated systems there is always the risk of total failure.  Even with zfs send there is the risk of total failure if the sent data results in corruption on the receiving side.
>
> In this case, roll back to one of the previous snapshots on the receiving side?
> Did you mean the sent data can totally break the receiving pool, making it unusable / unable to import? Has this already been seen?

There is at least one case of zfs send propagating a problem into the 
receiving pool. I don't know if it broke the pool.  Corrupt data may 
be sent from one pool to another if it passes checksums.  With any 
solution, there is the possibility of software bugs.
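
For what it's worth, the usual recovery in that situation is to discard 
the suspect snapshot on the receiving side.  A minimal sketch (the pool, 
dataset, snapshot, and host names here are only placeholders, not 
anything from an actual setup) might look like:

    # incremental replication from the source pool to the backup pool
    zfs send -i tank/data@snap1 tank/data@snap2 | \
        ssh backuphost zfs receive -F backup/data

    # if snap2 later turns out to be bad, drop it on the receiving side
    # and fall back to the last known-good snapshot (-r also destroys
    # any snapshots newer than the rollback target)
    ssh backuphost zfs rollback -r backup/data@snap1

Of course, this only helps if the problem is confined to the newer 
snapshots; a bug which corrupts the receiving pool itself would not be 
recoverable this way.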

Adding more parallel hardware decreases the chance of data loss, but it 
increases the chance of a hardware failure, simply because there are 
more components which can fail.

Bob
-- 
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


