Date:      Wed, 18 May 2016 09:27:48 +0200
From:      Ben RUBSON <ben.rubson@gmail.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: Best practice for high availability ZFS pool
Message-ID:  <5F874CA9-A8D9-4A09-A4BD-95466AB7D165@gmail.com>
In-Reply-To: <alpine.GSO.2.20.1605171201040.14628@freddy.simplesystems.org>
References:  <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org> <alpine.GSO.2.20.1605162034170.7756@freddy.simplesystems.org> <AB71607F-7048-404E-AFE3-D448823BB768@gmail.com> <alpine.GSO.2.20.1605170819220.7756@freddy.simplesystems.org> <40C35566-B7FB-4F59-BB41-D43BC0362C26@gmail.com> <alpine.GSO.2.20.1605171201040.14628@freddy.simplesystems.org>

> On 17 May 2016 at 19:06, Bob Friesenhahn <bfriesen@simple.dallas.tx.us> wrote:
> 
> On Tue, 17 May 2016, Ben RUBSON wrote:
> 
>>> On 17 May 2016 at 15:24, Bob Friesenhahn <bfriesen@simple.dallas.tx.us> wrote:
>>> 
>>> There is at least one case of zfs send propagating a problem into the receiving pool. I don't know if it broke the pool. Corrupt data may be sent from one pool to another if it passes checksums.
>> 
>> Do you have a link to this problem? It would be interesting to know whether it was possible to come back to a previous snapshot / consistent pool.
> 
> I don't have a link, but I recall that it had something to do with the ability to send file 'holes' in the stream.

OK, just for reference: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=207714

>> I think that ZFS send/receive gives a higher level of safety than mirroring to a second (or third) JBOD box.
>> With mirroring you still have only one ZFS pool.
> 
> This is a reasonable assumption.
> 
>> However, if send/receive makes the receiving pool an exact 1:1 copy of the sending pool, then whatever corrupted the sending pool could reach (and corrupt) the receiving pool... I don't know whether this could occur, and if it did, whether we would have the chance to revert to a previous snapshot, at least on the receiving side...
> 
> Zfs receive does not result in a 1:1 copy. The underlying data organization can be completely different, and compression or other options can be changed.
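To illustrate Bob's point with a hypothetical sketch (pool and dataset names are made up): a plain zfs send does not carry dataset properties, so a newly received dataset inherits the receiver's settings, and the blocks are rewritten with the receiver's own layout.

```shell
# On server3, pick a compression setting for the receiving side;
# it may differ from whatever the sender uses.
ssh server3 zfs set compression=lz4 pool2

# Send a snapshot; the received dataset pool2/data inherits
# compression=lz4 from pool2, independently of pool1's options.
zfs snapshot pool1/data@snap1
zfs send pool1/data@snap1 | ssh server3 zfs receive pool2/data
```

So even though the logical contents match, the on-disk representation on the receiving pool is built from scratch, which is why the copy is not 1:1.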

Yes, so if we assume ZFS send/receive is bug-free, having a second pool which receives the data of the first one (itself mirrored across different JBOD boxes) makes sense.

For the first pool, we could think about the following:
- server1 with its JBOD acts as an iSCSI target;
- server2, with the exact same JBOD, acts as iSCSI initiator and hosts a ZFS pool which mirrors each of server2's disks with one of server1's disks.
If ever server2 fails, server1 imports the pool and brings the service back up.
When server2 comes back, it acts as the new iSCSI target and gives its disks to server1, which reconstructs the mirror.
Disk redundancy, and hardware redundancy.
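A rough sketch of what this could look like on server2, assuming the iSCSI plumbing is already in place (device names da0..da3 are made up for illustration):

```shell
# server2's local disks: da0, da1
# server1's disks, visible on server2 through the iSCSI initiator: da2, da3
# Each vdev mirrors one local disk with its remote counterpart,
# so every block lives on both JBODs.
zpool create tank \
    mirror da0 da2 \
    mirror da1 da3

# If server2 dies, server1 still holds one complete side of every
# mirror on its own disks, so it can force-import the pool and
# carry the service (the pool will be degraded until server2 returns):
zpool import -f tank
```

The key property is that each mirror vdev spans both chassis, so losing either server leaves a complete copy of the pool behind.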

And regularly, this pool is sent/received to a different pool on server3, we never know...
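The periodic replication to server3 could be sketched roughly like this (snapshot names and the target dataset "backup" are made up; the receiving side is kept read-only between runs so it cannot diverge):

```shell
# One-time, on server3: prevent local writes to the replica.
# ssh server3 zfs set readonly=on backup

# Each replication run: take a new recursive snapshot, send the
# increment since the previous run, then roll the snapshot names.
zfs snapshot -r tank@repl-new
zfs send -R -i tank@repl-old tank@repl-new | ssh server3 zfs receive -dF backup
zfs destroy -r tank@repl-old
zfs rename -r tank@repl-new tank@repl-old
```

Since the replica keeps its own snapshot history, a corruption that slips through on tank could, at least in principle, be undone on server3 by rolling back to an earlier received snapshot.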

Sounds good (to me at least :)

Ben


