Date: Tue, 17 May 2016 00:36:51 +0200
From: Palle Girgensohn <girgen@FreeBSD.org>
To: Borja Marcos <borjam@sarenet.es>
Cc: freebsd-fs@freebsd.org
Subject: Re: Best practice for high availability ZFS pool
Message-ID: <89D73122-FAC7-4449-AAB3-C4BBE74B960A@FreeBSD.org>
In-Reply-To: <F3716A47-BC73-4C51-BF7C-911BCFE4D29F@sarenet.es>
References: <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org> <F3716A47-BC73-4C51-BF7C-911BCFE4D29F@sarenet.es>
> On 16 May 2016, at 15:51, Borja Marcos <borjam@sarenet.es> wrote:
>
>> On 16 May 2016, at 12:08, Palle Girgensohn <girgen@freebsd.org> wrote:
>>
>> Hi,
>>
>> We need to set up a ZFS pool with redundancy. The main goal is high availability - uptime.
>>
>> I can see a few paths to follow.
>>
>> 1. HAST + ZFS
>
> Which means that a possible corruption-causing bug in ZFS would vaporize the data of both replicas.
>
>> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>
> If you don't have a hard requirement for synchronous replication (and, in that case, I would opt for a more application-aware approach) it's the best method in my opinion.

That was exactly my thought 18 months ago, and we set up two systems with zfs snapshot + zfs send | ssh | zfs receive.

It works, but the problem is that it is just too slow: a complete sync takes something like 10 minutes across all the file systems. We are forced to sync the file systems one at a time to get the kind of control and separation we need (a minimal sketch of such a per-dataset sync is in the PS below). Even if we could speed that up somehow, we are really looking for a more resilient system.

Also, the constant snapshotting and writing makes scrub very slow, so every fourth weekend we have to tune down the amount of syncing in order to scrub. It's OK but not optimal, so we're pondering something better.

My first choice is really HAST at the moment, but I can't find much written about it in the last couple of years, apart from some articles about setting it up in very minimal testbeds, and posts about performance and stability troubles. This makes me wonder: is HAST actively maintained? Is it stable, used and loved by the community? I'd love to hear some success stories from fairly large installations of at least 20 TB or so.

Palle
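PS. For concreteness, here is a minimal sketch of the kind of per-dataset incremental sync I mean. The dataset name, the standby host and the snapshot labels are placeholders, not our actual setup:

#!/bin/sh
# Minimal incremental zfs send/receive sync for one dataset.
# Placeholder names: tank/data, standby.example.com, sync-prev/sync-new.
DS="tank/data"
PEER="standby.example.com"

# Take a fresh snapshot and send the delta against the common base
# snapshot @sync-prev, which must already exist on both sides.
zfs snapshot "${DS}@sync-new" || exit 1
if zfs send -i "${DS}@sync-prev" "${DS}@sync-new" | \
    ssh "${PEER}" zfs receive -F "${DS}"; then
    # Success: promote @sync-new to be the next base, on both sides.
    zfs destroy "${DS}@sync-prev"
    zfs rename "${DS}@sync-new" "${DS}@sync-prev"
    ssh "${PEER}" "zfs destroy ${DS}@sync-prev && zfs rename ${DS}@sync-new ${DS}@sync-prev"
else
    # Failure: drop the new snapshot and keep @sync-prev as the base.
    zfs destroy "${DS}@sync-new"
    exit 1
fi

This gets looped over each file system in turn; the very first run for a dataset of course needs a full (non-incremental) zfs send to seed the standby side.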
