Date: Wed, 17 Aug 2016 10:54:13 +0200
From: Julien Cigar <julien@perdition.city>
To: InterNetX - Juergen Gotteswinter <juergen.gotteswinter@internetx.com>
Cc: Borja Marcos <borjam@sarenet.es>, freebsd-fs@freebsd.org
Subject: Re: HAST + ZFS + NFS + CARP
Message-ID: <20160817085413.GE22506@mordor.lan>
In-Reply-To: <472bc879-977f-8c4c-c91a-84cc61efcd86@internetx.com>
References: <65906F84-CFFC-40E9-8236-56AFB6BE2DE1@ixsystems.com>
 <B48FB28E-30FA-477F-810E-DF4F575F5063@gmail.com>
 <61283600-A41A-4A8A-92F9-7FAFF54DD175@ixsystems.com>
 <20160704183643.GI41276@mordor.lan>
 <AE372BF0-02BE-4BF3-9073-A05DB4E7FE34@ixsystems.com>
 <20160704193131.GJ41276@mordor.lan>
 <E7D42341-D324-41C7-B03A-2420DA7A7952@sarenet.es>
 <20160811091016.GI70364@mordor.lan>
 <1AA52221-9B04-4CF6-97A3-D2C2B330B7F9@sarenet.es>
 <472bc879-977f-8c4c-c91a-84cc61efcd86@internetx.com>
[-- Attachment #1 --]

On Wed, Aug 17, 2016 at 09:25:30AM +0200, InterNetX - Juergen Gotteswinter wrote:
> Am 11.08.2016 um 11:24 schrieb Borja Marcos:
> >
> >> On 11 Aug 2016, at 11:10, Julien Cigar <julien@perdition.city> wrote:
> >>
> >> As I said in a previous post I tested the zfs send/receive approach
> >> (with zrep) and it works (more or less) perfectly.. so I concur with
> >> all you said, especially about off-site replication and synchronous
> >> replication.
> >>
> >> Out of curiosity I'm also testing a ZFS + iSCSI + CARP setup at the
> >> moment. I'm in the early tests and haven't done any heavy writes yet,
> >> but ATM it works as expected: I haven't managed to corrupt the zpool.
> >
> > I must be too old school, but I don't quite like the idea of using an
> > essentially unreliable transport (Ethernet) for low-level filesystem
> > operations.
> >
> > In case something went wrong, that approach could risk corrupting a
> > pool. Although, frankly, ZFS is extremely resilient. One of mine even
> > survived a SAS HBA problem that caused some silent corruption.
>
> try dual split import :D i mean, zpool -f import on 2 machines hooked up
> to the same disk chassis.

Yes, this is the first thing on the list to avoid .. :)

I'm still busy testing the whole setup here, including the MASTER ->
BACKUP failover script (CARP), but I think you can prevent that thanks
to:

- As long as ctld is running on the BACKUP, the disks are locked and you
  can't import the pool (even with -f). For example (filer2 is the
  BACKUP): https://gist.github.com/silenius/f9536e081d473ba4fddd50f59c56b58f

- The shared pool should not be mounted at boot, and you should ensure
  that the failover script is not executed during boot time either: this
  is to handle the case where both machines power off and/or come back
  up at the same time.
  Indeed, the CARP interface can "flip" its status if both machines are
  powered on at the same time, for example:
  https://gist.github.com/silenius/344c3e998a1889f988fdfc3ceba57aaf
  and you will end up with a split-brain scenario.

- Sometimes you'll need to reboot the MASTER for some $reason
  (freebsd-update, etc.) and the MASTER -> BACKUP switch should not
  happen; this can be handled with a trigger file or something like
  that.

- I still have to check that the order is right, but I think that as
  long as you shut down the replication interface and adapt the advskew
  (including the config file) of the CARP interface *before* the
  zpool import -f in the failover script, you can be relatively
  confident that nothing will be written to the iSCSI targets.

- A zpool scrub should be run at regular intervals.

This is my MASTER -> BACKUP CARP script ATM:
https://gist.github.com/silenius/7f6ee8030eb6b923affb655a259bfef7

Julien

> kaboom, really ugly kaboom. thats what is very likely to happen sooner
> or later, especially when it comes to homegrown automatism solutions.
> even the commercial products, where much more time/work goes into such
> solutions, fail on a regular basis.
>
> > The advantage of ZFS send/receive of datasets is, however, that you
> > can consider it essentially atomic. A transport corruption should not
> > cause trouble (apart from a failed "zfs receive") and with snapshot
> > retention you can even roll back. You can't roll back zpool
> > replications :)
> >
> > ZFS receive does a lot of sanity checks as well. As long as your zfs
> > receive doesn't involve a rollback to the latest snapshot, it won't
> > destroy anything by mistake. Just make sure that your replica
> > datasets aren't mounted and zfs receive won't complain.
> >
> > Cheers,
> >
> > Borja.
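[Editorial sketch] The ordering described above (replication interface down, advskew adjusted, only then zpool import -f, with a trigger file to suppress the switch during planned MASTER reboots) can be sketched roughly as follows. The interface names, advskew value, pool name and flag path are hypothetical, not taken from the linked gists, and every action is echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of a MASTER -> BACKUP promotion guard. All names below
# (em1, carp0, tank, /tmp/no_failover) are assumptions for illustration.
RUN="echo"                                   # drop this on a real system
MAINT_FLAG="${MAINT_FLAG:-/tmp/no_failover}" # touch before planned reboots
REPL_IF="em1"                                # dedicated replication interface
CARP_IF="carp0"
POOL="tank"

should_failover() {
    # A planned MASTER reboot must not trigger the switch
    [ -e "$MAINT_FLAG" ] && return 1
    return 0
}

promote() {
    # 1. cut the replication path so the old MASTER can no longer write
    $RUN ifconfig "$REPL_IF" down
    # 2. lower advskew so this node wins the CARP election
    $RUN ifconfig "$CARP_IF" advskew 10
    # 3. only then force-import the shared pool
    $RUN zpool import -f "$POOL"
}

if should_failover; then
    promote
else
    echo "failover suppressed: maintenance flag present"
fi
```

The key point is the sequence inside promote(): nothing touches the pool until the other node can no longer win the CARP election or write to the iSCSI targets.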
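[Editorial sketch] For comparison, a zrep-style send/receive cycle as discussed above might look like the following. Dataset names, the peer host and the state-file path are assumptions, and every command is echoed (dry run); the -u on zfs receive keeps the replica dataset unmounted, matching the advice that unmounted replicas keep zfs receive from complaining:

```shell
#!/bin/sh
# Dry-run sketch of one replication cycle. "tank/data", "backup/data",
# "filer2" and the state file are hypothetical names.
RUN="echo"                       # drop this on a real system
SRC="tank/data"
DST="backup/data"
PEER="filer2"
STATE="/tmp/last_repl_snap"      # records the last replicated snapshot

SNAP="$SRC@repl-$(date -u +%Y%m%d%H%M%S)"
$RUN zfs snapshot "$SNAP"

if [ -r "$STATE" ]; then
    # Later cycles: incremental stream since the previous snapshot.
    # On a real system: zfs send -i "$PREV" "$SNAP" | ssh "$PEER" zfs receive -u "$DST"
    PREV=$(cat "$STATE")
    $RUN zfs send -i "$PREV" "$SNAP"
    $RUN ssh "$PEER" zfs receive -u "$DST"
else
    # First cycle: full stream.
    # On a real system: zfs send "$SNAP" | ssh "$PEER" zfs receive -u "$DST"
    $RUN zfs send "$SNAP"
    $RUN ssh "$PEER" zfs receive -u "$DST"
fi
printf '%s\n' "$SNAP" > "$STATE"
```

Because each cycle is a discrete snapshot transfer, a failed or corrupted stream only costs you that one "zfs receive"; the previous snapshots on the replica remain intact and you can roll back, which is the atomicity advantage mentioned above.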
> >
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0

No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.

[-- Attachment #2: PGP signature --]
