Date: Fri, 1 Jul 2016 12:57:35 +0200
From: Julien Cigar <julien@perdition.city>
To: InterNetX - Juergen Gotteswinter <jg@internetx.com>
Cc: Ben RUBSON <ben.rubson@gmail.com>, freebsd-fs@freebsd.org
Subject: Re: HAST + ZFS + NFS + CARP
Message-ID: <20160701105735.GG5695@mordor.lan>
In-Reply-To: <f74627e3-604e-da71-c024-7e4e71ff36cb@internetx.com>
References: <71b8da1e-acb2-9d4e-5d11-20695aa5274a@internetx.com>
 <AD42D8FD-D07B-454E-B79D-028C1EC57381@gmail.com>
 <20160630153747.GB5695@mordor.lan>
 <63C07474-BDD5-42AA-BF4A-85A0E04D3CC2@gmail.com>
 <20160630163541.GC5695@mordor.lan>
 <50BF1AEF-3ECC-4C30-B8E1-678E02735BB5@gmail.com>
 <20160701084717.GE5695@mordor.lan>
 <47c7e1a5-6ae8-689c-9c2d-bb92f659ea43@internetx.com>
 <20160701101524.GF5695@mordor.lan>
 <f74627e3-604e-da71-c024-7e4e71ff36cb@internetx.com>
On Fri, Jul 01, 2016 at 12:18:39PM +0200, InterNetX - Juergen Gotteswinter wrote:
> On 01.07.2016 at 12:15, Julien Cigar wrote:
> > On Fri, Jul 01, 2016 at 11:42:13AM +0200, InterNetX - Juergen Gotteswinter wrote:
> >>>
> >>> Thank you very much for that "advice", it is much appreciated!
> >>>
> >>> I'll definitely go with iSCSI (with which I don't have that much
> >>> experience) over HAST.
> >>
> >> good luck, I would rather cut off one of my fingers than use something
> >> like this in production. but it's probably a quick way to go if one is
> >> looking for a new opportunity ;)
> >
> > why...? I guess iSCSI is slower but should be safer than HAST, no?
>
> do your testing, please. even with simulated short network cuts. 10-20
> secs are more than enough to give you a picture of what is going to happen

of course I'll test everything properly :) I don't have the hardware yet,
so at the moment I'm just looking at all the possible "candidates", and I'm
aware that redundant storage is not that easy to implement ...

but what solutions do we have? It's either CARP + ZFS + (HAST|iSCSI), or
zfs send | ssh zfs receive as you suggest (but it's not realtime), or a
distributed FS (which I avoid like the plague..)

> >>>
> >>> Maybe a stupid question but, assuming that on the MASTER ada{0,1} are
> >>> the local disks and da{0,1} the iSCSI disks exported from the SLAVE,
> >>> would you go with:
> >>>
> >>> $> zpool create storage mirror /dev/ada0s1 /dev/ada1s1 mirror /dev/da0
> >>> /dev/da1
> >>>
> >>> or rather:
> >>>
> >>> $> zpool create storage mirror /dev/ada0s1 /dev/da0 mirror /dev/ada1s1
> >>> /dev/da1
> >>>
> >>> I guess the latter is better, but it's just to be sure .. (or maybe it's
> >>> better to iSCSI export a ZVOL from the SLAVE?)
> >>
> >> are you really sure you understand what you're trying to do? even if it
> >> works right now, I bet that in a disaster case you will be lost.
> >
> > well this is pretty new to me, but I don't see what could be wrong with:
> >
> > $> zpool create storage mirror /dev/ada0s1 /dev/da0 mirror /dev/ada1s1
> > /dev/da1
> >
> > Let's take some use-cases:
> > - MASTER and SLAVE are alive: the data is "replicated" on both nodes.
> >   As iSCSI is used, ZFS sees all the details of the underlying disks
> >   and we can be sure that no corruption will occur (contrary to HAST)
> > - SLAVE dies: correct me if I'm wrong, but the pool is still available;
> >   fix the SLAVE, resilver, and that's it ..?
> > - MASTER dies: CARP will notice it, the SLAVE will take over the VIP,
> >   and the failover script will be executed with a $> zpool import -f
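To make this more concrete, here is a rough sketch of what I have in mind
for the SLAVE-side export and the resulting pool layout. The IP address,
target names and device names below are just placeholders, nothing has
been tested yet:

# /etc/ctl.conf on the SLAVE: export the two local disks with ctld(8)
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 192.168.100.2
}

target iqn.2016-07.lan.mordor:slave:disk0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/ada0s1
        }
}

target iqn.2016-07.lan.mordor:slave:disk1 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/ada1s1
        }
}

# on the MASTER: attach both targets with iscsictl(8); they should then
# show up as /dev/da0 and /dev/da1
$> iscsictl -A -p 192.168.100.2 -t iqn.2016-07.lan.mordor:slave:disk0
$> iscsictl -A -p 192.168.100.2 -t iqn.2016-07.lan.mordor:slave:disk1

# each mirror vdev pairs one local disk with one remote disk, so each
# node ends up holding a complete copy of the pool
$> zpool create storage mirror /dev/ada0s1 /dev/da0 mirror /dev/ada1s1 /dev/da1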
> >>> Correct me if I'm wrong but, from a safety point of view this setup is
> >>> also the safest, as you'll get the "fullsync" equivalent mode of HAST
> >>> (but it's also the slowest), so I can be 99.99% confident that the
> >>> pool on the SLAVE will never be corrupted, even in the case where the
> >>> MASTER suddenly dies (power outage, etc), and that a zpool import -f
> >>> storage will always work?
> >>
> >> 99.99% ? optimistic, very optimistic.
> >
> > the only situation where corruption could occur is some sort of network
> > corruption (a bug in the driver, a broken network card, etc), or a bug
> > in ZFS ...

but you'll have the same with a zfs send | ssh zfs receive

> optimistic
>
> >> we are playing with the recovery of a test pool which has been imported
> >> on two nodes at the same time. it looks pretty messy
> >>
> >>> One last thing: this "storage" pool will be exported through NFS to the
> >>> clients, and when a failover occurs they should, in theory, not notice
> >>> it. I know that it's pretty hypothetical but I wondered if pfsync could
> >>> play a role in this area (active connections)..?
> >>
> >> they will notice, and they will get stuck or worse (reboot)
> >
> > this is something that should be properly tested, I agree..
>
> do your testing, and keep your clients under load while testing. do
> writes onto the NFS mounts and then cut. you will be surprised about
> the impact.
>
> >>> Thanks!
> >>> Julien
> >>>
> >>>>>>>> ZFS would then know as soon as a disk is failing.
> >>>>>>>> And if the master fails, you only have to import (-f certainly,
> >>>>>>>> in case of a master power failure) on the slave.
> >>>>>>>>
> >>>>>>>> Ben
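For what it's worth, the failover hook I plan to test looks roughly like
the sketch below. The interface (igb0), the vhid (1), the script path and
the assumption that the targets are listed in /etc/iscsi.conf are all
placeholders, nothing of this has been tested yet:

# /etc/devd.conf fragment on both nodes: run a script when the CARP vhid
# on the storage interface becomes MASTER
notify 30 {
        match "system"          "CARP";
        match "subsystem"       "1@igb0";
        match "type"            "MASTER";
        action "/usr/local/sbin/become-master.sh";
};

#!/bin/sh
# /usr/local/sbin/become-master.sh: promote this node to MASTER
# try to attach the iSCSI disks exported by the other node; this will
# fail if the other node is really dead, which is fine (degraded pool)
iscsictl -Aa || true
# force-import the pool; -f is needed because it was last imported on
# the other head
zpool import -f storage
# (re)start the NFS services so the clients can reconnect to the VIP
service mountd onerestart
service nfsd onerestart

The symmetric BACKUP transition would do the reverse (zpool export,
iscsictl -Ra), but as you said this all needs heavy testing with the
clients under NFS load.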
--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0

No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.