Date: Fri, 1 Jul 2016 17:11:47 +0200
From: Julien Cigar <julien@perdition.city>
To: jg@internetx.com
Cc: freebsd-fs@freebsd.org
Subject: Re: HAST + ZFS + NFS + CARP
Message-ID: <20160701151146.GD41276@mordor.lan>
In-Reply-To: <4d13f123-de18-693a-f98b-d02c8864f02e@internetx.com>
References: <47c7e1a5-6ae8-689c-9c2d-bb92f659ea43@internetx.com>
 <20160701101524.GF5695@mordor.lan>
 <f74627e3-604e-da71-c024-7e4e71ff36cb@internetx.com>
 <20160701105735.GG5695@mordor.lan>
 <3d8c7c89-b24e-9810-f3c2-11ec1e15c948@internetx.com>
 <93E50E6B-8248-43B5-BE94-D94D53050E06@getsomewhere.net>
 <bbaf14e2-4ec6-545c-ba67-a1084100b05c@internetx.com>
 <20160701143917.GB41276@mordor.lan>
 <01b8a61e-739e-c41e-45bc-a84af0a9d8ab@internetx.com>
 <4d13f123-de18-693a-f98b-d02c8864f02e@internetx.com>
On Fri, Jul 01, 2016 at 04:44:24PM +0200, InterNetX - Juergen Gotteswinter wrote:
> don't get me wrong, what i try to say is that imho you are trying to
> reach something which looks great until something goes wrong.

I agree..! :)

> keep it simple, stupid simple, without many moving parts and avoid
> automagic voodoo wherever possible.

to be honest I've always been reluctant about "automatic failover", as I
think the problem is always not "how" to do it but "when".. and as Rick
said "The simpler/reliable way would be done manually by a sysadmin"..

> On 01.07.2016 at 16:41, InterNetX - Juergen Gotteswinter wrote:
> > On 01.07.2016 at 16:39, Julien Cigar wrote:
> >> On Fri, Jul 01, 2016 at 03:44:36PM +0200, InterNetX - Juergen Gotteswinter wrote:
> >>>
> >>> On 01.07.2016 at 15:18, Joe Love wrote:
> >>>>
> >>>>> On Jul 1, 2016, at 6:09 AM, InterNetX - Juergen Gotteswinter <jg@internetx.com> wrote:
> >>>>>
> >>>>> On 01.07.2016 at 12:57, Julien Cigar wrote:
> >>>>>> On Fri, Jul 01, 2016 at 12:18:39PM +0200, InterNetX - Juergen Gotteswinter wrote:
> >>>>>>
> >>>>>> of course I'll test everything properly :) I don't have the hardware yet
> >>>>>> so ATM I'm just looking for all the possible "candidates", and I'm
> >>>>>> aware that redundant storage is not that easy to implement ...
> >>>>>>
> >>>>>> but what solutions do we have? It's either CARP + ZFS + (HAST|iSCSI),
> >>>>>> or zfs send|ssh zfs receive as you suggest (but it's
> >>>>>> not realtime), or a distributed FS (which I avoid like the plague..)
> >>>>>
> >>>>> zfs send/receive can be nearly realtime.
> >>>>>
> >>>>> external jbods with cross cabled sas + commercial cluster solution like
> >>>>> rsf-1. anything else is a fragile construction which begs for disaster.
> >>>>
> >>>> This sounds similar to the CTL-HA code that went in last year, for which
> >>>> I haven't seen any sort of how-to. The RSF-1 stuff sounds like it has
> >>>> more scaling options, though. Which it probably should, given its
> >>>> commercial operation.
> >>>
> >>> rsf is what pacemaker / heartbeat tries to be, judge me for linking
> >>> whitepapers but in this case its not such evil marketing blah
> >>>
> >>> http://www.high-availability.com/wp-content/uploads/2013/01/RSF-1-HA-PLUGIN-ZFS-STORAGE-CLUSTER.pdf
> >>>
> >>> @ Julien
> >>>
> >>> seems like you take availability really seriously, so i guess you also got
> >>> plans how to handle network problems like dead switches, flaky
> >>> cables and so on.
> >>>
> >>> like using multiple network cards in the boxes, cross cabling between
> >>> the hosts (rs232 and ethernet of course, using proven reliable network
> >>> switches in a stacked configuration, for example cisco 3750 stacked). not
> >>> to forget redundant power feeds to redundant power supplies.
> >>
> >> the only thing that is not redundant (yet?) is our switch, an HP Pro
> >> Curve 2530-24G .. it's the next step :)
> >
> > Arubas, okay, a quick view in the spec sheet does not seem to list a
> > stacking option.
> >
> > what about power?
> >
> >>> if not, i would start again from scratch.
> >>>
> >>>> -Joe
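
For the archives, since "zfs send/receive can be nearly realtime" comes up
above: the usual way to get there is a snapshot-and-incremental-send loop.
A minimal sketch, untested; the dataset, host and interval are invented for
illustration, and it assumes the receiver was already seeded with one full
send and that root has ssh key access to the remote box:

    #!/bin/sh
    # Push incremental snapshots of tank/data to backup-host every
    # 60 seconds. Assumes at least one repl-* snapshot already exists
    # on both sides (from the initial full zfs send | zfs receive).
    DATASET=tank/data
    REMOTE=backup-host
    PREV=$(zfs list -H -t snapshot -o name -s creation | \
        grep "^${DATASET}@repl-" | tail -1)
    while true; do
        CUR="${DATASET}@repl-$(date +%Y%m%d%H%M%S)"
        zfs snapshot "${CUR}"
        # -i sends only the delta between PREV and CUR; -F on the
        # receiving side discards any local changes made there.
        if zfs send -i "${PREV}" "${CUR}" | \
            ssh "${REMOTE}" zfs receive -F "${DATASET}"; then
            zfs destroy "${PREV}"
            PREV="${CUR}"
        fi
        sleep 60
    done

The window for data loss is the snapshot interval, which is what makes it
"nearly" rather than actually realtime.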
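
On the CARP half of the "CARP + ZFS + (HAST|iSCSI)" option: the shared IP
lives on both heads and devd(8) can hook the MASTER/BACKUP transitions to
run whatever promotes or demotes the storage. Roughly, for FreeBSD 10 and
later (vhid, password, address and the two scripts are placeholders, not
from this thread; check carp(4) and the handbook before relying on this):

    # /etc/rc.conf on both heads; the lower advskew wins the election
    ifconfig_em0_alias0="inet vhid 1 advskew 0 pass mysecret alias 192.0.2.50/32"

    # /etc/devd.conf: run a script on CARP state changes for vhid 1 on em0
    notify 30 {
        match "system" "CARP";
        match "subsystem" "1@em0";
        match "type" "MASTER";
        action "/usr/local/sbin/become-master.sh";
    };
    notify 30 {
        match "system" "CARP";
        match "subsystem" "1@em0";
        match "type" "BACKUP";
        action "/usr/local/sbin/become-backup.sh";
    };

The hard part is, as said above, not this plumbing but deciding when a
failover is actually safe: exactly the "when" problem.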
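
And on the dead-switch / flaky-cable point: on the host side, lagg(4) in
failover mode covers the NIC-and-cable half without needing stackable
switches. A sketch for rc.conf, with interface names and the address made
up for illustration:

    # /etc/rc.conf: traffic uses em0; em1 takes over if em0 loses link
    cloned_interfaces="lagg0"
    ifconfig_em0="up"
    ifconfig_em1="up"
    ifconfig_lagg0="laggproto failover laggport em0 laggport em1 192.0.2.10/24"

With each port cabled to a different switch, a single switch death only
costs a link flap, though the second switch is still the missing piece in
the setup described above.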

-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0

No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.