Date:        Wed, 17 Aug 2016 14:33:56 +0100
From:        krad <kraduk@gmail.com>
To:          Julien Cigar <julien@perdition.city>
Cc:          InterNetX - Juergen Gotteswinter <juergen.gotteswinter@internetx.com>, FreeBSD FS <freebsd-fs@freebsd.org>
Subject:     Re: HAST + ZFS + NFS + CARP
Message-ID:  <CALfReye_f0_3kFF08KS0fCB5wbTKdZ5=ymh8WM5S18YEfbHqNg@mail.gmail.com>
In-Reply-To: <20160817113339.GH22506@mordor.lan>
References:  <20160704183643.GI41276@mordor.lan> <AE372BF0-02BE-4BF3-9073-A05DB4E7FE34@ixsystems.com> <20160704193131.GJ41276@mordor.lan> <E7D42341-D324-41C7-B03A-2420DA7A7952@sarenet.es> <20160811091016.GI70364@mordor.lan> <1AA52221-9B04-4CF6-97A3-D2C2B330B7F9@sarenet.es> <472bc879-977f-8c4c-c91a-84cc61efcd86@internetx.com> <20160817085413.GE22506@mordor.lan> <465bdec5-45b7-8a1d-d580-329ab6d4881b@internetx.com> <20160817095222.GG22506@mordor.lan> <20160817113339.GH22506@mordor.lan>
What are people's experiences of running something like MooseFS on top of ZFS? It looks really compelling on certain levels, but I'm not sure about the reality in a production network yet.

On 17 August 2016 at 12:33, Julien Cigar <julien@perdition.city> wrote:

> On Wed, Aug 17, 2016 at 11:52:22AM +0200, Julien Cigar wrote:
> > On Wed, Aug 17, 2016 at 11:05:46AM +0200, InterNetX - Juergen Gotteswinter wrote:
> > > On 17.08.2016 at 10:54, Julien Cigar wrote:
> > > > On Wed, Aug 17, 2016 at 09:25:30AM +0200, InterNetX - Juergen Gotteswinter wrote:
> > > >> On 11.08.2016 at 11:24, Borja Marcos wrote:
> > > >>>
> > > >>>> On 11 Aug 2016, at 11:10, Julien Cigar <julien@perdition.city> wrote:
> > > >>>>
> > > >>>> As I said in a previous post, I tested the zfs send/receive approach
> > > >>>> (with zrep) and it works (more or less) perfectly, so I concur with
> > > >>>> all you said, especially about off-site replication and synchronous
> > > >>>> replication.
> > > >>>>
> > > >>>> Out of curiosity I'm also testing a ZFS + iSCSI + CARP setup at the
> > > >>>> moment. I'm in the early tests, haven't done any heavy writes yet,
> > > >>>> but ATM it works as expected; I haven't managed to corrupt the zpool.
> > > >>>
> > > >>> I must be too old school, but I don't quite like the idea of using an
> > > >>> essentially unreliable transport (Ethernet) for low-level filesystem
> > > >>> operations.
> > > >>>
> > > >>> In case something went wrong, that approach could risk corrupting a
> > > >>> pool. Although, frankly, ZFS is extremely resilient. One of mine even
> > > >>> survived a SAS HBA problem that caused some silent corruption.
> > > >>
> > > >> Try a dual split import :D I mean, zpool import -f on two machines
> > > >> hooked up to the same disk chassis.
> > > >
> > > > Yes, this is the first thing on the list to avoid.. :)
> > > >
> > > > I'm still busy testing the whole setup here, including the
> > > > MASTER -> BACKUP failover script (CARP), but I think you can prevent
> > > > that thanks to:
> > > >
> > > > - As long as ctld is running on the BACKUP the disks are locked and
> > > > you can't import the pool (even with -f). For ex (filer2 is the BACKUP):
> > > > https://gist.github.com/silenius/f9536e081d473ba4fddd50f59c56b58f
> > > >
> > > > - The shared pool should not be mounted at boot, and you should ensure
> > > > that the failover script is not executed during boot time either: this
> > > > is to handle the case wherein both machines turn off and/or re-ignite
> > > > at the same time. Indeed, the CARP interface can "flip" its status if
> > > > both machines are powered on at the same time, for ex:
> > > > https://gist.github.com/silenius/344c3e998a1889f988fdfc3ceba57aaf
> > > > and you will have a split-brain scenario.
> > > >
> > > > - Sometimes you'll need to reboot the MASTER for some $reasons
> > > > (freebsd-update, etc.) and the MASTER -> BACKUP switch should not
> > > > happen; this can be handled with a trigger file or something like that.
> > > >
> > > > - I still have to check that the order is OK, but I think that as long
> > > > as you shut down the replication interface and adapt the advskew
> > > > (including the config file) of the CARP interface before the
> > > > zpool import -f in the failover script, you can be relatively confident
> > > > that nothing will be written on the iSCSI targets.
> > > >
> > > > - A zpool scrub should be run at regular intervals.
> > > >
> > > > This is my MASTER -> BACKUP CARP script ATM:
> > > > https://gist.github.com/silenius/7f6ee8030eb6b923affb655a259bfef7
> > > >
> > > > Julien
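(As an aside: the ordering Julien describes above, replication link down, advskew adjusted, only then the forced import, might look roughly like the untested sketch below. The interface names, pool name, vhid, and advskew value are made up for illustration; his actual script is the gist above.)

#!/bin/sh
# Untested sketch of a MASTER -> BACKUP failover, run when the CARP
# interface transitions to MASTER. All names here are hypothetical.

REPL_IF="igb1"          # dedicated iSCSI/replication interface
CARP_IF="igb0"          # interface carrying the CARP vhid
POOL="tank"             # the shared pool living on the iSCSI targets
TRIGGER="/var/run/failover.disabled"

# Planned maintenance on the MASTER (the trigger file idea)? Do nothing.
[ -e "${TRIGGER}" ] && exit 0

# 1. Make sure nothing more can be written over the replication link.
ifconfig ${REPL_IF} down

# 2. Claim the MASTER role: lower the advskew on the CARP vhid...
ifconfig ${CARP_IF} vhid 1 advskew 0
# ...and persist the new advskew in rc.conf so a reboot keeps the role
# (the exact rc.conf line depends on how the vhid is configured).

# 3. Only now force-import the shared pool, without mounting datasets
#    yet and without registering it in the boot-time cachefile.
zpool import -f -N -o cachefile=none ${POOL}
zfs mount -a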
> > >
> > > 100€ question without looking at that script in detail: yes, from a
> > > first view it's super simple, but why are solutions like rsf-1 so much
> > > more powerful / feature-rich? There's a reason for that, which is that
> > > they try to cover every possible situation (which makes more than
> > > sense for this).
> >
> > I've never used "rsf-1" so I can't say much more about it, but I have
> > no doubts about its ability to handle "complex situations" where
> > multiple nodes / networks are involved.
>
> BTW, for simple cases (two nodes, same network, one active node, ...) we
> could use both: ZFS + iSCSI + CARP on the two nodes, and
> zfs send | zfs receive on a third one.
>
> > > That script works for sure, within very limited cases imho.
> > >
> > > >> Kaboom, really ugly kaboom. That's what is very likely to happen
> > > >> sooner or later, especially when it comes to homegrown automatism
> > > >> solutions. Even the commercial ones, where much more time/work goes
> > > >> into such solutions, fail on a regular basis.
> > > >>
> > > >>> The advantage of ZFS send/receive of datasets is, however, that
> > > >>> you can consider it essentially atomic. A transport corruption
> > > >>> should not cause trouble (apart from a failed "zfs receive"), and
> > > >>> with snapshot retention you can even roll back. You can't roll
> > > >>> back zpool replications :)
> > > >>>
> > > >>> ZFS receive does a lot of sanity checks as well. As long as your
> > > >>> zfs receive doesn't involve a rollback to the latest snapshot, it
> > > >>> won't destroy anything by mistake. Just make sure that your
> > > >>> replica datasets aren't mounted and zfs receive won't complain.
> > > >>>
> > > >>> Cheers,
> > > >>>
> > > >>> Borja.
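(Similarly, the "zfs send | zfs receive on a third one" idea, combined with Borja's point about keeping the replica datasets unmounted, could be sketched as below. Pool, dataset, and host names are hypothetical; zrep, mentioned earlier in the thread, wraps essentially this loop plus locking and failover state.)

#!/bin/sh
# Untested sketch: snapshot-based replication to a third, receive-only node.

POOL="tank"
DS="${POOL}/data"
DEST="backup3"          # hypothetical third node
NOW="$(date +%Y%m%d%H%M)"

# Most recent existing snapshot of the dataset, if any, before we cut a new one.
PREV="$(zfs list -H -t snapshot -o name -s creation -d 1 ${DS} | tail -1)"

zfs snapshot "${DS}@repl-${NOW}"

if [ -n "${PREV}" ]; then
    # Incremental stream since the previous snapshot.
    zfs send -i "${PREV}" "${DS}@repl-${NOW}" | ssh ${DEST} zfs receive -u ${DS}
else
    # First run: full stream (assumes ${DS} does not yet exist on ${DEST}).
    zfs send "${DS}@repl-${NOW}" | ssh ${DEST} zfs receive -u ${DS}
fi

# receive -u leaves the replica unmounted, which is exactly Borja's advice:
# an unmounted replica won't make zfs receive complain, and a bad stream
# just fails the receive instead of hurting the destination pool.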
>
> --
> Julien Cigar
> Belgian Biodiversity Platform (http://www.biodiversity.be)
> PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
>
> No trees were killed in the creation of this message.
> However, many electrons were terribly inconvenienced.
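PS: on the "zpool scrub should be run at regular intervals" point above: FreeBSD's periodic(8) can already drive that, so no homegrown cron logic is strictly needed. A minimal example (pool name hypothetical):

# /etc/periodic.conf: let periodic(8) handle the regular scrubs
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_pools="tank"              # hypothetical pool name
daily_scrub_zfs_default_threshold="35"    # days between scrubs

# or, a plain cron(8) entry in root's crontab, e.g. monthly:
# 0 3 1 * *  /sbin/zpool scrub tank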