From owner-freebsd-fs@freebsd.org Thu Aug 18 10:38:27 2016
From: krad <kraduk@gmail.com>
Date: Thu, 18 Aug 2016 11:38:24 +0100
Subject: Re: HAST + ZFS + NFS + CARP
To: InterNetX - Juergen Gotteswinter
Cc: Ben RUBSON, FreeBSD FS

"new day, new things learned :)"

job done for today then, it must be beer o clock?

On 18 August 2016 at 09:02, InterNetX - Juergen Gotteswinter
<juergen.gotteswinter@internetx.com> wrote:

> new day, new things learned :)
>
> thanks!
>
> but like said, zrep does its own locking in zfs properties.
> so even this is fine
>
> while true; do zrep sync all; done
>
> see
>
> http://www.bolthole.com/solaris/zrep/
>
> the properties look like this
>
> tank/vmail  redundant_metadata  all             default
> tank/vmail  zrep:savecount      5               local
> tank/vmail  zrep:lock-time      20160620101703  local
> tank/vmail  zrep:master         yes             local
> tank/vmail  zrep:src-fs         tank/vmail      local
> tank/vmail  zrep:dest-host      stor1           local
> tank/vmail  zrep:src-host       stor2           local
> tank/vmail  zrep:dest-fs        tank/vmail      local
> tank/vmail  zrep:lock-pid       10887           local
>
> it also takes care of the replication partner; the replicated datasets
> are read-only until you tell zrep "go go go, become master"
>
> Simple usage summary:
>     zrep (init|-i) ZFS/fs remotehost remoteZFSpool/fs
>     zrep (sync|-S) [-q seconds] ZFS/fs
>     zrep (sync|-S) [-q seconds] all
>     zrep (sync|-S) ZFS/fs@snapshot -- temporary retroactive sync
>     zrep (status|-s) [-v] [(-a|ZFS/fs)]
>     zrep refresh ZFS/fs -- pull version of sync
>     zrep (list|-l) [-Lv]
>     zrep (expire|-e) [-L] (ZFS/fs ...)|(all)|()
>     zrep (changeconfig|-C) [-f] ZFS/fs remotehost remoteZFSpool/fs
>     zrep (changeconfig|-C) [-f] [-d] ZFS/fs srchost srcZFSpool/fs
>     zrep failover [-L] ZFS/fs
>     zrep takeover [-L] ZFS/fs
>
> zrep failover pool/ds -> master sets pool read-only, connects to slave,
> sets pool on slave rw
>
> should be easy to combine with carp/devd, but this is the land of voodoo
> automagic again which i dont trust that much.
>
> Am 18.08.2016 um 09:40 schrieb Ben RUBSON:
> > Yep this is better:
> >
> > if mkdir
> > then
> >     do_your_job
> >     rm -rf
> > fi
> >
> >> On 18 Aug 2016, at 09:38, InterNetX - Juergen Gotteswinter
> >> <juergen.gotteswinter@internetx.com> wrote:
> >>
> >> uhm, dont really investigated if it is or not. add a "sync" after that?
> >> or replace it?
> >>
> >> but anyway, thanks for the hint. will dig into this!
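The mkdir pattern above works because mkdir(2) either creates the directory or fails with EEXIST in a single syscall, so two overlapping cron runs can never both enter the critical section. A minimal runnable sketch of that idea; the lock path and log path are placeholders, and an echo stands in for the real zrep call:

```shell
#!/bin/sh
# Atomic lock via mkdir: the existence check and the lock acquisition are
# one step, so there is no race window, unlike "test -f && touch".
LOCKDIR="${TMPDIR:-/tmp}/zrep-sync.lock"   # e.g. /var/run/zrep.lock in production

if mkdir "$LOCKDIR" 2>/dev/null; then
    # critical section -- placeholder for: /blah/path/zrep sync all
    echo "zrep sync all" >> "${TMPDIR:-/tmp}/zfsrepli.log"
    rmdir "$LOCKDIR"                       # release the lock
else
    echo "previous sync still running, skipping" >&2
fi
```

Adding a `trap 'rmdir "$LOCKDIR"' EXIT` after acquiring the lock would also drop it if zrep dies mid-run; without that, a crashed sync leaves a stale lock directory behind.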
> >>
> >> Am 18.08.2016 um 09:36 schrieb krad:
> >>> I didnt think touch was atomic, mkdir is though
> >>>
> >>> On 18 August 2016 at 08:32, InterNetX - Juergen Gotteswinter
> >>> <juergen.gotteswinter@internetx.com> wrote:
> >>>
> >>> Am 17.08.2016 um 20:03 schrieb Linda Kateley:
> >>>> I just do consulting so I don't always get to see the end of the
> >>>> project. Although we are starting to do more ongoing support so we can
> >>>> see the progress.
> >>>>
> >>>> I have worked with some of the guys from high-availability.com for
> >>>> maybe 20 years. RSF-1 is the cluster that is bundled with nexenta. Does
> >>>> work beautifully with omni/illumos. The one customer I have running it
> >>>> in prod is an isp in south america running openstack and zfs on freebsd
> >>>> as iscsi. Big boxes, 90+ drives per frame. If someone would like to try
> >>>> it, i have some contacts there. Ping me offlist.
> >>>
> >>> no offense, but it sounds a bit like marketing.
> >>>
> >>> here: running nexenta ha setup since several years with one catastrophic
> >>> failure due to split brain
> >>>
> >>>> You do risk losing data if you batch zfs send. It is very hard to run
> >>>> that real time.
> >>>
> >>> depends on how much data changes aka delta size
> >>>
> >>>> You have to take the snap then send the snap. Most people run in cron,
> >>>> even if it's not in cron, you would want one to finish before you
> >>>> started the next.
> >>>
> >>> thats the reason why lock files were invented, tools like zrep handle
> >>> that themself via additional zfs properties
> >>>
> >>> or, if one does not trust a single layer
> >>>
> >>> -- snip --
> >>> #!/bin/sh
> >>> if [ ! -f /var/run/replic ] ; then
> >>>     touch /var/run/replic
> >>>     /blah/path/zrep sync all >> /var/log/zfsrepli.log
> >>>     rm -f /var/run/replic
> >>> fi
> >>> -- snip --
> >>>
> >>> something like this, simple
> >>>
> >>>> If you lose the sending host before the receive is complete you won't
> >>>> have a full copy.
> >>>
> >>> if rsf fails, and you end up in split brain you lose way more. been
> >>> there, seen that.
> >>>
> >>>> With zfs though you will probably still have the data on the sending
> >>>> host, however long it takes to bring it back up. RSF-1 runs in the zfs
> >>>> stack and sends the writes to the second system. It's kind of pricey,
> >>>> but actually much less expensive than commercial alternatives.
> >>>>
> >>>> Anytime you run anything sync it adds latency but makes things safer.
> >>>
> >>> not surprising, it all depends on the usecase
> >>>
> >>>> There is also a cool tool I like, called zerto for vmware that sits in
> >>>> the hypervisor and sends a sync copy of a write locally and then an
> >>>> async remotely. It's pretty cool. Although I haven't run it myself, have
> >>>> a bunch of customers running it. I believe it works with proxmox too.
> >>>>
> >>>> Most people I run into (these days) don't mind losing 5 or even 30
> >>>> minutes of data. Small shops.
> >>>
> >>> you talk about minutes, what delta size are we talking here about? why
> >>> not using zrep in a loop for example
> >>>
> >>>> They usually have a copy somewhere else.
> >>>> Or the cost of 5-30 minutes isn't that great. I used to work as a
> >>>> datacenter architect for sun/oracle with only fortune 500. There losing
> >>>> 1 sec could put large companies out of business. I worked with banks and
> >>>> exchanges.
> >>>
> >>> again, usecase. i bet 99% on this list are not operating fortune 500
> >>> bank filers
> >>>
> >>>> They couldn't ever lose a single transaction.
> >>>> Most people nowadays do the replication/availability in the application
> >>>> though and don't care about underlying hardware, especially disk.
> >>>>
> >>>> On 8/17/16 11:55 AM, Chris Watson wrote:
> >>>>> Of course, if you are willing to accept some amount of data loss that
> >>>>> opens up a lot more options. :)
> >>>>>
> >>>>> Some may find that acceptable though. Like turning off fsync with
> >>>>> PostgreSQL to get much higher throughput. As long as you are made
> >>>>> *very* aware of the risks.
> >>>>>
> >>>>> It's good to have input in this thread from one with more experience
> >>>>> with RSF-1 than the rest of us. You confirm what others have said
> >>>>> about RSF-1, that it's stable and works well. What were you deploying
> >>>>> it on?
> >>>>>
> >>>>> Chris
> >>>>>
> >>>>> Sent from my iPhone 5
> >>>>>
> >>>>> On Aug 17, 2016, at 11:18 AM, Linda Kateley wrote:
> >>>>>
> >>>>>> The question I always ask, as an architect, is "can you lose 1 minute
> >>>>>> worth of data?" If you can, then batched replication is perfect. If
> >>>>>> you can't.. then HA. Every place I have positioned it, rsf-1 has
> >>>>>> worked extremely well. If i remember right, it works at the dmu. I
> >>>>>> would suggest try it. They have been trying to have a full freebsd
> >>>>>> solution, I have several customers running it well.
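The "can you lose a minute of data" framing above boils down to a snapshot-then-incremental-send loop on a timer. A dry-run sketch of one batch iteration; the dataset, host, and snapshot names are invented, and run() only records the commands it would execute:

```shell
#!/bin/sh
# One iteration of a batched zfs send/receive cycle, as a dry run:
# run() logs each command instead of executing it.
LOG="${TMPDIR:-/tmp}/batchrepl.log"
: > "$LOG"
run() { echo "$@" >> "$LOG"; }

POOL=tank/vmail          # assumed dataset name
DEST=stor2               # assumed replication partner
PREV=rep-1; NEXT=rep-2   # assumed snapshot names

run "zfs snapshot $POOL@$NEXT"
# incremental send keeps each batch small; the delta size bounds how much
# data a crash between batches can cost you
run "zfs send -i $POOL@$PREV $POOL@$NEXT | ssh $DEST zfs receive -u $POOL"
run "zfs destroy $POOL@$PREV"   # expire snapshots you no longer need
```

This is essentially what zrep automates, plus the locking and master/slave bookkeeping stored in the zfs properties shown earlier.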
> >>>>>>
> >>>>>> linda
> >>>>>>
> >>>>>> On 8/17/16 4:52 AM, Julien Cigar wrote:
> >>>>>>> On Wed, Aug 17, 2016 at 11:05:46AM +0200, InterNetX - Juergen
> >>>>>>> Gotteswinter wrote:
> >>>>>>>>
> >>>>>>>> Am 17.08.2016 um 10:54 schrieb Julien Cigar:
> >>>>>>>>> On Wed, Aug 17, 2016 at 09:25:30AM +0200, InterNetX - Juergen
> >>>>>>>>> Gotteswinter wrote:
> >>>>>>>>>>
> >>>>>>>>>> Am 11.08.2016 um 11:24 schrieb Borja Marcos:
> >>>>>>>>>>>> On 11 Aug 2016, at 11:10, Julien Cigar wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> As I said in a previous post I tested the zfs send/receive
> >>>>>>>>>>>> approach (with zrep) and it works (more or less) perfectly.. so
> >>>>>>>>>>>> I concur in all what you said, especially about off-site
> >>>>>>>>>>>> replicate and synchronous replication.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Out of curiosity I'm also testing a ZFS + iSCSI + CARP at the
> >>>>>>>>>>>> moment, I'm in the early tests, haven't done any heavy writes
> >>>>>>>>>>>> yet, but ATM it works as expected, I haven't managed to corrupt
> >>>>>>>>>>>> the zpool.
> >>>>>>>>>>> I must be too old school, but I don't quite like the idea of
> >>>>>>>>>>> using an essentially unreliable transport (Ethernet) for
> >>>>>>>>>>> low-level filesystem operations.
> >>>>>>>>>>>
> >>>>>>>>>>> In case something went wrong, that approach could risk
> >>>>>>>>>>> corrupting a pool. Although, frankly, ZFS is extremely
> >>>>>>>>>>> resilient. One of mine even survived a SAS HBA problem that
> >>>>>>>>>>> caused some silent corruption.
> >>>>>>>>>> try dual split import :D i mean, zpool -f import on 2 machines
> >>>>>>>>>> hooked up to the same disk chassis.
> >>>>>>>>> Yes this is the first thing on the list to avoid .. :)
> >>>>>>>>>
> >>>>>>>>> I'm still busy to test the whole setup here, including the
> >>>>>>>>> MASTER -> BACKUP failover script (CARP), but I think you can
> >>>>>>>>> prevent that thanks to:
> >>>>>>>>>
> >>>>>>>>> - As long as ctld is running on the BACKUP the disks are locked
> >>>>>>>>> and you can't import the pool (even with -f), for ex (filer2 is
> >>>>>>>>> the BACKUP):
> >>>>>>>>> https://gist.github.com/silenius/f9536e081d473ba4fddd50f59c56b58f
> >>>>>>>>>
> >>>>>>>>> - The shared pool should not be mounted at boot, and you should
> >>>>>>>>> ensure that the failover script is not executed during boot time
> >>>>>>>>> too: this is to handle the case wherein both machines turn off
> >>>>>>>>> and/or re-ignite at the same time. Indeed, the CARP interface can
> >>>>>>>>> "flip" its status if both machines are powered on at the same
> >>>>>>>>> time, for ex:
> >>>>>>>>> https://gist.github.com/silenius/344c3e998a1889f988fdfc3ceba57aaf
> >>>>>>>>> and you will have a split-brain scenario
> >>>>>>>>>
> >>>>>>>>> - Sometimes you'll need to reboot the MASTER for some $reasons
> >>>>>>>>> (freebsd-update, etc) and the MASTER -> BACKUP switch should not
> >>>>>>>>> happen, this can be handled with a trigger file or something like
> >>>>>>>>> that
> >>>>>>>>>
> >>>>>>>>> - I still have to check if the order is OK, but I think that as
> >>>>>>>>> long as you shutdown the replication interface and adapt the
> >>>>>>>>> advskew (including the config file) of the CARP interface before
> >>>>>>>>> the zpool import -f in the failover script you can be relatively
> >>>>>>>>> confident that nothing will be written on the iSCSI targets
> >>>>>>>>>
> >>>>>>>>> - A zpool scrub should be run at regular intervals
> >>>>>>>>>
> >>>>>>>>> This is my MASTER -> BACKUP CARP script ATM
> >>>>>>>>> https://gist.github.com/silenius/7f6ee8030eb6b923affb655a259bfef7
> >>>>>>>>>
> >>>>>>>>> Julien
> >>>>>>>>>
> >>>>>>>> 100€ question without detailed looking at that script. yes from a
> >>>>>>>> first view its super simple, but: why are solutions like rsf-1 so
> >>>>>>>> much more powerful / featurerich. Theres a reason for, which is
> >>>>>>>> that they try to cover every possible situation (which makes more
> >>>>>>>> than sense for this).
> >>>>>>> I've never used "rsf-1" so I can't say much more about it, but I
> >>>>>>> have no doubts about its ability to handle "complex situations",
> >>>>>>> where multiple nodes / networks are involved.
> >>>>>>>
> >>>>>>>> That script works for sure, within very limited cases imho
> >>>>>>>>
> >>>>>>>>>> kaboom, really ugly kaboom. thats what is very likely to happen
> >>>>>>>>>> sooner or later especially when it comes to homegrown automatism
> >>>>>>>>>> solutions. even the commercial parts where much more time/work
> >>>>>>>>>> goes into such solutions fail in a regular manner
> >>>>>>>>>>
> >>>>>>>>>>> The advantage of ZFS send/receive of datasets is, however, that
> >>>>>>>>>>> you can consider it essentially atomic. A transport corruption
> >>>>>>>>>>> should not cause trouble (apart from a failed "zfs receive")
> >>>>>>>>>>> and with snapshot retention you can even roll back. You can't
> >>>>>>>>>>> roll back zpool replications :)
> >>>>>>>>>>>
> >>>>>>>>>>> ZFS receive does a lot of sanity checks as well. As long as
> >>>>>>>>>>> your zfs receive doesn't involve a rollback to the latest
> >>>>>>>>>>> snapshot, it won't destroy anything by mistake. Just make sure
> >>>>>>>>>>> that your replica datasets aren't mounted and zfs receive won't
> >>>>>>>>>>> complain.
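The "keep the replicas unmounted" advice maps to two dataset properties plus receiving with -u. A dry-run sketch of the receive-side setup; dataset and snapshot names are invented, and run() only records the commands:

```shell
#!/bin/sh
# Dry-run sketch of a receive side that keeps the replica unmounted and
# read-only, so an incremental zfs receive never has to force a rollback.
# run() logs the commands instead of executing them.
LOG="${TMPDIR:-/tmp}/recv-setup.log"
: > "$LOG"
run() { echo "$@" >> "$LOG"; }

DS=tank/vmail   # replicated dataset -- assumed name
SNAP=rep-42     # incoming snapshot  -- assumed name

run zfs set readonly=on "$DS"       # replica cannot diverge between syncs
run zfs set canmount=noauto "$DS"   # and is never mounted at boot
# -u: do not mount after receiving; -F is deliberately absent, so a
# diverged replica fails loudly instead of being rolled back silently
run zfs receive -u "$DS@$SNAP"
```

Note zrep sets the read-only side up for you; this only spells out what "the replicated datasets are read only until you tell zrep to become master" looks like in plain zfs commands.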
> >>>>>>>>>>>
> >>>>>>>>>>> Cheers,
> >>>>>>>>>>>
> >>>>>>>>>>> Borja.
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"