From owner-freebsd-fs@freebsd.org Thu Aug 18 07:36:28 2016
From: krad <kraduk@gmail.com>
Date: Thu, 18 Aug 2016 08:36:24 +0100
Subject: Re: HAST + ZFS + NFS + CARP
To: InterNetX - Juergen Gotteswinter <juergen.gotteswinter@internetx.com>
Cc: linda@kateley.com, Chris Watson, FreeBSD FS
List-Id: Filesystems

I didn't think touch was atomic; mkdir is, though.

On 18 August 2016 at 08:32, InterNetX - Juergen Gotteswinter
<juergen.gotteswinter@internetx.com> wrote:

>
>
> On 17.08.2016 20:03, Linda Kateley wrote:
> > I just do consulting, so I don't always get to see the end of the
> > project. Although we are starting to do more ongoing support, so we
> > can see the progress.
> >
> > I have worked with some of the guys from high-availability.com for
> > maybe 20 years.
> > RSF-1 is the cluster that is bundled with nexenta. It works
> > beautifully with omni/illumos. The one customer I have running it in
> > prod is an ISP in South America running OpenStack and ZFS on FreeBSD
> > as iSCSI. Big boxes, 90+ drives per frame. If someone would like to
> > try it, I have some contacts there. Ping me offlist.
>
> no offense, but it sounds a bit like marketing.
>
> here: been running a nexenta HA setup for several years, with one
> catastrophic failure due to split brain
>
> > You do risk losing data if you batch zfs send. It is very hard to
> > run that in real time.
>
> depends on how much data changes, aka delta size
>
> > You have to take the snap, then send the snap. Most people run it
> > in cron; even if it's not in cron, you would want one to finish
> > before you started the next.
>
> that's the reason lock files were invented; tools like zrep handle
> that themselves via additional zfs properties
>
> or, if one does not trust a single layer:
>
> -- snip --
> #!/bin/sh
> if [ ! -f /var/run/replic ] ; then
>     touch /var/run/replic
>     /blah/path/zrep sync all >> /var/log/zfsrepli.log
>     rm -f /var/run/replic
> fi
> -- snip --
>
> something like this, simple
>
> > If you lose the sending host before the receive is complete you
> > won't have a full copy.
>
> if rsf fails and you end up in split brain you lose way more. been
> there, seen that.
>
> > With zfs, though, you will probably still have the data on the
> > sending host, however long it takes to bring it back up. RSF-1 runs
> > in the zfs stack and sends the writes to the second system. It's
> > kind of pricey, but actually much less expensive than commercial
> > alternatives.
> >
> > Anytime you run anything sync it adds latency but makes things
> > safer.
>
> not surprising, it all depends on the use case
>
> > There is also a cool tool I like, called zerto, for vmware, that
> > sits in the hypervisor and sends a sync copy of a write locally and
> > then an async one remotely. It's pretty cool.
> > Although I haven't run it myself, I have a bunch of customers
> > running it. I believe it works with proxmox too.
> >
> > Most people I run into (these days) don't mind losing 5 or even 30
> > minutes of data. Small shops.
>
> you talk about minutes; what delta size are we talking about here?
> why not use zrep in a loop, for example
>
> > They usually have a copy somewhere else. Or the cost of 5-30
> > minutes isn't that great. I used to work as a datacenter architect
> > for sun/oracle with only fortune 500 companies. There, losing 1 sec
> > could put large companies out of business. I worked with banks and
> > exchanges.
>
> again, use case. i bet 99% on this list are not operating fortune
> 500 bank filers
>
> > They couldn't ever lose a single transaction. Most people nowadays
> > do the replication/availability in the application, though, and
> > don't care about the underlying hardware, especially disk.
> >
> >
> > On 8/17/16 11:55 AM, Chris Watson wrote:
> >> Of course, if you are willing to accept some amount of data loss
> >> that opens up a lot more options. :)
> >>
> >> Some may find that acceptable though. Like turning off fsync with
> >> PostgreSQL to get much higher throughput. As long as you are made
> >> *very* aware of the risks.
> >>
> >> It's good to have input in this thread from someone with more
> >> experience with RSF-1 than the rest of us. You confirm what others
> >> have said about RSF-1, that it's stable and works well. What were
> >> you deploying it on?
> >>
> >> Chris
> >>
> >> Sent from my iPhone 5
> >>
> >> On Aug 17, 2016, at 11:18 AM, Linda Kateley wrote:
> >>
> >>> The question I always ask, as an architect, is "can you lose 1
> >>> minute worth of data?" If you can, then batched replication is
> >>> perfect. If you can't, then HA. Every place I have positioned it,
> >>> rsf-1 has worked extremely well. If I remember right, it works at
> >>> the dmu. I would suggest trying it.
> >>> They have been trying to have a full FreeBSD solution; I have
> >>> several customers running it well.
> >>>
> >>> linda
> >>>
> >>>
> >>> On 8/17/16 4:52 AM, Julien Cigar wrote:
> >>>> On Wed, Aug 17, 2016 at 11:05:46AM +0200, InterNetX - Juergen
> >>>> Gotteswinter wrote:
> >>>>>
> >>>>> On 17.08.2016 10:54, Julien Cigar wrote:
> >>>>>> On Wed, Aug 17, 2016 at 09:25:30AM +0200, InterNetX - Juergen
> >>>>>> Gotteswinter wrote:
> >>>>>>>
> >>>>>>> On 11.08.2016 11:24, Borja Marcos wrote:
> >>>>>>>>> On 11 Aug 2016, at 11:10, Julien Cigar wrote:
> >>>>>>>>>
> >>>>>>>>> As I said in a previous post, I tested the zfs send/receive
> >>>>>>>>> approach (with zrep) and it works (more or less) perfectly,
> >>>>>>>>> so I concur with all you said, especially about off-site
> >>>>>>>>> replication and synchronous replication.
> >>>>>>>>>
> >>>>>>>>> Out of curiosity I'm also testing a ZFS + iSCSI + CARP
> >>>>>>>>> setup at the moment. I'm in the early tests, haven't done
> >>>>>>>>> any heavy writes yet, but ATM it works as expected; I
> >>>>>>>>> haven't managed to corrupt the zpool.
> >>>>>>>> I must be too old school, but I don't quite like the idea of
> >>>>>>>> using an essentially unreliable transport (Ethernet) for
> >>>>>>>> low-level filesystem operations.
> >>>>>>>>
> >>>>>>>> In case something went wrong, that approach could risk
> >>>>>>>> corrupting a pool. Although, frankly, ZFS is extremely
> >>>>>>>> resilient. One of mine even survived a SAS HBA problem that
> >>>>>>>> caused some silent corruption.
> >>>>>>> try dual split import :D i mean, zpool import -f on 2
> >>>>>>> machines hooked up to the same disk chassis.
> >>>>>> Yes, this is the first thing on the list to avoid ..
> >>>>>> :)
> >>>>>>
> >>>>>> I'm still busy testing the whole setup here, including the
> >>>>>> MASTER -> BACKUP failover script (CARP), but I think you can
> >>>>>> prevent that thanks to:
> >>>>>>
> >>>>>> - As long as ctld is running on the BACKUP, the disks are
> >>>>>> locked and you can't import the pool (even with -f), for ex
> >>>>>> (filer2 is the BACKUP):
> >>>>>> https://gist.github.com/silenius/f9536e081d473ba4fddd50f59c56b58f
> >>>>>>
> >>>>>> - The shared pool should not be mounted at boot, and you
> >>>>>> should ensure that the failover script is not executed during
> >>>>>> boot time either: this is to handle the case wherein both
> >>>>>> machines turn off and/or re-ignite at the same time. Indeed,
> >>>>>> the CARP interface can "flip" its status if both machines are
> >>>>>> powered on at the same time, for ex:
> >>>>>> https://gist.github.com/silenius/344c3e998a1889f988fdfc3ceba57aaf
> >>>>>> and you will have a split-brain scenario
> >>>>>>
> >>>>>> - Sometimes you'll need to reboot the MASTER for some $reasons
> >>>>>> (freebsd-update, etc) and the MASTER -> BACKUP switch should
> >>>>>> not happen; this can be handled with a trigger file or
> >>>>>> something like that
> >>>>>>
> >>>>>> - I still have to check if the order is OK, but I think that
> >>>>>> as long as you shut down the replication interface and adapt
> >>>>>> the advskew (including the config file) of the CARP interface
> >>>>>> before the zpool import -f in the failover script, you can be
> >>>>>> relatively confident that nothing will be written on the iSCSI
> >>>>>> targets
> >>>>>>
> >>>>>> - A zpool scrub should be run at regular intervals
> >>>>>>
> >>>>>> This is my MASTER -> BACKUP CARP script ATM:
> >>>>>> https://gist.github.com/silenius/7f6ee8030eb6b923affb655a259bfef7
> >>>>>>
> >>>>>> Julien
> >>>>>>
> >>>>> 100€ question without looking at that script in detail: yes, at
> >>>>> first view it's super simple, but why are solutions like rsf-1
> >>>>> so much more powerful / feature-rich? There's a reason, which
> >>>>> is that they try to cover every possible situation (which makes
> >>>>> more than sense for this).
> >>>> I've never used "rsf-1" so I can't say much more about it, but I
> >>>> have no doubts about its ability to handle "complex situations",
> >>>> where multiple nodes / networks are involved.
> >>>>
> >>>>> That script works for sure, within very limited cases imho
> >>>>>
> >>>>>>> kaboom, really ugly kaboom. that's what is very likely to
> >>>>>>> happen sooner or later, especially when it comes to homegrown
> >>>>>>> automatism solutions. even the commercial solutions, where
> >>>>>>> much more time/work goes into such things, fail on a regular
> >>>>>>> basis
> >>>>>>>
> >>>>>>>> The advantage of ZFS send/receive of datasets is, however,
> >>>>>>>> that you can consider it essentially atomic. A transport
> >>>>>>>> corruption should not cause trouble (apart from a failed
> >>>>>>>> "zfs receive") and with snapshot retention you can even roll
> >>>>>>>> back. You can't roll back zpool replications :)
> >>>>>>>>
> >>>>>>>> ZFS receive does a lot of sanity checks as well. As long as
> >>>>>>>> your zfs receive doesn't involve a rollback to the latest
> >>>>>>>> snapshot, it won't destroy anything by mistake. Just make
> >>>>>>>> sure that your replica datasets aren't mounted and zfs
> >>>>>>>> receive won't complain.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Cheers,
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Borja.
_______________________________________________
freebsd-fs@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
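P.S. The atomicity point above can be sketched like this: mkdir either
creates the lock directory or fails because it already exists, in a
single atomic syscall, whereas the quoted test-then-touch pair leaves a
window in which two concurrent runs can both see no lock file and both
proceed. A rough sketch only; /blah/path/zrep and the log path are
placeholders carried over from the quoted snippet:

```shell
#!/bin/sh
# Sketch of an mkdir-based lock: mkdir either creates the directory or
# fails (EEXIST) in one atomic operation, so two concurrent runs
# cannot both "win" the way they can with [ ! -f ... ] && touch.
LOCKDIR=/var/run/replic.lock

if mkdir "$LOCKDIR" 2>/dev/null; then
    # We own the lock; make sure it is released however we exit.
    trap 'rmdir "$LOCKDIR"' EXIT
    # Placeholder replication command carried over from the quoted
    # snippet; guarded so the sketch is a no-op if zrep is absent.
    if [ -x /blah/path/zrep ]; then
        /blah/path/zrep sync all >> /var/log/zfsrepli.log
    fi
else
    echo "replication already running, skipping" >&2
fi
```

Same idea, just without the gap between the test and the touch.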