From owner-freebsd-fs@freebsd.org Fri Jul  1 11:09:58 2016
From: InterNetX - Juergen Gotteswinter <jg@internetx.com>
Reply-To: jg@internetx.com
To: Julien Cigar
Cc: Ben RUBSON, freebsd-fs@freebsd.org
Subject: Re: HAST + ZFS + NFS + CARP
Date: Fri, 1 Jul 2016 13:09:52 +0200
Message-ID: <3d8c7c89-b24e-9810-f3c2-11ec1e15c948@internetx.com>
In-Reply-To: <20160701105735.GG5695@mordor.lan>

On 01.07.2016 12:57, Julien Cigar wrote:
> On Fri, Jul 01, 2016 at 12:18:39PM +0200, InterNetX - Juergen Gotteswinter wrote:
>> On 01.07.2016 12:15, Julien Cigar wrote:
>>> On Fri, Jul 01, 2016 at 11:42:13AM +0200, InterNetX - Juergen Gotteswinter wrote:
>>>>>
>>>>> Thank you very much for the advice, it is much appreciated!
>>>>>
>>>>> I'll definitely go with iSCSI (with which I don't have that much
>>>>> experience) over HAST.
>>>>
>>>> good luck, I'd rather cut off one of my fingers than use something
>>>> like this in production. but it's probably a quick way to go if your
>>>> goal is to find a new job ;)
>>>
>>> why...? I guess iSCSI is slower but should be safer than HAST, no?
>>
>> do your testing, please, even with simulated short network cuts. 10-20
>> seconds are more than enough to give you a picture of what is going
>> to happen
>
> of course I'll test everything properly :) I don't have the hardware
> yet, so at the moment I'm just looking at all the possible candidates,
> and I'm aware that redundant storage is not that easy to implement ...
>
> but what solutions do we have?
> It's either CARP + ZFS + (HAST|iSCSI), or zfs send | ssh zfs receive
> as you suggest (but it's not realtime), or a distributed FS (which I
> avoid like the plague..)

zfs send/receive can be nearly realtime.

external JBODs with cross-cabled SAS plus a commercial cluster solution
like RSF-1. anything else is a fragile construction that is begging for
disaster.

>
>>>>>
>>>>> Maybe a stupid question, but assuming that on the MASTER ada{0,1}
>>>>> are the local disks and da{0,1} are the iSCSI disks exported from
>>>>> the SLAVE, would you go with:
>>>>>
>>>>> $> zpool create storage mirror /dev/ada0s1 /dev/ada1s1 mirror /dev/da0
>>>>> /dev/da1
>>>>>
>>>>> or rather:
>>>>>
>>>>> $> zpool create storage mirror /dev/ada0s1 /dev/da0 mirror /dev/ada1s1
>>>>> /dev/da1
>>>>>
>>>>> I guess the former is better, but it's just to be sure .. (or maybe
>>>>> it's better to export a ZVOL from the SLAVE over iSCSI?)
>>>>>
>>>>
>>>> are you really sure you understand what you're trying to do? even if
>>>> it works right now, I bet in a disaster case you will be lost.
>>>>
>>>
>>> well, this is pretty new to me, but I don't see what could be wrong
>>> with:
>>>
>>> $> zpool create storage mirror /dev/ada0s1 /dev/da0 mirror /dev/ada1s1
>>> /dev/da1
>>>
>>> Let's take some use-cases:
>>> - MASTER and SLAVE are alive, the data is "replicated" on both
>>>   nodes. As iSCSI is used, ZFS will see all the details of the
>>>   underlying disks and we can be sure that no corruption will occur
>>>   (contrary to HAST)
>>> - the SLAVE dies: correct me if I'm wrong, but the pool is still
>>>   available; fix the SLAVE, resilver, and that's it ..?
>>> - the MASTER dies: CARP will notice it and the SLAVE will take over
>>>   the VIP, and the failover script will be executed with a
>>>   $> zpool import -f
>>>
>>>>> Correct me if I'm wrong but, from a safety point of view, this setup
>>>>> is also the safest as you'll get the "fullsync" equivalent mode of
>>>>> HAST (but it's also the slowest), so I can be 99.99% confident that
>>>>> the pool on the SLAVE will never be corrupted, even in the case
>>>>> where the MASTER suddenly dies (power outage, etc), and that a
>>>>> zpool import -f storage will always work?
>>>>
>>>> 99.99%? optimistic, very optimistic.
>>>
>>> the only situation where corruption could occur is some sort of
>>> network corruption (a bug in the driver, a broken network card, etc),
>>> or a bug in ZFS ... but you'll have the same with a
>>> zfs send | ssh zfs receive
>>>
>>
>> optimistic
>>
>>>> we are playing with recovery of a test pool which has been imported
>>>> on two nodes at the same time. looks pretty messy
>>>>
>>>>>
>>>>> One last thing: this "storage" pool will be exported through NFS to
>>>>> the clients, and when a failover occurs they should, in theory, not
>>>>> notice it. I know that it's pretty hypothetical, but I wondered if
>>>>> pfsync could play a role in this area (active connections)..?
>>>>>
>>>>
>>>> they will notice, and they will get stuck or worse (reboot)
>>>
>>> this is something that should be properly tested, I agree..
>>>
>>
>> do your testing, and keep your clients under load while testing. do
>> writes onto the NFS mounts and then cut. you will be surprised by the
>> impact.
>>
>>>>> Thanks!
>>>>> Julien
>>>>>
>>>>>>
>>>>>>>>>> ZFS would then know as soon as a disk is failing.
>>>>>>>>>> And if the master fails, you only have to import (-f certainly,
>>>>>>>>>> in case of a master power failure) on the slave.
>>>>>>>>>>
>>>>>>>>>> Ben
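
For concreteness, here is a rough sketch of how the iSCSI-backed layout
discussed above could be wired up on FreeBSD. Every name in it (the IQNs,
the 192.168.10.2 portal address, the device nodes) is made up for
illustration, so treat it as a sketch rather than a tested recipe. One
point worth keeping in mind: only the second layout, where each mirror
pairs one local disk with one iSCSI disk, keeps the pool usable when a
whole node disappears; in the first layout the two iSCSI disks form a
single top-level vdev, and losing the SLAVE takes that vdev, and with it
the whole pool, offline.

# --- on the SLAVE: export the two data disks with ctld ---
# /etc/ctl.conf (hypothetical example)
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 192.168.10.2
}

target iqn.2016-07.com.example:slave.disk0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/ada0s1
        }
}

target iqn.2016-07.com.example:slave.disk1 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/ada1s1
        }
}

$> sysrc ctld_enable=YES && service ctld start

# --- on the MASTER: log in to both targets, then build the pool ---
$> sysrc iscsid_enable=YES && service iscsid start
$> iscsictl -A -p 192.168.10.2 -t iqn.2016-07.com.example:slave.disk0
$> iscsictl -A -p 192.168.10.2 -t iqn.2016-07.com.example:slave.disk1

# each mirror pairs one local disk with one disk living on the SLAVE
$> zpool create storage \
       mirror /dev/ada0s1 /dev/da0 \
       mirror /dev/ada1s1 /dev/da1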
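
Since near-realtime zfs send/receive also came up: the usual way to get
close to realtime is a small loop that takes a snapshot and ships the
increment every minute or so. A minimal sketch, assuming a dataset named
storage/data that has already been seeded on the slave with one full
send, and root ssh access to a host called "slave" (all of these names
are placeholders, not a recommendation):

#!/bin/sh
# Rough incremental replication loop (illustrative sketch only).
# The initial full copy is assumed to have been done once by hand, e.g.:
#   zfs snapshot storage/data@seed
#   zfs send storage/data@seed | ssh root@slave zfs receive storage/data

DATASET=storage/data
REMOTE=root@slave

while :; do
        # the newest existing snapshot becomes the incremental base
        prev=$(zfs list -H -t snapshot -d 1 -o name -s creation ${DATASET} | tail -1)
        snap=${DATASET}@repl-$(date +%Y%m%d%H%M%S)
        zfs snapshot "${snap}"
        # -F rolls the slave back to the common snapshot before applying
        # the increment; old repl-* snapshots still need pruning eventually
        zfs send -i "${prev}" "${snap}" | ssh ${REMOTE} zfs receive -F ${DATASET}
        sleep 60    # replication lag is roughly this interval plus transfer time
done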