Skip site navigation (1)Skip section navigation (2)
Date:      Fri, 1 Jul 2016 12:18:39 +0200
From:      InterNetX - Juergen Gotteswinter <jg@internetx.com>
To:        Julien Cigar <julien@perdition.city>
Cc:        Ben RUBSON <ben.rubson@gmail.com>, freebsd-fs@freebsd.org
Subject:   Re: HAST + ZFS + NFS + CARP
Message-ID:  <f74627e3-604e-da71-c024-7e4e71ff36cb@internetx.com>
In-Reply-To: <20160701101524.GF5695@mordor.lan>
References:  <20160630144546.GB99997@mordor.lan> <71b8da1e-acb2-9d4e-5d11-20695aa5274a@internetx.com> <AD42D8FD-D07B-454E-B79D-028C1EC57381@gmail.com> <20160630153747.GB5695@mordor.lan> <63C07474-BDD5-42AA-BF4A-85A0E04D3CC2@gmail.com> <20160630163541.GC5695@mordor.lan> <50BF1AEF-3ECC-4C30-B8E1-678E02735BB5@gmail.com> <20160701084717.GE5695@mordor.lan> <47c7e1a5-6ae8-689c-9c2d-bb92f659ea43@internetx.com> <20160701101524.GF5695@mordor.lan>

next in thread | previous in thread | raw e-mail | index | archive | help
Am 01.07.2016 um 12:15 schrieb Julien Cigar:
> On Fri, Jul 01, 2016 at 11:42:13AM +0200, InterNetX - Juergen Gotteswinter wrote:
>>>
>>> Thank you very much for those "advices", it is much appreciated! 
>>>
>>> I'll definitively go with iSCSI (for which I haven't that much 
>>> experience) over HAST.
>>
>> good luck, i rather cut one of my fingers than using something like this
>> in production. but its probably a quick way if one targets to find a new
>> opportunity ;)
> 
> why...? I guess iSCSI is slower but should be safer than HAST, no?

do your testing, please. even with simulated short network cuts. 10-20
secs are way enaugh to give you a picture of what is going to happen

>>
>>>
>>> Maybe a stupid question but, assuming on the MASTER with ada{0,1} the 
>>> local disks and da{0,1} the exported iSCSI disks from the SLAVE, would 
>>> you go with:
>>>
>>> $> zpool create storage mirror /dev/ada0s1 /dev/ada1s1 mirror /dev/da0
>>> /dev/da1
>>>
>>> or rather:
>>>
>>> $> zpool create storage mirror /dev/ada0s1 /dev/da0 mirror /dev/ada1s1
>>> /dev/da1
>>>
>>> I guess the former is better, but it's just to be sure .. (or maybe it's
>>> better to iSCSI export a ZVOL from the SLAVE?)
>>>
>>
>> are you really sure you understand what you trying to do? even if its
>> currently so, i bet in a desaster case you will be lost.
>>
>>
> 
> well this is pretty new to me, but I don't see what could be wrong with:
> 
> $> zpool create storage mirror /dev/ada0s1 /dev/da0 mirror /dev/ada1s1
> /dev/da1
> 
> Let's take some use-cases:
> - MASTER and SLAVE are alive, the data is "replicated" on both
>   nodes. As iSCSI is used, ZFS will see all the details of the
>   underlying disks and we can be sure that no corruption will occur
>   (contrary to HAST)
> - SLAVE die, correct me if I'm wrong the but pool is still available,
>   fix the SLAVE, resilver and that's it ..?
> - MASTER die, CARP will notice it and SLAVE will take the VIP, the
>   failover script will be executed with a $> zpool import -f
> 
>>> Correct me if I'm wrong but, from a safety point of view this setup is 
>>> also the safest as you'll get the "fullsync" equivalent mode of HAST
>>> (but but it's also the slowest), so I can be 99,99% confident that the
>>> pool on the SLAVE will never be corrupted, even in the case where the
>>> MASTER suddently die (power outage, etc), and that a zpool import -f
>>> storage will always work?
>>
>> 99,99% ? optimistic, very optimistic.
> 
> the only situation where corruption could occur is some sort of network
> corruption (bug in the driver, broken network card, etc), or a bug in
> ZFS ... but you'll have the same with a zfs send|ssh zfs receive
> 
>>

optimistic

>> we are playing with recovery of a test pool which has been imported on
>> two nodes at the same time. looks pretty messy
>>
>>>
>>> One last thing: this "storage" pool will be exported through NFS on the 
>>> clients, and when a failover occur they should, in theory, not notice
>>> it. I know that it's pretty hypothetical but I wondered if pfsync could
>>> play a role in this area (active connections)..?
>>>
>>
>> they will notice, and they will stuck or worse (reboot)
> 
> this is something that should be properly tested I agree..
> 

do your testing, and keep your clients under load while testing. do
writes onto the nfs mounts and then cut. you will be surprised about the
impact.

>>



>>> Thanks!
>>> Julien
>>>
>>>>
>>>>>>>> ZFS would then know as soon as a disk is failing.
>>>>>>>> And if the master fails, you only have to import (-f certainly, in case of a master power failure) on the slave.
>>>>>>>>
>>>>>>>> Ben
>>>> _______________________________________________
>>>> freebsd-fs@freebsd.org mailing list
>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>>>
> 




Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?f74627e3-604e-da71-c024-7e4e71ff36cb>