Date:      Tue, 24 Apr 2018 14:27:38 +0300
From:      Mikhail Zakharov <zmey20000@yahoo.com>
To:        karli@inparadise.se
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: ctl_isc_lun_sync: Received conflicting HA LUN
Message-ID:  <56E4773F-4EAD-47EB-A803-38BFCD8C63F8@yahoo.com>
In-Reply-To: <1524567842.9560.66.camel@inparadise.se>
References:  <4cb4aa83-bd49-0c20-4e41-c11c682b0570@sentex.net> <F908B78A-DD9B-4204-BA1E-24CE38059ACF@yahoo.com> <1e1e7cd5-0797-c168-fbce-a36edc6a432e@sentex.net> <1524550160.1130.6.camel@inparadise.se> <615DFFBB-239A-4350-B961-FD10D0C9A8DD@yahoo.com> <1524567621.9560.65.camel@inparadise.se> <1524567842.9560.66.camel@inparadise.se>

Ah, and unfortunately CTL HA is a two-node cluster; as far as I remember, there is
no possibility to add a third one. So the third node would be an external arbiter
in that case.


> On 24 Apr 2018, at 14:04, Karli Sjöberg <karli@inparadise.se> wrote:
> 
>> On Tue, 2018-04-24 at 13:00 +0200, Karli Sjöberg via freebsd-fs wrote:
>>> On Tue, 2018-04-24 at 12:32 +0300, Mikhail Zakharov wrote:
>>> Hi Karli,
>>> 
>>> Thank you, I'm just exploring the storage abilities of my preferred
>>> OS - FreeBSD.
>>> 
>>> Three nodes are preferable for choosing the quorum, for sure, but my
>>> idea was not to establish contacts between the nodes. Instead, BQ uses
>>> a small partition for the "quorum" on the same storage where the data
>>> volume is located.
>> 
>> Yes, of course. But there's nothing you from having three nodes
> 
> 's/nothing you/nothing stopping you/'
> 
>> connected to the same partition and being able to make more accurate
>> choices on when to take over?
>> 
>> If one node stops updating stamps, take over. If two nodes stop
>> updating, then the problem is likely network-related and we _must not_
>> take over, to avoid split brain. Something like that?
>> 
>> /K
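
For illustration only, the tie-breaker rule you describe could be sketched
roughly like this (a Python sketch with made-up names; it only shows the
decision, not how the stamps are actually read from the quorum partition):

STALE = 30  # seconds without a fresh stamp counts as "silent"

def should_take_over(peer_stamp_ages):
    """peer_stamp_ages: seconds since each peer last updated its stamp."""
    silent = sum(1 for age in peer_stamp_ages if age > STALE)
    if silent == 0:
        return False   # everyone is stamping, nothing to do
    if silent < len(peer_stamp_ages):
        return True    # one peer fell silent: assume it died, take over
    return False       # all peers look silent: most likely our own network
                       # or path problem, do NOT take over (avoid split brain)

With two peers on the shared quorum partition, should_take_over([5, 120])
returns True, while should_take_over([120, 120]) returns False.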
>> 
>>> And if a node loses access to the quorum, it means it loses access
>>> to the data volume too. Now, BQ runs on both nodes and both BQ
>>> instances write stamps to the quorum partition. If for any reason
>>> BQ on one node detects that the other node has stopped updating its
>>> stamps, it performs the failover procedure. It's quite a questionable,
>>> crude way, I can agree, and that's why I always write a warning to use
>>> the BeaST for testing purposes only.
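
To make the idea concrete, each BQ instance does roughly the following.
This is a very simplified Python sketch; the device path, slot layout and
function names are made up for illustration and are not the actual BQ code:

# Both nodes write their own timestamp ("stamp") into a fixed slot of the
# small shared quorum partition and watch the other node's slot.
import os
import struct
import time

QUORUM_DEV = "/dev/da0p1"   # hypothetical shared quorum partition
SLOT_SIZE = 512             # one block per node
STALE = 30                  # seconds without a fresh peer stamp -> failover

def write_stamp(node_id):
    fd = os.open(QUORUM_DEV, os.O_WRONLY)
    try:
        os.lseek(fd, node_id * SLOT_SIZE, os.SEEK_SET)
        os.write(fd, struct.pack("<d", time.time()).ljust(SLOT_SIZE, b"\0"))
    finally:
        os.close(fd)

def read_stamp(node_id):
    fd = os.open(QUORUM_DEV, os.O_RDONLY)
    try:
        os.lseek(fd, node_id * SLOT_SIZE, os.SEEK_SET)
        return struct.unpack("<d", os.read(fd, SLOT_SIZE)[:8])[0]
    finally:
        os.close(fd)

def take_over():
    # placeholder: in reality this is where the surviving node would switch
    # its CTL/ALUA role and take ownership of the data volume
    print("peer stamps are stale, performing failover")

def run(my_id, peer_id):
    while True:
        write_stamp(my_id)                        # prove this node is alive
        if time.time() - read_stamp(peer_id) > STALE:
            take_over()                           # peer stopped stamping
        time.sleep(5)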
>>> 
>>> Best regards,
>>> Mike
>>> 
>>>> On 24 Apr 2018, at 9:09, Karli Sjöberg <karli@inparadise.se>
>>>> wrote:
>>>> 
>>>>>> On Mon, 2018-04-23 at 13:11 -0400, Mike Tancsa wrote:
>>>>>> On 4/23/2018 12:59 PM, Mikhail Zakharov wrote:
>>>>>> 
>>>>>> Hello Mike,
>>>>>> 
>>>>>> Thank you for your interest in my paper. I appreciate it very
>>>>>> much! Your error may be a consequence of the initial HA
>>>>>> misconfiguration. What is in your /boot/loader.conf? Although the
>>>>>> described config is quite simple, I can recheck the instructions
>>>>>> in my paper in a couple of weeks only; unfortunately, I'm on
>>>>>> vacation right now.
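
For reference, the CTL HA part of /boot/loader.conf uses the tunables
described in ctl(4); the fragment below is only an illustrative example,
not a verified configuration (ha_id must differ on the two nodes, and some
of these settings may equally well be applied later with sysctl):

# /boot/loader.conf - illustrative CTL HA fragment, values are examples only
kern.cam.ctl.ha_id=1     # unique HA node position: 1 here, 2 on the peer
kern.cam.ctl.ha_mode=2   # HA operation mode, see ctl(4) for the meanings
# kern.cam.ctl.ha_peer is typically set via sysctl, e.g.
# "listen <ip>:<port>" on one node and "connect <ip>:<port>" on the other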
>>>> 
>>>> [snip]
>>>> 
>>>> I read your articles on CTL HA, BQ and BeaST, and just wanted to
>>>> say
>>>> they are amazing, good job!
>>>> 
>>>> One thing I'm wondering about, though, is whether you can claim HA
>>>> with just two nodes; usually you need at least three, where the
>>>> third is a tie-breaker. Otherwise, with your current setup, both
>>>> systems may lose contact with each other while both are still
>>>> powered on, leading to potential split-brain situations. What are
>>>> your thoughts about that?
>>>> 
>>>> /K



