Date:      Fri, 1 Jul 2016 17:03:36 +0200
From:      Julien Cigar <julien@perdition.city>
To:        InterNetX - Juergen Gotteswinter <jg@internetx.com>
Cc:        Joe Love <joe@getsomewhere.net>, freebsd-fs@freebsd.org
Subject:   Re: HAST + ZFS + NFS + CARP
Message-ID:  <20160701150336.GC41276@mordor.lan>
In-Reply-To: <01b8a61e-739e-c41e-45bc-a84af0a9d8ab@internetx.com>
References:  <20160701084717.GE5695@mordor.lan> <47c7e1a5-6ae8-689c-9c2d-bb92f659ea43@internetx.com> <20160701101524.GF5695@mordor.lan> <f74627e3-604e-da71-c024-7e4e71ff36cb@internetx.com> <20160701105735.GG5695@mordor.lan> <3d8c7c89-b24e-9810-f3c2-11ec1e15c948@internetx.com> <93E50E6B-8248-43B5-BE94-D94D53050E06@getsomewhere.net> <bbaf14e2-4ec6-545c-ba67-a1084100b05c@internetx.com> <20160701143917.GB41276@mordor.lan> <01b8a61e-739e-c41e-45bc-a84af0a9d8ab@internetx.com>

On Fri, Jul 01, 2016 at 04:41:59PM +0200, InterNetX - Juergen Gotteswinter wrote:
> Am 01.07.2016 um 16:39 schrieb Julien Cigar:
> > On Fri, Jul 01, 2016 at 03:44:36PM +0200, InterNetX - Juergen Gotteswinter wrote:
> >>
> >>
> >> Am 01.07.2016 um 15:18 schrieb Joe Love:
> >>>
> >>>> On Jul 1, 2016, at 6:09 AM, InterNetX - Juergen Gotteswinter <jg@internetx.com> wrote:
> >>>>
> >>>> Am 01.07.2016 um 12:57 schrieb Julien Cigar:
> >>>>> On Fri, Jul 01, 2016 at 12:18:39PM +0200, InterNetX - Juergen Gotteswinter wrote:
> >>>>>
> >>>>> of course I'll test everything properly :) I don't have the hardware
> >>>>> yet, so ATM I'm just looking for all the possible "candidates", and
> >>>>> I'm aware that redundant storage is not that easy to implement ...
> >>>>>
> >>>>> but what solutions do we have? It's either CARP + ZFS + (HAST|iSCSI),
> >>>>> or zfs send | ssh zfs receive as you suggest (but it's not realtime),
> >>>>> or a distributed FS (which I avoid like the plague..)
> >>>>
> >>>> zfs send/receive can be nearly realtime.
> >>>>
> >>>> external JBODs with cross-cabled SAS + a commercial cluster solution
> >>>> like RSF-1. anything else is a fragile construction which begs for
> >>>> disaster.
> >>>
> >>> This sounds similar to the CTL-HA code that went in last year, for
> >>> which I haven't seen any sort of how-to.  The RSF-1 stuff sounds like
> >>> it has more scaling options, though.  Which it probably should, given
> >>> its commercial operation.
> >>
> >> rsf is what pacemaker / heartbeat tries to be. judge me for linking
> >> whitepapers, but in this case it's not the usual evil marketing blah
> >>
> >> http://www.high-availability.com/wp-content/uploads/2013/01/RSF-1-HA-PLUGIN-ZFS-STORAGE-CLUSTER.pdf
> >>
> >>
> >> @ Julien
> >>
> >> seems like you take availability really seriously, so i guess you also
> >> have plans for how to handle network problems like dead switches, flaky
> >> cables and so on.
> >>
> >> like using multiple network cards in the boxes, cross cabling between
> >> the hosts (rs232 and ethernet of course), and proven reliable network
> >> switches in a stacked configuration (stacked cisco 3750s, for example).
> >> not to forget redundant power feeds to redundant power supplies.
> >
> > the only thing that is not redundant (yet?) is our switch, an HP
> > ProCurve 2530-24G .. it's the next step :)
>
> Arubas, okay, a quick look at the spec sheet does not seem to list a
> stacking option.
>
> what about power?

there is a "diesel generator" for the server room, and redundant power
supply on "most critical" servers (our PostgreSQL servers for example).=20

The router and firewall (Soekris 6501) run CARP / pfsync, same for the
load balancer (HAProxy), the local Unbound, etc.
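
For what it's worth, the CARP side of it boils down to a couple of
rc.conf lines on each box; a minimal sketch (the interface names,
addresses and password here are made up, adapt to your setup):

    # master; the backup box uses its own address and a higher advskew
    # so it loses the election while the master is alive
    ifconfig_em0="inet 192.168.1.2/24"
    ifconfig_em0_alias0="inet vhid 1 advskew 0 pass s3cret alias 192.168.1.1/32"
    # keep pf state tables in sync so a failover doesn't drop connections
    pf_enable="YES"
    pfsync_enable="YES"
    pfsync_syncdev="em1"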

(Everything is jailed and managed by SaltStack, so in the worst case
scenario I could always redeploy "something" (a service, a webapp, etc)
in minutes on $somemachine ..)
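
The "in minutes" part is nothing more exotic than pointing Salt at a
spare minion, roughly (minion and state names are made up):

    # reapply the jail/service states for the webapp on a spare box
    salt 'somemachine' state.apply webapp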

No, the real SPOF that should be fixed quickly ATM is the shared file
storage: it's an NFS mount on a single machine (with redundant power
supply), and if the hardware dies we're in trouble (...)
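
If we end up going the zfs send | zfs receive route suggested above, I
imagine the "nearly realtime" loop would be something like this (dataset
and standby names are made up, and it assumes an initial full send of
@rep_prev to the standby has already been done):

    #!/bin/sh
    # ship the delta since the last run to the standby, then rotate the
    # snapshot pair on both sides so the next run has a common base
    DATASET="tank/nfs"
    STANDBY="standby.mordor.lan"

    zfs snapshot "${DATASET}@rep_new"
    # -F rolls the standby back to its latest snapshot before receiving
    zfs send -i "${DATASET}@rep_prev" "${DATASET}@rep_new" \
        | ssh "${STANDBY}" "zfs receive -F ${DATASET}"
    zfs destroy "${DATASET}@rep_prev"
    zfs rename "${DATASET}@rep_new" "${DATASET}@rep_prev"
    ssh "${STANDBY}" "zfs destroy ${DATASET}@rep_prev; \
        zfs rename ${DATASET}@rep_new ${DATASET}@rep_prev"

Run from cron every minute (or in a loop with a short sleep) that gets
close to realtime, which is basically what tools like zrep automate.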

>
> >
> >>
> >> if not, i would start again from scratch.
> >>
> >>>
> >>> -Joe
> >>>

-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.



