Date: Fri, 3 Jun 2016 17:01:20 +0200
From: Julien Cigar <julien.cigar@gmail.com>
To: Valeri Galtsev <galtsev@kicp.uchicago.edu>
Cc: Steve O'Hara-Smith <steve@sohara.org>, freebsd-questions@freebsd.org
Subject: Re: redundant storage
Message-ID: <20160603150120.GP95511@mordor.lan>
In-Reply-To: <61821.128.135.52.6.1464964464.squirrel@cosmo.uchicago.edu>
References: <20160603083843.GK95511@mordor.lan> <20160603104138.fdf3c0ac4be93769be6da401@sohara.org> <20160603101446.GM95511@mordor.lan> <20160603114746.6b75e6e79ecd51fe14311e40@sohara.org> <20160603115020.GN95511@mordor.lan> <61821.128.135.52.6.1464964464.squirrel@cosmo.uchicago.edu>
On Fri, Jun 03, 2016 at 09:34:24AM -0500, Valeri Galtsev wrote:
> 
> On Fri, June 3, 2016 6:50 am, Julien Cigar wrote:
> > On Fri, Jun 03, 2016 at 11:47:46AM +0100, Steve O'Hara-Smith wrote:
> >> On Fri, 3 Jun 2016 12:14:46 +0200
> >> Julien Cigar <julien.cigar@gmail.com> wrote:
> >>
> >> > On Fri, Jun 03, 2016 at 10:41:38AM +0100, Steve O'Hara-Smith wrote:
> >> > > Hi,
> >> > >
> >> > > Just one change - don't use RAID1, use ZFS mirrors. ZFS does
> >> > > better RAID than any hardware controller.
> >> >
> >> > right.. I must admit that I haven't looked at ZFS yet (I'm still
> >> > using UFS + gmirror), but this will be the opportunity to do so..!
> >> >
> >> > Does ZFS play well with HAST?
> >>
> >> Never tried it, but it should work well enough: ZFS sits on top of
> >> geom providers, so it should be possible to use the pool on the
> >> primary.
> >>
> >> One concern would be that since all reads come from local storage,
> >> the secondary machine never gets scrubbed, and silent corruption never
> >> gets detected on the secondary. A periodic (say weekly) switchover and
> >> scrub takes care of this concern. Silent corruption is rare, but the
> >> bigger the pool and the longer it's used, the more likely it is to
> >> happen eventually; detection and repair of this is one of ZFS's
> >> advantages over hardware RAID, so it's good not to defeat it.
> >
> > Thanks, I'll read a bit on ZFS this weekend..!
> >
> > My ultimate goal would be for the HAST storage to survive a hard
> > reboot / unplugged network cable / ... during heavy write I/O, and for
> > the switch between the two nodes to be transparent to the clients,
> > without any data loss of course ... feasible or utopian? Needless to
> > say, what I want to avoid at all costs is the storage becoming
> > corrupted and unrecoverable..!
> 
> Sounds pretty much like a distributed file system solution. I tried one
> (moosefs) which I gave up on, and after I asked (on this list) for
> advice about other options, the next candidate emerged: glusterfs,
> which I haven't had a chance to set up yet. You may want to search this
> list's archives; the experts there gave me really good advice.

sorry, but I avoid distributed filesystems like the plague :)

> 
> Valeri
> 
> >
> >> Drive failures on the primary will wind up causing both the primary
> >> and the secondary to be rewritten when the drive is replaced - this
> >> could probably be avoided by switching primaries and letting HAST
> >> deal with the replacement.
> >>
> >> Another very minor issue would be that any corrective rewrites (for
> >> detected corruption) will happen on both copies, but that's harmless
> >> and there really should be *very* few of these.
> >>
> >> One final concern, but it's purely HAST and not really ZFS: writing
> >> a large file flat out will likely saturate your LAN, with half the
> >> capacity going to copying the data for HAST. A private backend link
> >> between the two boxes would be a good idea (or 10 gigabit ethernet).
> >
> > yep, that's what I had in mind..! one NIC for the replication between
> > the two HAST nodes, and one (CARP) NIC by which clients access the
> > storage..
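To make that concrete (completely untested on my side -- I haven't
touched ZFS or HAST yet, and the hostnames, devices and addresses below
are invented), I imagine a hast.conf shared by both nodes along these
lines, with 172.16.0.0/30 being the dedicated replication link:

    # /etc/hast.conf, identical on both nodes: one HAST resource per
    # local disk, replicated to the peer over the backend link
    resource disk0 {
            on storage1 {
                    local /dev/ada0
                    remote 172.16.0.2
            }
            on storage2 {
                    local /dev/ada0
                    remote 172.16.0.1
            }
    }

    resource disk1 {
            on storage1 {
                    local /dev/ada1
                    remote 172.16.0.2
            }
            on storage2 {
                    local /dev/ada1
                    remote 172.16.0.1
            }
    }

then, after a "hastctl create <resource>" on both nodes, the pool would
be built once on whichever node is primary, on the /dev/hast/*
providers (which only exist while a node holds the primary role):

    hastctl role primary disk0
    hastctl role primary disk1
    zpool create tank mirror /dev/hast/disk0 /dev/hast/disk1

so ZFS provides the local mirror (and the checksumming/scrubbing Steve
mentioned) while HAST replicates each half to the second box; the four
disks would simply become four resources in two mirror vdevs.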
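For the client-facing side, CARP would hold the shared address, e.g. in
/etc/rc.conf (again invented addresses, em0 being the client NIC):

    ifconfig_em0="inet 192.168.1.11/24"
    ifconfig_em0_alias0="inet vhid 1 advskew 0 pass somesecret alias 192.168.1.10/32"

and devd(8) can fire a script on the CARP MASTER/BACKUP transitions
(the Handbook's HAST chapter has a carp-hast-switch example in that
spirit). I guess the "become master" half would boil down to something
like this (again an untested sketch, resource/pool names as above):

    #!/bin/sh
    # become-master: promote the HAST resources, import the pool, and
    # kick off a scrub so silent corruption on this copy gets detected
    for res in disk0 disk1; do
            hastctl role primary "$res"
    done
    zpool import -f tank
    zpool scrub tank

with the mirror-image "become backup" script exporting the pool before
demoting the resources; the weekly switchover-and-scrub Steve suggested
would then just be a forced CARP failover.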
> >
> >>
> >> > > On Fri, 3 Jun 2016 10:38:43 +0200
> >> > > Julien Cigar <julien.cigar@gmail.com> wrote:
> >> > >
> >> > > > Hello,
> >> > > >
> >> > > > I'm looking for a low-cost redundant HA storage solution for our
> >> > > > (small) team here (~30 people). It will be used to store files
> >> > > > generated by some webapps, to provide a redundant dovecot (imap)
> >> > > > server, etc.
> >> > > >
> >> > > > For the hardware I have to go with HP (no choice), so I planned
> >> > > > to buy 2 x HP ProLiant DL320e Gen8 v2 E3-1241v3 (768645-421)
> >> > > > with 4 x WD Re SATA 4TB 3.5in 6Gb/s 7200rpm 64MB buffer drives
> >> > > > (WD4000FYYZ) in a RAID1 config (the machine has a Smart Array
> >> > > > P222 controller, which is apparently supported by the ciss
> >> > > > driver)
> >> > > >
> >> > > > On the FreeBSD side I plan to use HAST with CARP, and the
> >> > > > volumes will be exported through NFSv4.
> >> > > >
> >> > > > Any comments on this setup (or other recommendations)? :)
> >> > > >
> >> > > > Thanks!
> >> > > > Julien
> >> > > >
> >> > >
> >> > >
> >> > > --
> >> > > Steve O'Hara-Smith <steve@sohara.org>
> >> > >
> >> >
> >>
> >>
> >> --
> >> Steve O'Hara-Smith          | Directable Mirror Arrays
> >> C:>WIN                      | A better way to focus the sun
> >> The computer obeys and wins.| licences available see
> >> You lose and Bill collects. | http://www.sohara.org/
> >>
> >
> > --
> > Julien Cigar
> > Belgian Biodiversity Platform (http://www.biodiversity.be)
> > PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
> >
> > No trees were killed in the creation of this message.
> > However, many electrons were terribly inconvenienced.
> 
> 
> ++++++++++++++++++++++++++++++++++++++++
> Valeri Galtsev
> Sr System Administrator
> Department of Astronomy and Astrophysics
> Kavli Institute for Cosmological Physics
> University of Chicago
> Phone: 773-702-4247
> ++++++++++++++++++++++++++++++++++++++++

-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0

No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.