Date: Fri, 3 Jun 2016 13:50:20 +0200
From: Julien Cigar <julien.cigar@gmail.com>
To: Steve O'Hara-Smith <steve@sohara.org>
Cc: freebsd-questions@freebsd.org
Subject: Re: redundant storage
Message-ID: <20160603115020.GN95511@mordor.lan>
In-Reply-To: <20160603114746.6b75e6e79ecd51fe14311e40@sohara.org>
References: <20160603083843.GK95511@mordor.lan>
 <20160603104138.fdf3c0ac4be93769be6da401@sohara.org>
 <20160603101446.GM95511@mordor.lan>
 <20160603114746.6b75e6e79ecd51fe14311e40@sohara.org>
On Fri, Jun 03, 2016 at 11:47:46AM +0100, Steve O'Hara-Smith wrote:
> On Fri, 3 Jun 2016 12:14:46 +0200
> Julien Cigar <julien.cigar@gmail.com> wrote:
>
> > On Fri, Jun 03, 2016 at 10:41:38AM +0100, Steve O'Hara-Smith wrote:
> > > Hi,
> > >
> > > Just one change - don't use RAID1, use ZFS mirrors. ZFS does
> > > better RAID than any hardware controller.
> >
> > Right.. I must admit that I haven't looked at ZFS yet (I'm still using
> > UFS + gmirror), but this will be the opportunity to do so..!
> >
> > Does ZFS play well with HAST?
>
> Never tried it, but it should work well enough; ZFS sits on top of
> geom providers, so it should be possible to use the pool on the primary.
>
> One concern would be that since all reads come from local storage,
> the secondary machine never gets scrubbed and silent corruption never gets
> detected on the secondary. A periodic (say weekly) switchover and scrub
> takes care of this concern. Silent corruption is rare, but the bigger the
> pool and the longer it's used, the more likely it is to happen eventually.
> Detection and repair of this is one of ZFS's advantages over hardware RAID,
> so it's good not to defeat it.

Thanks, I'll read a bit on ZFS this weekend..!

My ultimate goal is for the HAST storage to survive a hard reboot /
unplugged network cable / ... during a heavy I/O write, and for the
switchover between the two nodes to be transparent to the clients,
without any data loss of course... feasible or utopian? Needless to say,
what I want to avoid at all costs is the storage becoming corrupted and
unrecoverable..!

> Drive failures on the primary will wind up causing both the primary
> and the secondary to be rewritten when the drive is replaced - this could
> probably be avoided by switching primaries and letting HAST deal with the
> replacement.
>
> Another very minor issue would be that any corrective rewrites (for
> detected corruption) will happen on both copies, but that's harmless and
> there really should be *very* few of these.
>
> One final concern, but it's purely a HAST issue and not really ZFS:
> writing a large file flat out will likely saturate your LAN, with half the
> capacity going to copying the data for HAST. A private backend link between
> the two boxes would be a good idea (or 10 gigabit ethernet).

Yep, that's what I had in mind..! One NIC for the replication between the
two HAST nodes, and one (CARP) NIC through which clients access the
storage..
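Roughly, the setup I have in mind looks like the sketch below. It is
completely untested on my side, and the hostnames (storage1/storage2),
disks (da1/da2), interface (em0), pool name and addresses are just
placeholders:

# /etc/hast.conf, identical on both nodes; the "remote" addresses sit
# on the dedicated replication NIC, one HAST resource per disk
resource disk1 {
	on storage1 {
		local /dev/da1
		remote 172.16.0.2
	}
	on storage2 {
		local /dev/da1
		remote 172.16.0.1
	}
}
resource disk2 {
	on storage1 {
		local /dev/da2
		remote 172.16.0.2
	}
	on storage2 {
		local /dev/da2
		remote 172.16.0.1
	}
}

# on both nodes: initialise the resources and start hastd
hastctl create disk1
hastctl create disk2
sysrc hastd_enable=YES
service hastd start

# on the node that should start as primary
hastctl role primary disk1
hastctl role primary disk2

# ZFS mirror on top of the HAST providers; only the current primary
# sees /dev/hast/*, so the pool follows whoever holds the primary role
zpool create tank mirror /dev/hast/disk1 /dev/hast/disk2

# /etc/rc.conf on the first node: shared CARP address on the
# client-facing NIC (the second node uses the same vhid/pass plus
# "advskew 100" so that it comes up as backup)
ifconfig_em0="inet 192.0.2.11/24"
ifconfig_em0_alias0="inet vhid 1 pass mysecret alias 192.0.2.10/32"

The part I still have to figure out is the failover glue: a devd hook on
the CARP state change that switches the HAST roles and exports/imports
the pool on the right node.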
> > > On Fri, 3 Jun 2016 10:38:43 +0200
> > > Julien Cigar <julien.cigar@gmail.com> wrote:
> > >
> > > > Hello,
> > > >
> > > > I'm looking for a low-cost redundant HA storage solution for our
> > > > (small) team here (~30 people). It will be used to store files
> > > > generated by some webapps, to provide a redundant dovecot (imap)
> > > > server, etc.
> > > >
> > > > For the hardware I have to go with HP (no choice), so I planned to buy
> > > > 2 x HP ProLiant DL320e Gen8 v2 E3-1241v3 (768645-421) with
> > > > 4 x WD Re SATA 4TB 3.5in 6Gb/s 7200rpm 64MB buffer drives
> > > > (WD4000FYYZ) in a RAID1 config (the machine has a Smart Array P222
> > > > controller, which is apparently supported by the ciss driver)
> > > >
> > > > On the FreeBSD side I plan to use HAST with CARP, and the volumes
> > > > will be exported through NFSv4.
> > > >
> > > > Any comments on this setup (or other recommendations)? :)
> > > >
> > > > Thanks!
> > > > Julien
> > > >
> > >
> > >
> > > --
> > > Steve O'Hara-Smith <steve@sohara.org>
> > >
> >
>
>
> --
> Steve O'Hara-Smith                     | Directable Mirror Arrays
> C:>WIN                                 | A better way to focus the sun
> The computer obeys and wins.           | licences available see
> You lose and Bill collects.            | http://www.sohara.org/
>

--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0

No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.