From owner-freebsd-fs@freebsd.org Mon Jul 4 19:31:40 2016
Date: Mon, 4 Jul 2016 21:31:32 +0200
From: Julien Cigar <julien@perdition.city>
To: Jordan Hubbard
Cc: Ben RUBSON, freebsd-fs@freebsd.org
Subject: Re: HAST + ZFS + NFS + CARP
Message-ID: <20160704193131.GJ41276@mordor.lan>

On Mon, Jul 04, 2016 at 11:56:57AM -0700, Jordan Hubbard wrote:
> 
> > On Jul 4, 2016, at 11:36 AM, Julien Cigar wrote:
> > 
> > I think the discussion has evolved a bit since I started this thread;
> > the original purpose was to build low-cost redundant storage for a
> > small infrastructure, no more, no less.
> > 
> > The context is the following: I work in a small company, partially
> > financed by public funds. We started small and evolved to the point
> > that some redundancy is required for $services.
> > Unfortunately I'm the only one taking care of the infrastructure (and
> > it's only 50% of my time) and we don't have that much money :(
> 
> Sure, I get that part also, but let's put the entire conversation into
> context:
> 
> 1. You're looking for a solution to provide some redundant storage in a
> very specific scenario.
> 
> 2. We're talking on a public mailing list with a bunch of folks, so the
> conversation is also naturally going to go from the specific to the
> general - e.g. "Is there anything of broader applicability to be
> learned / used here?" I'm speaking more to the larger audience, who is
> probably wondering if there's a more general solution here using the
> same "moving parts".

Of course! It has been an interesting discussion, I've learned some
things, and it's always enjoyable to get different points of view.

> To get specific again, I am not sure I would do what you are
> contemplating given your circumstances, since it's not the cheapest /
> simplest solution. The cheapest / simplest solution would be to create
> two small ZFS servers and simply do ZFS snapshot replication between
> them at periodic intervals, so you have a backup copy of the data for
> maximum safety as well as a physically separate server in case one goes
> down hard. Disk storage is the cheap part now, particularly if you have
> data redundancy and can therefore use inexpensive disks, and ZFS
> replication is certainly "good enough" for disaster recovery. As others
> have said, adding additional layers will only increase the overall
> fragility of the solution, and "fragile" is kind of the last thing you
> need when you're frantically trying to deal with a server that has gone
> down for what could be any number of reasons.
> 
> I, for example, use a pair of FreeNAS Minis at home to store all my
> media, and they work fine at minimal cost. I use one as the primary
> server that talks to all of the VMware / Plex / iTunes server
> applications (and serves as a backup device for all my iDevices), and
> it replicates the entire pool to another secondary server that can be
> pushed into service as the primary if the first one loses a power
> supply / catches fire / loses more than one drive at a time / etc.
> Since I have a backup, I can also just use RAIDZ1 for the 4x4 TB drive
> configuration on the primary and get a good storage / redundancy ratio
> (I can lose a single drive without data loss but am also not wasting a
> lot of storage on parity).

You're right, I'll definitely reconsider the zfs send / zfs receive
approach.

> Just my two cents. There are a lot of different ways to do this, and
> like all things involving computers (especially PCs), the simplest way
> is usually the best.

Thanks!

Julien

> - Jordan
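P.S. To make the snapshot-replication idea a bit more concrete, below is
a minimal sketch of what a periodic zfs send / zfs receive job could
look like. The dataset "tank/data", the host name "standby" and the
"rep-" snapshot prefix are only placeholders for the example; it assumes
the standby is reachable over SSH and already has a pool with a matching
dataset layout.

#!/bin/sh
# Minimal periodic ZFS replication sketch, meant to be run from cron on
# the primary. DATASET and REMOTE are placeholders - adjust to taste.
set -e

DATASET="tank/data"
REMOTE="standby"
NOW="rep-$(date +%Y%m%d%H%M)"

# Most recent replication snapshot, which should already exist on both
# sides from the previous run (empty on the very first run).
PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DATASET" |
       grep '@rep-' | tail -n 1 | cut -d@ -f2)

# Take a new snapshot on the primary.
zfs snapshot "${DATASET}@${NOW}"

if [ -n "$PREV" ]; then
        # Incremental send of everything since the previous snapshot;
        # -F rolls the standby back to that snapshot before applying it.
        zfs send -i "@${PREV}" "${DATASET}@${NOW}" |
                ssh "$REMOTE" zfs receive -F "$DATASET"
else
        # Very first run: full send.
        zfs send "${DATASET}@${NOW}" |
                ssh "$REMOTE" zfs receive -F "$DATASET"
fi

The first run transfers everything; subsequent runs only send the
changes since the last snapshot, so the cron interval effectively sets
how much data you could lose when the primary dies. Old "rep-"
snapshots would still have to be pruned on both sides at some point.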
-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0

No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.