Date: Mon, 4 Jul 2016 11:56:57 -0700
From: Jordan Hubbard <jkh@ixsystems.com>
To: Julien Cigar <julien@perdition.city>
Cc: Ben RUBSON <ben.rubson@gmail.com>, freebsd-fs@freebsd.org
Subject: Re: HAST + ZFS + NFS + CARP
Message-ID: <AE372BF0-02BE-4BF3-9073-A05DB4E7FE34@ixsystems.com>
In-Reply-To: <20160704183643.GI41276@mordor.lan>
References: <20160630153747.GB5695@mordor.lan>
 <63C07474-BDD5-42AA-BF4A-85A0E04D3CC2@gmail.com>
 <678321AB-A9F7-4890-A8C7-E20DFDC69137@gmail.com>
 <20160630185701.GD5695@mordor.lan>
 <6035AB85-8E62-4F0A-9FA8-125B31A7A387@gmail.com>
 <20160703192945.GE41276@mordor.lan>
 <20160703214723.GF41276@mordor.lan>
 <65906F84-CFFC-40E9-8236-56AFB6BE2DE1@ixsystems.com>
 <B48FB28E-30FA-477F-810E-DF4F575F5063@gmail.com>
 <61283600-A41A-4A8A-92F9-7FAFF54DD175@ixsystems.com>
 <20160704183643.GI41276@mordor.lan>
> On Jul 4, 2016, at 11:36 AM, Julien Cigar <julien@perdition.city> wrote:
>
> I think the discussion evolved a bit since I started this thread, the
> original purpose was to build a low-cost redundant storage for a small
> infrastructure, no more no less.
>
> The context is the following: I work in a small company, partially
> financed by public funds, we started small, evolved a bit to a point
> that some redundancy is required for $services.
> Unfortunately I'm alone to take care of the infrastructure (and it's
> only 50% of my time) and we don't have that much money :(

Sure, I get that part also, but let's put the entire conversation into context:

1. You're looking for a solution to provide some redundant storage in a very specific scenario.

2. We're talking on a public mailing list with a bunch of folks, so the conversation is also naturally going to go from the specific to the general - e.g. "Is there anything of broader applicability to be learned / used here?" I'm speaking more to the larger audience who is probably wondering if there's a more general solution here using the same "moving parts".

To get specific again, I am not sure I would do what you are contemplating given your circumstances since it's not the cheapest / simplest solution. The cheapest / simplest solution would be to create 2 small ZFS servers and simply do zfs snapshot replication between them at periodic intervals, so you have a backup copy of the data for maximum safety as well as a physically separate server in case one goes down hard. Disk storage is the cheap part now, particularly if you have data redundancy and can therefore use inexpensive disks, and ZFS replication is certainly "good enough" for disaster recovery. As others have said, adding additional layers will only increase the overall fragility of the solution, and "fragile" is kind of the last thing you need when you're frantically trying to deal with a server that has gone down for what could be any number of reasons.

I, for example, use a pair of FreeNAS Minis at home to store all my media and they work fine at minimal cost. I use one as the primary server that talks to all of the VMware / Plex / iTunes server applications (and serves as a backup device for all my iDevices) and it replicates the entire pool to another secondary server that can be pushed into service as the primary if the first one loses a power supply / catches fire / loses more than 1 drive at a time / etc. Since I have a backup, I can also just use RAIDZ1 for the 4x4TB drive configuration on the primary and get a good storage / redundancy ratio (I can lose a single drive without data loss but am also not wasting a lot of storage on parity).

Just my two cents. There are a lot of different ways to do this, and like all things involving computers (especially PCs), the simplest way is usually the best.

- Jordan
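
[For anyone wanting to try the periodic zfs snapshot replication approach described above, a minimal sketch follows. The host names (nas1 as the primary, nas2 as the standby, reachable over ssh as root) and the dataset name tank/data are hypothetical placeholders, not values from this thread, and the script assumes that once it has run the first time the previous snapshot still exists on both sides.]

    #!/bin/sh
    # Periodic ZFS snapshot replication from the primary to the standby.
    # Intended to be run from cron on the primary (e.g. every 15 minutes).
    DATASET="tank/data"        # hypothetical dataset
    REMOTE="root@nas2"         # hypothetical standby host
    NOW=$(date -u +%Y%m%d%H%M%S)

    # Most recent existing snapshot of this dataset, if any.
    PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DATASET" | tail -1)

    # Take a new timestamped snapshot.
    zfs snapshot "${DATASET}@repl-${NOW}"

    if [ -n "$PREV" ]; then
        # Incremental send from the previous snapshot to the new one.
        zfs send -i "$PREV" "${DATASET}@repl-${NOW}" | \
            ssh "$REMOTE" zfs receive -F "$DATASET"
    else
        # First run: send the full stream.
        zfs send "${DATASET}@repl-${NOW}" | \
            ssh "$REMOTE" zfs receive -F "$DATASET"
    fi

[With something like this the standby is never more than one interval behind the primary, and promoting it after a failure is mostly a matter of pointing clients at the second box.]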