Date: Tue, 30 Apr 2019 19:05:50 +1000
From: Michelle Sullivan <michelle@sorbs.net>
To: rainer@ultra-secure.de
Cc: Andrea Venturoli <ml@netfence.it>, freebsd-stable <freebsd-stable@freebsd.org>, owner-freebsd-stable@freebsd.org
Subject: Re: ZFS...
Message-ID: <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net>
In-Reply-To: <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de>
References: <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <CAOtMX2gf3AZr1-QOX_6yYQoqE-H+8MjOWc=eK1tcwt5M3dCzdw@mail.gmail.com> <56833732-2945-4BD3-95A6-7AF55AB87674@sorbs.net> <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it> <17B373DA-4AFC-4D25-B776-0D0DED98B320@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de>
Michelle Sullivan
http://www.mhix.org/

Sent from my iPad

> On 30 Apr 2019, at 18:44, rainer@ultra-secure.de wrote:
>
> On 2019-04-30 10:09, Michelle Sullivan wrote:
>
>> Now, yes, most production environments have multiple backing stores, so
>> they will have a server or ten to switch to whilst the store is being
>> recovered, but it still wouldn't be a pleasant experience... not to
>> mention the possibility that if one store is corrupted there is a
>> chance that the other store(s) would also be affected in the same way
>> if they are in the same DC... (e.g. a DC fire, which I have seen)... and
>> if you have multi-DC stores to protect against that, the size of the
>> pipes between DCs clearly comes into play.
>
> I have one customer with about 13T of ZFS - and because it would take a
> while to restore (actual backups), it zfs-sends delta-snapshots every
> hour to a standby system.
>
> It was handy when we had to rebuild the system with different HBAs.
>

I wonder what would happen if you scaled that up by just 10x (storage) and
had the master blow up to the point where it needs to be restored from
backup... how long would one be praying to higher powers that there is no
problem with the backup? (As in: no outage or error causing a complete
outage.) Don't get me wrong, we all get to that position at some time, but
in my recent experience two issues colliding at the same time results in
disaster. 13T is really not something I have issues with, as I can usually
cobble something together with 16T (at least since 6T drives became a
viable option in terms of cost and availability at short notice)... even
10T is becoming easier to get hold of now. But I have a measly 96T here,
and it takes weeks to restore even with gigabit bonded interfaces.
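A rough back-of-the-envelope for that last figure (a sketch only; the
2 x 1 Gbit bond and the ~35% effective-throughput factor are assumptions
for illustration, not numbers from this thread):

    #!/usr/bin/env python3
    # Back-of-the-envelope: how long a full restore of ~96 TB takes over
    # bonded gigabit links.  The link count and efficiency factor are
    # illustrative assumptions, not measurements from this thread.

    TERABYTE = 10**12            # bytes (decimal TB)
    pool_bytes = 96 * TERABYTE   # ~96 TB of data to restore

    links = 2                    # assumed: 2 x 1 Gbit/s bonded interfaces
    line_rate = links * 1e9      # bits per second at wire speed
    efficiency = 0.35            # assumed: protocol, disk and backup-software overhead

    effective_bps = line_rate * efficiency
    seconds = (pool_bytes * 8) / effective_bps
    print(f"~{seconds / 86400:.1f} days at {effective_bps / 1e9:.2f} Gbit/s effective")
    # With these assumptions: roughly 12-13 days of continuous transfer.

At wire speed the same transfer would still take about 4.5 days; real
restores rarely run flat out, which is how 96T turns into weeks.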