Subject: Re: ZFS...
From: Michelle Sullivan <michelle@sorbs.net>
Date: Tue, 30 Apr 2019 19:05:50 +1000
To: rainer@ultra-secure.de
Cc: Andrea Venturoli, freebsd-stable <freebsd-stable@freebsd.org>, owner-freebsd-stable@freebsd.org
In-Reply-To: <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de>
Message-ID: <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net>

Michelle Sullivan
http://www.mhix.org/

Sent from my iPad

> On 30 Apr 2019, at 18:44, rainer@ultra-secure.de wrote:
>
> On 2019-04-30 10:09, Michelle Sullivan wrote:
>
>> Now, yes, most production environments have multiple backing stores, so
>> they will have a server or ten to switch to whilst the store is being
>> recovered, but it still wouldn't be a pleasant experience... not to
>> mention the possibility that if one store is corrupted, there is a
>> chance that the other store(s) would also be affected in the same way
>> if they are in the same DC... (e.g. a DC fire - which I have seen)... and if
>> you have multi-DC stores to protect against that, the size of the pipes
>> between the DCs clearly comes into play.
>
>
> I have one customer with about 13T of ZFS - and because it would take a while to restore (actual backups), it zfs-sends delta snapshots every hour to a standby system.
>
> It was handy when we had to rebuild the system with different HBAs.
>
>

I wonder what would happen if you scaled that up by just 10x (storage) and had the master blow up where it needs to be restored from backup... how long would one be praying to higher powers that there is no problem with the backup...? (As in no outage or error causing a complete outage.) Don't get me wrong, we all get to that position at some time, but in my recent experience two issues colliding at the same time results in disaster. 13T is really not something I have issues with, as I can usually cobble something together with 16T... (at least until 6T drives became a viable option - cost and availability at short notice)... even 10T is becoming easier to get hold of now... but I have a measly 96T here, and it takes weeks to restore even with gigabit bonded interfaces.
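
For scale: 96T at a sustained 1 Gbit/s is already roughly nine days of raw transfer time before any protocol, media or filesystem overhead, so "weeks" over bonded gigabit is not an exaggeration.

For anyone wanting to copy Rainer's approach, something along the lines of the following cron'd script is the usual shape of an hourly zfs-send delta to a standby. This is only a sketch: the dataset name (tank/data), the standby host name and the state file are made up for illustration, so adjust them to your own layout.

  #!/bin/sh
  # Rough sketch: hourly incremental replication with zfs send/receive.
  # Dataset, host and state-file names are illustrative only.
  set -e

  DATASET="tank/data"                      # local dataset to replicate (assumed name)
  TARGET="standby"                         # ssh host of the standby box (assumed name)
  STATE="/var/db/last-replicated-snap"     # where the last sent snapshot is recorded (assumed path)

  NEW="${DATASET}@hourly-$(date +%Y%m%d%H)"
  PREV="$(cat "${STATE}" 2>/dev/null || true)"

  zfs snapshot "${NEW}"

  if [ -n "${PREV}" ]; then
      # Incremental: send only the delta between the last replicated snapshot and the new one.
      zfs send -i "${PREV}" "${NEW}" | ssh "${TARGET}" zfs receive -F "${DATASET}"
  else
      # First run: send a full stream to seed the standby.
      zfs send "${NEW}" | ssh "${TARGET}" zfs receive -F "${DATASET}"
  fi

  echo "${NEW}" > "${STATE}"

The -F on the receive side rolls the standby back to the last common snapshot before applying the stream, so the hourly incrementals keep applying cleanly even if someone has touched the standby copy in between.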