From: Michelle Sullivan <michelle@sorbs.net>
To: Andrea Venturoli
Cc: freebsd-stable <freebsd-stable@freebsd.org>
Subject: Re: ZFS...
Date: Tue, 30 Apr 2019 18:09:06 +1000
Message-id: <17B373DA-4AFC-4D25-B776-0D0DED98B320@sorbs.net>
In-reply-to: <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it>

Michelle Sullivan
http://www.mhix.org/

Sent from my iPad

> On 30 Apr 2019, at 17:10, Andrea Venturoli wrote:
>
>> On 4/30/19 2:41 AM, Michelle Sullivan wrote:
>>
>> The system was originally built on 9.0, and got upgraded throughout the years... zfsd was not available back then. So I get your point, but maybe you didn't realize this blog was a history of 8+ years?
>
> That's one of the first things I thought about while reading the original post: what can be inferred from it is that ZFS might not have been that good in the past.
> It *could* still suffer from the same problems, or it *could* have improved and be more resilient.
> Answering that would be interesting...
Without a doubt it has come a long way, but in my opinion, until there is a tool to walk the data (to transfer it out), or something that can either repair or invalidate metadata (such as a spacemap corruption), there is still a fatal flaw that makes it questionable to use... and that is for one reason alone (regardless of my current problems).

Consider: if one triggers such a fault on a production server, how can one justify transferring multiple terabytes (or even petabytes, now) of data back from backup to repair an unmountable/faulted array? Every backup solution I know of would currently take days, if not weeks, to restore the sort of store ZFS is touted as supporting.

Now, yes, most production environments have multiple backing stores, so there will be a server or ten to switch to whilst the store is being recovered, but it still wouldn't be a pleasant experience... not to mention that if one store is corrupted, there is a chance the other store(s) would be affected in the same way if they sit in the same DC (e.g. a DC fire, which I have seen)... and if you keep multi-DC stores to protect against that, the size of the pipes between the DCs clearly comes into play (rough numbers sketched below).

Thoughts?

Michelle
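To put hypothetical numbers on the restore problem, here is a minimal back-of-envelope sketch in Python. The 300 TB pool size, LTO-8-class tape throughput, and 10 Gbit/s inter-DC link are illustrative assumptions, not figures from any real setup:

# Back-of-envelope: how long does moving a whole pool take?
# All figures below are assumptions for illustration only.

POOL_TB = 300        # assumed pool size, terabytes
TAPE_MBS = 360       # assumed LTO-8-class native throughput, MB/s
LINK_GBITS = 10      # assumed inter-DC link, Gbit/s

def days(tb, mb_per_s):
    """Days to move `tb` terabytes at a sustained `mb_per_s` MB/s."""
    seconds = (tb * 1_000_000) / mb_per_s  # 1 TB = 1,000,000 MB (decimal units)
    return seconds / 86_400                # 86,400 seconds per day

link_mbs = LINK_GBITS * 1000 / 8           # Gbit/s -> MB/s (10 Gbit/s = 1250 MB/s)

print(f"Restore {POOL_TB} TB from one tape drive: {days(POOL_TB, TAPE_MBS):.1f} days")
print(f"Resync {POOL_TB} TB over a {LINK_GBITS} Gbit/s pipe: {days(POOL_TB, link_mbs):.1f} days")

That works out to roughly 9.6 days from a single tape drive and about 2.8 days across the wire, and that is at sustained line rate with zero overhead; real restores (seek-bound tape, verification, catalog overhead, shared links) only push the answer toward the weeks end.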