From owner-freebsd-questions@freebsd.org Sat Dec 14 07:54:43 2019
From: David Christensen <dpchrist@holgerdanske.com>
To: freebsd-questions@freebsd.org
Subject: Re: Adding to a zpool -- different redundancies and risks
Date: Fri, 13 Dec 2019 23:54:39 -0800

On 2019-12-13 06:49, Norman Gray wrote:
>
> David, hello.
>
> On 13 Dec 2019, at 4:49, David Christensen wrote:
>
>> On 2019-12-12 04:42, Norman Gray wrote:
>>> # zpool list pool
>>> NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>>> pool    98T  75.2T  22.8T        -         -    29%    76%  1.00x  ONLINE  -
>>
>> So, your pool is 75.2 TB / 77 TB = 97.7% full.
>
> Well, I have compression turned on, so I take it that the 98TB quoted
> here is an estimate of the capacity in that case, and that the 76%
> capacity quoted in this output is the effective capacity -- ie,
> alloc/size.
>
> The zpool(8) manpage documents these two properties as
>
>     alloc     Amount of storage space within the pool that has been
>               physically allocated.
>
>     capacity  Percentage of pool space used.  This property can also
>               be referred to by its shortened column name, "cap".
>
>     size      Total size of the storage pool.
>
> The term 'physically allocated' is a bit confusing.  I'm guessing
> that it takes compression into account, rather than bytes-in-sectors.
>
> I could be misinterpreting this output, though.

I believe the 'SIZE 98T' corresponds to eighteen 5.5 TB drives.

My bad -- I agree the 'CAP 76%' should be correct and my '97.7% full'
calculation is wrong.

I use the following command to get compression information:

# zfs get -t filesystem compressratio | grep POOLNAME

I am still trying to understand how to reconcile 'zpool list', 'zfs
list', etc., against df(1) and du(1).
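A side-by-side look at the exact numbers might be a start (an
untested sketch -- I am assuming your pool is named 'pool' and
mounted at the default /pool):

# zpool list -p pool
# zfs list -p -o name,used,avail,refer pool
# df -k /pool

My understanding is that 'zpool list' counts raw space across all
vdevs, parity included, while 'zfs list' and df(1) report usable
space after redundancy and compression, so the numbers should not be
expected to agree on a raidz pool.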
>> https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/
>
> Thanks for the reminder of this.  I'm familiar with that article,
> and it's an interesting point of view.  I don't find it completely
> convincing, though, since I'm not convinced that the speed of
> resilvering fully compensates for the less than 100% probability of
> surviving two disk failures.

I haven't done the benchmarking to find out, but I have read similar
assertions and recommendations elsewhere.  STFW might yield data to
support or refute the claims.

> In the last couple of years I've had problems with water ingress
> over a rack, and with a failed AC which baked a room, so that
> failure modes which affect multiple disks simultaneously are fairly
> prominent in my thinking about this sort of issue.  Poisson failures
> are not the only mode to worry about!

Agreed.

I am working towards implementing offsite scheduled replication.
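The rough shape I have in mind (an untested sketch -- the snapshot
name, the 'offsite' host, and the 'backup' destination pool are all
placeholders):

# zfs snapshot -r pool@2019-12-14
# zfs send -R pool@2019-12-14 | ssh offsite zfs receive -d -F backup

Subsequent runs would use incremental sends ('zfs send -R -i
pool@old pool@new') from a periodic job, so that only the changes
since the previous snapshot cross the wire.

David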