Date:        Fri, 13 Dec 2019 23:54:39 -0800
From:        David Christensen <dpchrist@holgerdanske.com>
To:          freebsd-questions@freebsd.org
Subject:     Re: Adding to a zpool -- different redundancies and risks
Message-ID:  <ce57e691-c569-a30a-1706-9b0442f9f67f@holgerdanske.com>
In-Reply-To: <C6D326CC-2CF8-4E31-9CB5-273C3FAE8ECF@glasgow.ac.uk>
References:  <6104097C-009B-4E9C-A1D8-A2D0E5FECADF@glasgow.ac.uk>
             <09b11639-3303-df6b-f70c-6722caaacee7@holgerdanske.com>
             <5A01F7F7-9326-47E2-BA6E-79A7D3F0889A@glasgow.ac.uk>
             <a111524e-7760-fbc0-947a-864eb397573d@holgerdanske.com>
             <C6D326CC-2CF8-4E31-9CB5-273C3FAE8ECF@glasgow.ac.uk>
On 2019-12-13 06:49, Norman Gray wrote:
>
> David, hello.
>
> On 13 Dec 2019, at 4:49, David Christensen wrote:
>
>> On 2019-12-12 04:42, Norman Gray wrote:
>>> # zpool list pool
>>> NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>>> pool    98T  75.2T  22.8T        -         -    29%    76%  1.00x  ONLINE  -
>>
>> So, your pool is 75.2 TB / 77 TB = 97.7% full.
>
> Well, I have compression turned on, so I take it that the 98TB quoted
> here is an estimate of the capacity in that case, and that the 76%
> capacity quoted in this output is the effective capacity -- i.e.,
> alloc/size.
>
> The zpool(8) manpage documents these properties as
>
>     alloc     Amount of storage space within the pool that has been
>               physically allocated.
>
>     capacity  Percentage of pool space used. This property can also be
>               referred to by its shortened column name, "cap".
>
>     size      Total size of the storage pool.
>
> The term 'physically allocated' is a bit confusing. I'm guessing that
> it takes compression into account, rather than bytes-in-sectors.
>
> I could be misinterpreting this output, though.
I believe the 'SIZE 98T' corresponds to eighteen 5.5 TB drives (18 x
5.5 TB = 99 TB of raw capacity).
My bad -- I agree the 'CAP 76%' should be correct and my '97.7% full'
calculation is wrong.
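For the record: 75.2T / 98T is about 76.7%, consistent with the
reported 'CAP 76%'.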
I use the following command to get compression information:
# zfs get -t filesystem compressratio | grep POOLNAME
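To see how much space compression is saving, used can be compared
against logicalused -- for example (POOLNAME is a placeholder):

# zfs get used,logicalused,compressratio POOLNAME

logicalused is what the data would consume without compression, so
logicalused divided by used should roughly match the reported
compressratio.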
I am still trying to understand how to reconcile 'zpool list', 'zfs
list', etc., against df(1) and du(1).
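Running them side by side makes the differences visible -- for example
(POOLNAME is a placeholder):

# zpool list POOLNAME
# zfs list -o space POOLNAME
# df -h | grep POOLNAME

My understanding is that zpool list counts raw space (including raidz
parity), zfs list and df(1) report usable space after redundancy, and
du(1) sees file sizes after compression, so the numbers are not
expected to agree exactly.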
>> https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/
>
> Thanks for the reminder of this. I'm familiar with that article, and
> it's an interesting point of view. I don't find it completely
> convincing, though, since I'm not persuaded that the speed of
> resilvering fully compensates for the less-than-100% probability of
> surviving two disk failures.
I haven't done the benchmarking to find out, but I have read similar
assertions and recommendations elsewhere. STFW might yield data to
support or refute the claims.
> In the last couple of years I've had
> problems with water ingress over a rack, and with a failed AC which
> baked a room, so that failure modes which affect multiple disks
> simultaneously are fairly prominent in my thinking about this sort of
> issue. Poisson failures are not the only mode to worry about!
Agreed. I am working towards implementing offsite scheduled replication.
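Likely zfs send/receive of periodic snapshots over ssh -- a minimal
sketch, with hypothetical pool, dataset, host, and snapshot names:

# zfs snapshot -r pool/data@2019-12-13
# zfs send -R -i pool/data@2019-12-06 pool/data@2019-12-13 | \
      ssh backup-host zfs receive -duF backuppool

The first transfer would be a full send (no -i); after that, -i sends
only the blocks changed since the previous snapshot, which keeps each
offsite transfer small.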
David
