Date: Wed, 5 May 2021 16:40:01 -0700
From: Mark Millard <marklmi@yahoo.com>
To: freebsd-current <freebsd-current@freebsd.org>, FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject: zpool list -p 's FREE vs. zfs list -p's AVAIL ? FREE-AVAIL == 6_675_374_080 (199G zroot pool)
Message-ID: <EAD6A790-EE50-4C3E-855E-CC4A83C25FF0@yahoo.com>
References: <EAD6A790-EE50-4C3E-855E-CC4A83C25FF0.ref@yahoo.com>
Context:

# gpart show -pl da0
=>        40  468862048    da0  GPT  (224G)
          40     532480  da0p1  efiboot0  (260M)
      532520       2008         - free -  (1.0M)
      534528   25165824  da0p2  swp12a  (12G)
    25700352   25165824  da0p4  swp12b  (12G)
    50866176  417994752  da0p3  zfs0  (199G)
   468860928       1160         - free -  (580K)

There is just one pool: zroot, and it is on zfs0 above.

# zpool list -p
NAME           SIZE        ALLOC          FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zroot  213674622976  71075655680  142598967296        -         -    28   33   1.00  ONLINE  -

So FREE: 142_598_967_296 (using _ to make it more readable)

# zfs list -p zroot
NAME          USED         AVAIL  REFER  MOUNTPOINT
zroot  71073697792  135923593216  98304  /zroot

So AVAIL: 135_923_593_216

FREE-AVAIL == 6_675_374_080

The questions:

Is this sort of unavailable pool-free-space normal? Is this some sort
of expected overhead that is just not explicitly reported? Possibly a
"FRAG" consequence?

For reference:

# zpool status
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:31:48 with 0 errors on Sun May  2 19:52:14 2021
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          da0p3     ONLINE       0     0     0

errors: No known data errors

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went away in early 2018-Mar)
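[Editorial note, not part of the original thread: one plausible explanation for the gap, offered here as an assumption rather than a confirmed answer, is ZFS's slop-space reservation, which withholds roughly SIZE >> spa_slop_shift (default shift 5, i.e. 1/32 of the pool) from AVAIL. A minimal sketch checking the arithmetic against the numbers above:]

```python
# Sketch: compare the FREE-AVAIL gap against ZFS's default slop-space
# reservation (pool size >> spa_slop_shift, default shift 5, i.e. 1/32).
# The slop-space explanation is an assumption, not confirmed in the thread.

SIZE  = 213_674_622_976   # zpool list -p SIZE
FREE  = 142_598_967_296   # zpool list -p FREE
AVAIL = 135_923_593_216   # zfs list -p AVAIL

gap  = FREE - AVAIL       # the unexplained difference
slop = SIZE >> 5          # reservation with default spa_slop_shift = 5

print(f"gap  = {gap:_}")   # 6_675_374_080
print(f"slop = {slop:_}")  # 6_677_331_968
print(f"off by {abs(gap - slop):_} bytes "
      f"({abs(gap - slop) / slop:.3%} of the reservation)")
```

The two figures agree to within about 2 MB, which is why the slop-space reservation (tunable on FreeBSD via the vfs.zfs.spa_slop_shift sysctl) is the assumption sketched here.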