Date: Tue, 03 Apr 2012 11:44:08 +0300
From: Volodymyr Kostyrko <c.kworr@gmail.com>
To: Peter Maloney <peter.maloney@brockmann-consult.de>
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS v28 and free space leakage
Message-ID: <4F7AB858.3030709@gmail.com>
In-Reply-To: <4F75E05D.2060206@brockmann-consult.de>
References: <4F75C7EC.30606@gmail.com> <4F75E05D.2060206@brockmann-consult.de>
Peter Maloney wrote:
> I think you ran zpool list... Does zfs list show the same?
> zfs list -rt all kohrah1
NAME      USED  AVAIL  REFER  MOUNTPOINT
kohrah1  22,5M   134G    31K  /kohrah1
> Do you have any snapshots or clones?
None.
> What sort of vdevs do you have?
pool: kohrah1
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Fri Mar 30 17:25:16 2012
config:
NAME          STATE     READ WRITE CKSUM
kohrah1       ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    da3       ONLINE       0     0     0
    da0       ONLINE       0     0     0
errors: No known data errors
> Does creating an empty pool show 0 used? What about after adding more
> datasets?
As I have a mirrored pool, I'll split it for now and run some tests on
the other disk.
# zpool split kohrah1 kohrah1new
# zpool import kohrah1new
# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1       136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new    136G  21,7M   136G     0%  1.00x  ONLINE  -
# zpool status
pool: kohrah1
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Fri Mar 30 17:25:16 2012
config:
NAME          STATE     READ WRITE CKSUM
kohrah1       ONLINE       0     0     0
  da3         ONLINE       0     0     0
errors: No known data errors
pool: kohrah1new
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Fri Mar 30 17:25:16 2012
config:
NAME          STATE     READ WRITE CKSUM
kohrah1new    ONLINE       0     0     0
  da0         ONLINE       0     0     0
errors: No known data errors
# zpool destroy kohrah1new
# zpool create -O compression=on -O atime=off kohrah1new da0
# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1       136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new    136G   110K   136G     0%  1.00x  ONLINE  -
Fine for me now; 110K seems reasonable.
> Do you have datasets? They might use some for metadata.
None, as shown above.
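If it helps, the per-dataset space breakdown should confirm this; just a sketch, I haven't pasted the output here:
# zfs list -o space kohrah1
With a single dataset and no snapshots that mostly just shows whether the 22,5M is charged to the dataset itself (USEDDS) or to children (USEDCHILD).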
> Here begins the guessing and/or babbling...
>
> And I haven't tried this with zfs, but I know with ext on Linux, if you
> fill up a directory, and delete all the files in it, the directory takes
> more space than before it was filled (du will include this space when
> run). So be very thorough with how you calculate it. Maybe zfs did the
> same thing with metadata structures, and just left them allocated empty
> (just a guess).
>
> To prove there is a leak, you would need to fill up the disk, delete
> everything, and then fill it again to see if it fit less. If I did such
> a test and it was the same, I would just forget about the problem.
kk, throwing junk in:
# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1       136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new    136G  2,29G   134G     1%  1.00x  ONLINE  -
# find /kohrah1new/ | wc -l
150590
# rm -rf /kohrah1new/*
# find /kohrah1new/
/kohrah1new/
# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1       136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new    136G   436K   136G     0%  1.00x  ONLINE  -
Not exactly the test you asked for, but it seems that ZFS leaks space on
metadata. Or re-uses it. Repeating the same steps results in:
# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1       136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new    136G   336K   136G     0%  1.00x  ONLINE  -
So this feels like some leftover.
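For completeness, the exact fill/delete/refill comparison you describe could be scripted roughly like this (just a sketch; /usr/src stands in for whatever junk fills the pool, and you'd repeat the copy until the pool is close to full):
# cp -R /usr/src /kohrah1new/run1
# zpool list kohrah1new
# rm -rf /kohrah1new/run1
# zpool list kohrah1new
# cp -R /usr/src /kohrah1new/run2
# zpool list kohrah1new
If ALLOC after the second fill ends up noticeably higher than after the first, that would point at a real leak rather than at reused metadata.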
> Perhaps another interesting experiment would be to zfs send the pool to
> see if the destination pool ends up in the same state.
This one is interesting:
# zfs snapshot kohrah1@test
# zfs send kohrah1@test | zfs receive -F kohrah1new
# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1       136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new    136G   248K   136G     0%  1.00x  ONLINE  -
So it frees up some space.
If I do this on a clean pool:
# zpool destroy kohrah1new
# zpool create -O compression=on -O atime=off kohrah1new da0
# zfs send kohrah1@test | zfs receive -F kohrah1new
# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1       136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new    136G  92,5K   136G     0%  1.00x  ONLINE  -
So the dump doesn't contain any leftover.
However, the pool counts the leftover as data and replicates it:
# zpool destroy kohrah1new
# zpool attach kohrah1 da3 da0
# zpool status
pool: kohrah1
state: ONLINE
scan: resilvered 22,4M in 0h0m with 0 errors on Tue Apr 3 11:34:31 2012
config:
NAME          STATE     READ WRITE CKSUM
kohrah1       ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    da3       ONLINE       0     0     0
    da0       ONLINE       0     0     0
errors: No known data errors
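If anyone wants to see what the leftover 22,4M actually is, zdb's block accounting should give a per-object-type breakdown; something like this (a sketch only, I haven't included the output here):
# zdb -bb kohrah1
That traverses all referenced blocks and reports how much space each object type accounts for, and since it compares the traversal against the space maps, a real leak should show up as leaked space.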
--
Sphinx of black quartz judge my vow.
