Date: Thu, 18 Dec 2008 11:10:33 +1100
From: Andrew Snow <andrew@modulus.org>
Cc: freebsd-fs@freebsd.org
Subject: Re: More on ZFS filesystem sizes.
Message-ID: <494994F9.4010105@modulus.org>
In-Reply-To: <20081217231757.GE27041@lor.one-eyed-alien.net>
References: <5f67a8c40812171351j66dc5484pee631198030a5739@mail.gmail.com> <20081217231757.GE27041@lor.one-eyed-alien.net>
> I now have another quandary. I have ZFS on my laptop (two drives, mirrored)
> and I "zfs send" backups to my big array (6 drives, raid-Z1). The problem
> is that they don't match up

As you know, ZFS uses variable block sizes from 512 bytes to 128 KB, with every power of two in between. Each block carries a fair chunk of metadata to go with it (those 128-byte block pointers aren't very space efficient!).

I suspect what you're seeing is due to fragmentation: with copy-on-write and snapshots, large blocks can be replaced by smaller ones when a file is partially updated, but the data can be written out more efficiently on the receiving end of a send/receive, since only the data actually referenced needs to be stored. Given all of that, your numbers are only off by 1 to 1.5%, so is it really that surprising?

Regarding du on ZFS: it calculates its result from the number of blocks consumed by the file, after compression and excluding metadata, parity, and checksums. /usr/ports is full of tiny, compressible files, which gives a large ratio of metadata to actual file data. "zfs list", on the other hand, reports the space consumed including metadata, parity, and checksums. (Also, filesystem metadata is stored twice by default, or three times optionally, on top of whatever RAID redundancy you are using.) A quick way to see the difference on your own datasets is sketched in the P.S. below.

So it looks weird, but I believe what you're seeing is normal. Maybe you need special ZFS sunglasses which black out whenever you start trying to look at what ZFS is doing to your files :-)

- Andrew
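
P.S. If you want to poke at this yourself, something along these lines should show the gap. This is only a rough sketch: "tank/ports", "tank/home" and "backup/laptop-home" are placeholder dataset names for whatever you actually have, and the "copies" property may not be present on older ZFS versions.

    # What the dataset reports (includes metadata, checksums, extra copies)
    # versus what du adds up from the file blocks alone:
    zfs list -o name,used,referenced,compressratio tank/ports
    du -sh /usr/ports

    # Properties that feed into the difference: block size, extra copies,
    # and compression:
    zfs get recordsize,copies,compression tank/ports

    # Compare the laptop dataset against the received copy on the big array:
    zfs list -o name,used,referenced,compressratio tank/home backup/laptop-home

The "used" and "referenced" numbers from zfs list will generally sit a little above what du reports, for the reasons above.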