Date: Thu, 26 Apr 2007 11:15:52 -0500
From: Dan Nelson <dnelson@allantgroup.com>
To: Barry Pederson <bp@barryp.org>
Cc: freebsd-current@freebsd.org, Alexandre Biancalana <biancalana@gmail.com>
Subject: Re: zfs: df and zpool list report different size
Message-ID: <20070426161551.GH50353@dan.emsphone.com>
In-Reply-To: <4630B9E5.9000606@barryp.org>
References: <8e10486b0704260701w3a6ca86hb833de23849514df@mail.gmail.com> <4630B9E5.9000606@barryp.org>
In the last episode (Apr 26), Barry Pederson said:
> Alexandre Biancalana wrote:
> > I updated one machine to -CURRENT (yesterday), and now I'm creating a zfs
> > filesystem using the following devices:
> >
> > ad9: 305245MB <Seagate ST3320620AS 3.AAE> at ata4-slave SATA150
> > ad11: 305245MB <Seagate ST3320620AS 3.AAE> at ata5-slave SATA150
> >
> > Next I created the pool:
> >
> > # zpool create backup raidz ad9 ad11
> > # mount
> > /dev/ad8s1a on / (ufs, local)
> > devfs on /dev (devfs, local)
> > backup on /backup (zfs, local)
> >
> > # df -h
> > Filesystem     Size    Used   Avail  Capacity  Mounted on
> > /dev/ad8s1a     72G    2.2G     64G     3%     /
> > devfs          1.0K    1.0K      0B   100%     /dev
> > backup         293G      0B    293G     0%     /backup
> >
> > # zpool list
> > NAME      SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
> > backup    596G   222K    596G    0%   ONLINE   -
> >
> > My doubt is why zpool list and df -h report different sizes.  Which of
> > them is correct and which should I trust?
>
> The zpool size is correct in totalling up the usable size on the
> pool's drives, but it's not telling you how much is taken up by
> redundancy, so it's probably not a useful number to you.
>
> The "df -h" is also correct and probably more useful.  "zfs list"
> should show a similar useful number.

That looks like bug 6308817, "discrepancy between zfs and zpool space
accounting": "zpool list" is including the parity disk space when it
shouldn't.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6308817

"zfs list" should give you the same info as "df -k".  Note that a
2-disk raidz is really an inefficient way of creating a mirror, so the
"workaround" in your case might just be to drop your raidz vdev and
replace it with a mirror.

-- 
	Dan Nelson
	dnelson@allantgroup.com
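For anyone following the suggested workaround, the steps might look roughly
like the sketch below.  This is only an illustration, not part of the original
message: it assumes the pool is still named "backup", uses the same devices
ad9 and ad11 from the report, and assumes the pool is empty or freshly backed
up, because "zpool destroy" discards everything stored in it.

  # zpool destroy backup                 (removes the 2-disk raidz pool and all its data)
  # zpool create backup mirror ad9 ad11  (recreate the pool as a 2-way mirror)
  # zpool status backup                  (confirm the new vdev layout)
  # zfs list backup
  # df -h /backup

With a mirror there is no parity space for "zpool list" to overcount, so
"zpool list", "zfs list", and "df -h" should then report roughly the same
usable size.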