Date: Tue, 01 Oct 2013 11:14:02 +0100
From: Johannes Totz <johannes@jo-t.de>
To: freebsd-fs@freebsd.org
Subject: Re: zfs: the exponential file system from hell
Message-ID: <l2e78v$ipa$2@ger.gmane.org>
In-Reply-To: <20130930234401.GA68360@neutralgood.org>
References: <52457A32.2090105@fsn.hu> <77F6465C-4E76-4EE9-88B5-238FFB4E0161@sarenet.es> <20130930234401.GA68360@neutralgood.org>
On 01/10/2013 00:44, kpneal@pobox.com wrote:
> On Mon, Sep 30, 2013 at 11:07:33AM +0200, Borja Marcos wrote:
>>
>> On Sep 27, 2013, at 2:29 PM, Attila Nagy wrote:
>>
>>> Hi,
>>>
>>> Did anyone try to fill a zpool with multiple zfs in it and graph the
>>> space accounted by df and zpool list?
>>> If not, here it is:
>>> https://picasaweb.google.com/104147045962330059540/FreeBSDZfsVsDf#5928271443977601554
>>
>> There is a fundamental problem with "df" and ZFS. df is based on the
>> assumption that each file system has a fixed maximum size (generally
>> the size of the disk partition on which it resides).
>>
>> Anyway, in a system with variable datasets "df" is actually meaningless
>> and you should rely on "zpool list", which gives you the real size,
>> allocated space, free space, etc.
>>
>> % zpool list
>> NAME   SIZE  ALLOC   FREE   CAP  DEDUP  HEALTH  ALTROOT
>> pool  1.59T   500G  1.11T   30%  1.00x  ONLINE  -
>> %
>
> Well, not quite. The 'zpool' command works at a lower level of abstraction
> than the 'zfs' command. And zpool has a quirk where the amount of space
> used and available is only accurate for mirrors or single-disk vdevs, but
> for raidz* it does not factor in space used for redundancy. (This does not
> make it _wrong_, you just have to understand what it is telling you.)

I'd say this is a design flaw in ZFS, though. One motivation for having it
was to do away with all the layering in the storage stack and have
something integrated. Does anybody have a use case where the numbers
reported by zpool for (free/used) space are actually useful?

> For example, I have two pools here, one of which (aursys) is a two-way
> mirror, and the other (aurd0) is a 6-drive raidz2.
>
> [kpn@aurora ~]$ zpool list
> NAME     SIZE  ALLOC   FREE   CAP  DEDUP  HEALTH  ALTROOT
> aurd0   4.91T  3.21T  1.70T   65%  1.00x  ONLINE  -
> aursys   278G  84.7G   193G   30%  1.00x  ONLINE  -
>
> [kpn@aurora ~]$ zfs list -o space aurd0 aursys
> NAME    AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> aurd0   1.08T  2.14T     4.00K   59.9K             1G      2.14T
> aursys   189G  85.7G         0   44.5K             1G      84.7G
>
> See that the zfs command says aurd0 has used 2.14T of space while the
> zpool command says it has used 3.21T? But aursys (the mirror) has
> numbers that roughly match.
>
> Since 'zfs' works above the pool level it gives accurate sizes no matter
> what kind of redundancy (if any) you are using.
>
> Bottom line:
> The replacement for the 'df' command when using ZFS is 'zfs list'.
>
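For what it's worth, the gap between the two USED figures for aurd0 is
roughly the raidz2 parity overhead: in a 6-drive raidz2, about 2 of every 6
drives' worth of raw space goes to parity, so usable space is around 4/6 of
what zpool reports. A back-of-the-envelope sketch (my own illustration; it
ignores allocation padding, metadata copies and reservations, so real pools
will deviate a bit):

```python
# Rough illustration: converting raw raidz space (what 'zpool list'
# reports, including parity) to usable space (roughly what 'zfs list'
# reports). Ignores padding, metadata overhead and reservations.

def raidz_usable(raw, ndrives, parity):
    """Approximate usable space for an ndrives-wide raidz vdev
    with the given parity level, from the raw allocated space."""
    return raw * (ndrives - parity) / ndrives

# aurd0: 6-drive raidz2, zpool says 3.21T allocated.
print(round(raidz_usable(3.21, 6, 2), 2))  # ~2.14, close to the USED zfs reports
```

For a mirror the same reasoning gives a factor of 1 per mirrored copy, which
is why aursys's numbers from the two commands roughly agree.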