From: Johannes Totz
To: freebsd-fs@freebsd.org
Subject: Re: zfs: the exponential file system from hell
Date: Tue, 01 Oct 2013 11:14:02 +0100
In-Reply-To: <20130930234401.GA68360@neutralgood.org>
References: <52457A32.2090105@fsn.hu> <77F6465C-4E76-4EE9-88B5-238FFB4E0161@sarenet.es> <20130930234401.GA68360@neutralgood.org>

On 01/10/2013 00:44, kpneal@pobox.com wrote:
> On Mon, Sep 30, 2013 at 11:07:33AM +0200, Borja Marcos wrote:
>>
>> On Sep 27, 2013, at 2:29 PM, Attila Nagy wrote:
>>
>>> Hi,
>>>
>>> Did anyone try to fill a zpool with multiple zfs in it and graph the
>>> space accounted by df and zpool list?
>>> If not, here it is:
>>> https://picasaweb.google.com/104147045962330059540/FreeBSDZfsVsDf#5928271443977601554
>>
>> There is a fundamental problem with "df" and ZFS. df is based on the
>> assumption that each file system has a fixed maximum size (generally the
>> size of the disk partition on which it resides).
>>
>> Anyway, in a system with variable datasets "df" is actually meaningless
>> and you should rely on "zpool list", which gives you the real size,
>> allocated space, free space, etc.
>>
>> % zpool list
>> NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
>> pool  1.59T   500G  1.11T  30%  1.00x  ONLINE  -
>> %
>
> Well, not quite. The 'zpool' command works at a lower level of abstraction
> than the 'zfs' command. And zpool has a quirk where the amount of space
> used and available is only accurate for mirrors or single-disk vdevs, but
> for raidz* it does not factor in space used for redundancy. (This does not
> make it _wrong_, you just have to understand what it is telling you.)

I'd say this is a design flaw in zfs, though. One motivation for having it
was to do away with all the layering in the storage stack and have
something integrated.

Does somebody have a use case where the numbers reported by zpool for
(free/used) space are actually useful?
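
For scripts that just want df-style per-dataset numbers, something along
these lines (a rough sketch; the option spellings should be double-checked
against your zfs/zpool version) queries the dataset layer instead of the
pool layer:

  # per-dataset space, machine-readable (tab-separated, full property names)
  zfs list -H -o name,used,available,referenced -t filesystem

  # pool-level raw numbers, for comparison
  zpool list -H -o name,size,allocated,free,capacity

The zfs figures are what the datasets actually see; for raidz vdevs the
zpool figures include the parity overhead.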
> For example, I have two pools here, one of which (aursys) is a two way
> mirror, and the other (aurd0) is a 6-drive raidz2.
>
> [kpn@aurora ~]$ zpool list
> NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
> aurd0   4.91T  3.21T  1.70T  65%  1.00x  ONLINE  -
> aursys   278G  84.7G   193G  30%  1.00x  ONLINE  -
>
> [kpn@aurora ~]$ zfs list -o space aurd0 aursys
> NAME    AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> aurd0   1.08T  2.14T     4.00K   59.9K             1G      2.14T
> aursys   189G  85.7G         0   44.5K             1G      84.7G
>
> See that the zfs command says aurd0 has used 2.14T of space while the zpool
> command says it has used 3.21T? But aursys (the mirror) has numbers that
> roughly match.
>
> Since 'zfs' works above the pool level it gives accurate sizes no matter
> what kind of redundancy (if any) you are using.
>
> Bottom line:
> The replacement for the 'df' command when using ZFS is 'zfs list'.
>
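
(As a rough sanity check on the aurd0 numbers above, ignoring raidz
allocation padding and metadata: a 6-disk raidz2 spends 2 disks' worth of
each full-width stripe on parity, so only about 4/6 of the raw allocation
is user data, and

  3.21T * 4/6 ~= 2.14T

which is pretty much exactly the gap between the zpool and zfs figures.
The mirror shows no such gap because zpool list already reports a mirror
at its usable, single-copy size.)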