From: Darren Pilgrim <list_freebsd@bluerosetech.com>
Date: Tue, 20 Aug 2013 13:19:57 -0400
To: kpneal@pobox.com
Cc: freebsd-fs@freebsd.org
Subject: Re: du which understands ZFS
Message-ID: <5213A53D.4010701@bluerosetech.com>
In-Reply-To: <20130820165452.GA76782@neutralgood.org>
List-Id: Filesystems <freebsd-fs@freebsd.org>

On 8/20/2013 12:54 PM, kpneal@pobox.com wrote:
> On Tue, Aug 20, 2013 at 10:01:48AM -0400, gtodd@bellanet.org wrote:
>>
>> On Tue, 20 Aug 2013, Mark Felder wrote:
>>> On Mon, Aug 19, 2013, at 22:51, aurfalien wrote:
>>>> Hi,
>>>>
>>>> Is there a version of du which understands ZFS?
>>>>
>>>> Currently when running du I get this;
>>>>
>>>> Filesystem       Size  Used  Avail  Capacity  Mounted on
>>>> abyss            51T   50k   51T    0%        /abyss
>>>> abyss/PROJECT    72T   20T   51T    29%       /abyss/PROJECTS
>>>> abyss/PROJECTX   54T   2.6T  51T    5%        /abyss/PROJECTSX
>>>>
>>>> The zpool of abyss is 75TB in size.
>>>>
>>> And do you want df to be aware of compression and deduplication, too?
>>> I don't think this will show up in FreeBSD's df. Use the tools that
>>> ZFS provides and you'll never get any unexpected surprises.
>>
>> I think if there were a "ZFS-aware" df, it would show filesystem
>> statistics for UFS and other "traditional" filesystems, but if/when it
>> detected ZFS it would output something like:
>>
>> "ZFS - free disk space does not apply" :-)
>
> Well, except that that isn't really accurate. It looks like a df of
> ZFS shows a size of used+avail. You'll see something similar if you
> NFS-mount an exported filesystem.
>
>> or maybe some more helpful message about using zpool(8), zfs(8), etc.
>
> How about a short blurb along the lines of:
> "The results from df are approximations that do not take into account
> features supported by some filesystems, such as compression or
> deduplication. Please refer to your filesystem's documentation for
> filesystem-specific details."
>
> Perhaps throw in something about filesystem-specific metadata also
> throwing off the numbers. This is true for UFS as well if indirect
> blocks are needed, correct?

UFS with soft updates also has the fun side effect that it's possible to
have negative free space. :)

df is a nice tool for rough approximations of usage, but it is far too
high-level to provide the kind of reporting the OP wants; ZFS's own
tools must be used for that. If we made df ZFS-aware, we'd open the door
to making it SMB-, NFS-, soft-updates-, and FUSE-aware as well.
Personally, I'd rather not have the lumbering monstrosity that df would
become.

P.S.
I'd like to offer this bit of humour, an excerpt from `df -ciH`:

Filesystem   Size   Used  Avail  Capacity  iused  ifree  %iused
tank         807G   38k   807G   0%            7   1.6G      0%
total        260T   1.1T  259T   0%         1.4M   505G      0%

A quarter petabyte and half a trillion inodes. Pretty impressive for a
RAID-Z2 of four 1 TB disks. ;)
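P.P.S. Since the thread turns on how df derives its numbers for ZFS,
here's a small sketch of both effects in Python. The "abyss" figures
are taken from the df output quoted above (in TB); the pool in the
second part (321 datasets, ~807G free) is invented purely to illustrate
how a `df -c` total can balloon the way the excerpt above does.

```python
# Sketch: how df arrives at its numbers on ZFS.
#
# Each ZFS dataset reports Size ~= its own Used + the pool's shared
# free space, so per-dataset Sizes can exceed the pool's capacity.
# Values below (in TB) come from the df output quoted in the thread.
pool_free = 51  # TB of shared free space in the "abyss" pool
datasets = {"abyss": 0.00005, "abyss/PROJECT": 20, "abyss/PROJECTX": 2.6}

for name, used in datasets.items():
    # Matches the quoted df Size column, modulo df's own rounding.
    print(f"{name}: Size ~= {used + pool_free:.1f}T")

# `df -c` then sums those per-dataset Sizes, so the shared free space
# is counted once per dataset. Hypothetical pool: ~807G free with 321
# datasets (numbers invented for illustration, not from the thread).
free_gb, used_gb = 807, [0.038] + [3.5] * 320
total_gb = sum(u + free_gb for u in used_gb)
print(f"df -c total Size ~= {total_gb / 1024:.0f}T from well under 4T of disk")
```

The same double counting is why the per-dataset Sizes in the original
question (51T + 72T + 54T) already sum to more than twice the 75TB pool.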