From: Freddie Cash <fjwcash@gmail.com>
To: Johannes Totz
Cc: freebsd-fs@freebsd.org
Date: Fri, 13 Apr 2012 08:45:56 -0700
Subject: Re: ZFS and disk usage

On Fri, Apr 13, 2012 at 8:27 AM, Johannes Totz wrote:
> Without checking the numbers myself...
> Note that zpool and zfs do not agree on (free) space accounting: zpool
> shows "raw" space, whereas zfs includes metadata overhead for itself.
>
> Small rant: I don't understand why zpool and zfs show different things.
> If you have an integrated storage stack, then why not show consistent
> numbers? Is there any use for this extra (mis-)information that
> zpool-vs-zfs provides?

There's a great posting about the differences in the zfs-discuss mailing
list archives, although I can't find a reference to it at the moment.
Going from memory, the breakdown is something like this:

zpool shows "raw storage available to the pool across all vdevs", before
any redundancy is subtracted. This should be approximately "size of
drives * number of drives".

zfs shows "storage space available for use", after removing all
redundancy and the extra space for metadata, checksums, etc. This is
what's available for programs to use, before compression and dedupe take
effect.

df shows "storage space available to userspace programs", after
compression and dedupe have taken effect and metadata, checksums, etc.
have been accounted for. This is the actual space that users can access.

"ls -l" shows the "size of files" (as in, uncompressed, rehydrated, the
size it would be if you copied it to a floppy).

They each work on a different layer of the storage stack (DMU, ZPL,
userspace, etc.). Hence, they show different values. But once you think
about what each layer of the stack is doing ... the numbers make perfect
sense.

--
Freddie Cash
fjwcash@gmail.com
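
To make the layering concrete, here is a minimal sketch of how the four
views can be compared on a live system. The pool name "tank", the file
path, and the example sizes are hypothetical, chosen only to illustrate
which layer each command reports on; they are not taken from this thread.

  zpool list tank          # raw space across all vdevs, parity/mirror copies included
  zfs list tank            # usable space, with redundancy and metadata overhead removed
  df -h /tank              # space as seen by userland, after compression/dedupe take effect
  ls -lh /tank/somefile    # logical ("rehydrated") size of a single file

  # For instance, a hypothetical 6 x 2 TB raidz2 vdev: zpool list would
  # report roughly 10.9T (all six disks counted), while zfs list would
  # show roughly 7.1T available, because two disks' worth of parity and
  # some metadata overhead have already been subtracted.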