From: Alan Somers
Date: Wed, 12 Apr 2017 13:20:37 -0600
Subject: Re: zpool list show nonsense on raidz pools, at least it looks like it for me
To: "Eugene M. Zheganin"
Cc: FreeBSD, FreeBSD FS

On Wed, Apr 12, 2017 at 12:01 PM, Eugene M. Zheganin wrote:
> Hi,
>
> This isn't the first letter in which I fail to understand the space
> usage reported by the zfs utilities, and in the previous ones I was
> more or less convinced that I was simply reading it wrong, but not
> this time, I guess.  See for yourself:
>
> [emz@san01:~]> zpool list data
> NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> data  17,4T  7,72T  9,66T         -    46%    44%  1.00x  ONLINE  -
>
> Here, as I understand it, zpool says that less than half of the pool
> is used.
> As far as I know this gets complicated when it comes to raidz pools.
> Let's see:
>
> [emz@san01:~]> zfs list -t all data
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> data  13,3T   186G  27,2K  /data
>
> So, if we don't investigate further, it looks like only 186G is free.
> Spoiler: this is the real amount of free space, because I've just
> managed to free 160 gigs of data, and I know I really was short on
> space when sending a 30 Gb dataset, because zfs was saying "Not enough
> free space".  So, let's investigate further:
>
> [emz@san01:~]> zfs list -t all | more
> NAME                                USED  AVAIL  REFER  MOUNTPOINT
> data                               13,3T   186G  27,2K  /data
> data/esx                           5,23T   186G  27,2K  /data/esx
...
> data/esx/boot-esx26                8,25G   194G  12,8K  -
> data/esx/shared                    5,02T  2,59T  2,61T  -
> data/reference                     6,74T  4,17T  2,73T  -
> data/reference@ver7_214             127M      -  2,73T  -
> data/reference@ver2_739            12,8M      -  2,73T  -
> data/reference@ver2_740            5,80M      -  2,73T  -
> data/reference@ver2_741            4,55M      -  2,73T  -
> data/reference@ver2_742             993K      -  2,73T  -
> data/reference-ver2_739-worker100  1,64G   186G  2,73T  -
...
>
> This is getting really complicated now.
>
> What I don't understand is:
>
> - why does the amount of free space change from dataset to dataset?  I
> mean, they all share the same pool of free space and all have the same
> refreservation=none, but AVAIL differs.  For the workerX datasets it
> differs slightly, but for the large zvols, like esx/shared or
> reference, it differs a lot!
>
> - why are the esx/shared and reference datasets shown as if they could
> still be enlarged?  I mean, I really don't have THAT much free space.
>
> Here are their properties:
>
> [emz@san01:~]> zfs get all data/esx/shared
> NAME             PROPERTY        VALUE   SOURCE
...
> data/esx/shared  refreservation  5,02T   local
...
> [emz@san01:~]> zfs get all data/reference
> NAME            PROPERTY        VALUE   SOURCE
...
> data/reference  refreservation  3,98T   local
...
>
> Could someone please explain why they are shown as having something
> like half of the total pool space AVAIL?  I think this is directly
> related to the fact that zpool list shows only 44% of the total pool
> space as used.  I use this value to monitor the pool space usage, so
> it looks like I'm totally failing at that.
>

Some of your datasets have refreservations.  That's why.

> I also don't understand why a zvol of size 3.97T really uses 6.74T of
> space.  I found an article explaining that the volblocksize and the
> sector size have something to do with this, and that it happens when
> the device block size is 4k and volblocksize is the default 8k.  My
> disks' native sector size is 512, so this is really not the case.  I
> also have an equal number of disks in each vdev, and there are 5 of
> them:
>

The AVAIL reported by zpool list doesn't account for RAIDZ overhead (or
maybe it assumes optimum alignment; I can't remember).  But the USED
reported by "zfs list" does account for RAIDZ overhead.

-alan
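
P.S.  A couple of commands that might make the accounting easier to
follow.  These are just the standard flags, and the pool/dataset names
are the ones from your paste:

    # raw capacity per vdev; for raidz this is (as far as I remember)
    # counted before parity is subtracted, which is what "zpool list"
    # reports at the pool level
    zpool list -v data

    # per-dataset breakdown of USED, including refreservation charges
    zfs list -o space -r data

    # which datasets actually carry a refreservation
    zfs get -r refreservation data

In the "zfs list -o space" output, USEDREFRESERV is the part of USED
that exists only because of a refreservation, while USEDDS and USEDSNAP
show what the data and snapshots themselves consume, with the raidz
overhead already included.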