From owner-freebsd-fs@FreeBSD.ORG Tue Nov 18 17:49:00 2014
Message-ID: <546B8203.5040607@platinum.linux.pl>
Date: Tue, 18 Nov 2014 18:29:39 +0100
From: Adam Nowacki
To: freebsd-fs@freebsd.org
Subject: Re: No more free space after upgrading to 10.1 and zpool upgrade
References: <20141116080128.GA20042@exhan.dylanleigh.net> <20141118054443.GA40514@core.summit>
In-Reply-To: <20141118054443.GA40514@core.summit>
List-Id: Filesystems

On 2014-11-18 06:44, Emil Mikulic wrote:
> On Sun, Nov 16, 2014 at 04:10:28PM +0100, Olivier Cochard-Labbé wrote:
>> On Sun, Nov 16, 2014 at 9:01 AM, Dylan Leigh wrote:
>>>
>>> Could you provide some other details about the pool structure/config,
>>> including the output of "zpool status"?
>>>
>> It's a raidz1 pool built with 5 SATA 2TB drives, and there are 5 zvolumes
>> without advanced features (no compression, no snapshots, no dedup, etc.).
>> Because it's a raidz1 pool, I know that the FREE space reported by "zpool
>> list" includes redundancy overhead and is bigger than the AVAIL space
>> reported by "zfs list".
>>
>> I moved about 100GB (one hundred gigabytes) of files, and after this step
>> there were only 2GB (two gigabytes) of free space left. How is that possible?
>
> I had the same problem. Very old pool:
>
> History for 'jupiter':
> 2010-01-20.20:46:00 zpool create jupiter raidz /dev/ad10 /dev/ad12 /dev/ad14
>
> I upgraded FreeBSD 8.3 to 9.0, which I think went fine, but when I upgraded
> to 10.1, I had 0B AVAIL according to "zfs list" and df(1), even though there
> was free space according to "zpool list":
>
> # zpool list -p jupiter
> NAME     SIZE           ALLOC          FREE          FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
> jupiter  4466765987840  4330587288576  136178699264  30%  -        96  1.00x ONLINE -
>
> # zfs list -p jupiter
> NAME     USED           AVAIL       REFER  MOUNTPOINT
> jupiter  2884237136220  0           46376  /jupiter
>
> Deleting files, snapshots, and child filesystems didn't help; AVAIL stayed at
> zero bytes... until I deleted enough:
>
> NAME     SIZE           ALLOC          FREE          FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
> jupiter  4466765987840  4320649953280  146116034560  30%  -        96  1.00x ONLINE -
>
> NAME     USED           AVAIL       REFER  MOUNTPOINT
> jupiter  2877618732010  4350460950  46376  /jupiter
>
> Apparently, the above happened somewhere between 96.0% and 96.9% used.
>
> Any ideas what happened here?
> It's almost like 100+GB of free space is somehow
> reserved by the system (and I don't mean "zfs set reservation", those are
> all "none").

This commit is to blame:
http://svnweb.freebsd.org/base?view=revision&revision=268455

It reserves 1/32 (3.125%) of the pool's space, which is held back from what
"zfs list" reports as AVAIL.
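The numbers in the output above line up reasonably well with that reservation. Here's a rough back-of-the-envelope check in Python, assuming a 3-disk raidz1 (so roughly 2/3 of raw space is usable data space) and a flat 1/32 reservation applied to usable space; the real ZFS accounting differs slightly, so expect small discrepancies:

```python
# Figures taken from the "zpool list -p" / "zfs list -p" output above.
POOL_SIZE   = 4466765987840   # raw pool size in bytes
FREE_BEFORE = 136178699264    # raw free space when AVAIL showed 0
FREE_AFTER  = 146116034560    # raw free space after deleting more files
DATA_FRACTION = 2 / 3         # 3-disk raidz1: 2 data disks per stripe (assumption)

# r268455 reserves 1/32 (3.125%) of the pool as slop space.
slop = POOL_SIZE * DATA_FRACTION / 32

# AVAIL is roughly usable free space minus the reservation, floored at 0.
avail_before = max(0, FREE_BEFORE * DATA_FRACTION - slop)
avail_after  = max(0, FREE_AFTER  * DATA_FRACTION - slop)

print(int(avail_before))  # 0 -- matches the 0B AVAIL that was reported
print(int(avail_after))   # ~4.35e9, close to the reported 4350460950
```

So with ~136GB raw free, the usable free space (~90.8GB) was entirely below the ~93GB reservation, which is why AVAIL read as zero until enough was deleted to get back over the line.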