From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 07:13:17 2011
From: Dr Josef Karthauser <josef.karthauser@unitedlane.com>
To: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 08:13:44 +0100
Subject: Re: ZFS Problem - full disk, can't recover space :(.
In-Reply-To: <20110326225430.00006a76@unknown>
Message-Id: <3BBB1E36-8E09-4D07-B49E-ACA8548B0B44@unitedlane.com>
References: <9CF23177-92D6-40C5-8C68-B7E2F88236E6@unitedlane.com> <20110326225430.00006a76@unknown>

On 26 Mar 2011, at 21:54, Alexander Leidinger wrote:

>> Any idea on where the 23G has gone, or how I persuade the zpool to
>> return it? Why is the filesystem referencing storage that isn't being
>> used?
>
> I suggest a
>     zfs list -r -t all void/store
> to make really sure we/you see what we want to see.
>
> Can it be that an application has the 23G still open?
>
>> p.s. this is FreeBSD 8.2 with ZFS pool version 15.
>
> The default setting of showing snapshots or not changed somewhere. As
> long as you didn't configure the pool to show snapshots (zpool get
> listsnapshots <pool>), they are not shown by default.
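(As a rough sketch of how those two checks could be run on this box, taking the pool name "void" and the suspect mountpoint /j/legacy-alpha from the listing below; fstat is in the FreeBSD base system and lsof is sysutils/lsof from ports:

    # is the pool configured to include snapshots in zfs list output?
    zpool get listsnapshots void

    # every file currently open on the filesystem that contains /j/legacy-alpha
    fstat -f /j/legacy-alpha

    # if lsof is installed: only files that are open but already unlinked
    lsof +L1 /j/legacy-alpha

Anything turning up in the last two would explain space that is still referenced even though nothing on disk appears to use it.)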
Definitely no snapshots:

infinity# zfs list -tall
NAME                           USED  AVAIL  REFER  MOUNTPOINT
void                          99.1G  24.8G  2.60G  legacy
void/home                     33.5K  24.8G  33.5K  /home
void/j                        87.5G  24.8G    54K  /j
void/j/buttsby                 136M  9.87G  2.40M  /j/buttsby
void/j/buttsby/home           34.5K  9.87G  34.5K  /j/buttsby/home
void/j/buttsby/local           130M  9.87G   130M  /j/buttsby/local
void/j/buttsby/tmp             159K  9.87G   159K  /j/buttsby/tmp
void/j/buttsby/var            3.97M  9.87G   104K  /j/buttsby/var
void/j/buttsby/var/db         2.40M  9.87G  1.55M  /j/buttsby/var/db
void/j/buttsby/var/db/pkg      866K  9.87G   866K  /j/buttsby/var/db/pkg
void/j/buttsby/var/empty        21K  9.87G    21K  /j/buttsby/var/empty
void/j/buttsby/var/log         838K  9.87G   838K  /j/buttsby/var/log
void/j/buttsby/var/mail        592K  9.87G   592K  /j/buttsby/var/mail
void/j/buttsby/var/run        30.5K  9.87G  30.5K  /j/buttsby/var/run
void/j/buttsby/var/tmp          23K  9.87G    23K  /j/buttsby/var/tmp
void/j/legacy-alpha           56.6G  3.41G  56.6G  /j/legacy-alpha
void/j/legacy-brightstorm     29.2G  10.8G  29.2G  /j/legacy-brightstorm
void/j/legacy-obleo           1.29G  1.71G  1.29G  /j/legacy-obleo
void/j/mesh                    310M  3.70G  2.40M  /j/mesh
void/j/mesh/home                21K  3.70G    21K  /j/mesh/home
void/j/mesh/local              305M  3.70G   305M  /j/mesh/local
void/j/mesh/tmp                 26K  3.70G    26K  /j/mesh/tmp
void/j/mesh/var               2.91M  3.70G   104K  /j/mesh/var
void/j/mesh/var/db            2.63M  3.70G  1.56M  /j/mesh/var/db
void/j/mesh/var/db/pkg        1.07M  3.70G  1.07M  /j/mesh/var/db/pkg
void/j/mesh/var/empty           21K  3.70G    21K  /j/mesh/var/empty
void/j/mesh/var/log             85K  3.70G    85K  /j/mesh/var/log
void/j/mesh/var/mail            24K  3.70G    24K  /j/mesh/var/mail
void/j/mesh/var/run           28.5K  3.70G  28.5K  /j/mesh/var/run
void/j/mesh/var/tmp             23K  3.70G    23K  /j/mesh/var/tmp
void/local                     282M  1.72G   282M  /local
void/mysql                      22K    78K    22K  /mysql
void/tmp                        55K  2.00G    55K  /tmp
void/usr                      1.81G  2.19G   275M  /usr
void/usr/obj                   976M  2.19G   976M  /usr/obj
void/usr/ports                 289M  2.19G   234M  /usr/ports
void/usr/ports/distfiles      54.8M  2.19G  54.8M  /usr/ports/distfiles
void/usr/ports/packages         21K  2.19G    21K  /usr/ports/packages
void/usr/src                   311M  2.19G   311M  /usr/src
void/var                      6.86G  3.14G   130K  /var
void/var/crash                22.5K  3.14G  22.5K  /var/crash
void/var/db                   6.86G  3.14G  58.3M  /var/db
void/var/db/mysql             6.80G  3.14G  4.79G  /var/db/mysql
void/var/db/mysql/innodbdata  2.01G  3.14G  2.01G  /var/db/mysql/innodbdata
void/var/db/pkg               2.00M  3.14G  2.00M  /var/db/pkg
void/var/empty                  21K  3.14G    21K  /var/empty
void/var/log                   642K  3.14G   642K  /var/log
void/var/mail                  712K  3.14G   712K  /var/mail
void/var/run                  49.5K  3.14G  49.5K  /var/run
void/var/tmp                    27K  3.14G    27K  /var/tmp

This is the problematic filesystem:

void/j/legacy-alpha           56.6G  3.41G  56.6G  /j/legacy-alpha

No chance that an application is holding any data - I rebooted and came up
in single-user mode to try to get this resolved, but no luck.

Joe
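(With open file handles ruled out by the single-user reboot, a possible next step is ZFS's own space breakdown. This is only a sketch -- the usedby* properties appeared around pool version 13, so a v15 pool should report them:

    zfs get -r used,usedbydataset,usedbysnapshots,usedbychildren,usedbyrefreservation void/j/legacy-alpha
    zfs get refreservation,reservation,refquota,quota void/j/legacy-alpha

If usedbydataset accounts for essentially all of the 56.6G, the space is genuinely charged to live data in that dataset, and comparing that figure against du -sxh /j/legacy-alpha would be the obvious sanity check.)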