From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 07:58:16 2011
Date: Sun, 27 Mar 2011 00:58:14 -0700
From: Jeremy Chadwick
To: Dr Josef Karthauser
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS Problem - full disk, can't recover space :(.
Message-ID: <20110327075814.GA71131@icarus.home.lan>
References: <9CF23177-92D6-40C5-8C68-B7E2F88236E6@unitedlane.com>
 <20110326225430.00006a76@unknown>
 <3BBB1E36-8E09-4D07-B49E-ACA8548B0B44@unitedlane.com>
In-Reply-To: <3BBB1E36-8E09-4D07-B49E-ACA8548B0B44@unitedlane.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
List-Id: Filesystems

On Sun, Mar 27, 2011 at 08:13:44AM +0100, Dr Josef Karthauser wrote:
> On 26 Mar 2011, at 21:54, Alexander Leidinger wrote:
> >> Any idea on where the 23G has gone, or how I persuade the zpool to
> >> return it? Why is the filesystem referencing storage that isn't
> >> being used?
> >
> > I suggest a
> >
> >     zfs list -r -t all void/store
> >
> > to make really sure we/you see what we want to see.
> >
> > Can it be that an application has the 23G still open?
> >
> >> p.s. this is FreeBSD 8.2 with ZFS pool version 15.
> >
> > The default setting of showing snapshots or not changed somewhere. As
> > long as you didn't configure the pool to show snapshots (zpool get
> > listsnapshots ), they are not shown by default.
>
> Definitely no snapshots:
>
> infinity# zfs list -tall
> NAME                           USED  AVAIL  REFER  MOUNTPOINT
> void                          99.1G  24.8G  2.60G  legacy
> void/home                     33.5K  24.8G  33.5K  /home
> void/j                        87.5G  24.8G    54K  /j
> void/j/buttsby                 136M  9.87G  2.40M  /j/buttsby
> void/j/buttsby/home           34.5K  9.87G  34.5K  /j/buttsby/home
> void/j/buttsby/local           130M  9.87G   130M  /j/buttsby/local
> void/j/buttsby/tmp             159K  9.87G   159K  /j/buttsby/tmp
> void/j/buttsby/var            3.97M  9.87G   104K  /j/buttsby/var
> void/j/buttsby/var/db         2.40M  9.87G  1.55M  /j/buttsby/var/db
> void/j/buttsby/var/db/pkg      866K  9.87G   866K  /j/buttsby/var/db/pkg
> void/j/buttsby/var/empty        21K  9.87G    21K  /j/buttsby/var/empty
> void/j/buttsby/var/log         838K  9.87G   838K  /j/buttsby/var/log
> void/j/buttsby/var/mail        592K  9.87G   592K  /j/buttsby/var/mail
> void/j/buttsby/var/run        30.5K  9.87G  30.5K  /j/buttsby/var/run
> void/j/buttsby/var/tmp          23K  9.87G    23K  /j/buttsby/var/tmp
> void/j/legacy-alpha           56.6G  3.41G  56.6G  /j/legacy-alpha
> void/j/legacy-brightstorm     29.2G  10.8G  29.2G  /j/legacy-brightstorm
> void/j/legacy-obleo           1.29G  1.71G  1.29G  /j/legacy-obleo
> void/j/mesh                    310M  3.70G  2.40M  /j/mesh
> void/j/mesh/home                21K  3.70G    21K  /j/mesh/home
> void/j/mesh/local              305M  3.70G   305M  /j/mesh/local
> void/j/mesh/tmp                 26K  3.70G    26K  /j/mesh/tmp
> void/j/mesh/var               2.91M  3.70G   104K  /j/mesh/var
> void/j/mesh/var/db            2.63M  3.70G  1.56M  /j/mesh/var/db
> void/j/mesh/var/db/pkg        1.07M  3.70G  1.07M  /j/mesh/var/db/pkg
> void/j/mesh/var/empty           21K  3.70G    21K  /j/mesh/var/empty
> void/j/mesh/var/log             85K  3.70G    85K  /j/mesh/var/log
> void/j/mesh/var/mail            24K  3.70G    24K  /j/mesh/var/mail
> void/j/mesh/var/run           28.5K  3.70G  28.5K  /j/mesh/var/run
> void/j/mesh/var/tmp             23K  3.70G    23K  /j/mesh/var/tmp
> void/local                     282M  1.72G   282M  /local
> void/mysql                      22K    78K    22K  /mysql
> void/tmp                        55K  2.00G    55K  /tmp
> void/usr                      1.81G  2.19G   275M  /usr
> void/usr/obj                   976M  2.19G   976M  /usr/obj
> void/usr/ports                 289M  2.19G   234M  /usr/ports
> void/usr/ports/distfiles      54.8M  2.19G  54.8M  /usr/ports/distfiles
> void/usr/ports/packages         21K  2.19G    21K  /usr/ports/packages
> void/usr/src                   311M  2.19G   311M  /usr/src
> void/var                      6.86G  3.14G   130K  /var
> void/var/crash                22.5K  3.14G  22.5K  /var/crash
> void/var/db                   6.86G  3.14G  58.3M  /var/db
> void/var/db/mysql             6.80G  3.14G  4.79G  /var/db/mysql
> void/var/db/mysql/innodbdata  2.01G  3.14G  2.01G  /var/db/mysql/innodbdata
> void/var/db/pkg               2.00M  3.14G  2.00M  /var/db/pkg
> void/var/empty                  21K  3.14G    21K  /var/empty
> void/var/log                   642K  3.14G   642K  /var/log
> void/var/mail                  712K  3.14G   712K  /var/mail
> void/var/run                  49.5K  3.14G  49.5K  /var/run
> void/var/tmp                    27K  3.14G    27K  /var/tmp
>
> This is the problematic filesystem:
>
> void/j/legacy-alpha           56.6G  3.41G  56.6G  /j/legacy-alpha
>
> No chance that an application is holding any data - I rebooted and came
> up in single-user mode to try and get this resolved, but no cookie.

Are these filesystems using compression?  Do any of them have quota or
reservation settings set?

"zfs get all" might help, but it'll be a lot of data.  We don't mind.

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP 4BD6C0CB  |
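[Editor's note: the checks discussed in this thread can be sketched as a few
commands. This is illustrative only, not tested against the poster's system;
the dataset name is taken from the listing above, and the commands assume a
FreeBSD box with this pool imported.]

```shell
# Show every snapshot and clone under the dataset; snapshot space is
# the most common reason a dataset "uses" more than "zfs list" shows
# by default (pre-listsnapshots behaviour):
zfs list -r -t all void/j/legacy-alpha

# Check compression and any quota/reservation settings pool-wide;
# a reservation on one dataset can pin space that its siblings
# cannot reclaim:
zfs get -r compression,quota,refquota,reservation,refreservation void

# On FreeBSD, fstat(1) lists descriptors open on a given file system;
# a file unlinked while still open keeps its blocks allocated until
# the last descriptor is closed:
fstat -f /j/legacy-alpha
```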