From: Dr Josef Karthauser <josef.karthauser@unitedlane.com>
To: Jeremy Chadwick
Cc: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 11:01:04 +0100
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On 27 Mar 2011, at 10:41, Jeremy Chadwick wrote:

> I'm curious about something -- we use RELENG_8 systems with a mirror
> zpool (kinda funny how I did it too, since the system only has 2 disks)
> for /home. Our SpamAssassin configuration is set up to write to
> $user/.spamassassin/bayes_* files. Yet we do not see this sparse file
> problem that others are reporting.
>
> $ df -k /home
> Filesystem  1024-blocks      Used     Avail Capacity  Mounted on
> data/home     239144704 107238740 131905963    45%    /home
>
> $ zfs list data/home
> NAME        USED  AVAIL  REFER  MOUNTPOINT
> data/home   102G   126G   102G  /home
>
> $ zpool status data
>   pool: data
>  state: ONLINE
>  scrub: resilver completed after 0h9m with 0 errors on Wed Oct 20 03:08:22 2010
> config:
>
>         NAME         STATE     READ WRITE CKSUM
>         data         ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             ada1     ONLINE       0     0     0
>             ada0s1g  ONLINE       0     0     0  26.0G resilvered
>
> $ grep bayes /usr/local/etc/mail/spamassassin/local.cf
> use_bayes 1
> bayes_auto_learn 1
> bayes_ignore_header X-Bogosity
> bayes_ignore_header X-Spam-Flag
> bayes_ignore_header X-Spam-Status
>
> $ ls -l .spamassassin/
> total 4085
> -rw-------  1 jdc  users   102192 Mar 27 02:30 bayes_journal
> -rw-------  1 jdc  users   360448 Mar 27 02:30 bayes_seen
> -rw-------  1 jdc  users  4947968 Mar 27 02:30 bayes_toks
> -rw-------  1 jdc  users     8719 Mar 20 04:11 user_prefs

No idea what caused it, but whenever I ran the bayes expiry it created a
new file that just blew up and filled all the available space.
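For what it's worth, a quick way to check whether one of these bayes files
is actually sparse is to compare its apparent size against the space it
occupies on disk (the path here is just an example):

$ ls -l .spamassassin/bayes_toks    # apparent size in bytes
$ du -h .spamassassin/bayes_toks    # space actually allocated on disk

If du reports much less than the ls size, the file is sparse.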
I've got around the issue temporarily: I used swapoff(8) to reclaim a 4GB
swap partition, created a UFS filesystem on it, and mounted that in the
jail in question. After rsyncing the bayes database onto that disk I was
able to run an expire with no trouble at all, so it wasn't that the bayes
database was corrupt or anything. I've now copied it back and it runs fine,
although I expect the problem will recur at some inconvenient point in the
future. (Rough commands are in the P.S. below.)

I'd really like my disk space back though, please! I suspect I'm going to
have to wait for ZFS v28 for that to happen :(.

Joe
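P.S. For anyone who wants to try the same workaround, it went roughly like
this (the swap device, mount point, and user here are illustrative; adjust
for your own system):

# swapoff /dev/ada0s1b                # release the swap partition
# newfs /dev/ada0s1b                  # put a scratch UFS filesystem on it
# mkdir -p /mnt/scratch
# mount /dev/ada0s1b /mnt/scratch
# rsync -a ~user/.spamassassin/ /mnt/scratch/spamassassin/
# sa-learn --dbpath /mnt/scratch/spamassassin --force-expire
# rsync -a /mnt/scratch/spamassassin/ ~user/.spamassassin/
# umount /mnt/scratch
# swapon /dev/ada0s1b                 # give the partition back to swap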