Date: Mon, 15 Aug 2005 09:36:00 -0700
From: Lei Sun <lei.sun@gmail.com>
To: Freminlins <freminlins@gmail.com>
Cc: Jerry McAllister <jerrymc@clunix.cl.msu.edu>, questions@freebsd.org,
    cpghost <cpghost@cordula.ws>, Glenn Dawson <glenn@antimatter.net>
Subject: Re: disk fragmentation, <0%?
Message-ID: <d396fddf050815093619ec18f2@mail.gmail.com>
In-Reply-To: <eeef1a4c05081506391b6fbb2f@mail.gmail.com>
References: <d396fddf05081421343aeded9d@mail.gmail.com>
            <200508151320.j7FDKCVq025507@clunix.cl.msu.edu>
            <eeef1a4c05081506391b6fbb2f@mail.gmail.com>
This happened after I tested atacontrol to rebuild the RAID1. The /tmp
partition doesn't have anything in it but several empty directories
that were created, and I have the clear /tmp directive in rc.conf,
which cleans up /tmp every time the system boots. So that was really
weird, as it never happened this way the previous time I was
rebuilding the RAID1.

Thanks
Lei

On 8/15/05, Freminlins <freminlins@gmail.com> wrote:
> On 8/15/05, Jerry McAllister <jerrymc@clunix.cl.msu.edu> wrote:
> >
> > As someone mentioned, there is a FAQ on this. You should read it.
> >
> > It is going negative because you have used more than the nominal
> > capacity of the slice. The nominal capacity is the total space
> > minus the reserved proportion (usually 8%) that is held out.
> > Root is able to write to that space, and you have done something
> > that got root to write beyond the nominal space.
>
> I'm not sure you are right in this case. I think you need to re-read
> the post. I've quoted the relevant part here:
>
> > > Filesystem     Size    Used   Avail Capacity  Mounted on
> > > /dev/ar0s1e    248M   -278K    228M     -0%   /tmp
>
> Looking at how the columns line up, I have to state that I too have
> never seen this behaviour. As an experiment I over-filled a file
> system, and here are the results:
>
> Filesystem     Size    Used   Avail Capacity  Mounted on
> /dev/ad0s1f    965M    895M   -7.4M    101%   /tmp
>
> Note capacity is not negative. So that makes three of us in this
> thread who have not seen negative capacity on UFS.
>
> I have seen negative capacity when running an old version of FreeBSD
> with a very large NFS mount (not enough bits in statfs, if I remember
> correctly).
>
> > ////jerry
>
> Frem.
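The "clear /tmp directive" Lei refers to is the standard rc.conf knob;
a minimal sketch of the relevant line, assuming the usual /etc/rc.conf
location (the comment is illustrative, not from the thread):

    clear_tmp_enable="YES"   # empty /tmp during every boot

With that set, anything left in /tmp is removed at startup, so a few
freshly created empty directories are all one would expect to find
there after a reboot.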
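Jerry's point about the 8% reserve and Frem's 101% result both fall out
of how df(1) derives its Capacity column from statfs(2). The following
is a minimal sketch of that arithmetic, not df's actual source; the
"/tmp" path and the output wording are illustrative only:

    /*
     * Sketch of df(1)'s Capacity arithmetic from statfs(2) fields.
     * Build with something like: cc -o dfpct dfpct.c
     */
    #include <sys/param.h>
    #include <sys/mount.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            struct statfs sfs;

            if (statfs("/tmp", &sfs) != 0) {
                    perror("statfs");
                    return (1);
            }

            /*
             * f_bavail is signed: once root writes into the reserved
             * (minfree, typically 8%) blocks, it goes negative.
             */
            int64_t used  = (int64_t)sfs.f_blocks - (int64_t)sfs.f_bfree;
            int64_t avail = sfs.f_bavail;

            /*
             * df reports used / (used + avail).  A negative avail
             * pushes this past 100% (Frem's 101%); the percentage
             * itself only goes negative when "used" is negative,
             * i.e. f_bfree > f_blocks.
             */
            double pct = (used + avail) == 0 ? 100.0 :
                (double)used * 100.0 / (double)(used + avail);

            printf("used=%jd avail=%jd capacity=%.0f%%\n",
                (intmax_t)used, (intmax_t)avail, pct);
            return (0);
    }

Note how this supports Frem's reading: overflowing the reserve alone
cannot make Capacity negative, only a Used figure below zero can, which
points at inconsistent free-block counters (as in the wrapped statfs
fields Frem recalls on large NFS mounts) rather than the 8% reserve.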