Date:      Mon, 15 Aug 2005 14:39:02 +0100
From:      Freminlins <freminlins@gmail.com>
To:        Jerry McAllister <jerrymc@clunix.cl.msu.edu>
Cc:        Lei Sun <lei.sun@gmail.com>, questions@freebsd.org, cpghost <cpghost@cordula.ws>, Glenn Dawson <glenn@antimatter.net>
Subject:   Re: disk fragmentation, <0%?
Message-ID:  <eeef1a4c05081506391b6fbb2f@mail.gmail.com>
In-Reply-To: <200508151320.j7FDKCVq025507@clunix.cl.msu.edu>
References:  <d396fddf05081421343aeded9d@mail.gmail.com> <200508151320.j7FDKCVq025507@clunix.cl.msu.edu>

On 8/15/05, Jerry McAllister <jerrymc@clunix.cl.msu.edu> wrote:

> As someone mentioned, there is a FAQ on this.   You should read it.
>
> It is going negative because you have used more than the nominal
> capacity of the slice.   The nominal capacity is the total space
> minus the reserved proportion (usually 8%) that is held out.
> Root is able to write to that space and you have done something
> that got root to write beyond the nominal space.

I'm not sure you are right in this case. I think you need to re-read
the post. I've quoted the relevant part here:
> > Filesystem     Size    Used   Avail Capacity  Mounted on
> > /dev/ar0s1e    248M   -278K    228M    -0%    /tmp
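
(The 8% reserve Jerry describes is visible in those numbers, for
what it's worth: 248M total minus 228M avail leaves roughly 20M held
back, which is indeed about 8% of the slice. What the reserve doesn't
explain is the negative Used and Capacity.)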

Looking at how the columns line up, I have to say that I too have
never seen this behaviour.  As an experiment I over-filled a file
system, and here are the results:

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad0s1f    965M    895M   -7.4M   101%    /tmp

Note that Capacity is not negative even though Avail is. So that
makes three of us in this thread who have not seen a negative
capacity on UFS.
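
Out of curiosity I worked the numbers through. If df computes the
Capacity column the way the old 4.4BSD df.c did (I'm going from
memory here, so treat this as a sketch rather than the real source),
both outputs above fall out of the same formula:

#include <stdio.h>

/*
 * Sketch of the classic df(1) Capacity calculation, from memory:
 *   used     = f_blocks - f_bfree
 *   capacity = 100 * used / (used + f_bavail)
 * Units below are megabytes; this is just the arithmetic, not df.c.
 */
static double
capacity(double used, double avail)
{
        return (100.0 * used / (used + avail));
}

int
main(void)
{
        /* My over-filled /tmp: 895M used, -7.4M avail. */
        printf("%.0f%%\n", capacity(895.0, -7.4));             /* 101% */
        /* The original poster's /tmp: -278K used, 228M avail. */
        printf("%.0f%%\n", capacity(-278.0 / 1024.0, 228.0));  /* -0% */
        return (0);
}

So a -0% Capacity is just what that formula produces when Used goes
slightly negative; the real oddity in the original post is the
negative Used figure, not the Capacity column itself.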

I have seen a negative capacity when running an old version of
FreeBSD with a very large NFS mount (not enough bits in the statfs
counters, if I remember correctly).
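
For what it's worth, the kernel does hand df a signed "available"
count, so a negative Avail is perfectly legitimate once root dips
into the reserve. A quick way to look at the raw counters (assuming
a FreeBSD recent enough to have the 64-bit statfs fields, 5.x
onwards if I remember right):

#include <sys/param.h>
#include <sys/mount.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Print the raw statfs counters for a mount point.  f_bavail is
 * signed, so it can go negative when root writes into the reserve;
 * the old 32-bit fields are what used to overflow on very large
 * NFS mounts.
 */
int
main(int argc, char **argv)
{
        struct statfs sfs;

        if (argc != 2) {
                fprintf(stderr, "usage: %s mountpoint\n", argv[0]);
                return (1);
        }
        if (statfs(argv[1], &sfs) == -1) {
                perror("statfs");
                return (1);
        }
        printf("f_blocks = %ju\n", (uintmax_t)sfs.f_blocks);
        printf("f_bfree  = %ju\n", (uintmax_t)sfs.f_bfree);
        printf("f_bavail = %jd\n", (intmax_t)sfs.f_bavail);
        return (0);
}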

> ////jerry

Frem.


