Date:      Thu, 27 Jul 2006 01:45:16 +0200
From:      "Julian H. Stacey" <jhs@flat.berklix.net>
To:        Sven Willenberger <sven@dmv.com>
Cc:        freebsd-stable@freebsd.org, Feargal Reilly <feargal@helgrim.com>
Subject:   Re: filesystem full error with inumber 
Message-ID:  <200607262345.k6QNjGv2012721@fire.jhs.private>
In-Reply-To: Message from Sven Willenberger <sven@dmv.com>  of "Wed, 26 Jul 2006 13:07:19 EDT." <44C7A147.9010106@dmv.com> 

Sven Willenberger wrote:
> 
> 
> Feargal Reilly presumably uttered the following on 07/24/06 11:48:
> > On Mon, 24 Jul 2006 17:14:27 +0200 (CEST)
> > Oliver Fromme <olli@lurza.secnetix.de> wrote:
> > 
> >> Nobody else has answered so far, so I try to give it a shot ...
> >>
> >> The "filesystem full" error can happen in three cases:
> >> 1.  The file system is running out of data space.
> >> 2.  The file system is running out of inodes.
> >> 3.  The file system is running out of non-fragmented blocks.
> >>
> >> The third case can only happen on extremely fragmented
> >> file systems, which is very rare, but maybe it's
> >> a possible cause of your problem.
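
(For reference, the first two cases are easy to check from the shell;
the commands below assume a stock FreeBSD userland, and the device
name is only an example:

	df -h				# case 1: data space per file system
	df -i				# case 2: inode usage, iused/ifree columns
	dumpfs /dev/da0s1e | head	# bsize/fsize and superblock summary

Case 3 is harder to see directly; the per-cylinder-group free counts
in the full dumpfs output give a rough idea of fragmentation.)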
> > 
> > I rebooted that server, and df then reported that disk at 108%,
> > so it appears that df was reporting incorrect figures prior to
> > the reboot. Having cleaned up, it appears by my best
> > calculations to be showing correct figures now.
> > 
> >>  > kern.maxfiles: 20000
> >>  > kern.openfiles: 3582
> >>
> >> Those have nothing to do with "filesystem full".
> >>
> > 
> > Yeah, that's what I figured.
> > 
> >>  > Looking again at dumpfs, it appears to say that this is
> >>  > formatted with a block size of 8K, and a fragment size of
> >>  > 2K, but tuning(7) says:  [...]
> >>  > Reading this makes me think that when this server was
> >>  > installed, the block size was dropped from the 16K default
> >>  > to 8K for performance reasons, but the fragment size was
> >>  > not modified accordingly.
> >>  > 
> >>  > Would this be the root of my problem?
> >>
> >> I think a bsize/fsize ratio of 4/1 _should_ work, but it's
> >> not widely used, so there might be bugs hidden somewhere.
> >>
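
(Side note: such an 8K/2K layout would come from explicit newfs flags
rather than the defaults; a hypothetical example, device name invented:

	newfs -U -b 8192 -f 2048 /dev/da0s1e

-b sets the block size and -f the fragment size; the 6.x default is
16384/2048, i.e. the usual 8:1 ratio, while the above gives 4:1.)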
> > 
> > Such as df not reporting the actual data usage, which is now my
> > best working theory. I don't know what df bases its figures on;
> > perhaps it either slowly got out of sync or, more likely, got
> > things wrong once the disk filled up.
> > 
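
(As far as I know, df(1) only reports what statfs(2) returns, and for
UFS those figures come from the summary counts cached in the
superblock, which can lag behind reality, particularly with soft
updates, until fsck recomputes them. A sketch, in single-user mode,
with the device and mount point borrowed from the df output quoted
further down:

	umount /tmp
	fsck -f /dev/da0s1e	# -f forces a check even if marked clean
	mount /tmp
)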
> > I'll monitor it to see if this happens again, but hopefully
> > won't keep that configuration around for too much longer anyway.
> > 
> > Thanks,
> > -fr.
> > 
> 
> One of my machines that I recently upgraded to 6.1 (6.1-RELEASE-p3) is also
> exhibiting df reporting wrong data usage numbers. Notice the negative "Used" numbers
> below:

Negative isn't an example of a programming error; it just means the
file system is now using the last bit of space that only root can
use (the minfree reserve).

For insight, try for example:
	man tunefs
	reboot
	boot -s
	tunefs -m 2 /dev/da0s1e
then decide what level of -m you want; the default is 8 to 10, I recall.
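
Before changing anything you can inspect the current value; if I
recall right, both of these print it (same example device):

	tunefs -p /dev/da0s1e			# summary of tuneable settings
	dumpfs /dev/da0s1e | grep minfree	# value straight from the superblock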

> 
> > df -h
> Filesystem     Size    Used   Avail Capacity  Mounted on
> /dev/da0s1a    496M     63M    393M    14%    /
> devfs          1.0K    1.0K      0B   100%    /dev
> /dev/da0s1e    989M   -132M    1.0G   -14%    /tmp
> /dev/da0s1f     15G    478M     14G     3%    /usr
> /dev/da0s1d     15G   -1.0G     14G    -8%    /var
> /dev/md0       496M    228K    456M     0%    /var/spool/MIMEDefang
> devfs          1.0K    1.0K      0B   100%    /var/named/dev
> 
> Sven
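
FWIW, the Capacity column is at least self-consistent with the other
figures: df computes it as used / (used + avail), so for /var above

	-1.0 / (-1.0 + 14) = -0.077, i.e. roughly -8%

(the humanized sizes are rounded, so this is only approximate). In
other words, whatever produced the negative Used counts, the negative
percentages simply follow from them rather than being an independent
error.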

-- 
Julian Stacey.  Consultant Unix Net & Sys. Eng., Munich.  http://berklix.com
Mail in Ascii, HTML=spam.     Your smoke = my allergic headache.


