Date:      Sat, 13 Jan 1996 15:10:02 -0800 (PST)
From:      Bruce Evans <bde@zeta.org.au>
To:        freebsd-bugs
Subject:   Re: bin/943: df gets confused by huge filesystems
Message-ID:  <199601132310.PAA20976@freefall.freebsd.org>


The following reply was made to PR bin/943; it has been noted by GNATS.

From: Bruce Evans <bde@zeta.org.au>
To: FreeBSD-gnats-submit@FreeBSD.ORG, asami@cs.berkeley.edu
Cc: nisha@sunrise.cs.berkeley.edu
Subject: Re: bin/943: df gets confused by huge filesystems
Date: Sun, 14 Jan 1996 09:54:47 +1100

 >	df seems to get confused when there are too many blocks, e.g.:
 
 >>How-To-Repeat:
 
 >	Get yourself a disk array. :)
 
 I just happened to have a rack of 32GB ones handy :-).  Actually, I
 built one using vnconfig.  It only took a bug fix for the vn driver
 and 190MB of metadata for `newfs -i 32768'.
 
 >>> df -k -t local
 >Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
 >...
 >/dev/ccd0c   19801168   205004 -3462766    -6%    /mnt
 >                               ^^^^^^^^^^^^^^
 
 The freespace calculation is buggy.  ffs does a calculation involving
 (minfree * total_blocks) / 100, so with the standard minfree of 8%, ffs
 stops working at about 1TB/8 instead of at 1TB.  ffs_statfs() and df.c
 for some reason don't use the standard freespace() macro (df.c uses a
 clone of ffs_statfs() to avoid having to mount ffs file systems to
 report them).  They do a calculation involving
 ((100 - minfree) * total_blocks) / 100.  Apart from apparently having a
 rounding bug, this overflows at 1TB/92 with the standard minfree.  This
 is only a reporting bug.  You get negative available-block counts for
 empty file systems of sizes between about 1GB and 22GB, and truncated
 counts for sizes between 22GB and 33GB...
 
 Bruce

