Date: Thu, 30 Oct 2008 14:04:36 +1030
From: "Brendan Hart" <brendanh@strategicecommerce.com.au>
To: "'Jeremy Chadwick'" <koitsu@FreeBSD.org>
Cc: freebsd-questions@freebsd.org
Subject: RE: Large discrepancy in reported disk usage on USR partition
Message-ID: <022a01c93a40$6f16e860$4d44b920$@com.au>
In-Reply-To: <20081030015517.GA92091@icarus.home.lan>
References: <021f01c93a28$651752e0$2f45f8a0$@com.au> <20081030011949.GA91409@icarus.home.lan> <022601c93a30$b283e7c0$178bb740$@com.au> <20081030015517.GA92091@icarus.home.lan>
On Thu 30/10/2008 12:25 PM, Jeremy Chadwick wrote:

>> Could the "missing" space be an indication of hardware disk issues, i.e.
>> physical blocks marked as bad?

> The simple answer is no, bad blocks would not cause what you're seeing.
> smartctl -a /dev/disk will help you determine if there's evidence the
> disk is in bad shape. I can help you with reading SMART stats if need be.

I took a look at using the smart tools as you suggested, but have now found
that the disk in question is a RAID1 set on a DELL PERC 3/Di controller, and
smartctl does not appear to be the correct tool to access the SMART data for
the individual disks. After a little research, I found the aaccli tool and
used it to get the following information:

AAC0> disk show smart
Executing: disk show smart

        Smart    Method of         Enable
        Capable  Informational     Exception  Performance  Error
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count
------  -------  ----------------  ---------  -----------  ------
0:00:0  Y        6                 Y          N            0
0:01:0  Y        6                 Y          N            0

AAC0> disk show defects 00
Executing: disk show defects (ID=0)
Number of PRIMARY defects on drive: 285
Number of GROWN defects on drive: 0

AAC0> disk show defects 01
Executing: disk show defects (ID=1)
Number of PRIMARY defects on drive: 193
Number of GROWN defects on drive: 0

This output doesn't seem to indicate existing physical issues on the disks.

> Since you booted single-user and presumably ran fsck -f /usr, and nothing
> came back, I'm left to believe this isn't filesystem corruption.

Yes, this is the command I tried when I went into the data centre yesterday,
and, yes, nothing came back.

I have done some additional digging and noticed that there is a /usr/.snap
folder present. "ls -al" shows no content, however. Some quick searching
shows this could possibly be part of a UFS snapshot... I wonder if partition
snapshots might be the cause of my major disk space "loss".
Some old mailing list posts suggest that UFS snapshots were dangerously flaky
on Release 6.1, so I would hope that my predecessors were not using them...
Do you know anything about snapshots, and how I could see what (if any) space
is used by snapshots?

I also took a look to see if the issue could be something like running out of
inodes, but this doesn't seem to be the case:

#: df -ih /usr
Filesystem     Size  Used  Avail  Capacity   iused    ifree  %iused  Mounted on
/dev/aacd0s1f   28G   25G   1.1G       96%  708181  3107241     19%  /usr

BTW Jeremy, thanks for your help thus far. I will wait and see if any other
list member has any suggestions for me to try, but I am now leaning toward
scrubbing the system. Oh well.

Best Regards,
Brendan Hart

---------------------------------
Brendan Hart, Development Manager
Strategic Ecommerce Division
Securepay Pty Ltd
Phone: 08-8274-4000 Fax: 08-8274-1400