Date:      Tue, 20 Feb 2001 15:25:24 -0500
From:      "James F. Hranicky" <jfh@cise.ufl.edu>
To:        freebsd-fs@freebsd.org
Subject:   Softupdates umount bug? Or vinum problem?
Message-ID:  <20010220202525.090D5DCC3@mail.cise.ufl.edu>

Over the past few weeks, I've been playing around with a filesystem
destined to become our mail filesystem, doing things like turning
softupdates on/off, and setting various other fs options. All in
all, a lot of mounting/umounting, etc.

Three times now, something I've done has caused the filesystem to
become corrupt, twice beyond repair. The first two times caused a panic,
and the third time, I simply unmounted the filesystem after some
IO and ran fsck, and got a badly mangled filesystem.

I'm wondering if the bug fixed with the following patch is the 
culprit:

  dillon      2001/01/29 00:19:28 PST
  Revision  Changes    Path
  1.149     +17 -8     src/sys/miscfs/specfs/spec_vnops.c

Does anyone know if unmounting immediately after heavy IO could
cause an SU-enabled filesystem to become mangled, if the above
bug still exists?
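For what it's worth, the pattern that seems to trigger it is roughly
this (a sketch only -- the device, mount point, and file names below
are made up for illustration, not my actual setup):

```shell
# Sketch of the suspect pattern; device and mount point are hypothetical.
tunefs -n enable /dev/da1s1e        # enable softupdates (fs must be unmounted)
mount /dev/da1s1e /mnt/test
dd if=/dev/zero of=/mnt/test/bigfile bs=64k count=16384   # ~1G of heavy write IO
umount /mnt/test                    # unmount immediately, no settling time
fsck /dev/da1s1e                    # this is where the damage shows up
```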

The first time, I was able to repair the fs, leaving some files
in place and some in lost+found, but the next two times the filesystem
was basically unrecoverable. The current fs was no more than 5% full,
but fsck is now reporting that "there is no more room in
lost+found". Obviously, I can't mount it up to check on it.

I should mention that the last two crashes occurred when testing 
filesystem extension using vinum/growfs, but I never got to the end. 
The last test I started with a 2 disk mirror, dropped one, created 
a striped plex with two drives, attached the plex to the mail volume 
and started it. During the remirror, I did some IO on the volume to 
simulate a live fs (I was pleased to see both drives of the new plex 
were getting IO), and when the mirror sync was finished, I umounted 
the fs, ran fsck, and found it scrambled. I was going to drop the original
plex, add another drive, and reattach, but never got the chance.
During none of my examinations, however, did it appear that vinum
was involved.
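In case it helps, the sequence was roughly the following. Names and
syntax here are from memory, so check them against vinum(8) before
trusting the details -- the volume "mail", the plex/drive names, the
config file, and the dd IO load are all illustrative:

```shell
# Illustrative sequence only; object names and config syntax are assumptions.
vinum detach mail.p1                # drop one plex of the two-disk mirror
vinum create stripeplex.conf        # config defining a 2-drive striped plex on "mail"
vinum start mail.p1                 # start the new plex; vinum revives it from the live one
dd if=/dev/zero of=/mail/testfile bs=64k count=8192 &   # IO on the volume during the sync
wait
umount /mail                        # after the mirror sync completed
fsck /dev/vinum/mail                # found it scrambled here
```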

Does anyone have any recommendations? I have a trace of the first panic,
a trace and a kernel crash dump from the second panic, and I still
have the corrupt fs from the third test, plus I have my shell history
from newfs to crash on the 3rd test. If anyone would find these useful
please let me know.

I should mention that we have almost 300G of space on FreeBSD filesystems that 
have had no problems. However, I'd like to be able to get to the bottom 
of this problem before bringing up the mail server.

----------------------------------------------------------------------
| Jim Hranicky, Senior SysAdmin                   UF/CISE Department |
| E314D CSE Building                            Phone (352) 392-1499 |
| jfh@cise.ufl.edu                      http://www.cise.ufl.edu/~jfh |
----------------------------------------------------------------------
         -  Encryption: its use by criminals is far less  - 
         - frightening than its banishment by governments -
                      - Vote for Privacy -




