Date:      Thu, 5 Sep 1996 07:21:34 +1000
From:      Bruce Evans <bde@zeta.org.au>
To:        hackers@freebsd.org, sef@kithrup.com
Subject:   Re: reported disk corruption
Message-ID:  <199609042121.HAA18058@godzilla.zeta.org.au>

>My friend Torbjorn reported this to questions, but didn't get a response.
>He has since given up on FreeBSD as a result, but since this is still
>something that should probably be reported, I'm forwarding it to hackers...
>
>From: Torbjorn Granlund <tege@matematik.su.se>
>Subject: Desperation time

>During installation, the FreeBSD installation program complained that the
>disk label (of the pristine SCSI disk) was bad, and gave me the option to

There should have been no label on a really pristine (untouched by *BSD) disk :-).
The (1,1,1) geometry is probably caused by a bug in sysinstall (putting
a bogus partition table in the MBR in some cases).  The (1,1,1) geometry
is alarming but harmless AFAIK, at least if it is kept out of labels.  It
will still appear in the dummy label for the whole disk and copying that
label can easily result in a label like the one below.

>proceed or to modify it.  I chose to modify it, giving what I thought were
>the correct parameters.  From the output of `disklabel sd0' one might
>conclude that the disklabel is wrong:
>
>  quiet> disklabel -r sd0
>  # /dev/rsd0c:
>  type: SCSI
>  disk: sd1s1
>  label: 
>  flags:
>  bytes/sector: 512
>  sectors/track: 1
   ^^^^^^^^^^^^^^^^
>  tracks/cylinder: 1
   ^^^^^^^^^^^^^^^^^^
>  sectors/cylinder: 1
>  cylinders: 1
   ^^^^^^^^^^^^

This says that the disk has 1*1*1 = 1 sector total.

>  sectors/unit: 8498506

This says it has more.  The inconsistency and the 1-sector total size are
probably not fatal because the (1,1,1) values are ignored almost everywhere.

>...

>  #        size   offset    fstype   [fsize bsize bps/cpg]
>    a:   131072        0    4.2BSD        0     0     0   # (Cyl.    0 - 131071)
>    b:   282624   131072      swap                        # (Cyl. 131072 - 413695)
>    c:  8498506        0    unused        0     0         # (Cyl.    0 - 8498505)
>    e:   131072   413696    4.2BSD        0     0     0   # (Cyl. 413696 - 544767)
>    f:  6348800   544768    4.2BSD        0     0     0   # (Cyl. 544768 - 6893567)
>    g:  1604938  6893568    4.2BSD        0     0     0   # (Cyl. 6893568 - 8498505)
>
>Note the wild cylinder numbers!

The (1,1,1) values apparently aren't ignored here.  They give cylinder numbers
equal to sector numbers.  These cylinder numbers are informational only.
They are printed so that you can see if the partitions occupy whole cylinders.
This was worth worrying about several years ago.

>Today, my 3.1 GB /usr file system started to act weird.  When doing `ls -l'
>in a directory, I got "Bad file descriptor" for one of the directories.
>When running fsck, it said "/foo/bar/foobar unallocated, delete?".  fsck
>complained like that about a large number of files.  I also got a large
>number of unref files and files with incorrect counts.
>
>A curious fact is that all the problematic files had inode numbers around
>253600 or 491900.

I don't think this is related to the bogus label.  newfs by default ignores
the sectors/track and tracks/cylinder values in the label.  It does this
(unlike it did several years ago) because what it does with "normal" values
is a pessimization for modern drives.

>Does this information give any hint on what might be wrong?  Is the bogus
>disklabel the culprit?  Isn't the geometry simply used for scheduling of
>disk accesses?  The total number of sectors and thereby the disk size seems
>to be correct.

The geometry in labels is normally only used by sysinstall and fdisk so
that they have an idea of the correct geometry to use for laying out the
DOS partition table.  The geometry in the partition table is normally only
used for booting.

>I used to keep my partitions below 2GB, to avoid potential problems with
>integer overflow in the kernel.  Now I decided to try with a larger file
>system.  Could the file system size be the culprit?

I don't know of any current 2GB overflow problems for file systems.  There
are likely to be problems at 2GB for individual files.

Bruce


