Date:      Sun, 28 Feb 1999 13:05:59 -0500
From:      "Robert W. Rowe" <rrowe@winstar.com>
To:        ma-linux@tux.org, aic7xxx@FreeBSD.ORG
Subject:   dja@stratpar.com
Message-ID:  <3.0.5.32.19990228130559.00954d40@mail.winstar.com>

A couple of weeks ago I posted a question to this list which didn't get an
answer, probably because the problem was uncommon or the answer was so
simple it should have been obvious.  Research also turned up zilch.  So, I
kept trying things and came up with the solution myself.  I still don't
know what caused the problem, so if anyone has any suggestions, I would be
glad to hear them.

The problem was:

I had a raid0 configuration running well on a Red Hat 5.2 installation.  I
decided to add an IBM 10.1GB IDE drive to use as a hot backup drive; at the
same time I also wanted to resize some partitions on my hda drive and get
rid of a huge BillyDOS partition I wasn't using.  While adding the drive
and redoing the partitions, naturally I had to reinstall RH 5.2.  

My raid0 configuration consisted of RH software raid put together by Erik
Troan (his name is on the man pages) at Red Hat, plus two 4.1GB narrow SCSI
Quantum Fireballs.  The system files are all on hda, a fast EIDE drive.  I
slightly modified a sample raidtab and put it in /etc.  The raid
configuration was /dev/md0 mounted on /raid.  Everything ran fine until the
reinstallation.
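
For anyone who wants the specifics, the raidtab looked roughly like this
(sketched from memory and modeled on the sample file that ships with the
raidtools, so treat the exact keywords and chunk size as approximate; only
the device names are definitely mine), along with the fstab line that
mounted it:

    # /etc/raidtab -- two-disk stripe set (approximate reconstruction)
    raiddev /dev/md0
        raid-level      0
        nr-raid-disks   2
        chunk-size      32
        device          /dev/sda1
        raid-disk       0
        device          /dev/sdb1
        raid-disk       1

    # /etc/fstab entry (approximate)
    /dev/md0    /raid    ext2    defaults    1 2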

After 5.2 was reinstalled, /proc/mdstat showed "inactive raid0 sda1 sdb1 0
blocks".  Running raidadd /dev/md0, raidadd -a /dev/md0, or any variation
thereof, followed by raidrun -a, resulted in errors saying that the devices
were 0-length and couldn't be used.  The raid0 device /dev/md0 would mount
on /raid, but it had 0 length.
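
In case it helps anyone searching the archives, the sequence I kept running
looked roughly like this (reconstructed from memory, so the exact output and
error wording are approximate):

    # cat /proc/mdstat
    md0 : inactive raid0 sda1 sdb1 0 blocks
    # raidadd /dev/md0        (also tried raidadd -a /dev/md0 and variations)
    # raidrun -a              (complained the devices were 0-length and
                               couldn't be used)
    # mount /dev/md0 /raid    (mounted, but the filesystem showed 0 length)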

The solution:

I had two things in mind when I set up raid0:  1) speed and 2) to have a
large file system that would not be bothered by system updates and
upgrades.  All changes to the Linux setup would be made on hda.  Then,
theoretically, I should be able to boot up and mount the raid configuration
and plow onward.  In fact, that very concept worked once in the past.  But
a total reinstallation and hda reconfiguration left me out in the cold.

After trying everything that would not affect the file system on the raid
configuration, I decided to go ahead and start over.  So I ran fdisk
against sda and sdb.  They had no partitions.  Heaving a sigh and saying
farewell to the file system, I fdisk'd a new partition onto each one.
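
The repartitioning itself was nothing fancy; on each disk it went roughly
like this (from memory, and assuming a single whole-disk primary partition,
which is what I believe I had before):

    # fdisk /dev/sda
    Command (m for help): p    <- partition table came up empty
    Command (m for help): n    <- new primary partition 1, accept the
                                  defaults (whole disk)
    Command (m for help): w    <- write the table and exit
    (then the same for /dev/sdb)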

When I rebooted, everything was there.  File system and data were intact.
Celebration time!

I spent quite a lot of time looking for this problem and a possible
solution in FAQs, HowTo's, web sites, email archives--everything I could
find to look at.  Nothing.  Very little of the material out there refers to
the current iteration of the raid software; most of it references the older
mdadd tools instead of the newer raidadd and raidrun.  Still, the material
is mostly usable because the underlying basics are the same.  However, this
problem is not addressed.


---
  Bob Rowe                         Would-be Linux nut
                             Home:  rrowe@bigfoot.com
                             Work:  rrowe@winstar.com
  Promote sexual dimorphism.





