Date:      Thu, 23 Sep 2010 12:56:54 +1000
From:      Danny Carroll <fbsd@dannysplace.net>
To:        freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Devices disappeared after drive shuffle - or - how to recover and mount a slice with UFS partitions.
Message-ID:  <4C9AC1F6.90305@dannysplace.net>

I had a strange problem this week and, although it's already solved, I
am still not sure what went wrong.
This is a long email, so bear with me ;-)  There is a nice recovery
procedure at the end and a few questions along the way for someone who
knows more than I do.

I have 15 drives attached to my home fileserver:
    12 data drives, connected via an Areca controller (configured as
pass-through).
    3 OS drives, connected directly to the motherboard SATA controllers.

The motherboard is a Supermicro X7SBE with 8 GB of RAM.

The 3 OS drives are 80 GB Seagate SATA drives, and up until this week one
of them was running 7.3-STABLE.
I wanted to reinstall, so I decided to pull the OS drive, reinstall 8.1
on one of the other two drives, and use PJD's procedure for a ZFS mirrored
root.
(http://blogs.freebsdish.org/pjd/2010/08/06/from-sysinstall-to-zfs-only-configuration/)

I installed on ad6 and mirrored to ad4.  The old 7.3 drive, ad8, was
pulled from the server to make sure I did not accidentally overwrite it.

The install went fine, and I was able to plug the old drive back in after
the upgrade and see the old 7.3 data (it was again ad8).  My problem arose
when I decided to test whether the Areca pass-through drives are really
100% passed through.  The 12 data drives are configured in a ZFS raidz
array, using GPT partitions with labels, and the pool references those labels.
I powered down, pulled one of the drives (da0) and plugged it into one
of the SATA ports.
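
(For reference, the per-drive setup looks roughly like the following; the
disk and label names below are stand-ins, not my real ones:
    gpart create -s gpt da0
    gpart add -t freebsd-zfs -l disk00 da0
The pool is then built from /dev/gpt/disk00 and friends rather than the raw
device names, so it should not matter whether a drive shows up as da0 or ad4.)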

ZFS came up perfectly and a scrub revealed no errors.   This tells me
that a drive connected to an Areca controller and configured as
pass-through is no different from a drive connected to a standard SATA
port.   This is good to know if my Areca card ever fails.  (Areca
pass-through differs from JBOD in that JBOD bypasses the Areca's cache
(and battery backup), while pass-through does use the cache.)
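
(The check itself was nothing fancy; "data" below stands in for my pool name:
    zpool scrub data
    zpool status -v data    # wait for the scrub to finish, then confirm 0 errors
)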

The da0 drive is now ad4, the two OS drives are now ad6 and ad8, and the
old OS drive (which I plugged back in after the upgrade) is now ad10.

All looked good, but the two slices on the ad10 drive did not show up in
/dev/.   My data was on the second slice, which contained four UFS
partitions.
Running fdisk on ad10 showed that the slices were there, just not in /dev/.

I had backups, but I wanted the data anyway, so I used testdisk to create
an image of the second slice (dd would have worked just as well, I guess).
I then mounted the raw slice image via an md0 device.   As expected,
md0a, md0d, md0e and md0f showed up in /dev/.
Great...  I had my data back!
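
(If I had used dd instead of testdisk, I guess it would have looked something
like this, pulling the slice straight off the raw disk with the offset and
length taken from the fdisk output; START and SIZE are placeholders for those
numbers:
    # fdisk ad10 reports "start" and "size" for each slice in 512-byte sectors
    dd if=/dev/ad10 of=/data/image.dd bs=512 skip=$START count=$SIZE conv=noerror,sync
Untested, since testdisk did the job for me.)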

My only real question is: why did the devices fail to be created in
/dev for the original disk?  And is there a way to force devfs to
rescan a device and create those sub-devices?
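
(One untested guess on the second question: the usual trick for making GEOM
re-taste a disk after its partition table changes is to open the device for
writing and close it again, e.g.
    true > /dev/ad10
I don't know whether that would have brought the slice devices back in my case.)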

And now, a procedure for anyone who finds themselves having to recover a
slice containing UFS partitions.  I kinda guessed my way through this and
was surprised that it was so easy.
    Use sysutils/testdisk to dump the slice to a flat file (/data/image.dd
in my case).
    mdconfig -a -t vnode -f /data/image.dd -u 0
    mount /dev/md0a /oldserv
    mount /dev/md0f /oldserv/usr
    mount /dev/md0d /oldserv/var
    mount /dev/md0e /oldserv/tmp
    done!
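
To put it all away afterwards, I believe this is enough (unmount the nested
filesystems first, then detach the md device):
    umount /oldserv/tmp /oldserv/var /oldserv/usr /oldserv
    mdconfig -d -u 0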


Interestingly enough, I tried this with a zvol as well:
    Use sysutils/testdisk to dump the slice to a flat file, /data/image.dd
(it was about 70G).
    Create the zvol:  zfs create -V 70G data/oldserv  (make sure the size
is the same)
    dd if=/data/image.dd of=/dev/zvol/data/oldserv bs=1M


Unfortunately, it did not work out.   The partition devices were created,
but I had trouble mounting them.  I am not sure why.
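
(If I try it again, I will start by checking exactly which device nodes
appeared on the zvol and whether the filesystems in them look sane before
mounting; the paths below are my guess at how the bsdlabel partitions would
be named:
    ls /dev/zvol/data/oldserv*
    fsck -n -t ufs /dev/zvol/data/oldserva
)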

I learned a little about UFS data recovery, a little about md devices and a
little about zvols today.
I assume that if I had done the same with the whole-disk device instead
of just the slice, I would have seen md0s2a, md0s2d, etc. in /dev/...

-D