Date:      Fri, 26 Jul 2013 01:33:01 -0700
From:      Kirk Richard Holz <krh@kirkholz.com.au>
To:        <freebsd-fs@freebsd.org>
Subject:   Trying to recover 2-element zfs striped (raid0) filesystem
Message-ID:  <1b756c89576eb509d1197c4d9ab66fea@kirkholz.com>

I have a server that was hit by some kind of power surge. Clearly it 
was poorly configured, partly by me -- all I want to do now is recover 
as much data from it as possible.

--
# uname -a
FreeBSD server 9.1-RELEASE FreeBSD 9.1-RELEASE 
root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64
--

The partition table of one of the two disks in a zfs striped (raid0) 
array has been corrupted.

When the GPT partition table on the second drive (ada3) was corrupted, 
it apparently fell back to the backup partition table, which wasn't 
current. I'm not sure how that happened.
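
In case it is useful, this is roughly how I understand the partition 
table and the ZFS labels on that disk can be inspected directly (the 
device names are the ones from my system; corrections welcome if these 
are the wrong tools for the job):

--
# Show what gpart currently believes is on the disk
gpart show ada3
gpart list ada3

# Dump the ZFS vdev labels from the old slice and from the raw disk; if
# the pool member is intact, each of the four labels should print the
# pool name, guid and top-level vdev configuration.
zdb -l /dev/ada3s1
zdb -l /dev/ada3
--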

ZFS / zpool can't bring the filesystem back to full functionality; the 
zpool commands just hang instead of producing any verbose diagnostics.
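
(My understanding is that the usual next step would be to export the 
pool and retry a forced, read-only import, so that nothing gets written 
to either disk while the second member is missing. I am listing it here 
only as what I believe the procedure to be, so someone can tell me 
whether it is safe in this situation:)

--
# Export the half-broken pool, then try to bring it back read-only.
zpool export zShare
zpool import -f -o readonly=on zShare
--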

This is what I get from zfs:
--
# zpool list -v
NAME                    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zShare                 2.72T   796G  1.94T    28%  1.00x  UNAVAIL  -
   ada1                  928G   362G   566G         -
   8683733800792668130  1.81T   433G  1.39T     16.0E
--

Now, both ada1 and the 1.81T drive are available, but the 1.81T drive 
(ada3) currently shows up like this in gpart list:
--
Geom name: ada3
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029167
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: ada3s1
    Mediasize: 2000397795328 (1.8T)
    Sectorsize: 512
    Stripesize: 4096
    Stripeoffset: 0
    Mode: r0w0e0
    rawtype: 131
    length: 2000397795328
    offset: 1048576
    type: linux-data
    index: 1
    end: 3907028991
    start: 2048
Consumers:
1. Name: ada3
    Mediasize: 2000398934016 (1.8T)
    Sectorsize: 512
    Stripesize: 4096
    Stripeoffset: 0
    Mode: r0w0e0
--
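
To double-check whether anything of the original GPT survives on ada3, 
I believe the primary header (LBA 1) and the backup header (the last 
LBA, 3907029167 according to the gpart output above) can be read 
directly; an intact GPT header should start with the "EFI PART" 
signature. Please correct me if these offsets are wrong:

--
# Primary GPT header location (LBA 1)
dd if=/dev/ada3 bs=512 skip=1 count=1 | hexdump -C | head

# Backup GPT header location (last sector of the disk)
dd if=/dev/ada3 bs=512 skip=3907029167 count=1 | hexdump -C | head
--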

Drive ada3 is the only candidate for the second element of the 
filesystem:

--
# zpool status -xv
   pool: zShare
  state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
    see: http://illumos.org/msg/ZFS-8000-HC
   scan: none requested
config:

         NAME                   STATE     READ WRITE CKSUM
         zShare                 UNAVAIL      0     0     0
           ada1                 ONLINE       0     0     0
           8683733800792668130  UNAVAIL      0     0     0  was /dev/ada3s1

errors: Permanent errors have been detected in the following files:

         zShare:<0x7386>
--

I have run zpool clear.

I would appreciate help with the best commands to run to pinpoint the 
problem and to attempt recovery.
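
For completeness, the rough plan I had in mind (based only on my own 
reading, so please tell me if it is a bad idea) was, after exporting 
the pool, to let ZFS scan the devices itself and then attempt a forced 
read-only import, falling back to the -F recovery option if the most 
recent transaction group is damaged:

--
# List whatever importable pools ZFS can find, without importing anything.
zpool import -d /dev

# Forced read-only import; -F asks ZFS to roll back to the last
# consistent transaction group, at the cost of the most recent writes.
zpool import -f -F -o readonly=on zShare
--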

Thank you in advance,

Kirk



