Date:      Fri, 10 May 2013 16:45:41 +0300
From:      Volodymyr Kostyrko <c.kworr@gmail.com>
To:        Outback Dingo <outbackdingo@gmail.com>, freebsd-fs@freebsd.org
Subject:   Re: Corrupted zpool import -f FAILS state FAULTED
Message-ID:  <518CFA05.6090706@gmail.com>
In-Reply-To: <CAKYr3zz1gLZArACqdrzkr6APVMvom6y-80omghoo4nb1KMTrKA@mail.gmail.com>
References:  <CAKYr3zz1gLZArACqdrzkr6APVMvom6y-80omghoo4nb1KMTrKA@mail.gmail.com>

09.05.2013 15:31, Outback Dingo:
> ok zfs gurus, on a FreeBSD 9.1-STABLE box "zpool import -f" reports the
> pool status as FAULTED, "one or more devices contains corrupted data";
> however it's showing the guid as faulted in the pool, not the actual
> disk device /dev/daX. The pool is a single-vdev 24-disk raidz3.
> Essentially the hardware platform is a dual-node system, with 8
> enclosures connected to 24 SAS drives via 4 LSI cards. I am not
> currently using geom_multipath, but the box is zoned so that each node
> can see 50% of the drives. In case of failure, carp kicks in and
> migrates ("zpool import -af") the pools onto the other node. It seems
> as though somehow the pool is now seeing guids and not devices; not
> sure if the device ids have switched due to a reboot.

I'm not a ZFS guru, but I'll try to help.

Any console log snippets are welcome. What does "showing the guid as 
faulted in the pool" look like?
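
For reference, a pool in that state usually shows up in "zpool import" 
output with the missing children listed by their numeric guids rather 
than by device name, roughly like this (pool name and numbers here are 
made up for illustration, and I've cut the listing down from 24 disks):

   pool: tank
     id: 9413981916489974773
  state: FAULTED
 status: One or more devices contains corrupted data.
 config:

        tank                      FAULTED  corrupted data
          raidz3-0                FAULTED  corrupted data
            da0                   ONLINE
            17492613509537218003  UNAVAIL  corrupted data
            ...

If that's what you're seeing, the interesting part is which children 
are bare guids and which still resolve to daX names.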

What are the guids for all partitions? Do they overlap between the two nodes?
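
You can taste the on-disk labels yourself with zdb and compare the 
guids across both nodes (da0 is just an example; run it for each disk 
the node can see):

  # zdb -l /dev/da0 | grep -E 'guid|path'

Every disk in the pool should report the same pool_guid and a unique 
per-vdev guid. If two different device nodes report the same vdev 
guid, the nodes are probably seeing the same physical drive twice.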

ZFS recognizes devices by tasting their vdev labels, not by their 
logical location and naming. It can safely report any vdev location - 
but it requires the same set of vdevs to bring the pool online.
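
So renumbered daX devices by themselves shouldn't break the import. 
You can also point the import explicitly at a directory to force a 
fresh taste of everything in it (-d is a standard zpool import flag):

  # zpool import -d /dev

which lists every importable pool found by tasting the device nodes 
under /dev, regardless of what the disks were called last time.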

-- 
Sphinx of black quartz, judge my vow.


