Date:      Sun, 7 Jun 2009 16:55:23 +0400
From:      "Yan V. Batuto" <yan.batuto@gmail.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: Strange ZFS pool failure after updating kernel v6->v13
Message-ID:  <e37d10340906070555j40e7d4d5lc97d7cb7ae2c1c61@mail.gmail.com>
In-Reply-To: <4A2A7DE4.1080008@egr.msu.edu>
References:  <e37d10340906060431l1981d954r530b66b934d5f18c@mail.gmail.com>  <4A2A7DE4.1080008@egr.msu.edu>

2009/6/6 Adam McDougall <mcdouga9@egr.msu.edu>:
> Yan V. Batuto wrote:
>>
>> Hello!
>>
>> RAID-Z v6 works OK with 7.2-RELEASE, but it fails with recent 7.2-STABLE.
>> --------------------------------------------------
>> # zpool status bigstore
>>  pool: bigstore
>>  state: ONLINE
>>  scrub: scrub completed with 0 errors on Fri Jun  5 22:28:19 2009
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         bigstore    ONLINE       0     0     0
>>           raidz1    ONLINE       0     0     0
>>             ad4     ONLINE       0     0     0
>>             ad6     ONLINE       0     0     0
>>             ad8     ONLINE       0     0     0
>>             ad10    ONLINE       0     0     0
>>
>> errors: No known data errors
>> --------------------------------------------------
>> After cvsupping to 7-STABLE, the usual procedure of rebuilding kernel
>> and world, and a reboot, the pool has failed.
>> It's quite strange that the pool now appears to consist of ad8, ad10,
>> and then ad8 and ad10 again, instead of ad4, ad6, ad8, ad10.
>>
>> I removed an additional disk controller a few weeks ago, so the RAID-Z
>> was originally created as ad8+ad10+ad12+ad14, and afterwards the
>> drives appeared as ad4+ad6+ad8+ad10. That was no trouble for ZFS v6,
>> but, probably, something is wrong here in ZFS v13.
>> --------------------------------------------------
>> # zpool status bigstore
>>  pool: bigstore
>>  state: UNAVAIL
>> status: One or more devices could not be used because the label is missing
>>         or invalid.  There are insufficient replicas for the pool to
>>         continue functioning.
>> action: Destroy and re-create the pool from a backup source.
>>    see: http://www.sun.com/msg/ZFS-8000-5E
>>  scrub: none requested
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         bigstore    UNAVAIL      0     0     0  insufficient replicas
>>           raidz1    UNAVAIL      0     0     0  insufficient replicas
>>             ad8     FAULTED      0     0     0  corrupted data
>>             ad10    FAULTED      0     0     0  corrupted data
>>             ad8     ONLINE       0     0     0
>>             ad10    ONLINE       0     0     0
>>
>
> Please try:
> zpool export bigstore
> zpool import bigstore
>
> This should make it find the right hard drives if they are present;
> otherwise it should give a more informative error.
>

Thank you!
I exported the pool on 7.2-RELEASE, then upgraded to 7.2-STABLE and
imported the pool back. Everything works OK now.


