Subject: Re: ZFS unable to import pool
From: Gena Guchin
Date: Wed, 23 Apr 2014 07:09:35 -0700
To: Johan Hendriks
Cc: freebsd-fs@freebsd.org

Johan,

Looking through the history, I DID add that disk ada7 (!) to the pool,
but I added it as a separate disk.  I wanted to re-add the disk to the
storage pool, but it got added as a new disk… this does help a little.

Is there anything I can do now?  Can I remove that vdev?

Thanks!
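To be concrete, the difference is roughly this (the pool name "storage"
is just a stand-in, and these commands are my best guess at what
happened, not something copied from shell history):

    # adds ada7 as a brand-new single-disk top-level vdev,
    # striped next to the existing raidz (what apparently happened)
    zpool add storage ada7

    # puts ada7 back into the existing raidz vdev in place of the
    # missing member, addressed by the GUID 'zpool status' shows
    zpool replace storage <missing-member-guid> ada7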
On Apr 23, 2014, at 3:18 AM, Johan Hendriks wrote:

> On 23-04-14 12:01, Hugo Lombard wrote:
>> Hello
>>
>> Your original 'zpool import' output shows the following:
>>
>>     Additional devices are known to be part of this pool, though their
>>     exact configuration cannot be determined.
>>
>> I'm thinking your problem might be related to devices that are supposed
>> to be part of the pool but that are not shown in the import.
>>
>> For instance, here's my attempt at recreating your scenario:
>>
>>   # zpool import
>>      pool: t
>>        id: 15230454775812525624
>>     state: DEGRADED
>>    status: One or more devices are missing from the system.
>>    action: The pool can be imported despite missing or damaged devices.  The
>>            fault tolerance of the pool may be compromised if imported.
>>       see: http://illumos.org/msg/ZFS-8000-2Q
>>    config:
>>
>>            t                        DEGRADED
>>              raidz1-0               DEGRADED
>>                md3                  ONLINE
>>                md4                  ONLINE
>>                md5                  ONLINE
>>                md6                  ONLINE
>>                3421664295019948379  UNAVAIL  cannot open
>>            cache
>>              md1s2
>>            logs
>>              md1s1                  ONLINE
>>   #
>>
>> As you can see, the pool status is 'DEGRADED' instead of 'UNAVAIL', and
>> I don't have the 'Additional devices...' message.
>>
>> The pool imports OK:
>>
>>   # zpool import t
>>   # zpool status t
>>     pool: t
>>    state: DEGRADED
>>   status: One or more devices could not be opened.  Sufficient replicas
>>           exist for the pool to continue functioning in a degraded state.
>>   action: Attach the missing device and online it using 'zpool online'.
>>      see: http://illumos.org/msg/ZFS-8000-2Q
>>     scan: none requested
>>   config:
>>
>>           NAME                     STATE     READ WRITE CKSUM
>>           t                        DEGRADED     0     0     0
>>             raidz1-0               DEGRADED     0     0     0
>>               md3                  ONLINE       0     0     0
>>               md4                  ONLINE       0     0     0
>>               md5                  ONLINE       0     0     0
>>               md6                  ONLINE       0     0     0
>>               3421664295019948379  UNAVAIL      0     0     0  was /dev/md7
>>           logs
>>             md1s1                  ONLINE       0     0     0
>>           cache
>>             md1s2                  ONLINE       0     0     0
>>
>>   errors: No known data errors
>>   #
>>
>> As a further test, let's see what happens when the cache disk
>> disappears:
>>
>>   # zpool export t
>>   # gpart delete -i 2 md1
>>   md1s2 deleted
>>   # zpool import
>>      pool: t
>>        id: 15230454775812525624
>>     state: DEGRADED
>>    status: One or more devices are missing from the system.
>>    action: The pool can be imported despite missing or damaged devices.  The
>>            fault tolerance of the pool may be compromised if imported.
>>       see: http://illumos.org/msg/ZFS-8000-2Q
>>    config:
>>
>>            t                        DEGRADED
>>              raidz1-0               DEGRADED
>>                md3                  ONLINE
>>                md4                  ONLINE
>>                md5                  ONLINE
>>                md6                  ONLINE
>>                3421664295019948379  UNAVAIL  cannot open
>>            cache
>>              7736388725784014558
>>            logs
>>              md1s1                  ONLINE
>>   # zpool import t
>>   # zpool status t
>>     pool: t
>>    state: DEGRADED
>>   status: One or more devices could not be opened.  Sufficient replicas
>>           exist for the pool to continue functioning in a degraded state.
>>   action: Attach the missing device and online it using 'zpool online'.
>>      see: http://illumos.org/msg/ZFS-8000-2Q
>>     scan: none requested
>>   config:
>>
>>           NAME                     STATE     READ WRITE CKSUM
>>           t                        DEGRADED     0     0     0
>>             raidz1-0               DEGRADED     0     0     0
>>               md3                  ONLINE       0     0     0
>>               md4                  ONLINE       0     0     0
>>               md5                  ONLINE       0     0     0
>>               md6                  ONLINE       0     0     0
>>               3421664295019948379  UNAVAIL      0     0     0  was /dev/md7
>>           logs
>>             md1s1                  ONLINE       0     0     0
>>           cache
>>             7736388725784014558    UNAVAIL      0     0     0  was /dev/md1s2
>>
>>   errors: No known data errors
>>   #
>>
>> So even with a missing raidz component and a missing cache device, the
>> pool still imports.
>>
>> I think some crucial piece of information is missing to complete the
>> picture.
>>
> Did you add an extra disk to the pool at some point in the past?
> That could explain the whole issue, as the pool is missing an entire vdev.
>
> regards
> Johan
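For anyone who wants to reproduce the test above, the md-backed pool
could have been set up roughly like this (the sizes and unit numbers are
guesses; Hugo's actual setup commands are not shown in the thread):

    # five swap-backed md devices to act as the raidz members
    mdconfig -a -t swap -s 256m -u 3
    mdconfig -a -t swap -s 256m -u 4
    mdconfig -a -t swap -s 256m -u 5
    mdconfig -a -t swap -s 256m -u 6
    mdconfig -a -t swap -s 256m -u 7

    # one more md device, carved into two MBR slices for log and cache
    mdconfig -a -t swap -s 256m -u 1
    gpart create -s mbr md1
    gpart add -t freebsd -s 100m md1      # becomes md1s1 (log)
    gpart add -t freebsd md1              # becomes md1s2 (cache)

    # create the pool, then make md7 disappear to provoke the
    # DEGRADED import shown above
    zpool create t raidz md3 md4 md5 md6 md7 log md1s1 cache md1s2
    zpool export t
    mdconfig -d -u 7
    zpool import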