From owner-freebsd-fs@FreeBSD.ORG Thu Nov 26 11:30:06 2009
Delivered-To: freebsd-fs@hub.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id C5BC6106568F;
	Thu, 26 Nov 2009 11:30:06 +0000 (UTC)
	(envelope-from gnats@FreeBSD.org)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28])
	by mx1.freebsd.org (Postfix) with ESMTP id 9685C8FC15;
	Thu, 26 Nov 2009 11:30:06 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1])
	by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nAQBU6sa077051;
	Thu, 26 Nov 2009 11:30:06 GMT
	(envelope-from gnats@freefall.freebsd.org)
Received: (from gnats@localhost)
	by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nAQBU65B077048;
	Thu, 26 Nov 2009 11:30:06 GMT (envelope-from gnats)
Date: Thu, 26 Nov 2009 11:30:06 GMT
Message-Id: <200911261130.nAQBU65B077048@freefall.freebsd.org>
To: freebsd-fs@FreeBSD.org
From: kot@softlynx.ru
Reply-To: kot@softlynx.ru
Subject: Re: kern/140888: [zfs] boot fail from zfs root while the pool resilvering
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
X-List-Received-Date: Thu, 26 Nov 2009 11:30:06 -0000

The following reply was made to PR kern/140888; it has been noted by GNATS.

From: kot@softlynx.ru
To: bug-followup@FreeBSD.org
Subject: Re: kern/140888: [zfs] boot fail from zfs root while the pool resilvering
Date: Thu, 26 Nov 2009 14:02:12 +0300 (MSK)

 I found that booting keeps failing whenever the pool has at least one device that is not ONLINE and the pool state is DEGRADED.
 For instance:
 
 [root@livecd8:/]# zpool status
   pool: tank0
  state: DEGRADED
  scrub: none requested
 config:
 
         NAME                        STATE     READ WRITE CKSUM
         tank0                       DEGRADED     0     0     0
           raidz1                    DEGRADED     0     0     0
             replacing               DEGRADED     0     0     0
               12996219703647995136  UNAVAIL      0   298     0  was /dev/gpt/QM00002
               gpt/SN023432          ONLINE       0     0     0
             gpt/SN091234            ONLINE       0     0     0
 
 errors: No known data errors
 
 The pool is considered degraded even though gpt/QM00002 has been replaced with the new gpt/SN023432. Detaching the UNAVAIL component turns the pool back to the ONLINE state:
 
 [root@livecd8:/]# zpool detach tank0 12996219703647995136
 [root@livecd8:/]# zpool status
   pool: tank0
  state: ONLINE
  scrub: none requested
 config:
 
         NAME              STATE     READ WRITE CKSUM
         tank0             ONLINE       0     0     0
           raidz1          ONLINE       0     0     0
             gpt/SN023432  ONLINE       0     0     0
             gpt/SN091234  ONLINE       0     0     0
 
 errors: No known data errors
 
 In this state the system boots from tank0. It also keeps booting fine when a component is manually turned to the OFFLINE state, in any combination, for instance:
 
 [root@fresh-inst:~]# zpool status
   pool: tank0
  state: DEGRADED
 status: One or more devices has experienced an unrecoverable error.  An
         attempt was made to correct the error.  Applications are unaffected.
 action: Determine if the device needs to be replaced, and clear the errors
         using 'zpool clear' or replace the device with 'zpool replace'.
    see: http://www.sun.com/msg/ZFS-8000-9P
  scrub: none requested
 config:
 
         NAME              STATE     READ WRITE CKSUM
         tank0             DEGRADED     0     0     0
           raidz1          DEGRADED     0     0     0
             gpt/SN023432  ONLINE       0     0     0
             gpt/SN091234  OFFLINE      0   921     0
 
 errors: No known data errors
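 A note for anyone hitting the same symptom: the non-ONLINE-device condition described above can be detected from `zpool status` output before rebooting from the pool. The sketch below is not part of the original report; it assumes the status layout shown above (NAME/STATE columns), and the here-doc stands in for live output such as `zpool status -v tank0`:
 
 ```shell
 #!/bin/sh
 # Sketch: flag vdevs whose STATE is not ONLINE in zpool status output.
 # In real use, replace the here-doc with: status=$(zpool status -v tank0 | sed -n '/NAME/,/^$/p')
 status=$(cat <<'EOF'
 NAME                        STATE     READ WRITE CKSUM
 tank0                       DEGRADED     0     0     0
   raidz1                    DEGRADED     0     0     0
     replacing               DEGRADED     0     0     0
       12996219703647995136  UNAVAIL      0   298     0
       gpt/SN023432          ONLINE       0     0     0
     gpt/SN091234            ONLINE       0     0     0
 EOF
 )
 # Skip the header row; print the name and state of anything not ONLINE.
 bad=$(printf '%s\n' "$status" | awk 'NR > 1 && $2 != "ONLINE" { print $1, $2 }')
 if [ -n "$bad" ]; then
     printf 'Pool not fully ONLINE; booting from it may fail:\n%s\n' "$bad"
 fi
 ```
 
 With the status shown in this report, the check would list the DEGRADED pool/raidz1/replacing rows and the UNAVAIL device, matching the case where the reporter had to `zpool detach` the stale vdev before the system would boot.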