From owner-freebsd-stable@FreeBSD.ORG Wed Feb 15 14:12:48 2012
Date: Wed, 15 Feb 2012 15:12:37 +0100 (CET)
From: Olaf Seibert <O.Seibert@cs.ru.nl>
To: freebsd-stable@freebsd.org
Subject: Re: ZFS faulted pool problem
Message-ID: <20120215141237.GA40349@twoquid.cs.ru.nl>
In-Reply-To: <20120213155902.GE867@twoquid.cs.ru.nl>
List-Id: Production branch of FreeBSD source code

I'm still (or again) in more or less the same situation as before.

I have tried things like exporting the pool and re-importing it.
However, the import didn't even want to work. Only once I found the
undocumented "zpool import -V" option by UTSLing did the pool get
imported again, but still without any attempt at resilvering. There
also exist (undocumented) -F (rewind) and -X (extreme rewind) options
for zpool import and zpool clear.
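For reference, the export/re-import sequence described above looks
roughly like this. This is only a sketch of what was attempted, not a
verified recovery procedure; -V is the undocumented option found by
reading the source, and its exact behaviour may differ between ZFS
versions:

```shell
# Detach the pool from the system, then try to bring it back.
zpool export tank

# A plain import refuses to work while a vdev is missing:
zpool import tank

# The undocumented -V flag (found by UTSLing) forces the import
# even though the pool is FAULTED; no resilver is started:
zpool import -V tank

# Confirm the resulting state:
zpool status tank
```

Note that -F and -X (rewind / extreme rewind) are also accepted by
both zpool import and zpool clear, and -n can be combined with them
for a dry run, as shown in the transcript below.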
However, the effect of -X seems to be that some extremely lengthy
operation is attempted that never finishes. In some cases a reboot was
necessary to terminate the process.

I have also tried to remove an extra disk, on the theory that with
raidz2 you can miss 2 disks, and that the problem might be restricted
to a single disk. I haven't done each disk yet, but so far there was
no success. (I "removed" each disk by unconfiguring the pass-through
on the Areca web interface.)

$ zpool status
  pool: tank
 state: FAULTED
status: One or more devices could not be opened.
        There are insufficient replicas for the pool to
        continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
  scan: scrub repaired 0 in 49h3m with 2 errors on Fri Jan 20 15:10:35 2012
config:

        NAME                     STATE     READ WRITE CKSUM
        tank                     FAULTED      0     0     2
          raidz2-0               DEGRADED     0     0     8
            da0                  ONLINE       0     0     0
            da1                  ONLINE       0     0     0
            da2                  ONLINE       0     0     0
            da3                  ONLINE       0     0     0
            3758301462980058947  UNAVAIL      0     0     0  was /dev/da4
            da5                  ONLINE       0     0     0

fourquid.1:~$ sudo zpool online tank da4
cannot open 'tank': pool is unavailable
fourquid.1:~$ sudo zpool clear -nF tank
fourquid.1:~$ sudo zpool clear -F tank
cannot clear errors for tank: I/O error
fourquid.1:~$ sudo zpool clear -nFX tank
(no output, uses some cpu, some I/O)

$ zdb -v                       ok
$ zdb -v -c tank               zdb: can't open 'tank': input/output error
$ zdb -v -l /dev/da[01235]     ok
$ zdb -v -u tank               zdb: can't open 'tank': Input/output error
$ zdb -v -l -u /dev/da[01235]  ok
$ zdb -v -m tank               zdb: can't open 'tank': Input/output error
$ zdb -v -m -X tank            no output, uses cpu and I/O
$ zdb -v -i tank               zdb: can't open 'tank': Input/output error
$ zdb -v -i -F tank            zdb: can't open 'tank': Input/output error
$ zdb -v -i -X tank            no output, uses cpu and I/O

Any ideas?

-Olaf.
-- 
Pipe rene = new PipePicture(); assert(Not rene.GetType().Equals(Pipe));