From: Michael DeMan
Date: Fri, 30 Dec 2011 22:58:08 -0800
To: freebsd-fs@freebsd.org
Subject: zfs detach/replace

Hi All,

The origination of the problem is entirely my fault, on FreeBSD 8.1-RELEASE #0. We had old notes that attempting a 'replace' (which is appropriate for a mirror) leaves ZFS in a funky state on BSD, and I inadvertently did just that on a drive swap on a raidz2 pool. My old notes show that the only recovery we knew of at the time was to rsync or zfs-send the pool elsewhere, destroy the local pool, and rebuild from scratch (the rough command sequence from those notes is sketched after the status output below).

Is there a better way to handle this nowadays?

Thanks,
- Mike DeMan

# zpool status
  pool: zp1rz2
 state: DEGRADED
 scrub: scrub in progress for 4h5m, 9.28% done, 39h55m to go
config:

        NAME                        STATE     READ WRITE CKSUM
        zp1rz2                      DEGRADED     0     0     0
          raidz2                    DEGRADED     0     0     0
            label/ada0LABEL         ONLINE       0     0     0
            label/ada1LABEL         ONLINE       0     0     0
            label/ada2LABEL         ONLINE       0     0     0
            label/ada3LABEL         ONLINE       0     0     0
            label/ada4LABEL         ONLINE       0     0     0
            replacing               UNAVAIL      0  984K     0  insufficient replicas
              label/ada5LABEL/old   UNAVAIL      0 1.11M     0  cannot open
              label/ada5LABEL       UNAVAIL      0 1.11M     0  cannot open
            label/ada6LABEL         ONLINE       0     0     0
            label/ada7LABEL         ONLINE       0     0     0

errors: No known data errors
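
For reference, the evacuate-and-rebuild sequence from those old notes went roughly like this. This is only a sketch: the remote host (backuphost), the scratch pool (zp1backup), and the snapshot name (@evac) are placeholders for illustration, not the names we actually used.

    # 1. Take a recursive snapshot and stream the whole pool to a
    #    scratch pool on another machine.
    zfs snapshot -r zp1rz2@evac
    zfs send -R zp1rz2@evac | ssh backuphost zfs recv -dF zp1backup

    # 2. Destroy the damaged local pool and recreate it from scratch.
    zpool destroy zp1rz2
    zpool create zp1rz2 raidz2 \
        label/ada0LABEL label/ada1LABEL label/ada2LABEL label/ada3LABEL \
        label/ada4LABEL label/ada5LABEL label/ada6LABEL label/ada7LABEL

    # 3. Stream everything back into the fresh pool.
    ssh backuphost zfs send -R zp1backup@evac | zfs recv -dF zp1rz2

The send/recv round trip preserves snapshots and dataset properties (that is what -R is for), but it obviously means taking the pool out of service for the duration.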