From: Paul Wootton <paul-freebsd@fletchermoorland.co.uk>
Date: Mon, 05 Nov 2012 10:25:58 +0000
To: freeBSD-CURRENT Mailing List <freebsd-current@freebsd.org>
Subject: ZFS RaidZ-2 problems
Message-ID: <50979436.4020907@fletchermoorland.co.uk>
In-Reply-To: <508F98F9.3040604@fletchermoorland.co.uk>

I've already posted this to freebsd-fs@ but still have no idea why the
below has happened.

On 10/30/12 09:08, Paul Wootton wrote:
> Hi,
>
> I have had a lot of bad luck with SATA drives and have had them fail on
> me far too often. I started with a 3-drive RAIDZ and lost 2 drives at
> the same time. I upgraded to a 6-drive RAIDZ and lost 2 drives within
> hours of each other, and finally had a 9-drive RAIDZ (1 parity) and
> lost another 2 drives (as luck would have it, this time I had a 90%
> backup on another machine, so I did not lose everything). I finally
> decided that I should switch to a RAIDZ2 (my current setup).
> Now I have lost 1 drive and the pool is showing as faulted. I have
> tried exporting and reimporting, but that did not help either.
> Is this normal? Has anyone got any ideas as to what has happened and
> why?
>
> The fault this time might be cabling, so I might not have lost the
> data, but my understanding was that with RAIDZ-2 you could lose 2
> drives and still have a working pool.
> I do know the fault could also be the power supply, controller, etc. I
> can take care of all the hardware.
> The issue I have is: I have a 9-disk RAIDZ-2 pool with only 1 disk
> showing as offline, yet the pool is showing as faulted.
> If the power supply was bouncing and a drive was giving bad data, I
> would expect ZFS to report that 2 drives were faulted (1 offline and 1
> corrupt).
>
> Is there a way with ZDB that I can see why the pool is showing as
> faulted? Can it tell me which drives it thinks are bad, or have bad data?
>
> I do still have the 90% backup of the pool and nothing has really
> changed since that backup, so if someone wants me to try something and
> it blows the pool away, it's not the end of the world.
>
>
> Cheers
> Paul
>
>
>   pool: storage
>  state: FAULTED
> status: One or more devices could not be opened. There are insufficient
>         replicas for the pool to continue functioning.
> action: Attach the missing device and online it using 'zpool online'.
>    see: http://illumos.org/msg/ZFS-8000-3C
>   scan: resilvered 30K in 0h0m with 0 errors on Sun Oct 14 12:52:45 2012
> config:
>
>         NAME                      STATE     READ WRITE CKSUM
>         storage                   FAULTED       0     0     1
>           raidz2-0                FAULTED       0     0     6
>             ada0                  ONLINE        0     0     0
>             ada1                  ONLINE        0     0     0
>             ada2                  ONLINE        0     0     0
>             17777811927559723424  UNAVAIL       0     0     0  was /dev/ada3
>             ada4                  ONLINE        0     0     0
>             ada5                  ONLINE        0     0     0
>             ada6                  ONLINE        0     0     0
>             ada7                  ONLINE        0     0     0
>             ada8                  ONLINE        0     0     0
>             ada10p4               ONLINE        0     0     0
>
> root@filekeeper:/storage # zpool export storage
> root@filekeeper:/storage # zpool import storage
> cannot import 'storage': I/O error
>         Destroy and re-create the pool from
>         a backup source.
>
> root@filekeeper:/usr/home/paul # uname -a
> FreeBSD filekeeper.caspersworld.co.uk 10.0-CURRENT FreeBSD
> 10.0-CURRENT #0 r240967: Thu Sep 27 08:01:24 UTC 2012
>     root@filekeeper.caspersworld.co.uk:/usr/obj/usr/src/sys/GENERIC  amd64
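
On the ZDB question above, this is a rough sketch of the label and
configuration dumps I was planning to run next (flags taken from the zdb
man page; I have not yet verified them against this particular pool). The
idea is to compare the pool GUID and txg recorded in each disk's labels
and see which device the pool is actually objecting to:

    # Print the four vdev labels on a member disk (repeat for every ada
    # device in the pool); mismatched txg or pool_guid values should point
    # at the disk ZFS distrusts.
    zdb -l /dev/ada0

    # Read the pool configuration straight from the disks of the exported
    # pool, without importing it.
    zdb -e -C storage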
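And since the plain 'zpool import storage' above fails with an I/O error,
the fallback I had in mind is something like the following (again only a
sketch; the read-only and rewind options are from the zpool man page, and
-F may discard the most recent transactions, so I would only risk it
because the 90% backup exists):

    # First try importing read-only, which avoids replaying or writing
    # anything to the pool.
    zpool import -o readonly=on storage

    # If that still fails, attempt a recovery-mode import that may roll
    # the pool back to an earlier transaction group.
    zpool import -F storage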