From owner-freebsd-fs@FreeBSD.ORG Tue Sep 11 11:08:20 2012
From: Thomas Göllner (Newsletter) <Newsletter@goelli.de>
To: freebsd-fs@freebsd.org
Date: Tue, 11 Sep 2012 13:07:59 +0200
Subject: ZFS: Corrupted pool metadata after adding vdev to a pool - no opportunity to rescue data from healthy vdevs? Remove a vdev? Rewrite metadata?

Hi all,

I recently crashed my pool by adding a new vdev to it. I'm running NAS4Free 9.0.0.1 - Sandstorm (Revision 188).

My pool "GoelliZFS1" has one vdev - a raidz of 3 discs of 3 TB each. As I needed more space, I put 3 discs of 1.5 TB each into the machine and created a new raidz vdev.

Something must have gone wrong when I added the new vdev to the existing pool. I think the disc labels somehow got mixed up, because after adding the vdev my pool had a capacity of 16 TB o_O Up to that point I had done everything via the web GUI. I thought a restart might help, but after that my pool was gone.

So I did some reading and continued via CLI over SSH. I don't want to paste the whole log here, because it would be too long. I'll give a short summary, and if you want to know more, just ask ;-)

With "zpool import" I can see my pool. I checked the SMART logs to verify the disc names. The options -F and -X didn't help. With the option -V the pool was imported, but it is still faulted:

goelli-nas4free:~# zpool import -faV
goelli-nas4free:~# zpool status
  pool: GoelliZFS1
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-72
  scan: none requested
config:

        NAME         STATE    READ WRITE CKSUM
        GoelliZFS1   FAULTED      1     0     0
          missing-0  ONLINE       0     0     0
          raidz1-1   ONLINE       0     0     0
            ada3     ONLINE       0     0     0
            ada4     ONLINE       0     0     0
            ada4     ONLINE       0     0     0

I used "zdb -l" on all discs; all 4 labels are present on each disc. "zdb" also gave me some output (too long to post). So I'm sure my data is still on the discs - after adding the new vdev I changed nothing.

There must be a way to tell ZFS to dismiss the wrong entry in the metadata - or to edit the metadata myself...
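For reference, here is a condensed version of what I tried on the CLI. The device name in the zdb line is just an example; I ran it for every member disc of the pool:

    # list importable pools - this is where I can still see my pool
    zpool import

    # print the four ZFS labels of one member disc
    zdb -l /dev/ada3

    # rewind attempts that did not help
    zpool import -F GoelliZFS1
    zpool import -FX GoelliZFS1

    # forced import of the faulted pool - this produced the status output above
    zpool import -faV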
When I think of the following case, I think you would agree that there has to be a way to detach vdevs:

If you have a pool of 4 vdevs that is full of data, you are supposed to add more space to the pool by adding a new vdev, right? If now, for some reason, the newly attached vdev fails completely after a short time and with little new data written - what do you do? ZFS is always consistent on disk. ZFS is copy-on-write, so no data is changed until it is touched. In this case you have a pool with 4 healthy vdevs holding all your data and one faulty vdev holding almost nothing. And the message you get is to discard all your data, destroy the pool and roll back from backup?! Somewhat ridiculous, right?

I hope someone can tell me what I can try to do. I will appreciate any kind of help...

Greetings,
Thomas
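P.S.: One more idea I have not tried yet - a read-only import combined with the rewind option. I am assuming here that the ZFS v28 code in FreeBSD 9.0 supports read-only imports, and I don't know whether it can work on a pool whose top-level vdev shows up as "missing-0", so please treat this as an untested sketch:

    # read-only import with rewind, with an altroot so nothing
    # gets mounted over the live filesystem
    zpool import -o readonly=on -f -F -R /mnt GoelliZFS1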