From owner-freebsd-fs@freebsd.org Sat May 28 23:08:01 2016
Date: Sat, 28 May 2016 16:07:59 -0700
Subject: Re: ZFS - RAIDZ1 Recovery (Evgeny Sam)
From: Evgeny Sam
To: freebsd-fs@freebsd.org
Content-Type: text/plain; charset=UTF-8

BlackCat,

I ran the command "zpool import -fFn 2918670121059000644 zh_vol_old" and it
did not work:

    [root@juicy] ~# zpool import -fFn 2918670121059000644 zh_vol_old
    [root@juicy] ~# zpool status
    no pools available

I think it did not work because I am running it on the cloned drives, which
have different GPTIDs; please correct me if I am wrong. I can switch to the
original drives if you suggest so.

Kevin,

At this moment the third drive is connected, and it is/was faulty. Also, the
rest of the drives are clones of the original ones.

Thank you,
EVGENY.

On Fri, May 27, 2016 at 5:00 AM, wrote:
> Send freebsd-fs mailing list submissions to
>         freebsd-fs@freebsd.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> or, via email, send a message with subject or body 'help' to
>         freebsd-fs-request@freebsd.org
>
> You can reach the person managing the list at
>         freebsd-fs-owner@freebsd.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of freebsd-fs digest..."
>
>
> Today's Topics:
>
>    1. Re: ZFS - RAIDZ1 Recovery (Kevin P. Neal)
>    2. Re: ZFS - RAIDZ1 Recovery (BlackCat)
>    3. Re: ZFS - RAIDZ1 Recovery (InterNetX - Juergen Gotteswinter)
>    4. Re: ZFS - RAIDZ1 Recovery (Evgeny Sam)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 26 May 2016 20:47:10 -0400
> From: "Kevin P. Neal"
> To: esamorokov
> Cc: freebsd-fs@freebsd.org, BlackCat
> Subject: Re: ZFS - RAIDZ1 Recovery
> Message-ID: <20160527004710.GA47195@neutralgood.org>
> Content-Type: text/plain; charset=us-ascii
>
> On Thu, May 26, 2016 at 03:26:18PM -0700, esamorokov wrote:
> > Hello All,
> >
> >     My name is Evgeny and I have 3 x 3TB in RAIDZ1, where one drive is
> > gone and I accidentally screwed the other two. The data should be fine;
> > I just need to revert the uberblock to the point in time where I started
> > making changes.
>
> You may need to ask on a ZFS or OpenZFS specific list. I'm not aware of
> many deep ZFS experts who hang out on this list.
>
> > History:
> >     I was using the web GUI of FreeNAS and it reported a failed drive
> >     I shut down the computer and replaced the drive, but I did not
> > notice that I accidentally disconnected the power of another drive
>
> What happened to the third drive, the one you pulled? Did it fail
> in a way that may make it viable for an attempt to revive the pool?
> Or is it just a brick at this point, in which case it is useless?
>
> If the third drive is perhaps usable, then make sure all three are
> connected and powered up.
> --
> Kevin P. Neal                                http://www.pobox.com/~kpn/
>
> "Nonbelievers found it difficult to defend their position in \
>     the presence of a working computer." -- a DEC Jensen paper
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 27 May 2016 10:36:11 +0300
> From: BlackCat
> To: esamorokov
> Cc: freebsd-fs@freebsd.org
> Subject: Re: ZFS - RAIDZ1 Recovery
> Message-ID:
>         <CAD-rSeea_7TzxREVAsn8tKxLbtth62m3j8opsb2FoA3qc_ZrsQ@mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Hello Evgeny,
>
> 2016-05-27 1:26 GMT+03:00 esamorokov:
> > I have 3 x 3TB in RAIDZ1, where one drive is gone and I accidentally
> > screwed the other two. The data should be fine; I just need to revert
> > the uberblock to the point in time where I started making changes.
>
> Try the following command; it just checks whether it is possible to
> import your pool by discarding some of the most recent writes:
>
> # zpool import -fFn 2918670121059000644 zh_vol_old
>
> Because you have already created a new pool with the same name as the
> old one, this command imports the pool by its ID (2918670121059000644)
> under a new name (zh_vol_old).
>
> > History:
> >     I was using the web GUI of FreeNAS and it reported a failed drive
> >     I shut down the computer and replaced the drive, but I did not
> > notice that I accidentally disconnected the power of another drive
> >     I powered on the server and expanded the pool while only one drive
> > of the pool was active
>
> As far as I understand the attached log, ZFS assumes that the disk data
> is corrupted. But this is quite strange, since ZFS normally survives if
> you forget to attach some disk during bad-disk replacement.
>
> >     Then I began to really learn ZFS and mess with bits
> >     At some point I created backup bit-to-bit images of the two drives
> > from the pool (using R-Studio)
>
> A question out of curiosity: are you experimenting now with the copies
> or with the original disks?
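[Editor's note: with -n, zpool import only reports what -F would do and
imports nothing, so an empty "zpool status" afterwards does not by itself
mean the rewind failed. A minimal sketch of the sequence BlackCat suggests,
printed rather than executed since the real commands need root and the
actual disks; the helper function name is made up, and the pool ID and new
name are the ones from this thread:]

```shell
#!/bin/sh
# Sketch only: prints the commands instead of running them. The pool ID
# (2918670121059000644) and new name (zh_vol_old) come from this thread.
plan_recovery_import() {
    pool_id=$1
    new_name=$2
    # Step 1: dry run. -n makes -F report whether discarding the most
    # recent transactions would allow the import; nothing is imported,
    # so "zpool status" stays empty afterwards.
    echo "zpool import -fFn $pool_id $new_name"
    # Step 2: if the dry run looks sane, repeat without -n, read-only
    # first so nothing is written to the disks while inspecting data.
    # (If cloned disks are not found automatically, adding -d /dev
    # points the device scan at the right directory.)
    echo "zpool import -fF -o readonly=on $pool_id $new_name"
    # Step 3: verify.
    echo "zpool status $new_name"
}

plan_recovery_import 2918670121059000644 zh_vol_old
```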
>
> > Specs:
> >     OS: FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20
> > 12:48:50 PST 2013
> >     RAID: [root@juicy] ~# camcontrol devlist
> >     at scbus1 target 0 lun 0 (pass1,ada1)
> >     at scbus2 target 0 lun 0 (ada2,pass2)
> >     at scbus3 target 0 lun 0 (pass3,ada3)
> >     [root@juicy] ~# zdb
> >     zh_vol:
> >         version: 5000
> >         name: 'zh_vol'
> >         state: 0
> >         txg: 14106447
> >         pool_guid: 2918670121059000644
> >         hostid: 1802987710
> >         hostname: ''
> >         vdev_children: 1
> >         vdev_tree:
> >             type: 'root'
> >             id: 0
> >             guid: 2918670121059000644
> >             create_txg: 4
> >             children[0]:
> >                 type: 'raidz'
> >                 id: 0
> >                 guid: 14123440993587991088
> >                 nparity: 1
> >                 metaslab_array: 34
> >                 metaslab_shift: 36
> >                 ashift: 12
> >                 asize: 8995321675776
> >                 is_log: 0
> >                 create_txg: 4
> >                 children[0]:
> >                     type: 'disk'
> >                     id: 0
> >                     guid: 17624020450804741401
> >                     path: '/dev/gptid/6e5cea27-7f52-11e3-9cd8-d43d7ed5b587'
> >                     whole_disk: 1
> >                     DTL: 137
> >                     create_txg: 4
> >                 children[1]:
> >                     type: 'disk'
> >                     id: 1
> >                     guid: 3253299067537287428
> >                     path: '/dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587'
> >                     whole_disk: 1
> >                     DTL: 133
> >                     create_txg: 4
> >                 children[2]:
> >                     type: 'disk'
> >                     id: 2
> >                     guid: 17999524418015963258
> >                     path: '/dev/gptid/1e898758-9488-11e3-a86e-d43d7ed5b587'
> >                     whole_disk: 1
> >                     DTL: 134
> >                     create_txg: 4
> >         features_for_read:
>
> --
> BR BC
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 27 May 2016 09:30:30 +0200
> From: InterNetX - Juergen Gotteswinter
> To: esamorokov, freebsd-fs@freebsd.org, BlackCat
> Subject: Re: ZFS - RAIDZ1 Recovery
> Message-ID: <3af5eba4-4e04-abc4-9fa7-d0a1ce47747e@internetx.com>
> Content-Type: text/plain; charset=windows-1252
>
> Hi,
>
> after scrolling through the "History", I would wonder whether it is not
> completely messed up now. Less is more in such situations.
>
> Juergen
>
> Am 5/27/2016 um 12:26 AM schrieb esamorokov:
> > Hello All,
> >
> >     My name is Evgeny and I have 3 x 3TB in RAIDZ1, where one drive is
> > gone and I accidentally screwed the other two. The data should be fine;
> > I just need to revert the uberblock to the point in time where I started
> > making changes.
> >
> >     I AM KINDLY ASKING FOR HELP! The pool had all of the family memories
> > for many years :( Thanks in advance!
> >
> >     I am not a FreeBSD guru and have been using ZFS for a couple of
> > years, but I know Linux and do some programming/scripting.
> >     Since that incident I have started learning the depths of ZFS,
> > but I definitely need help with it at this point.
> >     Please don't ask me why I did not have backups; I was building a
> > backup server in my garage when it happened.
> >
> > History:
> >     I was using the web GUI of FreeNAS and it reported a failed drive
> >     I shut down the computer and replaced the drive, but I did not
> > notice that I accidentally disconnected the power of another drive
> >     I powered on the server and expanded the pool while only one drive
> > of the pool was active
> >     Then I began to really learn ZFS and mess with bits
> >     At some point I created backup bit-to-bit images of the two drives
> > from the pool (using R-Studio)
> >
> > [Specs and zdb output trimmed; identical to the quote in Message 2.]
> >
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
>
> ------------------------------
>
> Message: 4
> Date: Fri, 27 May 2016 00:38:56 -0700
> From: Evgeny Sam
> To: jg@internetx.com
> Cc: BlackCat, freebsd-fs@freebsd.org
> Subject: Re: ZFS - RAIDZ1 Recovery
> Message-ID:
>         4XKK7qiOTtYBka_gHzkVNyXh78ecvhOwqxpMZLdcsupw@mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Hi,
> I don't know if it helps, but right after I recreated the pool with the
> absolute paths of the drives (adaX), I made a bit-to-bit image copy of the
> drives. Now I am restoring those images to the NEW DRIVES (similar models).
>
> Thank you,
> Evgeny.
> On May 27, 2016 12:30 AM, "InterNetX - Juergen Gotteswinter" <
> jg@internetx.com> wrote:
>
> > Hi,
> >
> > after scrolling through the "History", I would wonder whether it is not
> > completely messed up now. Less is more in such situations.
> >
> > Juergen
> >
> > [Remainder of quote trimmed; identical to Message 3.]
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
> ------------------------------
>
> End of freebsd-fs Digest, Vol 672, Issue 6
> ******************************************
>