From owner-freebsd-fs@freebsd.org  Sat May 28 23:16:40 2016
Date: Sat, 28 May 2016 16:16:39 -0700
From: Evgeny Sam <esamorokov@gmail.com>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS - RAIDZ1 Recovery (Evgeny Sam)
Content-Type: text/plain; charset=UTF-8

Here is the current state of the drives:

zh_vol:
    version: 5000
    name: 'zh_vol'
    state: 0
    txg: 1491
    pool_guid: 10149654347507244742
    hostid: 1802987710
    hostname: 'juicy.zhelana.local'
    vdev_children: 2
    vdev_tree:
        type: 'root'
        id: 0
        guid: 10149654347507244742
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 5892508334691495384
            path: '/dev/ada0s2'
            whole_disk: 1
            metaslab_array: 33
            metaslab_shift: 23
            ashift: 12
            asize: 983564288
            is_log: 0
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 296669430778697937
            path: '/dev/ada2p2'
            whole_disk: 1
            metaslab_array: 37
            metaslab_shift: 34
            ashift: 12
            asize: 2997366816768
            is_log: 0
            create_txg: 1489
    features_for_read:

[root@juicy] ~# camcontrol devlist
  at scbus0 target 0 lun 0  (ada0,pass0)
  at scbus1 target 0 lun 0  (ada1,pass1)
  at scbus2 target 0 lun 0  (ada2,pass2)
  at scbus3 target 0 lun 0  (ada3,pass3)
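One way to check which of these devices still carry labels from the old raidz1
pool (pool_guid 2918670121059000644) is to read the ZFS labels directly with
zdb. A minimal sketch, assuming the partition and slice names shown above and
in the gpart output below; adjust the device nodes to the actual layout:

    # Dump the four ZFS labels stored on each candidate ZFS partition/slice
    # and look for the pool name, pool_guid and a raidz vdev_tree.
    zdb -l /dev/ada1p2
    zdb -l /dev/ada2p2
    zdb -l /dev/ada3p2
    zdb -l /dev/ada0s2

A device whose labels still report pool_guid 2918670121059000644 and a raidz
vdev_tree belongs to the old pool; one that reports the newly created zh_vol
(pool_guid 10149654347507244742) has at least had its labels overwritten.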
[root@juicy] ~# gpart show
=>       63  117231345  ada0  MBR  (55G)
         63    1930257     1  freebsd  [active]  (942M)
    1930320         63        - free -  (31k)
    1930383    1930257     2  freebsd  (942M)
    3860640       3024     3  freebsd  (1.5M)
    3863664      41328     4  freebsd  (20M)
    3904992  113326416        - free -  (54G)

=>      0  1930257  ada0s1  BSD  (942M)
        0       16          - free -  (8.0k)
       16  1930241       1  !0  (942M)

=>        34  5860533101  ada1  GPT  (2.7T)
          34          94        - free -  (47k)
         128     6291456     1  freebsd-swap  (3.0G)
     6291584  5854241544     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  ada2  GPT  (2.7T)
          34          94        - free -  (47k)
         128     6291456     1  freebsd-swap  (3.0G)
     6291584  5854241544     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  ada3  GPT  (2.7T)
          34          94        - free -  (47k)
         128     6291456     1  freebsd-swap  (3.0G)
     6291584  5854241544     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
   Mediasize: 3221225472 (3.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   rawuuid: 5d985baa-18ac-11e6-9c25-001b7859b93e
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3221225472
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 6291583
   start: 128
2. Name: ada1p2
   Mediasize: 2997371670528 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 5dacd737-18ac-11e6-9c25-001b7859b93e
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2997371670528
   offset: 3221291008
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 6291584
Consumers:
1. Name: ada1
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: ada2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada2p1
   Mediasize: 3221225472 (3.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   rawuuid: 5e164720-18ac-11e6-9c25-001b7859b93e
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3221225472
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 6291583
   start: 128
2. Name: ada2p2
   Mediasize: 2997371670528 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 5e2ab04c-18ac-11e6-9c25-001b7859b93e
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2997371670528
   offset: 3221291008
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 6291584
Consumers:
1. Name: ada2
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: ada3
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada3p1
   Mediasize: 3221225472 (3.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 2b570bb9-8e40-11e3-aa1c-d43d7ed5b587
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3221225472
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 6291583
   start: 128
2. Name: ada3p2
   Mediasize: 2997371670528 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2997371670528
   offset: 3221291008
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 6291584
Consumers:
1. Name: ada3
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
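For reference, the importable pools that ZFS can see on these partitions can
be listed without importing anything, by pointing the search at explicit
device directories. A sketch only; a bare "zpool import" with -d just scans
and prints, it does not modify the disks:

    # Scan the given directories for pool labels and list what is found.
    zpool import -d /dev
    zpool import -d /dev/gptid

If the old raidz1 pool is still recognizable, it should show up in that
listing under pool GUID 2918670121059000644 along with the reported state of
each member disk.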
On Sat, May 28, 2016 at 4:07 PM, Evgeny Sam wrote:

> BlackCat,
>      I ran the command "zpool import -fFn 2918670121059000644 zh_vol_old"
> and it did not work.
>
> [root@juicy] ~# zpool import -fFn 2918670121059000644 zh_vol_old
> [root@juicy] ~# zpool status
> no pools available
>
> I think it did not work because I am running it on the cloned drives,
> which have different GPT IDs; please correct me if I am wrong. I can
> switch to the original drives if you suggest so.
>
> Kevin,
>      At this moment the third drive is connected and it is/was faulty.
> Also, the rest of the drives are clones of the original ones.
>
> Thank you,
>
> EVGENY.
>
>
> On Fri, May 27, 2016 at 5:00 AM, wrote:
>
>> Send freebsd-fs mailing list submissions to
>>         freebsd-fs@freebsd.org
>>
>> To subscribe or unsubscribe via the World Wide Web, visit
>>         https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> or, via email, send a message with subject or body 'help' to
>>         freebsd-fs-request@freebsd.org
>>
>> You can reach the person managing the list at
>>         freebsd-fs-owner@freebsd.org
>>
>> When replying, please edit your Subject line so it is more specific
>> than "Re: Contents of freebsd-fs digest..."
>>
>>
>> Today's Topics:
>>
>>    1. Re: ZFS - RAIDZ1 Recovery (Kevin P. Neal)
>>    2. Re: ZFS - RAIDZ1 Recovery (BlackCat)
>>    3. Re: ZFS - RAIDZ1 Recovery (InterNetX - Juergen Gotteswinter)
>>    4. Re: ZFS - RAIDZ1 Recovery (Evgeny Sam)
>>
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Thu, 26 May 2016 20:47:10 -0400
>> From: "Kevin P. Neal"
>> To: esamorokov
>> Cc: freebsd-fs@freebsd.org, BlackCat
>> Subject: Re: ZFS - RAIDZ1 Recovery
>> Message-ID: <20160527004710.GA47195@neutralgood.org>
>> Content-Type: text/plain; charset=us-ascii
>>
>> On Thu, May 26, 2016 at 03:26:18PM -0700, esamorokov wrote:
>> > Hello All,
>> >
>> >     My name is Evgeny and I have 3 x 3TB in RAIDZ1, where one drive is
>> > gone and I accidentally screwed the other two. The data should be fine,
>> > I just need to revert the uberblock to the point in time where I
>> > started making changes.
>>
>> You may need to ask on a ZFS or OpenZFS specific list. I'm not aware of
>> many deep ZFS experts who hang out on this list.
>>
>> >     History:
>> >         I was using the web GUI of FreeNAS and it reported a failed drive
>> >         I shut down the computer and replaced the drive, but I did not
>> > notice that I accidentally disconnected power of another drive
>>
>> What happened to the third drive, the one you pulled? Did it fail
>> in a way that may make it viable for an attempt to revive the pool?
>> Or is it just a brick at this point, in which case it is useless?
>>
>> If the third drive is perhaps usable then make sure all three are
>> connected and powered up.
>> --
>> Kevin P. Neal                                http://www.pobox.com/~kpn/
>>
>> "Nonbelievers found it difficult to defend their position in \
>>     the presense of a working computer." -- a DEC Jensen paper
>>
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Fri, 27 May 2016 10:36:11 +0300
>> From: BlackCat
>> To: esamorokov
>> Cc: freebsd-fs@freebsd.org
>> Subject: Re: ZFS - RAIDZ1 Recovery
>> Message-ID:
>>         <CAD-rSeea_7TzxREVAsn8tKxLbtth62m3j8opsb2FoA3qc_ZrsQ@mail.gmail.com>
>> Content-Type: text/plain; charset=UTF-8
>>
>> Hello Evgeny,
>>
>> 2016-05-27 1:26 GMT+03:00 esamorokov:
>> > I have 3 x 3TB in RAIDZ1, where one drive is gone and I accidentally
>> > screwed the other two. The data should be fine, I just need to revert
>> > the uberblock to the point in time where I started making changes.
>>
>> try the following command (it just checks whether it is possible to
>> import your pool by discarding some of the most recent writes):
>>
>> # zpool import -fFn 2918670121059000644 zh_vol_old
>>
>> Because you have already created a new pool with the same name as the
>> old one, this command imports the pool by its ID (2918670121059000644)
>> with a new name (zh_vol_old).
>>
>> >     History:
>> >         I was using the web GUI of FreeNAS and it reported a failed drive
>> >         I shut down the computer and replaced the drive, but I did not
>> > notice that I accidentally disconnected power of another drive
>> >         I powered on the server and expanded the pool while only one
>> > drive of the pool was active
>>
>> As far as I understand the attached log, zfs assumes that the disk data
>> is corrupted. But this is quite strange, since zfs normally survives if
>> you forget to attach some disk during a bad disk replacement.
>>
>> >         Then I began to really learn ZFS and messing up with bits
>> >         At some point I created backup bit-to-bit images of the two
>> > drives from the pool (using R-Studio)
>> >
>> A question out of curiosity: are you experimenting now with the copies
>> or with the original disks?
>>
>> >
>> >     Specs:
>> >         OS: FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20
>> > 12:48:50 PST 2013
>> >         RAID: [root@juicy] ~# camcontrol devlist
>> >           at scbus1 target 0 lun 0  (pass1,ada1)
>> >           at scbus2 target 0 lun 0  (ada2,pass2)
>> >           at scbus3 target 0 lun 0  (pass3,ada3)
>> >         [root@juicy] ~# zdb
>> >         zh_vol:
>> >             version: 5000
>> >             name: 'zh_vol'
>> >             state: 0
>> >             txg: 14106447
>> >             pool_guid: 2918670121059000644
>> >             hostid: 1802987710
>> >             hostname: ''
>> >             vdev_children: 1
>> >             vdev_tree:
>> >                 type: 'root'
>> >                 id: 0
>> >                 guid: 2918670121059000644
>> >                 create_txg: 4
>> >                 children[0]:
>> >                     type: 'raidz'
>> >                     id: 0
>> >                     guid: 14123440993587991088
>> >                     nparity: 1
>> >                     metaslab_array: 34
>> >                     metaslab_shift: 36
>> >                     ashift: 12
>> >                     asize: 8995321675776
>> >                     is_log: 0
>> >                     create_txg: 4
>> >                     children[0]:
>> >                         type: 'disk'
>> >                         id: 0
>> >                         guid: 17624020450804741401
>> >                         path: '/dev/gptid/6e5cea27-7f52-11e3-9cd8-d43d7ed5b587'
>> >                         whole_disk: 1
>> >                         DTL: 137
>> >                         create_txg: 4
>> >                     children[1]:
>> >                         type: 'disk'
>> >                         id: 1
>> >                         guid: 3253299067537287428
>> >                         path: '/dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587'
>> >                         whole_disk: 1
>> >                         DTL: 133
>> >                         create_txg: 4
>> >                     children[2]:
>> >                         type: 'disk'
>> >                         id: 2
>> >                         guid: 17999524418015963258
>> >                         path: '/dev/gptid/1e898758-9488-11e3-a86e-d43d7ed5b587'
>> >                         whole_disk: 1
>> >                         DTL: 134
>> >                         create_txg: 4
>> >             features_for_read:
>>
>> --
>> BR BC
>>
>>
>> ------------------------------
>>
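(A note on the "zpool import -fFn" command suggested above: with -F, the -n
flag makes it a dry run, so nothing is actually imported and a subsequent
"zpool status" is expected to print "no pools available" either way. Also,
ZFS matches member disks by the GUIDs stored in their on-disk labels, not by
GPT partition UUIDs, so faithful bit-for-bit clones should still be
recognized as members of the pool. If the dry run looks reasonable, the
actual attempt might look like the following sketch; the altroot path is
only an example:

    # Real recovery import, read-only first so nothing on disk is rewritten.
    zpool import -o readonly=on -f -F -R /mnt/recovery 2918670121059000644 zh_vol_old
    zpool status zh_vol_old

Running with -F but without readonly=on would let ZFS actually discard the
most recent transactions on disk, which is better left until the data has
been copied off.)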
>> Message: 3
>> Date: Fri, 27 May 2016 09:30:30 +0200
>> From: InterNetX - Juergen Gotteswinter
>> To: esamorokov, freebsd-fs@freebsd.org, BlackCat
>> Subject: Re: ZFS - RAIDZ1 Recovery
>> Message-ID: <3af5eba4-4e04-abc4-9fa7-d0a1ce47747e@internetx.com>
>> Content-Type: text/plain; charset=windows-1252
>>
>> Hi,
>>
>> after scrolling through the "History" I would wonder if it's not
>> completely messed up now. Less is more in such situations..
>>
>> Juergen
>>
>> Am 5/27/2016 um 12:26 AM schrieb esamorokov:
>> > Hello All,
>> >
>> >     My name is Evgeny and I have 3 x 3TB in RAIDZ1, where one drive is
>> > gone and I accidentally screwed the other two. The data should be fine,
>> > I just need to revert the uberblock to the point in time where I
>> > started making changes.
>> >
>> >     I AM KINDLY ASKING FOR HELP! The pool had all of the family memories
>> > for many years :( Thanks in advance!
>> >
>> >     I am not a FreeBSD guru and have been using ZFS for a couple of
>> > years, but I know Linux and do some programming/scripting.
>> >     Since I got that incident I started learning the depths of ZFS,
>> > but I definitely need help on it at this point.
>> >     Please don't ask me why I did not have backups, I was building a
>> > backup server in my garage when it happened
>> >
>> >     History:
>> >         I was using the web GUI of FreeNAS and it reported a failed drive
>> >         I shut down the computer and replaced the drive, but I did not
>> > notice that I accidentally disconnected power of another drive
>> >         I powered on the server and expanded the pool while only one
>> > drive of the pool was active
>> >         Then I began to really learn ZFS and messing up with bits
>> >         At some point I created backup bit-to-bit images of the two
>> > drives from the pool (using R-Studio)
>> >
>> >
>> >     Specs:
>> >         [...]
>> >
>> >
>> > _______________________________________________
>> > freebsd-fs@freebsd.org mailing list
>> > https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>> >
>>
>>
>> ------------------------------
>>
>> Message: 4
>> Date: Fri, 27 May 2016 00:38:56 -0700
>> From: Evgeny Sam
>> To: jg@internetx.com
>> Cc: BlackCat, freebsd-fs@freebsd.org
>> Subject: Re: ZFS - RAIDZ1 Recovery
>> Message-ID:
>>         4XKK7qiOTtYBka_gHzkVNyXh78ecvhOwqxpMZLdcsupw@mail.gmail.com>
>> Content-Type: text/plain; charset=UTF-8
>>
>> Hi,
>>     I don't know if it helps, but right after I recreated the pool with
>> the absolute paths of the drives (adaX) I made a bit-to-bit image copy of
>> the drives. Now I am restoring those images to the NEW DRIVES (similar
>> models).
>>
>> Thank you,
>> Evgeny.
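(For the record, on FreeBSD a raw bit-for-bit copy of a whole disk can also
be taken with base-system tools; the image path below is only a placeholder.
A rough sketch, under the assumption that the source disk is ada1 and that
/backup has enough space:

    # Copy the whole disk, padding unreadable blocks instead of aborting.
    dd if=/dev/ada1 of=/backup/ada1.img bs=1m conv=noerror,sync
    # Alternatively, recoverdisk retries bad spots with smaller block sizes.
    recoverdisk /dev/ada1 /backup/ada1.img

Restoring such an image onto a replacement disk of at least the same size
reproduces the GPT, the swap partition and the ZFS labels exactly, GUIDs
included.)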
>> On May 27, 2016 12:30 AM, "InterNetX - Juergen Gotteswinter" <
>> jg@internetx.com> wrote:
>>
>> > Hi,
>> >
>> > after scrolling through the "History" I would wonder if it's not
>> > completely messed up now. Less is more in such situations..
>> >
>> > Juergen
>> >
>> > Am 5/27/2016 um 12:26 AM schrieb esamorokov:
>> > > Hello All,
>> > > [...]
>> >
>>
>>
>> ------------------------------
>>
>> Subject: Digest Footer
>>
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>>
>> ------------------------------
>>
>> End of freebsd-fs Digest, Vol 672, Issue 6
>> ******************************************
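If a read-only import of the old pool does succeed, the safest next step is
to copy the data off before experimenting any further. A sketch, assuming the
pool was imported under an altroot of /mnt/recovery as in the earlier example
and that rsync is available; note that new snapshots cannot be created on a
read-only pool, so a plain file-level copy (or zfs send of snapshots that
already exist) is the way to get the data out:

    # See what datasets came back and where they are mounted.
    zfs list -r -o name,used,mountpoint zh_vol_old
    # Copy everything mounted under the altroot to other storage.
    rsync -a /mnt/recovery/ /some/other/storage/

Only after the data is safe is it worth trying a writable import with -F and
letting ZFS roll the pool back on disk.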