From: Gena Guchin
Subject: Fwd: ZFS unable to import pool
Date: Mon, 21 Apr 2014 17:06:32 -0700 (PDT)
To: FreeBSD Filesystems <freebsd-fs@freebsd.org>

Begin forwarded message:

> From: Gena Guchin
> Subject: Re: ZFS unable to import pool
> Date: April 21, 2014 at 4:25:14 PM PDT
> To: Hakisho Nukama
>
> Hakisho,
>
> I did try it.
>
> # zpool import -F -o readonly=on storage
> cannot import 'storage': one or more devices is currently unavailable
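An editorial aside, not part of the archived thread: when -F -o readonly=on fails this way, zpool(8) on FreeBSD 10 documents two further import options worth knowing. -d restricts the device scan to an explicit directory, and -m allows an import to proceed when a log device is missing. A minimal sketch, assuming the pool name 'storage' from the output above:

  # zpool import -d /dev
  # zpool import -m -F -o readonly=on storage

The first command only lists what ZFS can currently see under /dev; the second attempts the same read-only rewind import while tolerating a missing log device.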
>
> # gpart list
> Geom name: ada0
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 1953525134
> first: 34
> entries: 128
> scheme: GPT
> Providers:
> 1. Name: ada0p1
>    Mediasize: 524288 (512K)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r0w0e0
>    rawuuid: e621bb07-a4a4-11e3-98fc-001d7d090860
>    rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
>    label: gptboot0
>    length: 524288
>    offset: 20480
>    type: freebsd-boot
>    index: 1
>    end: 1063
>    start: 40
> 2. Name: ada0p2
>    Mediasize: 4294967296 (4.0G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e1
>    rawuuid: e6633c97-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
>    label: swap0
>    length: 4294967296
>    offset: 544768
>    type: freebsd-swap
>    index: 2
>    end: 8389671
>    start: 1064
> 3. Name: ada0p3
>    Mediasize: 995909353472 (928G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e2
>    rawuuid: e6953f31-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
>    label: zfs0
>    length: 995909353472
>    offset: 4295512064
>    type: freebsd-zfs
>    index: 3
>    end: 1953525127
>    start: 8389672
> Consumers:
> 1. Name: ada0
>    Mediasize: 1000204886016 (932G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r2w2e5
>
> Geom name: ada1
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 62499999
> first: 63
> entries: 4
> scheme: MBR
> Providers:
> 1. Name: ada1s1
>    Mediasize: 16105775616 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 32256
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 16105775616
>    offset: 32256
>    type: freebsd
>    index: 1
>    end: 31456655
>    start: 63
> 2. Name: ada1s2
>    Mediasize: 15893692416 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 3220905984
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 15893692416
>    offset: 16105807872
>    type: freebsd
>    index: 2
>    end: 62499023
>    start: 31456656
> Consumers:
> 1. Name: ada1
>    Mediasize: 32000000000 (30G)
>    Sectorsize: 512
>    Mode: r0w0e0
>
> Geom name: diskid/DISK-CVEM852600N5032HGN
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 62499999
> first: 63
> entries: 4
> scheme: MBR
> Providers:
> 1. Name: diskid/DISK-CVEM852600N5032HGNs1
>    Mediasize: 16105775616 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 32256
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 16105775616
>    offset: 32256
>    type: freebsd
>    index: 1
>    end: 31456655
>    start: 63
> 2. Name: diskid/DISK-CVEM852600N5032HGNs2
>    Mediasize: 15893692416 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 3220905984
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 15893692416
>    offset: 16105807872
>    type: freebsd
>    index: 2
>    end: 62499023
>    start: 31456656
> Consumers:
> 1. Name: diskid/DISK-CVEM852600N5032HGN
>    Mediasize: 32000000000 (30G)
>    Sectorsize: 512
>    Mode: r0w0e0
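A gloss on the listing above, not from the thread itself: the geom named diskid/DISK-CVEM852600N5032HGN is the same 30G SSD as ada1, published a second time through GEOM's disk-ident labels. If the duplicate provider names ever confuse a device scan, stock FreeBSD 10 has a loader tunable to suppress them; a sketch, assuming a standard /boot/loader.conf:

  # echo 'kern.geom.label.disk_ident.enable="0"' >> /boot/loader.conf
  # reboot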
>
> Geom name: ada2
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 1953525134
> first: 34
> entries: 128
> scheme: GPT
> Providers:
> 1. Name: ada2p1
>    Mediasize: 524288 (512K)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r0w0e0
>    rawuuid: e73e1154-a4a4-11e3-98fc-001d7d090860
>    rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
>    label: gptboot1
>    length: 524288
>    offset: 20480
>    type: freebsd-boot
>    index: 1
>    end: 1063
>    start: 40
> 2. Name: ada2p2
>    Mediasize: 4294967296 (4.0G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e1
>    rawuuid: e77bd5dd-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
>    label: swap1
>    length: 4294967296
>    offset: 544768
>    type: freebsd-swap
>    index: 2
>    end: 8389671
>    start: 1064
> 3. Name: ada2p3
>    Mediasize: 995909353472 (928G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e2
>    rawuuid: e7ad15ae-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
>    label: zfs1
>    length: 995909353472
>    offset: 4295512064
>    type: freebsd-zfs
>    index: 3
>    end: 1953525127
>    start: 8389672
> Consumers:
> 1. Name: ada2
>    Mediasize: 1000204886016 (932G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r2w2e5
>
> thanks for your help!
>
> On Apr 21, 2014, at 4:17 PM, Hakisho Nukama wrote:
>
>> Hi Gena,
>>
>> a missing cache device shouldn't be a problem. Some years ago there
>> was a bug where a pool could be lost along with a missing cache
>> device, but that is ancient history by now (pool version 19 or
>> thereabouts changed it); otherwise I wouldn't use a cache device
>> myself.
>>
>> You may also try other ZFS implementations to import your pool:
>> ZFSonLinux, Illumos.
>> https://github.com/zfsonlinux/zfs/issues/1863
>> https://groups.google.com/forum/#!topic/zfs-fuse/TaOCLPQ8mp0
>> https://forums.freebsd.org/viewtopic.php?&t=18221
>>
>> Have you tried the -o readonly=on option for zpool import?
>> Can you show your gpart list output?
>>
>> Best Regards,
>> Nukama
>>
>> On Mon, Apr 21, 2014 at 10:18 PM, Gena Guchin wrote:
>>> Hakisho,
>>>
>>> this is weird: while I do not see ONLINE next to the cache device
>>> ada1s2, it is on the same SSD as the log device ada1s1, just a
>>> different slice. I compared the ZFS labels on the two slices.
>>>
>>> [gena@ggulchin]-pts/0:57# zdb -l /dev/ada1s2
>>> --------------------------------------------
>>> LABEL 0
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 1
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 2
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 3
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
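For context, an addition rather than part of the thread: the state field in a ZFS label holds a pool_state_t, and in the illumos/FreeBSD sources the value 4 is POOL_STATE_L2CACHE, so a sparse label like the one above is what a healthy cache device normally looks like; state 1 on the log slice below is POOL_STATE_EXPORTED, consistent with an exported pool. One way to survey every pool member at once, sketched as a root shell one-liner using the device names from this thread:

  # for d in ada3 ada4 ada5 ada6 ada1s1 ada1s2; do echo "== $d =="; zdb -l /dev/$d | grep -E 'name:|state:|guid:'; done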
>>> [gena@ggulchin]-pts/0:58# zdb -l /dev/ada1s1
>>> --------------------------------------------
>>> LABEL 0
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>> --------------------------------------------
>>> LABEL 1
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>> --------------------------------------------
>>> LABEL 2
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>> --------------------------------------------
>>> LABEL 3
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>>
>>> does this mean the SSD drive is corrupted?
>>> is my pool lost forever?
>>>
>>> thanks!
>>>
>>> On Apr 21, 2014, at 2:24 PM, Hakisho Nukama wrote:
>>>
>>>> Hi Gena,
>>>>
>>>> there are several options for importing a pool that might work
>>>> here. It looks like only one device is missing from the raidz1, so
>>>> the pool should be importable if the cache device is also
>>>> available. Try connecting it back, since a missing cache device can
>>>> leave a pool non-importable.
>>>>
>>>> Read the zpool man page and look into the following flags:
>>>> zpool import -F -o readonly=on
>>>>
>>>> Best Regards,
>>>> Nukama
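One detail worth adding to this advice, editorially rather than from the thread: -F recovers a pool by discarding its last few transactions, and zpool(8) documents -n as a dry-run companion that reports whether the rewind would make the pool importable without actually performing it. A sketch, again assuming the pool name 'storage':

  # zpool import -F -n storage
  # zpool import -F -o readonly=on storage

Running the -n form first shows whether the recovery can succeed before any transactions are thrown away.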
>>>>
>>>> On Mon, Apr 21, 2014 at 7:29 PM, Gena Guchin wrote:
>>>>> Hello FreeBSD users,
>>>>>
>>>>> my apologies for reposting, but I really need your help!
>>>>>
>>>>> I have a huge problem with my ZFS server: I accidentally formatted
>>>>> one of the drives in an exported ZFS pool, and now I can't import
>>>>> the pool. This pool is extremely important to me. The device that
>>>>> is missing is still attached to the system. Any help would be
>>>>> greatly appreciated.
>>>>>
>>>>> # uname -a
>>>>> FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
>>>>>
>>>>> # zpool import
>>>>>    pool: storage
>>>>>      id: 11699153865862401654
>>>>>   state: UNAVAIL
>>>>>  status: One or more devices are missing from the system.
>>>>>  action: The pool cannot be imported. Attach the missing
>>>>>          devices and try again.
>>>>>     see: http://illumos.org/msg/ZFS-8000-6X
>>>>>  config:
>>>>>
>>>>>         storage                 UNAVAIL  missing device
>>>>>           raidz1-0              DEGRADED
>>>>>             ada3                ONLINE
>>>>>             ada4                ONLINE
>>>>>             ada5                ONLINE
>>>>>             ada6                ONLINE
>>>>>             248348789931078390  UNAVAIL  cannot open
>>>>>         cache
>>>>>           ada1s2
>>>>>         logs
>>>>>           ada1s1                ONLINE
>>>>>
>>>>>         Additional devices are known to be part of this pool, though their
>>>>>         exact configuration cannot be determined.
>>>>>
>>>>> # zpool list
>>>>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>>>>> zroot   920G  17.9G   902G     1%  1.00x  ONLINE  -
>>>>>
>>>>> # zpool upgrade
>>>>> This system supports ZFS pool feature flags.
>>>>>
>>>>> All pools are formatted using feature flags.
>>>>>
>>>>> Every feature flags pool has all supported features enabled.
>>>>>
>>>>> # zfs upgrade
>>>>> This system is currently running ZFS filesystem version 5.
>>>>>
>>>>> All filesystems are formatted with the current version.
>>>>>
>>>>> Thanks a lot!
>>>>>
>>>>> -- Gena
>>>>> _______________________________________________
>>>>> freebsd-fs@freebsd.org mailing list
>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
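A closing editorial note, not part of the archived thread: the UNAVAIL child of raidz1-0 is identified only by its former GUID, 248348789931078390. Since the formatted disk is still attached but no longer carries a matching label, walking the attached disks with zdb and comparing their guid fields should single out which physical device was wiped; a sketch, assuming the ada0 through ada6 device naming seen in this thread:

  # for d in /dev/ada[0-6]; do echo "== $d =="; zdb -l $d | grep -E 'pool_guid:|guid:' | sort -u; done

The healthy raidz1 members should all report pool_guid 11699153865862401654; the disk that prints no usable label is the one that was formatted.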