Date: Mon, 21 Apr 2014 17:06:32 -0700
From: Gena Guchin <ggulchin@icloud.com>
To: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject: Fwd: ZFS unable to import pool
Message-ID: <50B7A3BC-293C-4A9E-AD13-30582EA4561E@icloud.com>
References: <79B67A2F-DE78-4272-BA1D-FD6E6D5F1D07@icloud.com>
Begin forwarded message:

> From: Gena Guchin <ggulchin@icloud.com>
> Subject: Re: ZFS unable to import pool
> Date: April 21, 2014 at 4:25:14 PM PDT
> To: Hakisho Nukama <nukama@gmail.com>
>
> Hakisho,
>
> I did try it.
>
> # zpool import -F -o readonly=on storage
> cannot import 'storage': one or more devices is currently unavailable
>
> # gpart list
> Geom name: ada0
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 1953525134
> first: 34
> entries: 128
> scheme: GPT
> Providers:
> 1. Name: ada0p1
>    Mediasize: 524288 (512K)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r0w0e0
>    rawuuid: e621bb07-a4a4-11e3-98fc-001d7d090860
>    rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
>    label: gptboot0
>    length: 524288
>    offset: 20480
>    type: freebsd-boot
>    index: 1
>    end: 1063
>    start: 40
> 2. Name: ada0p2
>    Mediasize: 4294967296 (4.0G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e1
>    rawuuid: e6633c97-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
>    label: swap0
>    length: 4294967296
>    offset: 544768
>    type: freebsd-swap
>    index: 2
>    end: 8389671
>    start: 1064
> 3. Name: ada0p3
>    Mediasize: 995909353472 (928G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e2
>    rawuuid: e6953f31-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
>    label: zfs0
>    length: 995909353472
>    offset: 4295512064
>    type: freebsd-zfs
>    index: 3
>    end: 1953525127
>    start: 8389672
> Consumers:
> 1. Name: ada0
>    Mediasize: 1000204886016 (932G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r2w2e5
>
> Geom name: ada1
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 62499999
> first: 63
> entries: 4
> scheme: MBR
> Providers:
> 1. Name: ada1s1
>    Mediasize: 16105775616 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 32256
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 16105775616
>    offset: 32256
>    type: freebsd
>    index: 1
>    end: 31456655
>    start: 63
> 2. Name: ada1s2
>    Mediasize: 15893692416 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 3220905984
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 15893692416
>    offset: 16105807872
>    type: freebsd
>    index: 2
>    end: 62499023
>    start: 31456656
> Consumers:
> 1. Name: ada1
>    Mediasize: 32000000000 (30G)
>    Sectorsize: 512
>    Mode: r0w0e0
>
> Geom name: diskid/DISK-CVEM852600N5032HGN
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 62499999
> first: 63
> entries: 4
> scheme: MBR
> Providers:
> 1. Name: diskid/DISK-CVEM852600N5032HGNs1
>    Mediasize: 16105775616 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 32256
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 16105775616
>    offset: 32256
>    type: freebsd
>    index: 1
>    end: 31456655
>    start: 63
> 2. Name: diskid/DISK-CVEM852600N5032HGNs2
>    Mediasize: 15893692416 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 3220905984
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 15893692416
>    offset: 16105807872
>    type: freebsd
>    index: 2
>    end: 62499023
>    start: 31456656
> Consumers:
> 1. Name: diskid/DISK-CVEM852600N5032HGN
>    Mediasize: 32000000000 (30G)
>    Sectorsize: 512
>    Mode: r0w0e0
>
> Geom name: ada2
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 1953525134
> first: 34
> entries: 128
> scheme: GPT
> Providers:
> 1. Name: ada2p1
>    Mediasize: 524288 (512K)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r0w0e0
>    rawuuid: e73e1154-a4a4-11e3-98fc-001d7d090860
>    rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
>    label: gptboot1
>    length: 524288
>    offset: 20480
>    type: freebsd-boot
>    index: 1
>    end: 1063
>    start: 40
> 2. Name: ada2p2
>    Mediasize: 4294967296 (4.0G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e1
>    rawuuid: e77bd5dd-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
>    label: swap1
>    length: 4294967296
>    offset: 544768
>    type: freebsd-swap
>    index: 2
>    end: 8389671
>    start: 1064
> 3. Name: ada2p3
>    Mediasize: 995909353472 (928G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e2
>    rawuuid: e7ad15ae-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
>    label: zfs1
>    length: 995909353472
>    offset: 4295512064
>    type: freebsd-zfs
>    index: 3
>    end: 1953525127
>    start: 8389672
> Consumers:
> 1. Name: ada2
>    Mediasize: 1000204886016 (932G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r2w2e5
>
> thanks for your help!
>
> On Apr 21, 2014, at 4:17 PM, Hakisho Nukama <nukama@gmail.com> wrote:
>
>> Hi Gena,
>>
>> a missing cache device shouldn't be a problem.
>> There was a problem some years ago, where a pool was lost
>> with a missing cache device, but that seems to be ancient
>> history (pool version 19 or thereabouts changed that).
>> Otherwise I wouldn't use a cache device myself.
>>
>> You may try other ZFS implementations to import your pool:
>> ZFSonLinux, Illumos.
>> https://github.com/zfsonlinux/zfs/issues/1863
>> https://groups.google.com/forum/#!topic/zfs-fuse/TaOCLPQ8mp0
>> https://forums.freebsd.org/viewtopic.php?&t=18221
>>
>> Have you tried the -o readonly=on option for zpool import?
>> Can you show your gpart list output?
>>
>> Best Regards,
>> Nukama
>>
>> On Mon, Apr 21, 2014 at 10:18 PM, Gena Guchin <ggulchin@icloud.com> wrote:
>>> Hakisho,
>>>
>>> this is weird: while I do not see ONLINE next to the cache device ada1s2,
>>> it is the same device as the log device ada1s1, just a different slice.
>>> I tried to see the difference between the ZFS labels on that device.
>>>
>>> [gena@ggulchin]-pts/0:57# zdb -l /dev/ada1s2
>>> --------------------------------------------
>>> LABEL 0
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 1
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 2
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 3
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> [gena@ggulchin]-pts/0:58# zdb -l /dev/ada1s1
>>> --------------------------------------------
>>> LABEL 0
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>> --------------------------------------------
>>> LABEL 1
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>> --------------------------------------------
>>> LABEL 2
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>> --------------------------------------------
>>> LABEL 3
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>>
>>> does this mean the SSD drive is corrupted?
>>> is my pool lost forever?
>>>
>>> thanks!
>>>
>>> On Apr 21, 2014, at 2:24 PM, Hakisho Nukama <nukama@gmail.com> wrote:
>>>
>>>> Hi Gena,
>>>>
>>>> there are several options to import a pool, which might work.
>>>> It looks like only one device is missing in raidz1, so the pool
>>>> could be importable if the cache device is also available.
>>>> Try to connect it back; a missing device can cause a non-importable pool.
>>>>
>>>> Try reading the zpool man page and investigate the following flags:
>>>> zpool import -F -o readonly=on
>>>>
>>>> Best Regards,
>>>> Nukama
>>>>
>>>> On Mon, Apr 21, 2014 at 7:29 PM, Gena Guchin <ggulchin@icloud.com> wrote:
>>>>> Hello FreeBSD users,
>>>>>
>>>>> my apologies for reposting, but I really need your help!
>>>>>
>>>>> I have a huge problem with my ZFS server. I have accidentally
>>>>> formatted one of the drives in an exported ZFS pool, and now I
>>>>> can't import the pool back. This is an extremely important pool
>>>>> for me. The device that is missing is still attached to the
>>>>> system. Any help would be greatly appreciated.
>>>>>
>>>>> # uname -a
>>>>> FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
>>>>>
>>>>> # zpool import
>>>>>    pool: storage
>>>>>      id: 11699153865862401654
>>>>>   state: UNAVAIL
>>>>>  status: One or more devices are missing from the system.
>>>>>  action: The pool cannot be imported. Attach the missing
>>>>>          devices and try again.
>>>>>     see: http://illumos.org/msg/ZFS-8000-6X
>>>>>  config:
>>>>>
>>>>>         storage                 UNAVAIL  missing device
>>>>>           raidz1-0              DEGRADED
>>>>>             ada3                ONLINE
>>>>>             ada4                ONLINE
>>>>>             ada5                ONLINE
>>>>>             ada6                ONLINE
>>>>>             248348789931078390  UNAVAIL  cannot open
>>>>>         cache
>>>>>           ada1s2
>>>>>         logs
>>>>>           ada1s1                ONLINE
>>>>>
>>>>>         Additional devices are known to be part of this pool, though their
>>>>>         exact configuration cannot be determined.
>>>>>
>>>>> # zpool list
>>>>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>>>>> zroot   920G  17.9G   902G     1%  1.00x  ONLINE  -
>>>>>
>>>>> # zpool upgrade
>>>>> This system supports ZFS pool feature flags.
>>>>>
>>>>> All pools are formatted using feature flags.
>>>>>
>>>>> Every feature flags pool has all supported features enabled.
>>>>>
>>>>> # zfs upgrade
>>>>> This system is currently running ZFS filesystem version 5.
>>>>>
>>>>> All filesystems are formatted with the current version.
>>>>>
>>>>> Thanks a lot!
>>>>>
>>>>> -- Gena
>>>>> _______________________________________________
>>>>> freebsd-fs@freebsd.org mailing list
>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
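[Editor's note: the recovery steps discussed across this thread can be condensed into the following sketch. It is not from the thread itself; the pool name "storage" and the devices ada1s1/ada1s2 are taken from the posts above, while the /mnt altroot is an assumption. These commands must be run as root on the affected FreeBSD machine.]

```shell
#!/bin/sh
# Hedged sketch of the salvage sequence suggested in this thread.

# 1. Compare the ZFS labels on the log and cache slices. An active vdev
#    label carries name/txg/guid fields (as seen on ada1s1); a label with
#    only version/state/guid and state 4 is typically what an L2ARC cache
#    device looks like, not necessarily corruption.
zdb -l /dev/ada1s1
zdb -l /dev/ada1s2

# 2. Attempt a forced (-f), rewinding (-F), read-only import under an
#    alternate root so nothing further is written to the pool.
zpool import -f -F -o readonly=on -R /mnt storage

# 3. If the import succeeds, copy the data off before any repair attempt.
zfs list -r storage
```

Step 2 is the key design point: readonly=on prevents ZFS from writing new transaction groups during recovery, so a failed salvage attempt cannot make the pool's state worse.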