Date:      Wed, 29 May 2019 17:56:37 +0300
From:      Александр Поволоцкий <tarkhil@webmail.sub.ru>
To:        Mike Tancsa <mike@sentex.net>, fs@freebsd.org
Subject:   Re: Crashed ZFS
Message-ID:  <dbf7953f-1fdc-d7b1-ea50-a070ac04b785@webmail.sub.ru>
In-Reply-To: <460609b1-35a6-6bfc-318c-bd796a3e3239@sentex.net>
References:  <dc2ce6ea-dcf7-5fb2-fadf-b97f27c9bd25@webmail.sub.ru> <460609b1-35a6-6bfc-318c-bd796a3e3239@sentex.net>

It worked!
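For the record, the sequence Mike suggested, as a sketch (the function wrapper and its name are mine; as noted below, the clear step may still print an error before the export/import cycle succeeds):

```shell
#!/bin/sh
# Sketch of the recovery sequence from this thread (pool name from the
# status output below). 'zpool clear -F' asks ZFS to rewind the pool to
# its last consistent txg, discarding the few seconds of writes the
# status message warned about; the export/import cycle then reopens the
# pool from the recovered labels.
recover_pool() {
    pool=$1
    zpool clear -F "$pool"    # may still report an error; continue anyway
    zpool export "$pool"
    zpool import "$pool"
    zpool scrub "$pool"       # strongly recommended after a rewind
    zpool status -v "$pool"
}

# recover_pool big_fast_one  # run as root on the affected host
```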

On 29.05.2019 16:22, Mike Tancsa wrote:
> I would wait for a few more people to chime with what to do, but I had a
> similar issue (same error IIRC) last week after physically moving the
> disks to a new controller.  I did
> zpool clear -F <pool name>
> zpool export <pool name>
> zpool import <pool name>
>
> The clear gave an error but after the export / import, it came back
> online.  A scrub was done, but showed no errors. Good luck!
>
>      ---Mike
>
>
>
> On 5/29/2019 7:28 AM, Александр Поволоцкий wrote:
>> Hello
>>
>> After power surge, one of my zpools yields errors
>>
>> root@stor:/home/tarkhil # zpool status -v big_fast_one
>>   pool: big_fast_one
>>  state: FAULTED
>> status: The pool metadata is corrupted and the pool cannot be opened.
>> action: Recovery is possible, but will result in some data loss.
>>         Returning the pool to its state as of Tue May 28 02:00:35 2019
>>         should correct the problem.  Approximately 5 seconds of data
>>         must be discarded, irreversibly.  Recovery can be attempted
>>         by executing 'zpool clear -F big_fast_one'. A scrub of the pool
>>         is strongly recommended after recovery.
>>    see: http://illumos.org/msg/ZFS-8000-72
>>   scan: none requested
>> config:
>>
>>         NAME              STATE     READ WRITE CKSUM
>>         big_fast_one      FAULTED      0     0     1
>>           raidz1-0        ONLINE       0     0     7
>>             gpt/ZA21TJA7  ONLINE       0     0     0
>>             gpt/ZA21P6JQ  ONLINE       0     0     0
>>             gpt/ZA21PJZY  ONLINE       0     0     0
>>             gpt/ZA21T6L6  ONLINE       0     0     0
>>             gpt/ZA21TN3R  ONLINE       0     0     0
>>
>> root@stor:/home/tarkhil # zpool clear -Fn big_fast_one
>> internal error: out of memory
>>
>> while there is plenty of RAM (96 GB).
>>
>> gpart shows everything OK.
>>
>> root@stor:/home/tarkhil # zdb -AAA -L -e big_fast_one
>>
>> Configuration for import:
>>          vdev_children: 1
>>          version: 5000
>>          pool_guid: 4972776226197917949
>>          name: 'big_fast_one'
>>          state: 0
>>          hostid: 773241384
>>          hostname: 'stor.inf.sudo.su'
>>          vdev_tree:
>>              type: 'root'
>>              id: 0
>>              guid: 4972776226197917949
>>              children[0]:
>>                  type: 'raidz'
>>                  id: 0
>>                  guid: 58821498572043303
>>                  nparity: 1
>>                  metaslab_array: 41
>>                  metaslab_shift: 38
>>                  ashift: 12
>>                  asize: 50004131840000
>>                  is_log: 0
>>                  create_txg: 4
>>                  children[0]:
>>                      type: 'disk'
>>                      id: 0
>>                      guid: 13318923208485210326
>>                      phys_path:
>> 'id1,enc@n50030480005d387f/type@0/slot@e/elmdesc@013/p1'
>>                      whole_disk: 1
>>                      DTL: 57
>>                      create_txg: 4
>>                      path: '/dev/gpt/ZA21TJA7'
>>                  children[1]:
>>                      type: 'disk'
>>                      id: 1
>>                      guid: 5421240647062683539
>>                      phys_path:
>> 'id1,enc@n50030480005d387f/type@0/slot@1/elmdesc@000/p1'
>>                      whole_disk: 1
>>                      DTL: 56
>>                      create_txg: 4
>>                      path: '/dev/gpt/ZA21P6JQ'
>>                  children[2]:
>>                      type: 'disk'
>>                      id: 2
>>                      guid: 17788210514601115893
>>                      phys_path:
>> 'id1,enc@n50030480005d387f/type@0/slot@5/elmdesc@004/p1'
>>                      whole_disk: 1
>>                      DTL: 55
>>                      create_txg: 4
>>                      path: '/dev/gpt/ZA21PJZY'
>>                  children[3]:
>>                      type: 'disk'
>>                      id: 3
>>                      guid: 11411950711187621765
>>                      phys_path:
>> 'id1,enc@n50030480005d387f/type@0/slot@9/elmdesc@008/p1'
>>                      whole_disk: 1
>>                      DTL: 54
>>                      create_txg: 4
>>                      path: '/dev/gpt/ZA21T6L6'
>>                  children[4]:
>>                      type: 'disk'
>>                      id: 4
>>                      guid: 6486033012937503138
>>                      phys_path:
>> 'id1,enc@n50030480005d387f/type@0/slot@d/elmdesc@012/p1'
>>                      whole_disk: 1
>>                      DTL: 52
>>                      create_txg: 4
>>                      path: '/dev/gpt/ZA21TN3R'
>> zdb: can't open 'big_fast_one': File exists
>>
>> ZFS_DBGMSG(zdb):
>>
>>
>> root@stor:/home/tarkhil # zdb -AAA -L -u -e big_fast_one
>> zdb: can't open 'big_fast_one': File exists
>> root@stor:/home/tarkhil # zdb -AAA -L -d -e big_fast_one
>> zdb: can't open 'big_fast_one': File exists
>> root@stor:/home/tarkhil # zdb -AAA -L -h -e big_fast_one
>> zdb: can't open 'big_fast_one': File exists
>>
>> What should I do? Export and import? Rename zpool.cache and import
>> (it's a remote box, I cannot afford another 3 hours to and from it)?
>> Something else?
>>
>> --
>>
>> Alex
>>
>>
>>
>>
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>>
>>


