Date:      Sat, 21 Nov 2020 12:47:30 -0500
From:      mike tancsa <mike@sentex.net>
To:        Mateusz Guzik <mjguzik@gmail.com>, Allan Jude <allanjude@freebsd.org>
Cc:        Philip Paeps <philip@freebsd.org>, "Bjoern A. Zeeb" <bz@freebsd.org>, netperf-admin@freebsd.org, netperf-users@freebsd.org
Subject:   Re: zoo reboot Friday Nov 20 14:00 UTC
Message-ID:  <949305ed-c248-1ee1-2c53-552f2c732dbc@sentex.net>
In-Reply-To: <adc30bdf-e485-964a-1c1b-0f2fe3ede704@sentex.net>
References:  <1f8e49ff-e3da-8d24-57f1-11f17389aa84@sentex.net> <CAGudoHH=H4Xok5HG3Hbw7S=6ggdsi+N4zHirW50cmLGsLnhd4g@mail.gmail.com> <270b65c0-8085-fe2f-cf4f-7a2e4c17a2e8@sentex.net> <CAGudoHFLy2dxBMGd2AJZ6q6zBsU+n8uLXLSiFZ1QGi_qibySVg@mail.gmail.com> <a716e874-d736-d8d5-9c45-c481f6b3dee7@sentex.net> <CAGudoHELFz7KyzQmRN8pCbgLQXPgCdHyDAQ4pzFLF+YswcP87A@mail.gmail.com> <163d1815-fc4a-7987-30c5-0a21e8383c93@sentex.net> <CAGudoHF3c1e2DFSAtyjMpcrbfzmMV5x6kOA_5BT5jyoDyKEHsA@mail.gmail.com> <a1ef98c6-e734-1760-f0cb-a8d31c6acc18@sentex.net> <CAGudoHE+xjHdBQAD3cAL84=k-kHDsZNECBGNNOn2LsStL5A7Dg@mail.gmail.com> <f9a074b9-17d3-dcfd-5559-a00e1ac75c07@sentex.net> <c01e037b-bb3a-72de-56dc-335097bb7159@freebsd.org> <CAGudoHF=oqqwt_S07PqYBC71HFR4dW5_bEJ=Lt=JWUvEg5-Jxw@mail.gmail.com> <5a46fa23-b09f-86c2-0cef-a9fbb248f2ec@freebsd.org> <CAGudoHH=LTOEaARFKvkvJ2C4ntk1WbzFTjNhSZ+1O=Q2m2kP9Q@mail.gmail.com> <adc30bdf-e485-964a-1c1b-0f2fe3ede704@sentex.net>

OK, the new zoo is booting off a pair of 500G SSDs we donated.  I am
restoring to the raidz array, tank:

 pigz -d -c zroot-.0.gz | zfs recv -vF tank/old
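
(For context: assuming the old backup script does what the file name
suggests, that level-0 stream would have been produced on the old box
with something roughly like the following; the snapshot name here is
only a placeholder, not what the script actually uses:

 zfs snapshot zroot@level0
 zfs send zroot@level0 | pigz > zroot-.0.gz

i.e. a full send of a single snapshot, compressed with pigz.)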

mdtancsa@zoo2:~ % zpool status
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada1p1  ONLINE       0     0     0
            ada3p1  ONLINE       0     0     0
            ada4p1  ONLINE       0     0     0
            ada5p1  ONLINE       0     0     0
            ada6p1  ONLINE       0     0     0

errors: No known data errors

  pool: zooroot
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zooroot     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada7p4  ONLINE       0     0     0
            ada8p4  ONLINE       0     0     0

errors: No known data errors

It seems to be working so far.

root@zoo2:/home/mdtancsa # zfs list -t snapshot
NAME                USED  AVAIL     REFER  MOUNTPOINT
tank/old@HOURLY30     0B      -      141K  -
tank/old@HOURLY40     0B      -      141K  -
tank/old@HOURLY50     0B      -      141K  -
tank/old@HOURLY00     0B      -      141K  -
tank/old@HOURLY10     0B      -      141K  -
tank/old@HOURLY20     0B      -      141K  -
tank/old@prev-1       0B      -      141K  -
tank/old@1            0B      -      141K  -
tank/old@2            0B      -      141K  -
tank/old@3            0B      -      141K  -
tank/old@4            0B      -      141K  -
tank/old@5            0B      -      141K  -
tank/old@6            0B      -      141K  -
tank/old@0            0B      -      141K  -
root@zoo2:/home/mdtancsa #

I imagine it will take a while.
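
To keep an eye on it in the meantime, a couple of stock commands
should do (nothing specific to this setup):

 zpool iostat -v tank 10     # per-vdev write bandwidth every 10s
 zfs list tank/old           # USED grows as the stream lands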

After the "level 0" is done,

pigz -d -c zroot-.1.gz | zfs recv -v tank/old
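
If the script really does mirror dump levels, that level-1 file would
have been made with an incremental send from the level-0 snapshot to
a newer one, something like this (snapshot names are placeholders):

 zfs send -i zroot@level0 zroot@level1 | pigz > zroot-.1.gz

so the incrementals have to be received in order, each on top of the
snapshot it was generated from, and tank/old can't be modified in
between.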

Unfortunately, I set up these backup scripts many years ago, before I
had a good sense of zfs, and saw it all through the lens of
dump/restore :(  It was one of those "I should get around to fixing
the backups soon" things :(
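
The fix, once the dust settles, would be plain snapshot replication
with no intermediate dump files, roughly along these lines (the host
and snapshot names below are made up):

 zfs snapshot -r zroot@backup-today
 zfs send -R -i zroot@backup-yesterday zroot@backup-today | \
     ssh backuphost zfs recv -duv tank/zoo-backup

with -R carrying the child datasets and their properties along.  But
that is a project for after the rebuild.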

    ---Mike

On 11/21/2020 11:18 AM, mike tancsa wrote:
> Just going to reinstall now. I will boot from 2 new SSDs and then use 4
> 4TB drives in RAIDZ
>
> On 11/21/2020 12:47 AM, Mateusz Guzik wrote:
>> root@zoo2:/home/mjg #  zdb -l /dev/gptid/db15e826-1a9c-11eb-8d25-0cc47a1f2fa0
>> ------------------------------------
>> LABEL 0
>> ------------------------------------
>>     version: 5000
>>     name: 'zroot'
>>     state: 0
>>     txg: 40630433
>>     pool_guid: 11911329414887727775
>>     errata: 0
>>     hostid: 3594518197
>>     hostname: 'zoo2.sentex.ca'
>>     top_guid: 7321270789669113643
>>     guid: 9170931574354766059
>>     vdev_children: 4
>>     vdev_tree:
>>         type: 'mirror'
>>         id: 3
>>         guid: 7321270789669113643
>>         metaslab_array: 26179
>>         metaslab_shift: 32
>>         ashift: 9
>>         asize: 482373533696
>>         is_log: 0
>>         create_txg: 40274122
>>         children[0]:
>>             type: 'disk'
>>             id: 0
>>             guid: 9170931574354766059
>>             path: '/dev/gptid/db15e826-1a9c-11eb-8d25-0cc47a1f2fa0'
>>             whole_disk: 1
>>             create_txg: 40274122
>>         children[1]:
>>             type: 'disk'
>>             id: 1
>>             guid: 4871900363652985181
>>             path: '/dev/mfid1p2'
>>             whole_disk: 1
>>             create_txg: 40274122
>>     features_for_read:
>>         com.delphix:hole_birth
>>         com.delphix:embedded_data
>>     labels = 0 1 2 3
>>
>>
>> On 11/21/20, Allan Jude <allanjude@freebsd.org> wrote:
>>> On 2020-11-20 21:56, Mateusz Guzik wrote:
>>>> root@zoo2:/home/mjg # zpool import
>>>>    pool: zroot
>>>>      id: 11911329414887727775
>>>>   state: FAULTED
>>>> status: The pool metadata is corrupted.
>>>>  action: The pool cannot be imported due to damaged devices or data.
>>>> 	The pool may be active on another system, but can be imported using
>>>> 	the '-f' flag.
>>>>    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
>>>>  config:
>>>>
>>>> 	zroot                                           FAULTED  corrupted data
>>>> 	  mirror-0                                      DEGRADED
>>>> 	    replacing-0                                 DEGRADED
>>>> 	      1517819109053923011                       OFFLINE
>>>> 	      ada0p3                                    ONLINE
>>>> 	    ada1                                        ONLINE
>>>> 	  mirror-1                                      ONLINE
>>>> 	    ada3p3                                      ONLINE
>>>> 	    ada4p3                                      ONLINE
>>>> 	  mirror-2                                      ONLINE
>>>> 	    ada5p3                                      ONLINE
>>>> 	    ada6p3                                      ONLINE
>>>> 	  mirror-3                                      ONLINE
>>>> 	    gptid/db15e826-1a9c-11eb-8d25-0cc47a1f2fa0  ONLINE
>>>> 	    gptid/d98a2545-1a9c-11eb-8d25-0cc47a1f2fa0  ONLINE
>>>>
>>>>
>>>> On 11/21/20, Allan Jude <allanjude@freebsd.org> wrote:
>>>>> On 2020-11-20 18:05, mike tancsa wrote:
>>>>>> OK. Although it looks like I will have to pull it in from backups now :(
>>>>>>
>>>>>>
>>>>>> root@zoo2:/home/mdtancsa # zpool import -f -R /mnt zroot
>>>>>> cannot import 'zroot': I/O error
>>>>>>         Destroy and re-create the pool from
>>>>>>         a backup source.
>>>>>> root@zoo2:/home/mdtancsa #
>>>>>>
>>>>>> all the disks are there :(  Not sure why it's not importing?
>>>>>>
>>>>> Can you get the output of just:
>>>>>
>>>>> zpool import
>>>>>
>>>>> To try to see what the issue might be
>>>>>
>>>>> --
>>>>> Allan Jude
>>>>>
>>> The special vdev appears to be being seen as just a plain mirror
>>> vdev; that is odd.
>>>
>>> zdb -l /dev/gptid/db15e826-1a9c-11eb-8d25-0cc47a1f2fa0
>>>
>>>
>>> --
>>> Allan Jude
>>>
>


