Date:      Thu, 05 Nov 2020 22:51:31 +0000
From:      bugzilla-noreply@freebsd.org
To:        bugs@FreeBSD.org
Subject:   [Bug 250816] ZFS cannot import its own export on AWS EC2 12.1 & 12.2-RELEASE
Message-ID:  <bug-250816-227-R3LrugBUjX@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-250816-227@https.bugs.freebsd.org/bugzilla/>
References:  <bug-250816-227@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=250816

--- Comment #14 from Gunther Schadow <raj@gusw.net> ---
Here is the zdb output, running the testcase from the start:

------------------------------------------------------------
# mkdir zfstc
# truncate -s 100M zfstc/0
# truncate -s 100M zfstc/1
# mkdir zfstd
# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
#
# zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled -O dedup=on -O compression=lz4 testpool raidz zfstd/*
# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   180K   176M        -         -     1%     0%  1.00x  ONLINE  -
------------------------------------------------------------
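Side note, for anyone replaying this: after that loop, zfstd should contain the symlinks 0 -> /dev/md0 and 1 -> /dev/md1, and the md devices are vnode-backed by the files in zfstc. A quick sanity check (not part of the captured output above) would be:

# ls -l zfstd        # expect: 0 -> /dev/md0 and 1 -> /dev/md1
# mdconfig -lv       # expect: md0 and md1 as vnode devices backed by zfstc/0 and zfstc/1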

Now I ran zdb on the pool just created, before exporting:

------------------------------------------------------------
# zdb -e -p zfstd -CC testpool

Configuration for import:
        vdev_children: 1
        version: 5000
        pool_guid: 1836577300510068694
        name: 'testpool'
        state: 0
        hostid: 2817290760
        hostname: 'geli'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1836577300510068694
            children[0]:
                type: 'raidz'
                id: 0
                guid: 13558473444627327763
                nparity: 1
                metaslab_array: 68
                metaslab_shift: 24
                ashift: 9
                asize: 200278016
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 10226269325811407084
                    whole_disk: 1
                    create_txg: 4
                    path: '/usr/home/schadow/zfstd/0'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 4864983578256370556
                    whole_disk: 1
                    create_txg: 4
                    path: '/usr/home/schadow/zfstd/1'
        load-policy:
            load-request-txg: 18446744073709551615
            load-rewind-policy: 2
zdb: can't open 'testpool': File exists
------------------------------------------------------------------

Now to export and try again:

------------------------------------------------------------------
# zpool export testpool
# zdb -e -p zfstd -CC testpool

Configuration for import:
        vdev_children: 1
        version: 5000
        pool_guid: 1836577300510068694
        name: 'testpool'
        state: 1
        hostid: 2817290760
        hostname: 'geli'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1836577300510068694
            children[0]:
                type: 'raidz'
                id: 0
                guid: 13558473444627327763
                nparity: 1
                metaslab_array: 68
                metaslab_shift: 24
                ashift: 9
                asize: 200278016
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 10226269325811407084
                    whole_disk: 1
                    create_txg: 4
                    path: '/usr/home/schadow/zfstd/0'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 4864983578256370556
                    whole_disk: 1
                    create_txg: 4
                    path: '/usr/home/schadow/zfstd/1'
        load-policy:
            load-request-txg: 18446744073709551615
            load-rewind-policy: 2

MOS Configuration:
        version: 5000
        name: 'testpool'
        state: 1
        txg: 44
        pool_guid: 1836577300510068694
        hostid: 2817290760
        hostname: 'geli'
        com.delphix:has_per_vdev_zaps
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1836577300510068694
            create_txg: 4
            children[0]:
                type: 'raidz'
                id: 0
                guid: 13558473444627327763
                nparity: 1
                metaslab_array: 68
                metaslab_shift: 24
                ashift: 9
                asize: 200278016
                is_log: 0
                create_txg: 4
                com.delphix:vdev_zap_top: 65
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 10226269325811407084
                    path: '/dev/md0'
                    whole_disk: 1
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 66
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 4864983578256370556
                    path: '/dev/md1'
                    whole_disk: 1
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 67
        features_for_read:
            com.delphix:embedded_data
            com.delphix:hole_birth
------------------------------------------------------------------------

Finally, the test again, showing that the import problem indeed still exists:

------------------------------------------------------------------------
# zpool import -d zfstd
   pool: testpool
     id: 1836577300510068694
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-5E
 config:

        testpool                  UNAVAIL  insufficient replicas
          raidz1-0                UNAVAIL  insufficient replicas
            10226269325811407084  UNAVAIL  corrupted data
            4864983578256370556   UNAVAIL  corrupted data
# zpool import -d zfstd testpool
cannot import 'testpool': invalid vdev configuration
------------------------------------------------------------------------

And now to test your hypothesis that we have to have /dev/md* nodes, not symlinks.

But I cannot even find an import option that would let me name individual vnode devices; it looks like the -d dir option is all we have:

     zpool import [-d dir | -c cachefile] [-D]
     zpool import [-o mntopts] [-o property=value] ...
           [--rewind-to-checkpoint] [-d dir | -c cachefile] [-D] [-f] [-m]
           [-N] [-R root] [-F [-n]] -a
     zpool import [-o mntopts] [-o property=value] ...
           [--rewind-to-checkpoint] [-d dir | -c cachefile] [-D] [-f] [-m]
           [-N] [-R root] [-t] [-F [-n]] pool | id [newpool]

But OK, I get it now, the -d option is to point to an alternative /dev/
directory, and it is not required:

------------------------------------------------------------------------
# zpool import
   pool: testpool
     id: 1836577300510068694
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        testpool    ONLINE
          raidz1-0  ONLINE
            md0     ONLINE
            md1     ONLINE
# zpool import testpool
# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   231K   176M        -         -     2%     0%  1.00x  ONLINE  -
-------------------------------------------------------------------------

It actually worked!
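
If I read zpool(8) right, the default device search path for import is /dev, so the plain "zpool import" above should be equivalent to pointing -d there explicitly (a sketch, not captured output):

# zpool import -d /dev testpool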

And now, instead of symlinks, let me build a zfstd2 directory with real device nodes to test:

-------------------------------------------------------------------------
# zpool export testpool
# mkdir zfstd2
# ls -l zfstd
total 0
lrwxr-xr-x  1 root  schadow  8 Nov  5 22:02 0 -> /dev/md0
lrwxr-xr-x  1 root  schadow  8 Nov  5 22:02 1 -> /dev/md1
# (cd /dev ; tar cf - md[01]) | (cd zfstd2 ; tar xvf -)
x md0
x md1
# ls -l zfstd2
total 0
crw-r-----  1 root  operator  0x6b Nov  5 22:02 md0
crw-r-----  1 root  operator  0x6c Nov  5 22:02 md1
# ls -l /dev/md[01]
crw-r-----  1 root  operator  0x6b Nov  5 22:02 /dev/md0
crw-r-----  1 root  operator  0x6c Nov  5 22:02 /dev/md1
# zpool import -d zfstd2
# zpool list
no pools available
# md5 zfstd*/*
MD5 (zfstd/0) = 0d48de20f5717fe54be0bdef93eb8358
MD5 (zfstd/1) = 2c4e7de0b3359bd75f17b49d3dcab394
md5: zfstd2/md0: Operation not supported
md5: zfstd2/md1: Operation not supported
----------------------------------------------------------------------------

So I don't know what the purpose of the -d option is if the symlinks don't work: with the new devfs way, device nodes can no longer be created with mknod or copied with tar, so I cannot confine them to a directory.
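
As an aside, and only a sketch I have not tried here: the devfs(8) way to confine device nodes to a directory would apparently be to mount a separate devfs instance there and hide everything except the md devices, instead of copying nodes with tar:

# mount -t devfs devfs /home/schadow/zfstd2
# devfs -m /home/schadow/zfstd2 rule apply hide
# devfs -m /home/schadow/zfstd2 rule apply path 'md[01]' unhide

Whether zpool import -d would then accept that directory is a separate question.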

Are you telling me I don't even need to make them vnode devices? That I could just use files?

----------------------------------------------------------------------------
# zpool list
no pools available
# mdconfig -d -u md0
# mdconfig -d -u md1
# mdconfig -l
# zpool import -d zfstc
   pool: testpool
     id: 1836577300510068694
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-5E
 config:

        testpool                  UNAVAIL  insufficient replicas
          raidz1-0                UNAVAIL  insufficient replicas
            10226269325811407084  UNAVAIL  corrupted data
            4864983578256370556   UNAVAIL  corrupted data
# ls -l zfstc
total 204864
-rw-r--r--  1 root  schadow  104857600 Nov  5 22:15 0
-rw-r--r--  1 root  schadow  104857600 Nov  5 22:15 1
----------------------------------------------------------------------------

So you are telling me it can import directly from files, but that doesn't work.
OK, OK, I get it now, you want me to also create the pool without these md vnodes ...

----------------------------------------------------------------------------
# rm -rf zfst*
# mkdir zfstc
# truncate -s 100M zfstc/0
# truncate -s 100M zfstc/1
# zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled -O dedup=on -O compression=lz4 testpool raidz zfstc/*
cannot open 'zfstc/0': no such GEOM provider
must be a full path or shorthand device name
----------------------------------------------------------

See, that's what I thought: I had to use these md vnode devices because zpool create does not operate on files directly.
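
So, for the record, the only create variant that works for me is the md-backed one from the top of the testcase, i.e. roughly (same idiom as above, just written out without the loop):

# md0=$(mdconfig -a -t vnode -f zfstc/0)
# md1=$(mdconfig -a -t vnode -f zfstc/1)
# zpool create -O dedup=on -O compression=lz4 testpool raidz /dev/$md0 /dev/$md1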

-- 
You are receiving this mail because:
You are the assignee for the bug.


