Date: Mon, 02 Nov 2020 19:01:02 +0000
From: bugzilla-noreply@freebsd.org
To: bugs@FreeBSD.org
Subject: [Bug 250816] AWS EC2 ZFS cannot import its own export!
Message-ID: <bug-250816-227@https.bugs.freebsd.org/bugzilla/>
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=250816

            Bug ID: 250816
           Summary: AWS EC2 ZFS cannot import its own export!
           Product: Base System
           Version: 12.2-RELEASE
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: kern
          Assignee: bugs@FreeBSD.org
          Reporter: raj@gusw.net

This is a fresh deployment of the most recent official FreeBSD-12.2 EC2 AMI
on Amazon, with no complicated configuration. Only one line was added to
rc.conf:

    zfs_enable="YES"

without which zfs wouldn't even work. The summary overview is this:

1. zpool create ... works and creates the pool, shown with zpool list
2. zpool export ... completes without error
3. zpool import ... says that one or more devices are corrupt

Here is a (ba)sh script; you can just run it yourself:

<script>
mkdir zfstc
truncate -s 100M zfstc/0
truncate -s 100M zfstc/1
mkdir zfstd
for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done

zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled -O dedup=on -O compression=lz4 testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
zpool list
zpool export testpool
zpool import -d zfstd
for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
rm zfstc/*

truncate -s 100M zfstc/0
truncate -s 100M zfstc/1
for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
zpool create testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
zpool list
zpool export testpool
zpool import -d zfstd
for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
rm zfstc/*

truncate -s 100M zfstc/0
truncate -s 100M zfstc/1
for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
zpool create testpool mirror $(for i in zfstd/* ; do readlink $i ; done)
zpool list
zpool export testpool
zpool import -d zfstd
for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
rm -r zfstc zfstd
</script>

You see in it repeated
attempts, changing the pool options and the ZFS vdev type, none of which
makes any difference.

Here is the log on another system where it all worked:

<log>
# mkdir zfstc
# truncate -s 100M zfstc/0
# truncate -s 100M zfstc/1
# mkdir zfstd
# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
#
# zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled -O dedup=on -O compression=lz4 testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   186K   176M        -         -     1%     0%  1.00x  ONLINE  -
# zpool export testpool
# zpool import -d zfstd
   pool: testpool
     id: 14400958070908437474
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        testpool    ONLINE
          raidz1-0  ONLINE
            md10    ONLINE
            md11    ONLINE
#
# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
# rm zfstc/*
# truncate -s 100M zfstc/0
# truncate -s 100M zfstc/1
# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
#
# zpool create testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   156K   176M        -         -     1%     0%  1.00x  ONLINE  -
# zpool export testpool
# zpool import -d zfstd
   pool: testpool
     id: 7399105644867648490
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        testpool    ONLINE
          raidz1-0  ONLINE
            md10    ONLINE
            md11    ONLINE
#
# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
# rm zfstc/*
# truncate -s 100M zfstc/0
# truncate -s 100M zfstc/1
# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
#
# zpool create testpool mirror $(for i in zfstd/* ; do readlink $i ; done)
# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool    80M  67.5K  79.9M        -         -     1%     0%  1.00x  ONLINE  -
# zpool export testpool
# zpool import -d zfstd
   pool: testpool
     id: 18245765184438368558
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        testpool    ONLINE
          mirror-0  ONLINE
            md10    ONLINE
            md11    ONLINE
#
# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
# rm -r zfstc zfstd
</log>

Now here is the log on the new system, where it fails:

<log>
[root@geli ~]# mkdir zfstc
[root@geli ~]# truncate -s 100M zfstc/0
[root@geli ~]# truncate -s 100M zfstc/1
[root@geli ~]# mkdir zfstd
[root@geli ~]# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
[root@geli ~]#
[root@geli ~]# zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled -O dedup=on -O compression=lz4 testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
[root@geli ~]# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   182K   176M        -         -     1%     0%  1.00x  ONLINE  -
[root@geli ~]# zpool export testpool
[root@geli ~]# zpool import -d zfstd
   pool: testpool
     id: 3796165815934978103
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-5E
 config:

        testpool                 UNAVAIL  insufficient replicas
          raidz1-0               UNAVAIL  insufficient replicas
            7895035226656775877  UNAVAIL  corrupted data
            5600170865066624323  UNAVAIL  corrupted data
[root@geli ~]#
[root@geli ~]# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
[root@geli ~]# rm zfstc/*
[root@geli ~]# truncate -s 100M zfstc/0
[root@geli ~]# truncate -s 100M zfstc/1
[root@geli ~]# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
[root@geli ~]#
[root@geli ~]# zpool create testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
[root@geli ~]# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   146K   176M        -         -     1%     0%  1.00x  ONLINE  -
[root@geli ~]# zpool export testpool
[root@geli ~]# zpool import -d zfstd
   pool: testpool
     id: 17325954959132513026
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-5E
 config:

        testpool                 UNAVAIL  insufficient replicas
          raidz1-0               UNAVAIL  insufficient replicas
            7580076550357571857  UNAVAIL  corrupted data
            9867268050600021997  UNAVAIL  corrupted data
[root@geli ~]#
[root@geli ~]#
[root@geli ~]# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
[root@geli ~]# rm zfstc/*
[root@geli ~]# truncate -s 100M zfstc/0
[root@geli ~]# truncate -s 100M zfstc/1
[root@geli ~]# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
[root@geli ~]#
[root@geli ~]# zpool create testpool mirror $(for i in zfstd/* ; do readlink $i ; done)
[root@geli ~]# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool    80M    73K  79.9M        -         -     3%     0%  1.00x  ONLINE  -
[root@geli ~]# zpool export testpool
[root@geli ~]# zpool import -d zfstd
   pool: testpool
     id: 7703888355221758527
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-5E
 config:

        testpool                  UNAVAIL  insufficient replicas
          mirror-0                UNAVAIL  insufficient replicas
            23134336724506526     UNAVAIL  corrupted data
            16413307577104054419  UNAVAIL  corrupted data
[root@geli ~]#
[root@geli ~]# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
[root@geli ~]# rm -r zfstc zfstd
</log>

If you are wondering whether there is anything wrong with the md vnode
devices, I can assure you that there is not: I computed the MD5 hash on the
underlying chunk files and through the /dev/md?? devices, with the same
result.

If you are wondering whether it is the create or the export that is faulty
rather than the import, I have evidence that it is the import. Why? Because I
discovered this problem when I moved such chunk files from the other FreeBSD
system to the new one and the import failed just like this. The first thing I
did was run an MD5 hash over the files to see whether they had been
corrupted. But no.
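The MD5 cross-check just described can be sketched as a short script. This is
a hypothetical reconstruction, not the reporter's exact commands: it assumes
the zfstc/ chunk files and zfstd/ device symlinks from the repro script above
still exist, and it falls back to md5sum(1) on systems without FreeBSD's
md5(1).

```shell
#!/bin/sh
# Sketch: compare the checksum of each backing chunk file with the checksum
# read back through its md(4) device. If they match, the device layer is not
# corrupting the data, so the fault must lie in the import logic.

checksum() {
    # md5 -q prints only the digest on FreeBSD; md5sum is the Linux fallback.
    if command -v md5 >/dev/null 2>&1; then
        md5 -q "$1"
    else
        md5sum "$1" | cut -d' ' -f1
    fi
}

for i in zfstc/*; do
    [ -e "$i" ] || continue                      # skip if no chunks present
    dev=$(readlink "zfstd/$(basename "$i")")     # e.g. /dev/md10 (assumed)
    file_sum=$(checksum "$i")
    dev_sum=$(checksum "$dev")
    if [ "$file_sum" = "$dev_sum" ]; then
        echo "$(basename "$i"): match ($file_sum)"
    else
        echo "$(basename "$i"): MISMATCH file=$file_sum dev=$dev_sum"
    fi
done
```

A matching pair of digests for every chunk is what the reporter says he
observed, which is why he concludes the import path, not the data, is at
fault.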
The same files, with the same checksums, could be imported again on the old
system.

-- 
You are receiving this mail because:
You are the assignee for the bug.