Subject: [Bug 250816] ZFS cannot import its own export on AWS EC2 12.1 & 12.2-RELEASE
Date: Thu, 05 Nov 2020 22:51:31 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=250816
--- Comment #14 from Gunther Schadow ---

Here is the zdb output, running the testcase from the start:

------------------------------------------------------------
# mkdir zfstc
# truncate -s 100M zfstc/0
# truncate -s 100M zfstc/1
# mkdir zfstd
# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
#
# zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled -O dedup=on -O compression=lz4 testpool raidz /usr/home/schadow/zfstd/*
# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   180K   176M        -         -     1%     0%  1.00x  ONLINE  -
------------------------------------------------------------

Now I ran zdb against the pool just created, before exporting:

------------------------------------------------------------
# zdb -e -p zfstd -CC testpool

Configuration for import:
        vdev_children: 1
        version: 5000
        pool_guid: 1836577300510068694
        name: 'testpool'
        state: 0
        hostid: 2817290760
        hostname: 'geli'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1836577300510068694
            children[0]:
                type: 'raidz'
                id: 0
                guid: 13558473444627327763
                nparity: 1
                metaslab_array: 68
                metaslab_shift: 24
                ashift: 9
                asize: 200278016
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 10226269325811407084
                    whole_disk: 1
                    create_txg: 4
                    path: '/usr/home/schadow/zfstd/0'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 4864983578256370556
                    whole_disk: 1
                    create_txg: 4
                    path: '/usr/home/schadow/zfstd/1'
        load-policy:
            load-request-txg: 18446744073709551615
            load-rewind-policy: 2
zdb: can't open 'testpool': File exists
------------------------------------------------------------------

Now to export and try again:

------------------------------------------------------------------
# zpool export testpool
# zdb -e -p zfstd -CC testpool

Configuration for import:
        vdev_children: 1
        version: 5000
        pool_guid: 1836577300510068694
        name: 'testpool'
        state: 1
        hostid: 2817290760
        hostname: 'geli'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1836577300510068694
            children[0]:
                type: 'raidz'
                id: 0
                guid: 13558473444627327763
                nparity: 1
                metaslab_array: 68
                metaslab_shift: 24
                ashift: 9
                asize: 200278016
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 10226269325811407084
                    whole_disk: 1
                    create_txg: 4
                    path: '/usr/home/schadow/zfstd/0'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 4864983578256370556
                    whole_disk: 1
                    create_txg: 4
                    path: '/usr/home/schadow/zfstd/1'
        load-policy:
            load-request-txg: 18446744073709551615
            load-rewind-policy: 2

MOS Configuration:
        version: 5000
        name: 'testpool'
        state: 1
        txg: 44
        pool_guid: 1836577300510068694
        hostid: 2817290760
        hostname: 'geli'
        com.delphix:has_per_vdev_zaps
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1836577300510068694
            create_txg: 4
            children[0]:
                type: 'raidz'
                id: 0
                guid: 13558473444627327763
                nparity: 1
                metaslab_array: 68
                metaslab_shift: 24
                ashift: 9
                asize: 200278016
                is_log: 0
                create_txg: 4
                com.delphix:vdev_zap_top: 65
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 10226269325811407084
                    path: '/dev/md0'
                    whole_disk: 1
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 66
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 4864983578256370556
                    path: '/dev/md1'
                    whole_disk: 1
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 67
        features_for_read:
            com.delphix:embedded_data
            com.delphix:hole_birth
------------------------------------------------------------------------
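Note, by the way, that the configuration zdb assembles through the symlink directory records the symlink paths, while the MOS copy records /dev/md0 and /dev/md1. As a side check (not captured above, but it should work on this setup, while the md devices are still configured), the labels can be dumped straight off the backing devices to compare the recorded paths with what the symlinks resolve to:

------------------------------------------------------------------------
# zdb -l /dev/md0 | grep -E 'path|guid'
# readlink zfstd/0
------------------------------------------------------------------------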
Finally, the test again, showing that the import problem indeed still exists:

------------------------------------------------------------------------
# zpool import -d zfstd
   pool: testpool
     id: 1836577300510068694
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-5E
 config:

        testpool                  UNAVAIL  insufficient replicas
          raidz1-0                UNAVAIL  insufficient replicas
            10226269325811407084  UNAVAIL  corrupted data
            4864983578256370556   UNAVAIL  corrupted data
# zpool import -d zfstd testpool
cannot import 'testpool': invalid vdev configuration
------------------------------------------------------------------------

And now to test your hypothesis that we need the real /dev/md* nodes, not symlinks. But I cannot even find an import option that would let me point at individual devices; it looks like the directory option is all we have:

    zpool import [-d dir | -c cachefile] [-D]
    zpool import [-o mntopts] [-o property=value] ...
        [--rewind-to-checkpoint] [-d dir | -c cachefile] [-D] [-f] [-m]
        [-N] [-R root] [-F [-n]] -a
    zpool import [-o mntopts] [-o property=value] ...
        [--rewind-to-checkpoint] [-d dir | -c cachefile] [-D] [-f] [-m]
        [-N] [-R root] [-t] [-F [-n]] pool | id [newpool]

But OK, I get it now: the -d option points to an alternative /dev/ directory, and it is not required:

------------------------------------------------------------------------
# zpool import
   pool: testpool
     id: 1836577300510068694
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        testpool    ONLINE
          raidz1-0  ONLINE
            md0     ONLINE
            md1     ONLINE
# zpool import testpool
# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   231K   176M        -         -     2%     0%  1.00x  ONLINE  -
------------------------------------------------------------------------

It actually worked!
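If -d really is just a stand-in for the default /dev scan, then pointing it explicitly at devfs should presumably behave the same as the bare command. I did not capture that run, but it would be:

------------------------------------------------------------------------
# zpool import -d /dev
# zpool import -d /dev testpool
------------------------------------------------------------------------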
And now, instead of symlinks, let me build this zfstd2 directory with real device nodes to test:

-------------------------------------------------------------------------
# zpool export testpool
# mkdir zfstd2
# ls -l zfstd
total 0
lrwxr-xr-x  1 root  schadow  8 Nov  5 22:02 0 -> /dev/md0
lrwxr-xr-x  1 root  schadow  8 Nov  5 22:02 1 -> /dev/md1
# (cd /dev ; tar cf - md[01]) | (cd zfstd2 ; tar xvf -)
x md0
x md1
# ls -l zfstd2
total 0
crw-r-----  1 root  operator  0x6b Nov  5 22:02 md0
crw-r-----  1 root  operator  0x6c Nov  5 22:02 md1
# ls -l /dev/md[01]
crw-r-----  1 root  operator  0x6b Nov  5 22:02 /dev/md0
crw-r-----  1 root  operator  0x6c Nov  5 22:02 /dev/md1
# zpool import -d zfstd2
# zpool list
no pools available
# md5 zfstd*/*
MD5 (zfstd/0) = 0d48de20f5717fe54be0bdef93eb8358
MD5 (zfstd/1) = 2c4e7de0b3359bd75f17b49d3dcab394
md5: zfstd2/md0: Operation not supported
md5: zfstd2/md1: Operation not supported
----------------------------------------------------------------------------

So I don't know what the purpose of -d is if symlinks don't work: under devfs, device nodes can no longer be created with mknod, nor usefully copied with tar, so there is no way for me to confine real device nodes to a directory of my own.

Are you telling me I don't even need to make them vnode md devices? That I could just use files?

----------------------------------------------------------------------------
# zpool list
no pools available
# mdconfig -d -u md0
# mdconfig -d -u md1
# mdconfig -l
# zpool import -d zfstc
   pool: testpool
     id: 1836577300510068694
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-5E
 config:

        testpool                  UNAVAIL  insufficient replicas
          raidz1-0                UNAVAIL  insufficient replicas
            10226269325811407084  UNAVAIL  corrupted data
            4864983578256370556   UNAVAIL  corrupted data
# ls -l zfstc
total 204864
-rw-r--r--  1 root  schadow  104857600 Nov  5 22:15 0
-rw-r--r--  1 root  schadow  104857600 Nov  5 22:15 1
----------------------------------------------------------------------------

So you are telling me it can import directly from files, but that doesn't work either. OK, OK, I get it now: you want me to also create the pool without these md vnodes ...

----------------------------------------------------------------------------
# rm -rf zfst*
# mkdir zfstc
# truncate -s 100M zfstc/0
# truncate -s 100M zfstc/1
# zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled -O dedup=on -O compression=lz4 testpool raidz zfstc/*
cannot open 'zfstc/0': no such GEOM provider
must be a full path or shorthand device name
----------------------------------------------------------

See, that's what I thought: I had to use these vnode md devices because zpool create does not operate on files directly.
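Although, for what it's worth, the error text itself suggests that zpool create does take file vdevs when they are given as absolute paths. So the all-file variant of this testcase would presumably look like this (I have not run it; paths are the ones from this session):

----------------------------------------------------------------------------
# zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled \
    -O dedup=on -O compression=lz4 testpool raidz /usr/home/schadow/zfstc/0 /usr/home/schadow/zfstc/1
# zpool export testpool
# zpool import -d /usr/home/schadow/zfstc
----------------------------------------------------------------------------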