Date: Sat, 05 Oct 2019 14:13:34 +0000
From: bugzilla-noreply@freebsd.org
To: bugs@FreeBSD.org
Subject: [Bug 241083] zfs: zpool import seems to probe all snapshots
Message-ID: <bug-241083-227@https.bugs.freebsd.org/bugzilla/>
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=241083

Bug ID: 241083
Summary: zfs: zpool import seems to probe all snapshots
Product: Base System
Version: 12.0-STABLE
Hardware: Any
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: kern
Assignee: bugs@FreeBSD.org
Reporter: d8zNeCFG@aon.at

Scenario:
- Homegrown server, 6 disks
- Each disk is partitioned: ca. 3/4 for the first zpool ("hal.1"), ca. 1/5 for the second zpool, and the rest for miscellaneous use
- The first zpool is imported at boot (from the unencrypted "3/4" partitions); it contains about 300 ZFS filesystems plus 20 volumes, the latter frequently partitioned (MBR, BSD, MBR + BSD) because they serve as iSCSI disks for virtualized installations of various operating systems
- The second zpool is imported manually (when needed) from the encrypted partitions (each of the "1/5" partitions is encrypted); it contains about 20 filesystems (incl. volumes)
- Backups are kept by regularly taking recursive snapshots on both pools; about 10 backups are retained at any time
- This results in over 500 device entries in /dev/zvol (e.g., "/dev/zvol/hal.1/1/vdisks/925@backup.2019-09-21.21:59:27s2a")
- At some point, the second zpool is imported

Result:
- The import takes a long time
- Using "gstat -a", heavy activity can be observed on each of the devices in /dev/zvol/... for the whole time it takes to import the second pool

Expected result:
- Importing a pool should not probe snapshot devices but only "normal" ones, where "normal" could be defined as not containing a "@", or as being read/write, or by trying the devices with the shortest paths first (or perhaps there is a better discriminator), in order to speed up the import
- Perhaps (by default) the import should not look at devices in /dev/zvol at all, because one would normally not create a pool from devices that live inside another pool

--
You are receiving this mail because:
You are the assignee for the bug.
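The "@" discriminator proposed above can be sketched as a simple filter. This is only an illustration of the idea, not the actual import code path; the device list below is a hypothetical example modeled on the paths mentioned in the report:

```shell
#!/bin/sh
# Hypothetical candidate list: one plain partition and two zvol-backed
# entries, one of which is a snapshot device (its path contains "@").
candidates="/dev/ada0p4
/dev/zvol/hal.1/1/vdisks/925
/dev/zvol/hal.1/1/vdisks/925@backup.2019-09-21.21:59:27s2a"

# Skip snapshot devices: a "@" in the path marks a ZFS snapshot, so
# filtering those out avoids probing read-only snapshot zvols.
printf '%s\n' "$candidates" | grep -v '@'
```

As a practical workaround today, `zpool import -d <dir>` (documented in zpool(8)) restricts the device search to the given directory, so something like `zpool import -d /dev/gpt <pool>` would keep the import from scanning /dev/zvol at all, assuming the pool's partitions carry GPT labels.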