Date: Thu, 17 Mar 2022 12:25:28 +0000
From: bugzilla-noreply@freebsd.org
To: bugs@FreeBSD.org
Subject: [Bug 262617] ZFS pool on USB drive does not mount correctly on startup
Message-ID: <bug-262617-227@https.bugs.freebsd.org/bugzilla/>
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262617

Bug ID: 262617
Summary: ZFS pool on USB drive does not mount correctly on startup
Product: Base System
Version: 13.0-RELEASE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Many People
Priority: ---
Component: kern
Assignee: bugs@FreeBSD.org
Reporter: donaldcallen@gmail.com

Created attachment 232513
--> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=232513&action=edit
dmesg

I have a 2TB Seagate Barracuda drive with a SATA/USB adapter that I use for
backups and archives. I created a ZFS pool (named Primary) on this drive
using the entire drive, not a partition, as the ZFS docs recommend. This
drive was previously used with a Linux system and had a GPT partition
scheme with one partition containing an ext4 filesystem.

If I have this drive connected to my system when I power it on, the first
thing to note is that I get some chatter in /var/log/messages about a
corrupted GPT partition table:

Mar 17 07:59:38 pangloss kernel: GEOM: da0: the primary GPT table is corrupt or invalid.
Mar 17 07:59:38 pangloss kernel: GEOM: da0: using the secondary instead -- recovery strongly advised.

da0 is the device name of the disk.

When the system comes up, 'df' indicates that the pool is mounted where it
is supposed to be and the space-utilization numbers are correct, but 'ls'
of the mountpoint returns absolutely nothing; none of the files are
accessible. As root, I unmount the pool and remount it, and now it mounts
correctly.

If the drive is not connected to the system at boot time and I connect it
later, I get similar messages about a corrupt primary GPT table. If I then
try to mount with 'zfs mount Primary', the mount fails. I don't have the
error messages in front of me as I write this; I will try to reproduce the
exact sequence and report it in a subsequent comment. 'zfs status Primary'
said that the pool did not exist.
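[Editorial sketch, not part of the report: the "pool did not exist" error is consistent with the pool simply not being imported yet when the disk is attached after boot; 'zfs mount' only sees imported pools. The sequence below is an assumption about the intended workflow, written with a DRYRUN wrapper (default on) so it prints each command instead of running it; clear DRYRUN to execute for real.]

```shell
#!/bin/sh
# Assumed recovery sequence when the USB disk is attached after boot.
# "Primary" is the pool name from the report. DRYRUN=1 (the default
# here) prints each command rather than executing it.
DRYRUN=${DRYRUN-1}
run() {
    if [ -n "$DRYRUN" ]; then echo "$@"; else "$@"; fi
}

run zpool import           # scan attached devices for importable pools
run zpool import Primary   # import the pool; its datasets normally mount automatically
run zpool status Primary   # the pool should now exist and report its health
```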
I then rebooted the system with the drive plugged in and got the same
behavior described above (blank filesystem at first, everything OK after
umount/mount). dmesg attached.

-- 
You are receiving this mail because:
You are the assignee for the bug.
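[Editorial sketch, not part of the report: the GEOM warnings come from the stale Linux GPT still present on the disk, which the whole-disk pool creation did not fully erase. One plausible cleanup, under the assumption that the leftover GPT is the trigger, is to export the pool and destroy the stale table with gpart(8). Note that this overwrites the first and last sectors of the disk; ZFS keeps four redundant label copies, so a whole-disk pool normally survives, but treat this as risky and back up first. As above, DRYRUN defaults to printing the commands.]

```shell
#!/bin/sh
# Hypothetical cleanup of a stale GPT on a whole-disk ZFS member.
# POOL and DISK are assumptions taken from the report; adjust to fit.
POOL=${POOL:-Primary}
DISK=${DISK:-da0}
DRYRUN=${DRYRUN-1}
run() {
    if [ -n "$DRYRUN" ]; then echo "$@"; else "$@"; fi
}

run zpool export "$POOL"        # release the device so nothing holds it open
run gpart destroy -F "$DISK"    # force-remove the leftover (corrupt) GPT
run zpool import "$POOL"        # ZFS rediscovers its labels on the raw device
run zpool scrub "$POOL"         # verify the data came through intact
```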