From: bugzilla-noreply@freebsd.org
To: bugs@FreeBSD.org
Subject: [Bug 271989] zfs root mount error 6 after upgrade from 11.1-release to 13.2-release
Date: Thu, 22 Jun 2023 12:30:28 +0000
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=271989

--- Comment #9 from Markus Wild ---

(In reply to Dan Langille from comment #8)

There you go: the bogus

  pool_guid: 18320603570228782289

is what causes your kernel to fail to load the pool, since it shows up in your
console messages as mismatched comparisons against the vdevs the kernel found.
This is most likely -as with my installation- the result of originally
installing the zpool on the entire disk, and then later removing that pool,
shrinking the zfs partition and recreating the pool. From what I reverse
engineered, a zpool puts 2 labels at the beginning of its assigned disk space
and 2 labels at the end, most likely so that the labels can be restored should
someone/something accidentally overwrite them.

The stupidity of the whole thing is: the kernel code that mounts the zfs root
filesystem seems to first scan the "entire disk device" for these 4 labels,
and if it finds any, it insists on using them and does NOT consider any valid
labels in the partitions listed in the GPT partition table. "zpool import"
doesn't do this; it's only the mount code in the kernel that behaves this way.

There is a "zpool labelclear" command which is supposed to clear these stale
old labels, but I personally didn't trust it not to go ahead and clear the
labels of ALL zfs instances on the disk if you let it loose on the entire disk
device. The man page is not very clear in this respect, and searching for this
shows I was not the only one confused about the exact behavior of that
command.

What I did in my case:

- use gpart to add a temporary swap partition that fills the rest of the disk:
  gpart add -t freebsd-swap nvd0
  (this resulted in nvd0p5 in my case)
- zero that temporary partition, and with it the old zpool labels at the end
  of the disk:
  dd if=/dev/zero of=/dev/nvd0p5 bs=1024M
- remove the temporary partition again:
  gpart delete -i 5 nvd0

If you check the device again after this (zdb -l), it shouldn't find any
labels anymore.

What I'd expect for the future, and why I didn't ask for this bug report to be
closed after I fixed my problem:

- the kernel mount code should first check all valid zfs partitions for labels
- only if no labels are found in valid partitions should it also consider the
  entire disk device (nvd0, ada0, etc.), to cover the cases where people
  define a zpool like "mirror /dev/ada0 /dev/ada1". I know this works for data
  pools, but I'm not sure you could actually boot from such a pool.

Cheers, Markus
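
P.S. For anyone hitting the same problem, here is the whole cleanup in one
place, roughly as I ran it. The device name (nvd0) and the partition index (5)
are from my machine and will differ on other systems; verify the index with
"gpart show" before running dd, because zeroing the wrong partition is not
recoverable.

  # show which labels are visible on the whole-disk device
  zdb -l /dev/nvd0

  # fill the free space at the end of the disk with a throwaway partition
  gpart add -t freebsd-swap nvd0    # ended up as nvd0p5 here
  gpart show nvd0                   # confirm the index of the new partition

  # overwrite the stale labels sitting at the end of the disk
  dd if=/dev/zero of=/dev/nvd0p5 bs=1024M

  # drop the throwaway partition again and re-check
  gpart delete -i 5 nvd0
  zdb -l /dev/nvd0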