From: "Russell L. Carter" <rcarter@pinyon.org>
Date: Wed, 30 Mar 2022 15:22:56 -0700
To: questions@freebsd.org
Subject: Re: difficulties replacing a ZFS installer zroot pool with a new zroot pool on a new disk

Greetings,

I am going to top post, because all of the previous discussion is moot
given what I found out while solving the problem, where the problem is
simply to replace a ZFS system pool's drive with a new drive.

To begin from the start: I installed the new NVMe SSD and was able to
boot the USB install image and install a new FreeBSD system on it. On
reboot I first tried keeping the old SATA drive attached as it was.
However, the motherboard BIOS (CSM enabled, legacy, ASUS Prime
X570-PRO) refused all of my efforts to set the boot drive to the new
SSD.
I finally resorted to disconnecting the data cable of the old SATA
drive, and the new SSD booted fine. I then powered the machine down,
reattached the old SATA data cable, and booted. The motherboard again
refused to boot the new NVMe SSD. After about an hour of fighting the
BIOS, I gave up, set the SATA drive as "hot pluggable" in the BIOS, and
rebooted with the SATA data cable disconnected. Once the NVMe SSD was
booted, I reattached the SATA data cable and the old pool showed up in
the 'zpool import' list.

'zpool import zroot' was not a happy solution, as it collided with the
new SSD zroot pool. I eventually worked out that I should rename the
old pool to zroot.old on import. That was also not a happy solution, as
it still automatically mounted itself on top of the new SSD zroot pool.
I then worked out that I needed to specify an altroot:

  zpool import -o altroot=/mnt/zroot.old zroot.old

This at first glance appeared to work. But it did NOT. I was left with
the complaint I made in my original email to the freebsd-questions
list: where are my subsidiary datasets, and especially their data?
'zfs mount -a', for instance, did nothing, very quietly. I tried
zfs-mount(8)ing several of the subsidiary datasets (e.g.
zroot.old/usr/src), and that worked! But I was still missing some
important stuff, like /root and /usr/local. To get /root, I made a wild
guess and tried:

  zfs mount zroot.old/ROOT/default

And that brought all my subsidiary datasets (and their data) back.

I would submit, looking back on 30+ years of successfully performing
this exercise, that having the old drive's pool automatically import
and mount everything, but not mount everything when given an altroot,
is confusing. I would add that it is also confusing and unintuitive
that manually running 'zfs mount zroot.old/ROOT/default' (but not, say,
'zfs unmount zroot.old/usr; zfs mount zroot.old/usr/src') mounted
everything I was missing.

I have two more ZFS system pools to upgrade to SSDs. I am going to try
the following procedure (a distilled sketch of the commands is at the
bottom of this message, below the quoted thread):

1) Unplug the old SATA drive and install FreeBSD to the new SSD.

2) Shut down, reattach the old drive, and reboot. If the new drive
   boots (instead of the old drive), boot to single user and try:

     zpool import -o altroot=/mnt/zroot.old zroot zroot.old

3) If that works, try:

     zfs mount zroot.old/ROOT/default

4) If it doesn't work, I'll probably have to 'zpool export' the pool
   and iterate. I forget exactly how I got the old pool renamed in the
   above.

Anyway, onwards.
Russell

On 3/29/22 19:12, David Christensen wrote:
> On 3/29/22 16:16, Russell L. Carter wrote:
>> Greetings,
>>
>> After many hours, I am stuck trying to replace my spinning rust
>> drive with a new SSD.
>>
>> Basically I have renamed the old drive pool 'zroot.old' and imported
>> it so that it mounts to /mnt/zroot2:
>>
>> root@bruno> zfs list | grep zroot.old
>> zroot.old                                        89.6G  523G    96K  /mnt/zroot2/mnt/zroot.old
>> zroot.old/ROOT                                   37.6G  523G    96K  none
>> zroot.old/ROOT/default                           37.6G  523G  37.6G  /mnt/zroot2
>> zroot.old/export                                  264K  523G    88K  /mnt/zroot2/mnt/zroot.old/export
>> zroot.old/export/packages                         176K  523G    88K  /mnt/zroot2/mnt/zroot.old/export/packages
>> zroot.old/export/packages/stable-amd64-default     88K  523G    88K  /mnt/zroot2/mnt/zroot.old/export/packages/stable-amd64-default
>> zroot.old/tmp                                     144K  523G   144K  /mnt/zroot2/tmp
>> zroot.old/usr                                    37.8G  523G    96K  /mnt/zroot2/usr
>> zroot.old/usr/home                                582M  523G   582M  /mnt/zroot2/usr/home
>> zroot.old/usr/obj                                6.14G  523G  6.14G  /mnt/zroot2/usr/obj
>> zroot.old/usr/ports                              27.8G  523G  27.8G  /mnt/zroot2/usr/ports
>> zroot.old/usr/src                                3.27G  523G  3.27G  /mnt/zroot2/usr/src
>> zroot.old/var                                    1.89M  523G    96K  /mnt/zroot2/var
>> zroot.old/var/audit                                96K  523G    96K  /mnt/zroot2/var/audit
>> zroot.old/var/crash                                96K  523G    96K  /mnt/zroot2/var/crash
>> zroot.old/var/log                                1.32M  523G  1.32M  /mnt/zroot2/var/log
>> zroot.old/var/mail                                120K  523G   120K  /mnt/zroot2/var/mail
>> zroot.old/var/tmp                                 176K  523G   176K  /mnt/zroot2/var/tmp
>> zroot.old/vm                                     14.1G  523G   615M  /mnt/zroot2/vm
>> zroot.old/vm/debianv9base                        3.79G  523G   120K  /mnt/zroot2/vm/debianv9base
>> zroot.old/vm/debianv9base/disk0                  3.79G  523G  3.57G  -
>> zroot.old/vm/debianv9n2                          9.70G  523G   160K  /mnt/zroot2/vm/debianv9n2
>> zroot.old/vm/debianv9n2/disk0                    9.70G  523G  11.3G  -
>> root@bruno> zfs mount -a
>> root@bruno>
>>
>> The problem is that /mnt/zroot2/usr/home, /mnt/zroot2/usr, and
>> /mnt/zroot2/usr/src are all empty:
>>
>> root@bruno> ls /mnt/zroot.old/usr
>> root@bruno>
>>
>> Even though I can look at the individual datasets, they're still
>> using the same amount of data as the originals.  This is a bit
>> unhelpful for migrating over the old configuration.
>>
>> The oddball mounting is just the result of several tens of attempts
>> to import and mount so that a) the original zroot pool doesn't
>> clobber the new one, and b) the datasets become visible.
>>
>> So can someone enlighten me on the proper way to do this, and
>> possibly give a hint how I can get those original datasets visible?
>> This is definitely a new wrinkle for a geezer who has been doing
>> such things without (nontrivial) problems for 30 years now.
>>
>> Yeah yeah, this is also my backup drive and I should have replicated
>> infra over to another system...  I'm a gonna do that next.
>>
>> Thanks very much,
>> Russell
>
>
> I recall attempting to install two ZFS FreeBSD OS disks in the same
> machine at the same time, and the results were very confusing.  I
> suggest that you install only one ZFS FreeBSD OS disk at any given
> time.
>
>
> If you need to work on the FreeBSD OS disk without booting it, I
> would boot FreeBSD installer media and use the live system / shell to
> access the ZFS pools and datasets.  I expect that you will want to
> set the "altroot" property when you import any pools.  I am unclear
> if you will need to export the ZFS boot pool ("bootpool") or the ZFS
> root pool ("zroot.old"?) if you import them (?).
>
>
> If the HDD and SSD both have the same interface (e.g. SAS or SATA),
> if the SSD is the same size or larger than the HDD, and if you can
> revert your changes to the HDD so that it is a working FreeBSD
> instance again, you should be able to use a live distribution to
> clone the HDD to the SSD using dd(1), power down, remove the HDD and
> live media, connect the SSD to the interface port the HDD was
> connected to, and boot the SSD.  I would use a Linux live
> distribution without ZFS support, to ensure that the live
> distribution does not interact with any ZFS content on the HDD or SSD
> before or after the clone.
>
>
> David
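
P.S. For the next two pools, here is a rough sketch of the command
sequence I intend to try, distilled from what finally worked above.
The pool name (zroot.old) and the altroot mountpoint (/mnt/zroot.old)
are just the ones from my setup; adjust to taste. As far as I can
tell, the installer creates zroot/ROOT/default with canmount=noauto,
which would explain why 'zfs mount -a' so quietly skipped it.

  # Boot the new SSD to single user, then import the old pool under an
  # altroot, renaming it in the same step (old name first, new name
  # second):
  zpool import -o altroot=/mnt/zroot.old zroot zroot.old

  # Mount the old boot environment explicitly; 'zfs mount -a' will not
  # touch it if canmount=noauto:
  zfs mount zroot.old/ROOT/default

  # Mount the remaining datasets and check that everything landed
  # under the altroot:
  zfs mount -a
  zfs list -r -o name,mountpoint,mounted zroot.old

  # When done copying data over, export the old pool before detaching
  # the drive:
  zpool export zroot.old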