Date: Sun, 21 Aug 2016 11:05:31 -0500
From: Brandon J. Wandersee <brandon.wandersee@gmail.com>
To: Victor Sudakov <vas@mpeks.tomsk.su>
Cc: "Kevin P. Neal" <kpn@neutralgood.org>, freebsd-questions@freebsd.org
Subject: Re: Root on ZFS, LiveCD and BE
Message-ID: <86lgzq1510.fsf@WorkBox.Home>
In-Reply-To: <20160820105438.GA59960@admin.sibptus.transneft.ru>
References: <20160815145419.GA3619@admin.sibptus.transneft.ru> <20160815231414.GA84125@neutralgood.org> <20160820105438.GA59960@admin.sibptus.transneft.ru>
Victor Sudakov writes:

> Kevin P. Neal wrote:
>> >
>> > Which is the appropriate maillist to ask questions about Root-on-ZFS
>> > systems? Like the ones below:
>>
>> It's a reasonable place to start. Folks around here will point you in the
>> right direction if the answer requires more expertise.
>
> Folks, please do :-)
>
>> > 2. Sometimes it's necessary to mount a system's root from another
>> > system (LiveCD, mfsBSD or similar). To do that, you have to import a
>> > system's zroot pool on another system. Is it safe? Is it not going to
>> > render the *original* system unbootable?
>>
>> You can use the altroot property to be certain that anything automatically
>> mounted gets mounted underneath the directory specified by that property.
>> Give the zpool import command the "-o altroot=/some/path" option, and make
>> sure that /some/path already exists.
>>
>> You can also import with the "-N" option to prevent automatic mounting of
>> datasets in the imported pool.
>>
>> But, really, take a look at the mountpoint property for the root filesystem
>
> Kevin, this is all good advice on how not to screw up the second (fixit)
> system into which the zroot of the first (original) one is being temporarily
> imported. However, I am more concerned about not screwing up the first,
> *original* system whose zroot is temporarily mounted to another fixit
> system and then again used as a bootable root pool.

I'll chime in here, as I think I understand Victor correctly: you have a
production system that presently will not boot, and you want to mount its
pool in the LiveCD/Fixit environment for maintenance. However, you're
concerned about (a) the mount points of the imported pool being stacked atop
those of the running system, leaving it unresponsive; and (b) potential harm
that the imported pool might suffer from the import process. If I'm mistaken
about this, please let me know. ;)

The 'altroot=' property Kevin mentioned takes care of concern (a). The
command `zpool import -R <mountpoint>` mounts the pool at the alternate
mountpoint; so `zpool import -R /mnt` temporarily sets '/mnt' as the root of
the imported pool, as though you had mounted the root partition of a
traditional filesystem there.

Regarding concern (b), importing a pool should never cause any damage to it,
even if it is in a 'degraded' state. (If the pool is in a 'faulted' state,
such that it cannot be used at all, `zpool import` will simply fail with an
error message.) ZFS is designed so that everything that's already on the
disk is all but guaranteed to remain intact and consistent in the event of
an unclean dismount or shutdown, or a failure to boot. Note that this
includes the mistake that leads to the scenario in concern (a): if you mount
the ZFS pool over the LiveCD/Fixit environment and leave both unresponsive,
you could just cut the power to the machine, and the pool should be just
fine (there is always the possibility that it won't be, but it's quite
unlikely).

Now, when you try to import a pool that was previously part of another
running system, `zpool` will tell you that "forcing" the import is the only
way to get the result you want. Doing so is safe; ZFS is just concerned
because its record shows that the pool was last seen attached to a running
system and was never properly exported, and so it wants to make certain you
aren't trying to use the pool on two different systems simultaneously.
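For what it's worth, a repair session from the live environment would go
roughly like this. This is just a sketch off the top of my head, and it
assumes the installer's default names (a pool called 'zroot' with the active
boot environment at zroot/ROOT/default); substitute your own.

    # zpool import                       # list pools visible to the live system
    # zpool import -f -N -R /mnt zroot   # force-import without mounting, rooted at /mnt
    # zfs mount zroot/ROOT/default       # mount only the dataset(s) you actually need
      ...do your maintenance under /mnt...
    # zpool export zroot                 # export cleanly before rebooting the original system

Exporting at the end is what clears the "last seen on another system" record,
so the next import (including the one the loader does at boot) won't ask to
be forced.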
So to sum up the past few paragraphs: importing the pool should never have
any effect beyond mounting the datasets, and those datasets should always be
in a clean state.

As for your question regarding boot environments, I'll limit myself to the
suggestion that if your pool isn't booting, I would guess the most likely
cause (given the use of beadm) is that the 'bootfs=' property is not
properly set. I looked over the bug report you filed and noticed that you're
using the development version of beadm. I'd recommend using the stable
release of beadm and seeing if that works. You could also `chroot` into the
system while it's mounted in the LiveCD environment and use beadm to gather
some information (I've put a rough sketch of that below my signature).

-- 
:: Brandon J. Wandersee
:: brandon.wandersee@gmail.com
:: --------------------------------------------------
:: 'The best design is as little design as possible.'
:: --- Dieter Rams ----------------------------------
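P.S. The sketch I mentioned, again from memory and assuming the pool is
already imported under /mnt as above and uses the default 'zroot' /
zroot/ROOT/default names; adjust for your layout.

    # mount -t devfs devfs /mnt/dev               # give the chroot a working /dev
    # chroot /mnt /bin/sh
    # beadm list                                  # which BE is marked active on reboot?
    # zpool get bootfs zroot                      # should name that BE's dataset,
                                                  # e.g. zroot/ROOT/default
    # zpool set bootfs=zroot/ROOT/default zroot   # only if it points somewhere wrong
    # exit                                        # leave the chroot
    # umount /mnt/dev
    # zpool export zroot                          # export cleanly before rebooting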