Date: Tue, 16 Jan 2018 13:11:54 +0100 (CET)
From: Trond Endrestøl <Trond.Endrestol@fagskolen.gjovik.no>
To: FreeBSD questions <freebsd-questions@freebsd.org>
Subject: Re: Dualboot and ZFS
Message-ID: <alpine.BSF.2.21.1801161251040.69908@mail.fig.ol.no>
In-Reply-To: <20180116112814.GA18197@admin.sibptus.transneft.ru>
References: <20180115051308.GA45168@admin.sibptus.transneft.ru> <VI1PR02MB12007D071EA5398373D2189CF6EB0@VI1PR02MB1200.eurprd02.prod.outlook.com> <20180115125241.GB60956@admin.sibptus.transneft.ru> <VI1PR02MB1200C7F0066F361E60A6CBEDF6EB0@VI1PR02MB1200.eurprd02.prod.outlook.com> <20180115144747.GA65526@admin.sibptus.transneft.ru> <VI1PR02MB120018D174817F8FFB2981D5F6EB0@VI1PR02MB1200.eurprd02.prod.outlook.com> <20180115151526.GA66342@admin.sibptus.transneft.ru> <a7920f859b666cff48f4f73ee1b2f954@dweimer.net> <20180116034929.GB89443@admin.sibptus.transneft.ru> <alpine.BSF.2.21.1801160934560.69908@mail.fig.ol.no> <20180116112814.GA18197@admin.sibptus.transneft.ru>
On Tue, 16 Jan 2018 18:28+0700, Victor Sudakov wrote:

> Trond Endrestøl wrote:
> >
> > I couldn't resist attempting a proof of concept, so here it is.
>
> Before I follow your steps, two comments:
>
> >
> > !!! Show the resulting disklabel !!!
> >
> > [script]# gpart show ada0s3
> > =>       0  67108864  ada0s3  BSD  (32G)
> >          0  58720256       1  freebsd-zfs   (28G)
> >   58720256   8388608       2  freebsd-swap  (4.0G)
>
> How funny! I did not even know that fstype in the disklabel can be
> "ZFS". I have only seen "swap" and "4.2BSD" so far.
>
> $ gpart add -t freebsd-zfs md0s1 && disklabel md0s1
> md0s1a added
> # /dev/md0s1:
> 8 partitions:
> #          size     offset    fstype   [fsize bsize bps/cpg]
>   a:       4095          0       ZFS
>   c:       4095          0    unused        0     0  # "raw" part, don't edit
> $
>
> [dd]
>
> >
> > !!! Create our zpool, YMMV !!!
> >
> > !!! Create our initial BE, YMMV !!!
>
> Do you know how to create a beadm-friendly zroot manually (like the
> one created automatically by bsdinstall)?

I have created my own recipe based on the guides published elsewhere,
including those on the FreeBSD wiki, and as usual I have applied
thoughts from my own lurid mind.

Have a look at my files at https://ximalas.info/~trond/create-zfs/canmount/

I create the disk layout manually; see 00-create-gpart-layout-UEFI.txt
and 00-create-gpart-layout.txt for some ideas.

I use a SysV approach when creating the ZFS filesystem layout and
installing the system, i.e. lots and lots of environment variables.
See 01-create-zfs-layout.sh, 02-temp-mountpoints.sh,
03b-install-stable-9-10-11-or-head.sh, and 04-final-mountpoints.sh.
(A stripped-down sketch of the resulting dataset layout is in the
P.S. below.)

For special cases such as my mail server, I edited
01-create-zfs-layout.sh to suit two pools: one for the system and
another for user data. All mail-related filesystems ended up in the
data pool.

Between steps 3 and 4, I edit various files, set the root password and
the timezone, and make sure sendmail's files in /etc/mail are built and
up to date, all from within chroot $DESTDIR.

I know some like to use snapshots and clones as a safety belt before
they upgrade their main BE. I do the opposite: I create a snapshot and
a clone, install the new world and kernel into the clone, merge the
config files, update the bootfs pool property, and reboot into the new
clone (also sketched in the P.S. below). To me, this saves time while
still giving me plenty of seatbelts to boot from should I need them.
Running -CURRENT on some of my VMs has forced me to use my old clones
to recover from clang bugs, etc.

On VMs I limit the number of snapshots/clones/BEs to 3: the current BE,
the two previous ones, and the snapshots that tie them all together.
Physical systems usually have more than enough storage, and I clean up
the long list of BEs about once a year (zfs promote, zfs destroy -Rv).

Here's a good exercise on creating snapshots and clones, and how to
clean them up: https://ximalas.info/2015/06/23/an-exercise-on-zfs-clones/

-- 
Trond.
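
P.S. If it helps, here is a stripped-down sketch of what a
beadm-friendly layout boils down to. The pool name, device and dataset
names are only examples (my scripts use their own), and a real install
also wants bootcode and a few more datasets:

  # Pool on the freebsd-zfs partition, temporarily rooted under /mnt.
  zpool create -o altroot=/mnt -O atime=off -O compress=lz4 -m none zroot /dev/ada0s3a

  # All BEs live under a common ROOT container. Each BE has
  # mountpoint=/ but canmount=noauto, so only the BE named by the
  # bootfs property (or chosen by beadm) is mounted as /.
  zfs create -o mountpoint=none                 zroot/ROOT
  zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT/default
  zfs mount zroot/ROOT/default

  # Data shared by every BE lives outside ROOT.
  zfs create -o mountpoint=/usr/home zroot/home

  # Tell the loader which BE to boot by default.
  zpool set bootfs=zroot/ROOT/default zroot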
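
The clone dance before an upgrade looks roughly like this. The BE names
are again only examples, mergemaster(8) is just one way of merging the
config files, and beadm(1) can wrap most of these steps for you:

  # Snapshot the running BE and turn the snapshot into a new, bootable clone.
  zfs snapshot zroot/ROOT/default@2018-01-16
  zfs clone -o canmount=noauto -o mountpoint=/ \
      zroot/ROOT/default@2018-01-16 zroot/ROOT/2018-01-16

  # Mount the clone somewhere temporary and install the new system into it.
  mkdir -p /tmp/be
  mount -t zfs zroot/ROOT/2018-01-16 /tmp/be
  cd /usr/src
  make installkernel DESTDIR=/tmp/be
  make installworld  DESTDIR=/tmp/be
  mergemaster -iF -D /tmp/be

  # Point the pool at the new BE and reboot into it.
  umount /tmp/be
  zpool set bootfs=zroot/ROOT/2018-01-16 zroot
  shutdown -r now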
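
And the yearly cleanup is essentially a zfs promote of the BE I intend
to keep, so it no longer depends on the old snapshots, followed by
zfs destroy -Rv on the snapshots the retired clones hang off of.
Illustrative names once more; add -n to zfs destroy for a dry run first:

  # Make the surviving BE independent of the snapshot it was cloned from.
  zfs promote zroot/ROOT/2018-01-16

  # Then destroy an old snapshot together with every clone (old BE)
  # that still depends on it, in one go.
  zfs destroy -Rv zroot/ROOT/2018-01-16@2017-06-01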