Date:      Thu, 4 May 2017 18:34:24 -0400
From:      Peter Pauly <ppauly@gmail.com>
To:        galtsev@kicp.uchicago.edu
Cc:        Ben Woods <woodsb02@gmail.com>, freebsd-questions@freebsd.org
Subject:   Re: Tips for shell based partitioning during install
Message-ID:  <CAKXfWbR3Hkn=w0ERRZVfTO3haz-yz=F5Ex61SDOKWiD39S4PSw@mail.gmail.com>
In-Reply-To: <14228.128.135.52.6.1493935221.squirrel@cosmo.uchicago.edu>
References:  <CAKXfWbTqBUUzaGxz3U7HEV9FoSzqg3XD+UrrEkqbVU5E1W-oQA@mail.gmail.com> <CAOc73CDw5fXjfQubmW29V1xOV=_o_D90c_X=bNWst3GeLxxTOQ@mail.gmail.com> <CAKXfWbSTZQ1GUxpNS=MEkxoJxu=p+eSeE-KWkj_EBo1pk6CF+A@mail.gmail.com> <14228.128.135.52.6.1493935221.squirrel@cosmo.uchicago.edu>

>> That sounds like an extremely large amount of total swap space.

At this point it's just a test system (proof of concept), but I welcome all
criticism. I'm using sixteen 100GB drives in VMware.
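
For what it's worth, the per-disk gpart steps in my notes below (create, boot
partition, swap, zfs partition, bootcode) could be collapsed into one loop.
This is an untested sketch of my own: it assumes the same da0-da15 device
names and the same 512k boot / 8G swap layout, and it only prints the
commands (drop the echo to actually run them):

```shell
#!/bin/sh
# Untested sketch: print the per-disk partitioning commands for da0..da15
# instead of typing each one. Remove the leading "echo" to actually run them.
for i in $(seq 0 15); do
    echo "gpart create -s gpt da$i"
    echo "gpart add -s 512k -t freebsd-boot da$i"
    echo "gpart add -s 8G -t freebsd-swap -l swap$i da$i"
    echo "gpart add -t freebsd-zfs -l disk$i da$i"
    echo "gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da$i"
done
```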

On Thu, May 4, 2017 at 6:00 PM, Valeri Galtsev <galtsev@kicp.uchicago.edu>
wrote:

>
> On Thu, May 4, 2017 4:48 pm, Peter Pauly wrote:
> > I got a lot farther, but my system won't boot. I get this message:
> >
> > ZFS: i/o error - all block copies unavailable
> > ZFS: can't read MOS of pool zroot
> > gptzfsboot: failed to mount default pool zroot
> >
> >
> > I'm pasting my build instructions below starting with the part where I
> > begin partitioning. Maybe you'll see some obvious thing that I did wrong
> > (thanks):
> > 8. Under Partitioning, choose Shell
> > 9. Create GPT Disks:
> > gpart create -s gpt da0
> > gpart create -s gpt da1
> > gpart create -s gpt da2
> > gpart create -s gpt da3
> > gpart create -s gpt da4
> > gpart create -s gpt da5
> > gpart create -s gpt da6
> > gpart create -s gpt da7
> > gpart create -s gpt da8
> > gpart create -s gpt da9
> > gpart create -s gpt da10
> > gpart create -s gpt da11
> > gpart create -s gpt da12
> > gpart create -s gpt da13
> > gpart create -s gpt da14
> > gpart create -s gpt da15
> > 10. Add the boot partition to each drive:
> > gpart add -s 512k -t freebsd-boot da0
> > gpart add -s 512k -t freebsd-boot da1
> > gpart add -s 512k -t freebsd-boot da2
> > gpart add -s 512k -t freebsd-boot da3
> > gpart add -s 512k -t freebsd-boot da4
> > gpart add -s 512k -t freebsd-boot da5
> > gpart add -s 512k -t freebsd-boot da6
> > gpart add -s 512k -t freebsd-boot da7
> > gpart add -s 512k -t freebsd-boot da8
> > gpart add -s 512k -t freebsd-boot da9
> > gpart add -s 512k -t freebsd-boot da10
> > gpart add -s 512k -t freebsd-boot da11
> > gpart add -s 512k -t freebsd-boot da12
> > gpart add -s 512k -t freebsd-boot da13
> > gpart add -s 512k -t freebsd-boot da14
> > gpart add -s 512k -t freebsd-boot da15
> > 11. Add the swap partition to each drive:
> > gpart add -s 8G -t freebsd-swap -l swap0 da0
> > gpart add -s 8G -t freebsd-swap -l swap1 da1
> > gpart add -s 8G -t freebsd-swap -l swap2 da2
> > gpart add -s 8G -t freebsd-swap -l swap3 da3
> > gpart add -s 8G -t freebsd-swap -l swap4 da4
> > gpart add -s 8G -t freebsd-swap -l swap5 da5
> > gpart add -s 8G -t freebsd-swap -l swap6 da6
> > gpart add -s 8G -t freebsd-swap -l swap7 da7
> > gpart add -s 8G -t freebsd-swap -l swap8 da8
> > gpart add -s 8G -t freebsd-swap -l swap9 da9
> > gpart add -s 8G -t freebsd-swap -l swap10 da10
> > gpart add -s 8G -t freebsd-swap -l swap11 da11
> > gpart add -s 8G -t freebsd-swap -l swap12 da12
> > gpart add -s 8G -t freebsd-swap -l swap13 da13
> > gpart add -s 8G -t freebsd-swap -l swap14 da14
> > gpart add -s 8G -t freebsd-swap -l swap15 da15
>
> That sounds like an extremely large amount of total swap space. Imagine the
> machine swapping these amounts of data in and out of swap: it would bring
> the machine to its knees; it would be totally unresponsive, I figure. Am I
> missing something? Am I wrong about something?
>
> Valeri
>
>
> > 12. Add the main partition to the remaining space on each drive:
> > gpart add -t freebsd-zfs -l disk0 da0
> > gpart add -t freebsd-zfs -l disk1 da1
> > gpart add -t freebsd-zfs -l disk2 da2
> > gpart add -t freebsd-zfs -l disk3 da3
> > gpart add -t freebsd-zfs -l disk4 da4
> > gpart add -t freebsd-zfs -l disk5 da5
> > gpart add -t freebsd-zfs -l disk6 da6
> > gpart add -t freebsd-zfs -l disk7 da7
> > gpart add -t freebsd-zfs -l disk8 da8
> > gpart add -t freebsd-zfs -l disk9 da9
> > gpart add -t freebsd-zfs -l disk10 da10
> > gpart add -t freebsd-zfs -l disk11 da11
> > gpart add -t freebsd-zfs -l disk12 da12
> > gpart add -t freebsd-zfs -l disk13 da13
> > gpart add -t freebsd-zfs -l disk14 da14
> > gpart add -t freebsd-zfs -l disk15 da15
> > 13. Install the Protective MBR (pmbr) and gptzfsboot loader to all
> drives:
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da2
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da3
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da4
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da5
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da6
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da7
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da8
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da9
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da10
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da11
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da12
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da13
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da14
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da15
> > 14. Load ZFS kernel module:
> > kldload /boot/kernel/opensolaris.ko
> > kldload /boot/kernel/zfs.ko
> > 15. Create the zfs pool:
> > zpool create -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot
> > raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2 /dev/gpt/disk3
> > /dev/gpt/disk4 /dev/gpt/disk5 /dev/gpt/disk6 /dev/gpt/disk7 /dev/gpt/disk8
> > /dev/gpt/disk9 /dev/gpt/disk10 /dev/gpt/disk11 /dev/gpt/disk12
> > /dev/gpt/disk13 /dev/gpt/disk14 /dev/gpt/disk15
> > 16. Create the zfs datasets:
> > zfs create -o mountpoint=none zroot/ROOT
> > zfs create -o mountpoint=/ zroot/ROOT/default
> > zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
> > zfs create -o mountpoint=/usr -o canmount=off zroot/usr
> > zfs create zroot/usr/home
> > zfs create -o setuid=off zroot/usr/ports
> > zfs create zroot/usr/src
> > zfs create -o mountpoint=/var -o canmount=off zroot/var
> > zfs create -o exec=off -o setuid=off zroot/var/audit
> > zfs create -o exec=off -o setuid=off zroot/var/crash
> > zfs create -o exec=off -o setuid=off zroot/var/log
> > zfs create -o atime=on zroot/var/mail
> > zfs create -o setuid=off zroot/var/tmp
> > 17. Set the mount point of the root for newly created datasets
> > zfs set mountpoint=/zroot zroot
> > 18. Set correct permissions on the temp directories:
> > chmod 1777 /mnt/tmp
> > chmod 1777 /mnt/var/tmp
> > 19. Tell zfs where to find the boot file system:
> > zpool set bootfs=zroot/ROOT/default zroot
> > 20. Create a directory for the zpool cache and tell zfs where to find it:
> > mkdir -p /mnt/boot/zfs
> > zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
> > 21. Set canmount=noauto so that the default boot environment (BE) does
> > not get mounted if a different boot environment is chosen from the boot
> > menu:
> > zfs set canmount=noauto zroot/ROOT/default
> > 22. Add commands which will be picked up later to build rc.conf and
> > loader.conf:
> > echo 'zfs_enable="YES"' >> /tmp/bsdinstall_etc/rc.conf.zfs
> > echo 'kern.geom.label.disk_ident.enable="0"' >>
> > /tmp/bsdinstall_boot/loader.conf.zfs
> > echo 'kern.geom.label.gptid.enable="0"' >>
> > /tmp/bsdinstall_boot/loader.conf.zfs
> > 23. exit
> > 24. At this point, the base system will be installed.
> > 25. Enter the root password.
> > 26. Under Network Configuration, select  vmx0
> > 27. Would you like to configure IPv4 for this interface:  Yes
> > 28. Would you like to use DHCP to configure this interface:  No
> > 29. Enter the following information:  IP Address: 10.1.2.3, Subnet Mask:
> > 255.255.255.0, Default Router: 10.1.2.1
> > 30. Would you like to configure IPv6 for this interface:  No
> > 31. Search:  <blank>, IPv4 DNS #1: 8.8.8.8, IPv4 DNS #2: 8.8.4.4
> > 32. On the Time Zone Selector screen, choose America -- North and South,
> > then scroll down and choose New York.
> > Choose Eastern Time and, when asked whether EST looks reasonable, choose Yes
> > 33. Choose Skip then Skip again to skip the time adjustment.
> > 34. On the services you would like to have started at boot screen, choose:
> > sshd, ntpd and dumpdev
> > 35. On the system hardening page, ENABLE ALL OPTIONS.
> > 36. Would you like to add users to the installed system now:  No
> > 37. On the final configuration screen, choose Exit
> > 38. Before exiting, would you like to open a shell?  Yes
> > 39. zpool set cachefile= zroot
> > exit
> > 40. Installation is complete screen:  Reboot
> >
> >
> >
> > On Wed, May 3, 2017 at 4:46 PM, Ben Woods <woodsb02@gmail.com> wrote:
> >
> >> On Wed, 3 May 2017 at 2:31 am, Peter Pauly <ppauly@gmail.com> wrote:
> >>
> >>> I'm using the option in the installer where you go out to a shell
> >>> prompt
> >>> during the partitioning step in the installer on FreeBSD 11-Release and
> >>> booted off of the CD. All is going well until I get to this step:
> >>>
> >>> zpool create zroot raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk3
> >>> ...
> >>> etc.
> >>> cannot mount '/zroot': failed to create mountpoint
> >>>
> >>> The instructions when going out to the shell say I have to mount the
> >>> filesystem under /mnt but /mnt is read-only.
> >>>
> >>> I tried to use the Guided Auto (ZFS) but it doesn't work with more than
> >>> 10 drives.
> >>>
> >>> What am I doing wrong?
> >>
> >>
> >> Hi Peter,
> >>
> >> When I do manual ZFS partitioning during installs, I follow the commands
> >> used by the actual bsdinstall scripts (the ones that would have been
> >> executed if I had used the auto mode).
> >>
> >> A copy of them can be viewed online here:
> >>
> >> https://svnweb.freebsd.org/base/head/usr.sbin/bsdinstall/scripts/zfsboot?view=markup#l1313
> >>
> >>
> >> Essentially the zpool create command needs to have:
> >>
> >> zpool create -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot
> >> raidz2 /dev/gpt/disk0 ...
> >>
> >> Regards,
> >> Ben
> >>
> >>> --
> >>
> >> --
> >> From: Benjamin Woods
> >> woodsb02@gmail.com
> >>
> > _______________________________________________
> > freebsd-questions@freebsd.org mailing list
> > https://lists.freebsd.org/mailman/listinfo/freebsd-questions
> > To unsubscribe, send any mail to
> > "freebsd-questions-unsubscribe@freebsd.org"
>
>
> ++++++++++++++++++++++++++++++++++++++++
> Valeri Galtsev
> Sr System Administrator
> Department of Astronomy and Astrophysics
> Kavli Institute for Cosmological Physics
> University of Chicago
> Phone: 773-702-4247
> ++++++++++++++++++++++++++++++++++++++++
>
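
P.S. On the swap question: sixteen 8G partitions add up to 128G of swap in
total. If I keep that layout, my understanding is that the labeled swap
partitions would be listed in /etc/fstab along these lines (a sketch of my
own using the swapN GPT labels from step 11, not something taken from the
install itself):

```
# /etc/fstab entries for the labeled swap partitions, one line per disk
/dev/gpt/swap0	none	swap	sw	0	0
/dev/gpt/swap1	none	swap	sw	0	0
# ...and so on through /dev/gpt/swap15
```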


