From: Daniel Staal <DStaal@usa.net>
To: Matthew Seaman, freebsd-questions@freebsd.org
Date: Sat, 19 Feb 2011 08:18:00 -0500
Subject: Re: ZFS-only booting on FreeBSD

--As of February 19, 2011 12:01:37 PM +0000, Matthew Seaman is alleged to have said:

>> Let's say I install a FreeBSD system using a ZFS-only filesystem into a
>> box with hot-swappable hard drives, configured with some redundancy.
>> Time passes, one of the drives fails, and it is replaced and rebuilt
>> using the ZFS tools. (Possibly automatically, or possibly by just doing
>> a 'zpool replace'.)
>>
>> Is that box still bootable? (It's still running, but could it *boot*?)
>
> Why wouldn't it be? The configuration in the Wiki article sets aside a
> small freebsd-boot partition on each drive, and the instructions tell
> you to install boot blocks as part of that partitioning process. You
> would have to repeat those steps when you install your replacement
> drive, before you add the new disk into your zpool.
>
> So long as the BIOS can read the bootcode from one or other of the
> drives, and can then access /boot/zfs/zpool.cache to learn about what
> zpools you have, then the system should boot.

So, assuming a forgetful sysadmin (or someone new who didn't know about the
setup in the first place), is that a yes or a no for the one-drive-replaced
case?

It definitely is a 'no' for the all-drives-replaced case, as I suspected:
you would need to have repeated the partitioning manually, rather than
letting ZFS handle it.
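(For my own notes, and for anyone else following along: I *think* the
per-disk prep Matthew is describing looks roughly like the following, going
by the RootOnZFS wiki article. The device name, GPT label, and partition
size here are made-up placeholders, and I haven't run these exact commands
on a replacement disk, so treat it as a sketch rather than a recipe.)

  # Recreate the wiki-style GPT layout on the replacement disk
  # (da1 is hypothetical -- use whatever the new disk shows up as):
  gpart create -s gpt da1
  gpart add -t freebsd-boot -s 128k da1
  gpart add -t freebsd-zfs -l disk1 da1

  # Reinstall the boot blocks into the new freebsd-boot partition:
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1

  # Only then hand the disk back to ZFS to resilver:
  zpool replace zroot <old-disk> gpt/disk1   # <old-disk> = the failed device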
>> If not, what's the minimum needed to support booting from another disk,
>> and using the ZFS filesystem for everything else?
>
> This situation is described in the Boot ZFS system from UFS article
> here: http://wiki.freebsd.org/RootOnZFS/UFSBoot
>
> I use this sort of setup for one system where the zpool has too many
> drives in it for the BIOS to cope with; it works very well booting from
> a USB key.

Thanks; I wasn't sure if that procedure would work if the bootloader was on
a different physical disk than the rest of the filesystem. Nice to hear
from someone who's tried it that it works. ;)

> In fact, while the partitioning layout described in the
> http://wiki.freebsd.org/RootOnZFS articles is great for holding the OS
> and making it bootable, for using ZFS to manage serious quantities of
> disk storage, other strategies might be better. It would probably be a
> good idea to have two zpools: one for the bulk of the space, built from
> whole disks (i.e. without using gpart or similar partitioning), in
> addition to your bootable zroot pool. Quite apart from wringing the
> maximum usable space out of your available disks, this also makes it
> much easier to replace failed disks or use hot spares.

(I've put a rough sketch of that two-pool layout at the end of this
message, for reference.)

If a single disk failure in the zpool can render the machine unbootable,
it's better still to have a dedicated bootloader drive: it increases the
mean time between failures of your boot device (and therefore your
machine), and it reduces the 'gotcha' value. In a hot-swap environment
booting directly off of ZFS, you could fail a reboot a month (or more...)
after the disk replacement, and finding the problem then will be a headache
until someone remembers this setup tidbit.

If the 'fail to boot' only happens once *all* the original drives have been
replaced, the mean time between failures is better in the ZFS situation,
but the 'gotcha' value becomes absolutely huge: since you can replace one
(or two, or more) disks without issue, the problem will likely take years
to develop.

Ah well, price of the bleeding edge. ;)

Daniel T. Staal

---------------------------------------------------------------
This email copyright the author. Unless otherwise noted, you are
expressly allowed to retransmit, quote, or otherwise use the
contents for non-commercial purposes. This copyright will expire
5 years after the author's death, or in 30 years, whichever is
longer, unless such a period is in excess of local copyright law.
---------------------------------------------------------------
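P.S. The rough sketch of the two-pool layout Matthew describes, as promised
above. Pool names, device names, and the RAID level are all invented for
the example, and I haven't tested this exact command:

  # zroot: the small, bootable, partitioned pool from the RootOnZFS wiki
  # (already in place in this scenario -- e.g. mirrored gpt/disk0 and
  # gpt/disk1).

  # tank: bulk storage built from whole, unpartitioned disks, with a hot
  # spare ZFS can swap in on its own when a member fails:
  zpool create tank raidz2 da2 da3 da4 da5 spare da6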