From: Shane Ambler <FreeBSD@shaneware.biz>
Date: Thu, 07 Mar 2013 15:11:29 +1030
To: Doug Poland
Cc: FreeBSD-questions <freebsd-questions@freebsd.org>
Subject: Re: Booting from an arbitrary disk in ZFS RAIDZ on 8.x
Message-ID: <51381A79.2070305@ShaneWare.Biz>
In-Reply-To: <20130306042456.GA92710@polands.org>
References: <20130305184446.GA81297@polands.org>
 <5136B047.3000700@ShaneWare.Biz> <20130306042456.GA92710@polands.org>

On 06/03/2013 14:54, Doug Poland wrote:
> On Wed, Mar 06, 2013 at 01:26:07PM +1030, Shane Ambler wrote:
>> On 06/03/2013 05:14, Doug Poland wrote:
>>> I have 6 disks in a RAIDZ configuration. All disks were sliced
>>> the same with gpart (da(n)p1,p2,p3) with bootcode written to
>>> index 1, swap on index 2 and freebsd-zfs on index 3.
>>>
>>> Given this configuration, I should be able to boot from any of
>>> the 6 disks in the RAIDZ. If this is a true statement, how do I
>>> make that happen from the loader prompt?
>>
>> You don't boot from an individual disk, you boot from a zpool - all
>> disks are linked together making one zpool "disk".
>>
> Something has to pick a physical device from which to boot, does it
> not? All the HP Smart Array 6i controller knows is I have 6 RAID 0
> disks to present to the OS.
>

I meant to add that if the bootcode is installed on each disk, then
pointing the BIOS at any individual disk as the primary boot device
will lead to the boot process loading the zpool. Installing it on
each disk gives you boot redundancy to match the RAID redundancy in
the zpool. If you only have one disk with bootcode and it is the one
that needs replacing, then you can't boot. Then again, putting
bootcode on all 100 disks of a large pool would be overkill, but the
consistency may be easier to maintain.

>> I'm guessing that you ask as your machine isn't booting. You
>> probably need to boot from a CD and make adjustments.
>>
> Not exactly, I have a failing disk in slot 0, which corresponds to
> da0 in my device list (AKA gpt/disk0). I want to make sure I can
> boot if I pull this disk and replace it.
>

If the zpool's redundancy is sufficient for it to keep working
without the drive, it shouldn't make any difference how the disk
"disappears", only that the data is accessible/rebuildable.
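To make that concrete, something along these lines should cover it.
This is only a rough sketch - the da0-da5 device names and the boot
partition at index 1 come from your layout, while the pool name
"tank" and the gpt/disk0 label are just placeholders to adjust:

  # write the same bootcode to every member disk so the BIOS can
  # start the boot from any of them
  for d in da0 da1 da2 da3 da4 da5; do
      gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $d
  done

  # confirm the partition layout on a disk
  gpart show da0

  # after pulling the failing disk and partitioning its replacement
  # the same way as the others, let ZFS resilver onto it
  zpool replace tank gpt/disk0
  zpool status tank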
> I've had issues with this RAID controller in the past where it won't
> present the new disk to the OS. I've had to reboot, go into the
> RAID config and tell it it's a single RAID 0 device (stupid, I
> know).
>

When you think about it, as a RAID controller it shouldn't make
assumptions about how to use a new disk - should it add it to an
existing RAID set, replace a missing drive, or show it as a new
single drive? Being able to mark a socket as permanently JBOD could
be a useful feature though.

> The role of /boot/zfs/zpool.cache is a mystery to me. I believe it
> somehow tells ZFS what devices are in use. What if a disk goes
> offline or is removed?
>

As I understand it, zpool.cache contains the zpools mounted by the
system; after a reboot each zpool listed in the cache is re-imported.
I believe a recent commit enabled the vfs.root.mountfrom zpool to be
imported even if there is no cache available.

From what I have heard and seen, the data about which zpool a disk
belongs to and the role the disk plays in that zpool is stored on
each disk, duplicated at the beginning and end of the disk. In my
early experiments with starting clean, even after repartitioning
with gpart and zeroing out the start of the disks, zpool still said
they belonged to a pool.
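If you want to poke at that yourself, something like this shows it.
Again just rough examples - adjust the device paths, and "tank" is
only a placeholder pool name:

  # each vdev carries four copies of the ZFS label - two at the front
  # and two at the end of the partition - which is why zeroing only
  # the start of a disk doesn't make the old pool membership go away
  zdb -l /dev/da0p3

  # the cache file can be rebuilt by importing the pool with an
  # explicit cachefile (e.g. from a livefs/fixit environment) and
  # copying it back into place
  zpool import -o cachefile=/tmp/zpool.cache -R /mnt tank
  cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache

  # if your zpool version has it, labelclear wipes all the label
  # copies from a disk you want to reuse
  zpool labelclear -f /dev/da0p3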