Date: Thu, 21 Oct 2010 19:32:46 +0200
From: "Emil Smolenski" <ambsd@raisa.eu.org>
To: "Matthew Seaman" <m.seaman@infracaninophile.co.uk>
Cc: freebsd-stable@freebsd.org
Subject: Re: BIOS limitations on size of bootable zpool?
Message-ID: <op.vkxscukzqvde5b@bolt.zol>
In-Reply-To: <4CC02B85.1050604@infracaninophile.co.uk>
References: <4CC02B85.1050604@infracaninophile.co.uk>
On Thu, 21 Oct 2010 14:01:09 +0200, Matthew Seaman <m.seaman@infracaninophile.co.uk> wrote:

> Dear all,
>
> I'm happy that gptzfsloader will work with just about any zpool
> configuration you could imagine, but...
>
> We have an HP DL185 G5 with a P400 raid array, fully populated with 12
> drives. Since there's no JBOD mode (or at least, not one you can get to
> from the BIOS configuration screens), the array is configured as 12
> single disk RAID0 arrays. As I posted about previously, we had FreeBSD
> 8.1-STABLE installed on a 6 disk raidz1, and everything was happy.
> However, we were having some difficulty adding a second vdev -- another
> raidz1 using the other 6 drives.
>
> Well, to cut a long story short: eventually we did this by hot-plugging
> disks 7 -- 12 after FreeBSD was up and running. Everything was cool and
> dandy, and we had the server running on all drives after setting up gpt
> partition tables and doing a 'zpool add'.
>
> Until we tested rebooting.
>
> On attempted reboot, the loader reported 8 drives, and subsequently ZFS
> flailed with the dreaded "ZFS: i/o error - all block copies unavailable"
> error. Now, we've had a poke through FreeBSD sources, and as far as we
> can tell, FreeBSD will work with up to 31 devices being reported from
> the BIOS. Is this correct, and the limitation is in what the hardware
> is reporting to the loader at the early stages of booting?
>
> Any good tricks for getting round this sort of limitation? Our current
> plan is to set up a USB memstick with /boot on it, by adapting the
> instructions here: http://wiki.freebsd.org/RootOnZFS/UFSBoot -- which
> isn't ideal as the memstick will be a single point of failure.

I think I've encountered the same problem as you. In my configuration there is also an HP server with an HP SmartArray controller and six disks configured as single-disk RAID0 logical units. I don't think it is related to the size of the pool (try testing with small GPT partitions -- for me that doesn't work either).
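For reference, attaching the second raidz1 vdev as described above would go roughly like this. A minimal sketch only: the device names (da6..da11), GPT labels, and the pool name "tank" are assumptions, not taken from the original report -- adjust them to the actual layout:

```sh
# Partition each hot-plugged disk the same way as the first six
# (GPT scheme with a single freebsd-zfs partition), e.g. for da6:
gpart create -s gpt da6
gpart add -t freebsd-zfs -l disk7 da6

# ...repeat for da7 through da11, then attach all six labels
# as a second raidz1 vdev to the existing pool:
zpool add tank raidz1 gpt/disk7 gpt/disk8 gpt/disk9 \
    gpt/disk10 gpt/disk11 gpt/disk12
```

Note that 'zpool add' cannot be undone for raidz vdevs, so it is worth double-checking the device list (and running with -n first for a dry run) before committing.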
I think it is something with this specific hardware and ZFS. I will provide more details soon, but for now could you test the following configurations:

- mirror (it works for me),
- raidz(2) (it doesn't work for me),
- raidz(2) but without the SmartArray controller -- adX or adaX disks (it works for me).

Please try the 'status' command when you see the "ZFS: i/o error..." message.

--
am
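To expand on that suggestion: when the i/o error is printed you should be able to drop to the boot prompt, where 'status' reports the pools the boot code has discovered, and (in the full loader) 'lsdev' lists the disks visible through the BIOS. A sketch of the interaction -- the exact prompt text varies between gptzfsboot and the loader, and no output is shown here since it is machine-specific:

```text
ZFS: i/o error - all block copies unavailable

OK status      # pools and devices the ZFS boot code found
OK lsdev       # disks the loader can see via BIOS INT 13h
```

Comparing the lsdev output against the 12 configured logical drives should confirm whether the BIOS is only exposing 8 of them to the loader.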