Date: Sat, 16 May 2009 18:35:25 +0000
From: Pegasus Mc Cleaft <ken@mthelicon.com>
To: svn-src-all@freebsd.org
Cc: svn-src-head@freebsd.org, Doug Rabson <dfr@freebsd.org>, src-committers@freebsd.org
Subject: Re: svn commit: r192194 - in head/sys: boot/i386/zfsboot boot/zfs cddl/boot/zfs
Message-ID: <200905161835.26281.ken@mthelicon.com>
In-Reply-To: <200905161048.n4GAmKRh057122@svn.freebsd.org>
References: <200905161048.n4GAmKRh057122@svn.freebsd.org>
On Saturday 16 May 2009 10:48:20 Doug Rabson wrote:
> Author: dfr
> Date: Sat May 16 10:48:20 2009
> New Revision: 192194
> URL: http://svn.freebsd.org/changeset/base/192194
>
> Log:
>   Add support for booting from raidz1 and raidz2 pools.
>
> Modified:
>   head/sys/boot/i386/zfsboot/zfsboot.c
>   head/sys/boot/zfs/zfsimpl.c
>   head/sys/cddl/boot/zfs/README
>   head/sys/cddl/boot/zfs/zfsimpl.h
>   head/sys/cddl/boot/zfs/zfssubr.c

I think there may be a bug when you boot the machine from a drive that is a member of a ZFS mirror and you also have raidz pools elsewhere. On reboot, I would get a message saying there was no bootable kernel, and it dropped me down to the "OK" prompt. At that point, lsdev would show all the pools (both the ZFS mirror and the raidz pools), and "ls" would return an error saying there were too many open files.

I was able to work around the problem by pulling all the drives in the raidz pool, booting into single user, reconnecting the drives and using atacontrol attach to bring them online, then going to multi-user and running /etc/rc.d/zfs start.

The only thing I haven't tried, and it may be the key to the problem, is reloading the boot-strap on the bootable drives. Would that make any difference?

Peg
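For reference, the workaround above could look roughly like this at the console (a sketch only; the ATA channel numbers are hypothetical examples and depend on the controller layout, so adjust them to your hardware):

```shell
# Sketch of the workaround described above (FreeBSD of that era).
# The raidz drives were physically disconnected so the loader could
# find the kernel on the ZFS mirror; after booting to single-user
# mode, reconnect them and ask the ATA layer to rescan each channel.
# Channel numbers (ata2, ata3) are hypothetical examples:
atacontrol attach ata2
atacontrol attach ata3

# Then continue to multi-user and start ZFS so the raidz pool
# comes online:
exit                    # leave the single-user shell, resume boot
/etc/rc.d/zfs start     # run the ZFS rc script after going multi-user
```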