Date:      Sat, 21 Dec 2013 22:17:52 -0500
From:      Adam McDougall <mcdouga9@egr.msu.edu>
To:        Devin Teske <dteske@freebsd.org>
Cc:        stable@freebsd.org, Devin.Teske@fisglobal.com
Subject:   Re: bsdinstall, zfs booting, gpt partition order suitable for volume expansion
Message-ID:  <52B659E0.8020904@egr.msu.edu>
In-Reply-To: <52B4C3FE.2050706@egr.msu.edu>
References:  <20131210175323.GB1728@egr.msu.edu> <93C924DB-E760-4830-B5E2-3A20160AD322@fisglobal.com> <2D40298B-39FA-4BA9-9AC2-6006AA0E0C9C@fisglobal.com> <73E28A82-E9FE-4B25-8CE6-8B0543183E7F@fisglobal.com> <20131218135326.GM1728@egr.msu.edu> <1AD35F39-35EB-4AAD-B4B1-AF21B2B6F6BA@fisglobal.com> <20131218163145.GA1630@egr.msu.edu> <A3745A4C-50F5-4411-BE21-6DACF3883715@fisglobal.com> <52B4C3FE.2050706@egr.msu.edu>

On 12/20/2013 17:26, Adam McDougall wrote:
> On 12/19/2013 02:19, Teske, Devin wrote:
>>
>> On Dec 18, 2013, at 8:31 AM, Adam McDougall wrote:
>>
>>> [snip]
>>> I have posted /tmp/bsdinstall_log at: http://p.bsd-unix.net/ps9qmfqc2
>>>
>>
>> I think this logging stuff I put so much effort into is really paying dividends.
>> I'm finding it really easy to debug issues that others have run into.
>>
>>
>>> The corresponding procedure:
>>>
>>> Virtualbox, created VM with 4 2.0TB virtual hard disks
>>> Install
>>> Continue with default keymap 
>>> Hostname: test
>>> Distribution Select: OK    
>>> Partitioning: ZFS
>>> Pool Type/Disks: stripe, select ada0-3 and hit OK
>>> Install
>>> Last Chance! YES
>>>
>>
>> I've posted the following commits to 11.0-CURRENT:
>>
>> http://svnweb.freebsd.org/base?view=revision&revision=259597
>> http://svnweb.freebsd.org/base?view=revision&revision=259598
>>
>> As soon as a new ISO is rolled, can you give the above another go?
>> I rolled my own ISO with the above and tested cleanly.
>>
> 
> I did some testing with 11.0-HEAD-r259612-JPSNAP: a 4-disk raidz and a
> 4-disk mirror worked, and stripes of 1-3 disks worked, but a 4-disk
> stripe got "ZFS: i/o error - all block copies unavailable", although
> the point during the loader at which this happens varies.  Sometimes
> the loader would fault, sometimes it just couldn't load the kernel,
> sometimes it printed some of the color text, and sometimes it didn't
> even get that far.  It might depend on the install?  Also, I did not
> try exhaustive combinations such as 2-3 disks in a mirror, 4 in a
> raidz2, or anything more than 4 disks.  I'll try to test a 10 ISO
> tomorrow if I can, either a fresh JPSNAP or RC3 if it is ready by the
> time I am, maybe both.

Good news: I believe this was a "hardware" error.  VirtualBox (in SATA
mode, along with a virtual CD-ROM) and XenServer 6.0/6.2 appear to make
a maximum of 3 virtual hard disks visible to the FreeBSD bootloader.
This is easier to tell when booting from the CD, since you can watch it
enumerate them, but if you are booting from the disks themselves, it may
not get that far.  Interestingly, when you tell VirtualBox to use SCSI
disks, you max out at 4 bootable disks instead of 3.  Installation then
works on 4 disks but not 5 (understandably).  The symptoms are therefore
consistent with that limit, and not a fault of the installer or the
installation.  I've heard of similar issues on real hardware, but since
this is a new install, nothing should be lost.

Thanks for making the improvements and bug fixes!

The issue below still stands, but I'd say it is not urgent for 10.0.

> 
> I also found another issue, not very dire: if you install to X disks
> as "zpool", then reinstall on X-1 or fewer disks as "zpool", the
> install fails with "cannot import 'zroot': more than one matching pool
> import by numeric ID instead"
> because it sees both the old and the new zroot (which makes sense,
> since it should not be touching disks we didn't ask about):
> 
> DEBUG: zfs_create_boot: Temporarily exporting ZFS pool(s)...
> DEBUG: zfs_create_boot: zpool export "zroot"
> DEBUG: zfs_create_boot: retval=0 <no output>
> DEBUG: zfs_create_boot: gnop destroy "ada0p3.nop"
> DEBUG: zfs_create_boot: retval=0 <no output>
> DEBUG: zfs_create_boot: gnop destroy "ada1p3.nop"
> DEBUG: zfs_create_boot: retval=0 <no output>
> DEBUG: zfs_create_boot: gnop destroy "ada2p3.nop"
> DEBUG: zfs_create_boot: retval=0 <no output>
> DEBUG: zfs_create_boot: Re-importing ZFS pool(s)...
> DEBUG: zfs_create_boot: zpool import -o altroot="/mnt" "zroot"
> DEBUG: zfs_create_boot: retval=1 <output below>
> cannot import 'zroot': more than one matching pool
> import by numeric ID instead
> DEBUG: f_dialog_max_size: dialog --print-maxsize = [MaxSize: 25, 80]
> DEBUG: f_getvar: var=[height] value=[6] r=0
> DEBUG: f_getvar: var=[width] value=[54] r=0
> 
> Full log at: http://p.bsd-unix.net/p2juq9y25
> 
> Workaround: use a different pool name, or drop to a shell and manually
> run zpool labelclear on the partitions that still carry the old pool
> label (an advanced-user operation).
> 
> Suggested solution: avoid exporting and importing the pool?  I don't
> think you need to destroy the gnop devices; ZFS should be able to find
> the underlying partitions fine on its own at the next boot, and the
> install would go quicker without the export and import.  Or were you
> doing it for another reason, such as the cache file?
> 
> Alternative: would it be possible to determine the pool's numeric ID
> before exporting, so it can be used for the import?  But that would add
> complexity, as opposed to removing it by eliminating the export/import
> if possible.
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
> 
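The labelclear workaround quoted above can be sketched as a small shell
script.  This is only an illustration: the sample text is a canned
stand-in for real `zpool import` output, and the device names and exact
layout are assumptions, so check the generated commands by hand before
running anything on a live system.

```shell
#!/bin/sh
# Sketch of the workaround: emit one `zpool labelclear` command per leaf
# vdev of the stale pool.  The string below is a canned stand-in for
# real `zpool import` output (its layout and device names are assumed).
sample='  pool: zroot
     id: 1111111111111111111
  state: ONLINE
config:

	zroot      ONLINE
	  ada0p3   ONLINE
	  ada1p3   ONLINE'

# Pick out the leaf devices and print a labelclear command for each.
cmds=$(printf '%s\n' "$sample" |
    awk '$2 == "ONLINE" && $1 ~ /^ada/ {
        print "zpool labelclear -f /dev/" $1
    }')
printf '%s\n' "$cmds"
```

Clearing the stale labels this way makes the pool name unambiguous
again, at the cost of destroying the old pool's metadata.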
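On the numeric-ID alternative quoted above: one way to sketch it (not
the installer's actual code) is to record the pool GUID while the pool
is still imported, then hand that GUID to `zpool import` after the
export.  `zpool get -H -o value guid` is a real ZFS command; the canned
function below stands in for it so the sequence can be shown without a
live pool.

```shell
#!/bin/sh
# Sketch: capture the GUID before export, import by GUID afterwards.
# On a real system the sequence would be roughly:
#   guid=$(zpool get -H -o value guid zroot)
#   zpool export zroot
#   zpool import -o altroot=/mnt "$guid" zroot
# Here a stand-in function replaces the live `zpool get` call.
zpool_get_guid() { printf '%s\n' '1234567890123456789'; }  # assumed value

guid=$(zpool_get_guid)
printf 'would run: zpool import -o altroot=/mnt %s zroot\n' "$guid"
```

Importing by GUID sidesteps the "more than one matching pool" error
because the GUID is unique per pool, while the name is not.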



