Date: Fri, 20 Dec 2013 17:26:06 -0500
From: Adam McDougall
To: Devin Teske
Cc: "stable@FreeBSD.org", "Teske, Devin"
Subject: Re: bsdinstall, zfs booting, gpt partition order suitable for volume expansion
Message-ID: <52B4C3FE.2050706@egr.msu.edu>

On 12/19/2013 02:19, Teske, Devin wrote:
>
> On Dec 18, 2013, at 8:31 AM, Adam McDougall wrote:
>
>> [snip]
>> I have posted /tmp/bsdinstall_log at: http://p.bsd-unix.net/ps9qmfqc2
>>
>
> I think this logging stuff I put so much effort into is really paying
> dividends. I'm finding it really easy to debug issues that others have
> run into.
>
>
>> The corresponding procedure:
>>
>> Virtualbox, created VM with 4 2.0TB virtual hard disks
>> Install
>> Continue with default keymap
>> Hostname: test
>> Distribution Select: OK
>> Partitioning: ZFS
>> Pool Type/Disks: stripe, select ada0-3 and hit OK
>> Install
>> Last Chance! YES
>>
>
> I've posted the following commits to 11.0-CURRENT:
>
> http://svnweb.freebsd.org/base?view=revision&revision=259597
> http://svnweb.freebsd.org/base?view=revision&revision=259598
>
> As soon as a new ISO is rolled, can you give the above another go?
> I rolled my own ISO with the above and tested cleanly.
>

I did some testing with 11.0-HEAD-r259612-JPSNAP: a 4-disk raidz worked, a 4-disk mirror worked, and 1-3 disk stripes worked, but a 4-disk stripe got "ZFS: i/o error - all block copies unavailable", although the point during loading at which this happens varies. Sometimes the loader would fault, sometimes it just can't load the kernel, sometimes it prints some of the color text and sometimes it doesn't even get that far. Might depend on the install? Also, I did not try exhaustive combinations such as 2 or 3 disks in a mirror, 4 in a raidz2, or anything more than 4 disks.
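When I get a chance I'll try to narrow down where the 4-disk stripe breaks. My usual approach (a rough sketch, not yet tried against this exact failure; the device names just match what this VM uses) is to boot the install media again, drop to the Live CD shell, and check whether each stripe member still carries a readable label and whether the pool imports from userland:

# Check that every member of the stripe has a readable ZFS label:
for d in ada0p3 ada1p3 ada2p3 ada3p3; do
        echo "== $d =="
        zdb -l /dev/$d | grep -E 'name:|guid:|state:'
done

# If the pool imports cleanly here but still won't boot, that points at
# the loader's ZFS reader rather than at the on-disk data:
zpool import -o altroot=/mnt -o readonly=on zroot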
I'll try to test a 10 ISO tomorrow if I can, either a fresh JPSNAP or RC3 if it is ready by the time I am, maybe both.

I also found another issue, not very dire: if you install to X disks with the pool named "zroot", then reinstall on X-1 or fewer of those disks with the same pool name, the install fails with "cannot import 'zroot': more than one matching pool, import by numeric ID instead" because it sees both the old and the new zroot (which makes sense, since it should not be touching disks we didn't ask about):

DEBUG: zfs_create_boot: Temporarily exporting ZFS pool(s)...
DEBUG: zfs_create_boot: zpool export "zroot"
DEBUG: zfs_create_boot: retval=0
DEBUG: zfs_create_boot: gnop destroy "ada0p3.nop"
DEBUG: zfs_create_boot: retval=0
DEBUG: zfs_create_boot: gnop destroy "ada1p3.nop"
DEBUG: zfs_create_boot: retval=0
DEBUG: zfs_create_boot: gnop destroy "ada2p3.nop"
DEBUG: zfs_create_boot: retval=0
DEBUG: zfs_create_boot: Re-importing ZFS pool(s)...
DEBUG: zfs_create_boot: zpool import -o altroot="/mnt" "zroot"
DEBUG: zfs_create_boot: retval=1
cannot import 'zroot': more than one matching pool
import by numeric ID instead
DEBUG: f_dialog_max_size: dialog --print-maxsize = [MaxSize: 25, 80]
DEBUG: f_getvar: var=[height] value=[6] r=0
DEBUG: f_getvar: var=[width] value=[54] r=0

Full log at: http://p.bsd-unix.net/p2juq9y25

Workaround: use a different pool name, or use a shell to manually zpool labelclear the locations holding the old pool label (an advanced-user operation; see the first sketch at the end of this mail).

Suggested solution: avoid exporting and importing the pool? I don't think you need to unload gnop; zfs should be able to find the underlying partition fine on its own at the next boot, and the install would go quicker without the export and import. Or were you doing it for another reason, such as the cache file?

Alternative: would it be possible to determine the numeric ID before exporting, so it can be used for the import? I've sketched that at the end of this mail too. But that would be adding complexity, as opposed to removing complexity by eliminating the export/import if possible.
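For the record, the labelclear workaround I have in mind looks like this (an untested sketch; it assumes the previous install used four disks and the new three-disk install left the stale label on ada3p3, the disk it didn't touch):

# DANGEROUS: labelclear destroys the ZFS metadata on that partition, so
# be certain the device really belongs to the old, abandoned pool.
# -f is needed because the label describes a pool that was merely
# exported, not destroyed.
zpool labelclear -f /dev/ada3p3

After that, only one pool named "zroot" is visible and the plain import by name succeeds again.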
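And to make the numeric-ID alternative concrete, here is roughly what I mean, expressed as the commands zfs_create_boot would run (an untested sketch; it assumes the pool exposes a "guid" property and that zpool get supports -H/-o on this branch, otherwise a little awk would be needed):

# Capture the pool's numeric ID while it is still imported...
guid=$(zpool get -H -o value guid "zroot")

zpool export "zroot"
gnop destroy "ada0p3.nop" "ada1p3.nop" "ada2p3.nop"

# ...then re-import by that ID, which stays unambiguous even when a
# stale pool with the same name is visible on other disks:
zpool import -o altroot="/mnt" "$guid"

That only helps if the GUID can be read reliably, of course, and as I said above, skipping the export/import entirely would be simpler still.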