From: Trent Nelson <trent@snakebite.org>
Date: Thu, 1 Sep 2011 13:17:50 -0400
To: Daniel Mayfield, freebsd-fs@freebsd.org
Subject: Re: gptzfsboot and 4k sector raidz

On 01-Sep-11 12:30 PM, Daniel Mayfield wrote:
>
> On Sep 1, 2011, at 7:56 AM, Trent Nelson wrote:
>
>> On 01-Sep-11 2:11 AM, Daniel Mayfield wrote:
>>> I just set this up on an Athlon64 machine I have w/ 4 WD EARS
>>> 2TB disks. I followed the instructions here:
>>> http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/,
>>> but built just a single pool, so three partitions per disk (boot,
>>> swap and zfs). I'm using the mfsBSD image to install the boot
>>> code. When I reboot to actually come up from ZFS, the loader
>>> spins for half a second and then the machine reboots. I've seen
>>> a number of bug reports on gptzfsboot and 4k-sector pools, but I
>>> never saw one fail this early. What data would the ZFS people
>>> need to help fix this?
>>
>> FWIW, I experienced the exact same issue about a week ago with
>> four new WD EARS 2TB disks. I contemplated looking into fixing
>> it, until I noticed the crazy disk usage with 4K sectors. On my
>> old box (mirrored 512-byte drives), my /usr/src dataset was
>> ~450MB; on the new box with the 2TB 4K-sector drives, /usr/src
>> was 1.5-something GB. Exact same settings.
>
> I noticed that the free data space was also bigger. I tried it
> with raidz on the 512B sectors and it claimed to have only 5.3T of
> space. With 4KB sectors, it claimed to have 7.25T of space. Seems
> like something is wonky in the space calculations?

Hmmmm. It hadn't occurred to me that the space calculations might be
wonky. That could explain why the disk usage I was seeing on 4K
sectors was so much higher than on 512-byte sectors for all of my ZFS
datasets.
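For anyone following along, the part of that howto that matters here
is the gnop(8) trick, which forces ZFS to create the pool with 4K
sectors (ashift=12). From memory it goes something like this; the
gpt/disk* labels and the pool name are placeholders, not what either
of us actually used:

    # Overlay one zfs partition with a temporary 4K-sector provider,
    # then create the raidz across it; ZFS derives the pool's ashift
    # from the largest sector size it sees at creation time.
    gnop create -S 4096 /dev/gpt/disk0
    zpool create tank raidz gpt/disk0.nop gpt/disk1 gpt/disk2 gpt/disk3

    # The .nop device is only needed at creation; ashift is permanent
    # for the life of the vdev.
    zpool export tank
    gnop destroy /dev/gpt/disk0.nop
    zpool import tank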
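And if anyone wants to double-check which sector size a pool actually
ended up with, zdb will show the ashift (9 means 512-byte sectors, 12
means 4K); 'tank' here is just my pool name:

    # Dump the cached pool configuration and pick out the ashift line.
    zdb -C tank | grep ashift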
Here's my zpool/zfs output w/ 512-byte sectors (4-disk raidz):

[root@flanker/ttypts/0(~)#] zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  7.12T   698G  6.44T     9%  1.16x  ONLINE  -

[root@flanker/ttypts/0(~)#] zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   604G  4.74T  46.4K  legacy

It's a raidz1-0 of four 2TB disks, so the space available should be
(4-1=3)*2TB = 6TB? Although I presume that's 6 marketing terabytes,
which translates to 6000000000000/(1024^4) ~= 5.46TiB. And I've got a
64k boot, 8G swap and 16G scratch partition on each drive *before* the
tank, so, eh, I guess 4.74T sounds about right.

The 7.12T reported by zpool doesn't take the space lost to raidz
parity into account; as far as I can tell that's expected, since
zpool list reports raw pool capacity, parity included, while zfs list
reports usable space. *shrug*

Enough about sizes; what's your read/write performance like between
512-byte and 4K sectors? I didn't think to test performance in the 4K
configuration; I really wish I had, now.

    Trent.
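P.S. If you do get a chance to benchmark the two configurations, even
a crude dd run would be interesting. Something along these lines; the
path and sizes are placeholders, and it wants a dataset with
compression off so the zeroes actually hit the disks:

    # Sequential write, then sequential read, of an 8GB file; bs=1m
    # keeps syscall overhead out of the way.
    dd if=/dev/zero of=/tank/scratch/bigfile bs=1m count=8192
    dd if=/tank/scratch/bigfile of=/dev/null bs=1m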
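P.P.S. The marketing-TB conversion above is easy to sanity-check with
bc(1):

    # 6 "marketing" TB (10^12 bytes each) expressed in TiB (2^40 bytes).
    echo "scale=3; 6 * 10^12 / 2^40" | bc
    # prints 5.456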