From owner-freebsd-fs@FreeBSD.ORG Thu Sep  1 13:07:09 2011
From: Trent Nelson <trent@snakebite.org>
Date: Thu, 1 Sep 2011 08:56:58 -0400
To: Daniel Mayfield
Cc: freebsd-fs@freebsd.org
Subject: Re: gptzfsboot and 4k sector raidz
Message-ID: <4E5F811A.2040307@snakebite.org>

On 01-Sep-11 2:11 AM, Daniel Mayfield wrote:
> I just set this up on an Athlon64 machine I have w/ 4 WD EARS 2TB
> disks. I followed the instructions here:
> http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/,
> but just building a single pool so three partitions per disk (boot,
> swap and zfs). I'm using the mfsBSD image to do the boot code. When
> I reboot to actually come up from ZFS, the loader spins for half a
> second and then the machine reboots. I've seen a number of bug
> reports on gptzfsboot and 4k sector pools, but I never saw one fail
> so early. What data would the ZFS people need to help fix this?

FWIW, I experienced the exact same issue about a week ago with four new
WD EARS 2TB disks. I contemplated looking into fixing it, until I
noticed the crazy disk usage with 4K sectors. On my old box, my
/usr/src dataset was ~450MB (mirrored 512-byte drives); on the new box
with the 2TB 4K-sector drives, /usr/src was 1.5-something GB. Exact
same settings. This appeared to be the case for *everything*: every
file system/ZFS dataset seemed to be consuming two to three times more
space on the 4K-sector box.

So, combine that with the fact that I couldn't boot into it anyway, and
I ditched the 4K-sector effort and just rebuilt with raidz as per
normal (i.e. with 512-byte sectors).

One week later? Disk usage is sensible, as expected, but performance
(especially writing) is pretty horrid. As much as I'd like to blame
raidz overhead, I'm not sure it's the problem; I've got a gstripe of
4x16GB partitions at the start of each 2TB drive as /scratch, and
dd'ing /dev/zero to that doesn't yield write speeds faster than
~20-30MB/s if I'm lucky. Writing to the raidz partition nets about
15-20MB/s in very bursty peaks. NFS and Samba performance are even
worse: 2-3MB/s sustained if I'm lucky, with the odd burst of 20MB/s
every so often. (The box is a lowly dual-core Athlon 1800 w/ 8GB RAM,
8-stable from yesterday.)
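In case it helps narrow things down on your end, here's roughly the
shape of the gnop dance that howto relies on to end up with an
ashift=12 pool, plus a quick way to confirm the pool really came out
with 4K sectors. Pool name and gpt labels below are placeholders, not
anyone's actual layout:

  # gnop create -S 4096 /dev/gpt/disk0
  # zpool create tank raidz /dev/gpt/disk0.nop /dev/gpt/disk1 \
        /dev/gpt/disk2 /dev/gpt/disk3
  # zpool export tank
  # gnop destroy /dev/gpt/disk0.nop
  # zpool import tank
  # zdb -C tank | grep ashift

One .nop provider per vdev is enough, since zpool picks the vdev's
ashift from the largest logical sector size it sees at create time. The
zdb line should report ashift: 12 if the 4K trick actually took; if it
shows ashift: 9, the pool silently fell back to 512-byte sectors.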
So, uh, no solution from my end, but perhaps some more problems for you
to run into if you get it to boot ;-)

    Trent.