Date:      Thu, 1 Sep 2011 11:30:23 -0500
From:      Daniel Mayfield <dan@3geeks.org>
To:        freebsd-fs@freebsd.org
Subject:   Re: gptzfsboot and 4k sector raidz
Message-ID:  <7FAD4A4D-2465-4A80-A445-1D34424F8BB6@3geeks.org>
In-Reply-To: <4E5F811A.2040307@snakebite.org>
References:  <F335600A-0364-455F-A276-43E23B0E597E@3geeks.org> <4E5F811A.2040307@snakebite.org>

On Sep 1, 2011, at 7:56 AM, Trent Nelson wrote:

> On 01-Sep-11 2:11 AM, Daniel Mayfield wrote:
>> I just set this up on an Athlon64 machine I have w/ 4 WD EARS 2TB
>> disks.  I followed the instructions here:
>> http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/,
>> but built just a single pool, so three partitions per disk (boot,
>> swap and zfs).  I'm using the mfsBSD image to do the boot code.  When
>> I reboot to actually come up from ZFS, the loader spins for half a
>> second and then the machine reboots.  I've seen a number of bug
>> reports on gptzfsboot and 4k sector pools, but I never saw one fail
>> so early.  What data would the ZFS people need to help fix this?
>
> FWIW, I experienced the exact same issue about a week ago with four
> new WD EARS 2TB disks.  I contemplated looking into fixing it, until
> I noticed the crazy disk usage with 4K sectors.  On my old box, my
> /usr/src dataset was ~450MB (mirrored 512-byte drives); on the new
> box with the 2TB 4k sector drives, /usr/src was 1.5-something GB.
> Exact same settings.
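
For reference, the per-disk layout I used follows that howto pretty
closely; roughly this (device names, labels, offsets and sizes here
are examples from memory, not my exact commands):

# per-disk GPT layout, repeated for ada0..ada3
gpart create -s gpt ada0
gpart add -b 2048 -s 512k -t freebsd-boot ada0
gpart add -s 4g -t freebsd-swap -l swap0 ada0
gpart add -t freebsd-zfs -l disk0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

# force ashift=12: create the pool through a 4k gnop device,
# then re-import it against the real providers
gnop create -S 4096 /dev/gpt/disk0
zpool create tank raidz gpt/disk0.nop gpt/disk1 gpt/disk2 gpt/disk3
zpool export tank
gnop destroy /dev/gpt/disk0.nop
zpool import tank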

I noticed that the free data space was also bigger.  I tried it with
raidz on the 512B sectors and it claimed to have only 5.3T of space.
With 4KB sectors, it claimed to have 7.25T of space.  Seems like
something is wonky in the space calculations?
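
If it helps anyone dig into this, the places I'd compare between the
two pools are (pool name is just an example):

# confirm which ashift the raidz vdev actually ended up with
zdb -C tank | grep ashift

# raw vs. usable space as ZFS reports it
zpool list tank   # SIZE is the raw size of all devices, parity included
zfs list tank     # AVAIL is the estimate after parity overhead

For what it's worth, 7.25T is suspiciously close to the raw size of
all four 2TB disks, while 5.3T is roughly what's left after one disk's
worth of parity, so maybe the parity adjustment isn't being applied in
the 4KB-sector case.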

daniel


