Date:      Fri, 7 Feb 2014 16:41:56 -0600
From:      Dustin Wenz <dustinwenz@ebureau.com>
To:        Mark Felder <feld@freebsd.org>
Cc:        "<freebsd-fs@freebsd.org>" <freebsd-fs@freebsd.org>
Subject:   Re: Using the *real* sector/block size of a mass storage device for ZFS
Message-ID:  <8B5D8D0C-ADDE-49B3-87A9-DE1105E32BF9@ebureau.com>
In-Reply-To: <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com>
References:  <1487AF77-7731-4AF8-8E44-FF814BB8A717@ebureau.com> <1391808195.4799.80708189.5CAD8A4E@webmail.messagingengine.com>

Thanks for the information!

I'm curious as to why gnop is the best way to accomplish this... FreeBSD
10 seems to automatically set ashift: 12 when a new vdev is created. I
definitely appreciate the control that gnop provides, however.

Am I correct in assuming that it is absolutely impossible to convert an
existing ashift:9 vdev to ashift:12? Some of my pools are approaching
1PB in size; transferring the data off and back again would be
inconvenient.

I suppose I should just be thankful that ZFS is warning me about this
now, before I need to build any really large storage pools.
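For what it's worth, the ashift of an existing vdev can be verified with
zdb; a quick sketch, assuming a pool named "data":

```shell
# Print the cached pool configuration and pull out the ashift values.
# ashift=9 means 512-byte sectors; ashift=12 means 4096-byte sectors.
zdb -C data | grep ashift
```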

	- .Dustin

On Feb 7, 2014, at 3:23 PM, Mark Felder <feld@freebsd.org> wrote:

>
>
> On Fri, Feb 7, 2014, at 14:44, Dustin Wenz wrote:
>> We have been upgrading systems from FreeBSD 9.2 to 10.0-RELEASE, and I'm
>> noticing that all of my zpools now show this status: "One or more devices
>> are configured to use a non-native block size. Expect reduced
>> performance." Specifically, each disk reports: "block size: 512B
>> configured, 4096B native".
>>
>> I've checked these disks with diskinfo and smartctl, and they report a
>> sector size of 512B. I understand that modern disks often use larger
>> sectors due to addressing limits, but I'm unsure how ZFS can disagree
>> with these other tools.
>>
>> In any case, it looks like I will need to rebuild every zpool. There are
>> many thousands of disks involved and the process will take months (if not
>> years). How can I be sure that this is done correctly this time? Will
>> ZFS automatically choose the correct block size, assuming that it's
>> really capable of this?
>>
>> In the meantime, how can I turn off that warning message on all of my
>> disks? "zpool status -x" is almost worthless due to the extreme number of
>> errors reported.
>>
>
> ZFS is doing the right thing by telling you that you should expect
> degraded performance. The best way to fix this is to use the gnop method
> when you build your zpools:
>
> gnop create -S 4096 /dev/da0
> gnop create -S 4096 /dev/da1
> zpool create data mirror /dev/da0.nop /dev/da1.nop
>
> Next reboot or import of the zpool will use the regular device names
> with the correct ashift for 4K drives.
>
> The drive manufacturers handled this transition extremely poorly.
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
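
To spell out the gnop workflow above end to end, here is a sketch,
assuming two 4K drives at /dev/da0 and /dev/da1 and a pool named "data":

```shell
# Create transparent providers that report a 4096-byte sector size
# on top of the raw disks.
gnop create -S 4096 /dev/da0
gnop create -S 4096 /dev/da1

# Build the pool on the .nop devices so ZFS records ashift=12.
zpool create data mirror /dev/da0.nop /dev/da1.nop

# Export the pool, tear down the gnop providers, and re-import;
# ZFS finds the raw devices and keeps the ashift set at creation time.
zpool export data
gnop destroy /dev/da0.nop /dev/da1.nop
zpool import data
```

The export/destroy/import sequence achieves the same result as the
reboot mentioned above, without taking the machine down.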



