From: Warren Block <wblock@wonkity.com>
To: Mark Felder
Cc: freebsd-fs@freebsd.org
Date: Fri, 7 Feb 2014 15:19:15 -0700 (MST)
Subject: Re: Using the *real* sector/block size of a mass storage device for ZFS

On Fri, 7 Feb 2014, Mark Felder wrote:

> On Fri, Feb 7, 2014, at 14:44, Dustin Wenz wrote:
>> We have been upgrading systems from FreeBSD 9.2 to 10.0-RELEASE, and I'm
>> noticing that all of my zpools now show this status: "One or more devices
>> are configured to use a non-native block size. Expect reduced
>> performance."  Specifically, each disk reports: "block size: 512B
>> configured, 4096B native".
>>
>> I've checked these disks with diskinfo and smartctl, and they report a
>> sector size of 512B.  I understand that modern disks often use larger
>> physical sectors while still presenting 512-byte logical sectors for
>> compatibility, but I'm unsure how ZFS can disagree with these other
>> tools.
>>
>> In any case, it looks like I will need to rebuild every zpool.  There
>> are many thousands of disks involved and the process will take months
>> (if not years).  How can I be sure that this is done correctly this
>> time?  Will ZFS automatically choose the correct block size, assuming
>> that it's really capable of this?
>>
>> In the meantime, how can I turn off that warning message on all of my
>> disks?  "zpool status -x" is almost worthless due to the extreme number
>> of warnings reported.
>
> ZFS is doing the right thing by telling you that you should expect
> degraded performance.  The best way to fix this is to use the gnop
> method when you build your zpools:
>
>   gnop create -S 4096 /dev/da0
>   gnop create -S 4096 /dev/da1
>   zpool create data mirror /dev/da0.nop /dev/da1.nop
>
> The next reboot or import of the zpool will use the regular device names
> with the correct ashift for 4K drives.
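For what it's worth, you can confirm which ashift a pool actually ended
up with after import.  A minimal check, assuming the pool is named "data"
as in the example above:

  # "data" is the pool name from the gnop example above
  zdb -C data | grep ashift

An ashift of 12 means 4096-byte (2^12) allocation blocks; an ashift of 9
means 512-byte blocks.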
But remember that this does not fix alignment.  If the partitions are not
aligned to 4K boundaries, at least write performance will still suffer (a
gpart sketch for aligned partitions follows at the end of this message).

> The drive manufacturers handled this transition extremely poorly.

They may have been forced into it by the desire for compatibility with
all the systems that expect 512-byte blocks. :)
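For the alignment part, gpart(8) can force partitions onto 4K boundaries.
A minimal sketch, assuming a blank disk da0 (the GPT label "disk0" is
just an example name):

  # create a GPT partitioning scheme on the disk
  gpart create -s gpt da0
  # add a ZFS partition aligned to 4K boundaries, with a GPT label
  gpart add -t freebsd-zfs -a 4k -l disk0 da0

The pool can then be built on /dev/gpt/disk0, combined with the gnop step
above so the pool is created with ashift=12.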