Date: Wed, 17 Dec 2014 13:41:06 -0700
From: jd1008 <jd1008@gmail.com>
To: Mike Tancsa <mike@sentex.net>, freebsd-questions@freebsd.org
Subject: Re: zfs and 512/4096 sector sizes
Message-ID: <5491EA62.2080401@gmail.com>
In-Reply-To: <5491E82D.8090105@sentex.net>
References: <5491E462.2020902@sentex.net> <5491E5A0.9090306@gmail.com> <5491E61B.9070505@bluerosetech.com> <5491E775.5010403@gmail.com> <5491E82D.8090105@sentex.net>
On 12/17/2014 01:31 PM, Mike Tancsa wrote:
> On 12/17/2014 3:28 PM, jd1008 wrote:
>>> Does the zpool clear command make it go away?
>
> Nope, tried that :(
>
> # zpool clear tank1 ada11
> # zpool status
>   pool: tank1
>  state: ONLINE
> status: One or more devices are configured to use a non-native block size.
>         Expect reduced performance.
> action: Replace affected devices with devices that support the
>         configured block size, or migrate data to a properly configured
>         pool.
>   scan: resilvered 898G in 8h4m with 0 errors on Wed Dec 17 15:01:18 2014
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         tank1         ONLINE       0     0     0
>           raidz1-0    ONLINE       0     0     0
>             ada12     ONLINE       0     0     0
>             ada10     ONLINE       0     0     0
>             ada6      ONLINE       0     0     0
>             ada14     ONLINE       0     0     0
>           raidz1-1    ONLINE       0     0     0
>             ada11     ONLINE       0     0     0  block size: 512B configured, 4096B native

One last suggestion: dismantle the whole pool (i.e. remove the drives) and
rebuild it fresh using only 512-byte-sector drives. While adding the drives
back in, one at a time, recheck the status after each add and see if the
error status appears.
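Before you tear it down, it might also be worth confirming which vdev was
created with the smaller ashift and whether ada11 really is 4K-native.
A rough sketch using standard FreeBSD commands (treat the exact output as
illustrative):

  # show the ashift recorded for each top-level vdev
  # (ashift=9 means 512B sectors, ashift=12 means 4096B)
  # zdb -C tank1 | grep ashift

  # confirm the drive's logical vs. physical sector sizes
  # camcontrol identify ada11 | grep -i 'sector size'

And if you do rebuild, I believe recent FreeBSD lets you force a 4K ashift
at pool creation, which should avoid the warning even with mixed drives,
though I haven't verified this on your release:

  # sysctl vfs.zfs.min_auto_ashift=12
  # (then recreate the pool; newly created vdevs will use ashift >= 12)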