Date:      Thu, 28 Aug 2014 10:27:59 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        Scott Bennett <bennett@sdf.org>
Cc:        freebsd-questions@freebsd.org, freebsd@qeng-ho.org, Trond.Endrestol@fagskolen.gjovik.no
Subject:   Re: gvinum raid5 vs. ZFS raidz
Message-ID:  <25B567A0-6639-41EE-AB3E-96AFBA3F11B7@kraus-haus.org>
In-Reply-To: <201408280636.s7S6a5OZ022667@sdf.org>
References:  <201408020621.s726LsiA024208@sdf.org> <alpine.BSF.2.11.1408020356250.1128@wonkity.com> <53DCDBE8.8060704@qeng-ho.org> <201408060556.s765uKJA026937@sdf.org> <53E1FF5F.1050500@qeng-ho.org> <201408070831.s778VhJc015365@sdf.org> <alpine.BSF.2.11.1408071034510.64214@mail.fig.ol.no> <201408070936.s779akMv017524@sdf.org> <alpine.BSF.2.11.1408071226020.64214@mail.fig.ol.no> <201408071106.s77B6JCI005742@sdf.org> <5B99AAB4-C8CB-45A9-A6F0-1F8B08221917@kraus-haus.org> <201408220940.s7M9e6pZ008296@sdf.org> <7971D6CA-AEE3-447D-8D09-8AC0B9CC6DBE@kraus-haus.org> <201408260641.s7Q6feBc004970@sdf.org> <9588077E-1198-45AF-8C4A-606C46C6E4F8@kraus-haus.org> <201408280636.s7S6a5OZ022667@sdf.org>

On Aug 28, 2014, at 2:36, Scott Bennett <bennett@sdf.org> wrote:

> Paul Kraus <paul@kraus-haus.org> wrote:

>> Wow. That implies you are hitting a drive with a very high
>> uncorrectable error rate since the drive did not report any errors
>> and the data is corrupt. I have yet to run into one of those.
>
>     How would an uncorrectable error be detected by the drive without
> any parity checking or hardware-implemented write-with-verify?

I suppose my point was that an operation that is NOT flagged by the
drive as failing and DOES return faulty data is, by definition, an
uncorrectable error (as far as the drive is concerned). The point is
that an uncorrectable error (from the drive standpoint) is just that, an
error that the drive CANNOT detect.
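
To make that concrete, here is a small Python sketch (my own
illustration, not anything from a drive's firmware) of why such an
error can only be caught end to end: the simulated "drive" returns a
block with one flipped bit and reports success, and only a checksum
kept elsewhere (which is what ZFS does) notices the damage.

import hashlib
import os

def read_block_silently_corrupted(block: bytes) -> bytes:
    """Simulate a drive that returns data with one flipped bit
    but still reports the read as successful."""
    corrupted = bytearray(block)
    corrupted[0] ^= 0x01   # single-bit flip; the drive says nothing
    return bytes(corrupted)

block = os.urandom(4096)                          # pretend 4 KiB sector
stored_checksum = hashlib.sha256(block).digest()  # kept outside the drive

returned = read_block_silently_corrupted(block)
if hashlib.sha256(returned).digest() != stored_checksum:
    print("checksum mismatch: the host caught what the drive could not")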

>     Are you using any drives larger than 1 TB?

I have been testing with a bunch of 2TB drives (3 HGST and 1 WD). I
have been using ZFS and it has not reported *any* checksum errors.

I have put one of the 4 into production service (I needed a replacement
for a failed 1TB drive and did not have any more 1TB drives in stock).
It has been running for a couple of weeks now with no checksum errors
reported. My zpool is 5 x 1TB RAIDz2 and it has about 2TB of data on it
right now.
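
For reference, the errors I am talking about are the READ/WRITE/CKSUM
counters that zpool status reports. Here is a rough Python sketch of
checking them from a script (the pool name "tank" is just a
placeholder, not my actual pool):

import subprocess

POOL = "tank"  # placeholder pool name; substitute your own

# "zpool status -x" prints a one-line summary ("all pools are healthy")
# when nothing is wrong; the full "zpool status <pool>" output includes
# the per-device READ/WRITE/CKSUM counters.
summary = subprocess.run(["zpool", "status", "-x"],
                         capture_output=True, text=True, check=True)
print(summary.stdout.strip())

detail = subprocess.run(["zpool", "status", POOL],
                        capture_output=True, text=True, check=True)
print(detail.stdout)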

>  If so, try copying a 1.1 TB file to one of them, and then try
> comparing the copy against the original.

Hurmmm. I have not worked with individual files that large. What
filesystem are you using here?
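
If you want to repeat that comparison outside of ZFS, a chunked
checksum comparison keeps memory use flat even for a 1.1 TB file. A
rough Python sketch (the paths below are placeholders, not anything
from your setup):

import hashlib

CHUNK = 1 << 20  # read 1 MiB at a time so memory use stays flat

def file_sha256(path: str) -> str:
    """Hash a file of arbitrary size without loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            h.update(chunk)
    return h.hexdigest()

# placeholder paths -- substitute the original and the copy under test
if file_sha256("/pool1/bigfile") == file_sha256("/pool2/bigfile"):
    print("copies match")
else:
    print("copies differ -- the data was corrupted in flight or at rest")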

> Out of the three drives I could test that way, I got that kind of
> result on two every time I tried it.  One of the two was a new Samsung
> (i.e., a Seagate), and the other was a refurbished Seagate supplied as
> a replacement under warranty.  The third got a clean copy the first
> time and two bytes with single-bit errors on the second try.  That one
> was also a refurbished Seagate provided under warranty.

If you use ZFS on these drives and copy the same file, do you get any
checksum errors?
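
If the copies do differ, something along these lines (again only a
sketch, with placeholder paths) will report the byte offsets involved
and whether each difference is a single-bit flip, which is the pattern
you describe above:

CHUNK = 1 << 20  # 1 MiB

def diff_bytes(path_a: str, path_b: str) -> None:
    """Report offsets where two files differ and whether each differing
    byte is a single-bit flip (roughly cmp -l plus a popcount)."""
    offset = 0
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a = fa.read(CHUNK)
            b = fb.read(CHUNK)
            if not a and not b:      # both files exhausted
                break
            if len(a) != len(b):     # length mismatch; stop here
                print("files differ in length near offset", offset)
                break
            for i, (x, y) in enumerate(zip(a, b)):
                if x != y:
                    bits = bin(x ^ y).count("1")
                    print(f"offset {offset + i}: {x:#04x} != {y:#04x}"
                          f" ({bits} bit(s) differ)")
            offset += len(a)

# placeholder paths -- substitute the original file and the suspect copy
diff_bytes("/pool1/bigfile", "/pool2/bigfile")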

--
Paul Kraus
paul@kraus-haus.org



