Date: Fri, 22 Aug 2014 09:12:45 -0400
From: Paul Kraus <paul@kraus-haus.org>
To: Scott Bennett <bennett@sdf.org>
Cc: Trond.Endrestol@fagskolen.gjovik.no, freebsd-questions@freebsd.org, freebsd@qeng-ho.org
Subject: Re: gvinum raid5 vs. ZFS raidz
Message-ID: <7971D6CA-AEE3-447D-8D09-8AC0B9CC6DBE@kraus-haus.org>
In-Reply-To: <201408220940.s7M9e6pZ008296@sdf.org>
References: <201408020621.s726LsiA024208@sdf.org> <alpine.BSF.2.11.1408020356250.1128@wonkity.com> <53DCDBE8.8060704@qeng-ho.org> <201408060556.s765uKJA026937@sdf.org> <53E1FF5F.1050500@qeng-ho.org> <201408070831.s778VhJc015365@sdf.org> <alpine.BSF.2.11.1408071034510.64214@mail.fig.ol.no> <201408070936.s779akMv017524@sdf.org> <alpine.BSF.2.11.1408071226020.64214@mail.fig.ol.no> <201408071106.s77B6JCI005742@sdf.org> <5B99AAB4-C8CB-45A9-A6F0-1F8B08221917@kraus-haus.org> <201408220940.s7M9e6pZ008296@sdf.org>
On Aug 22, 2014, at 5:40, Scott Bennett <bennett@sdf.org> wrote:

> Paul Kraus <paul@kraus-haus.org> wrote:
>> Take a look at the manufacturer data sheets for these drives. All of
>> the ones that I have looked at over the past ten years have included
>> the "uncorrectable error rate", and it is generally 1 in 10e-14 for
>> "consumer grade drives" and 1 in 1e-15 for "enterprise grade drives".
>> That right there shows the order of magnitude difference in this
>> error rate between consumer and enterprise drives.
>
> I'll assume you meant the reciprocals of those ratios or possibly even
> 1/10 of the reciprocals. ;-)

Uhhh, yeah, my bad.

> What I'm seeing here is ~2 KB of errors out of ~1.1 TB, which is an
> error rate (in bytes, not bits) of ~1.82e+09, and the majority of the
> erroneous bytes I looked at had multibit errors. I consider that to be
> a huge change in the actual device error rates, specs be damned.

That seems like a very high error rate. Is the drive reporting those
errors, or are they getting past the drive's error correction and
showing up as checksum errors in ZFS? A drive that is throwing that
many errors is clearly defective or dying.

> While I was out of town, I came across a trade magazine article that
> said that as the areal density of bits approaches the theoretical
> limit for the recording technology currently in production, the error
> rate climbs ever more steeply, and that drives larger than 1 TB are
> now making that effect easily demonstrable. :-(

It took perpendicular recording to make >1 TB drives possible at all.

> The article went on to describe superficially a new recording
> technology due to appear on the mass market in 2015 that will allow
> much higher bit densities while drastically improving the error rate
> (at least until densities eventually close in on that technology's
> limit). So it may turn out that next year consumers will begin to move
> past the hump in error rates and will find that hardware RAID will
> have become acceptably safe once again. The description of the new
> recording technology looked like a really spiffed-up version of the
> magneto-optical disks of the 1990s. In the meantime, though, the
> current crops of large-capacity disks apparently require software
> solutions like ZFS to preserve data integrity.

I do not know the root cause of the uncorrectable errors, but they seem
to vary with product line and not with capacity. Whether that means the
enterprise drives, with their order-of-magnitude better uncorrectable
error rate, have better coatings on the platters, better heads, better
electronics, or better QC, I do not know. So I don't know how much this
new technology will affect those errors.
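For a rough sense of what the published figures imply, here is a quick
Python back-of-the-envelope sketch (illustrative only: it assumes a
hypothetical 4 TB drive and treats the spec as a flat per-bit
probability of 1e-14 for consumer and 1e-15 for enterprise drives,
which is a simplification of how the vendors actually state it):

# Expected unrecoverable read errors (UREs) for one full read of a
# drive, taking the published spec as a flat per-bit error rate.
# Assumed for illustration: a 4 TB drive, decimal TB as marketed.
drive_bytes = 4 * 10**12
drive_bits = drive_bytes * 8

specs = {
    "consumer   (1 per 1e14 bits)": 1e-14,
    "enterprise (1 per 1e15 bits)": 1e-15,
}

for label, per_bit_rate in specs.items():
    expected = drive_bits * per_bit_rate
    print(f"{label}: ~{expected:.2f} expected UREs per full read")

Read the whole drive a few times (a scrub or a RAID rebuild, say) and
the consumer figure starts to crowd 1, which is the usual argument for
checksumming file systems like ZFS on large consumer disks.

--
Paul Kraus
paul@kraus-haus.org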