Date:      Sun, 31 Aug 2014 02:49:30 -0500
From:      Scott Bennett <bennett@sdf.org>
To:        paul@kraus-haus.org
Cc:        freebsd-questions@freebsd.org, freebsd@qeng-ho.org, Trond.Endrestol@fagskolen.gjovik.no
Subject:   Re: gvinum raid5 vs. ZFS raidz
Message-ID:  <201408310749.s7V7nVsf025094@sdf.org>
In-Reply-To: <58E30C52-A12C-4D9E-95D6-5BFB7A05FE46@kraus-haus.org>
References:  <201408020621.s726LsiA024208@sdf.org> <alpine.BSF.2.11.1408020356250.1128@wonkity.com> <53DCDBE8.8060704@qeng-ho.org> <201408060556.s765uKJA026937@sdf.org> <53E1FF5F.1050500@qeng-ho.org> <201408070831.s778VhJc015365@sdf.org> <alpine.BSF.2.11.1408071034510.64214@mail.fig.ol.no> <201408070936.s779akMv017524@sdf.org> <alpine.BSF.2.11.1408071226020.64214@mail.fig.ol.no> <201408071106.s77B6JCI005742@sdf.org> <5B99AAB4-C8CB-45A9-A6F0-1F8B08221917@kraus-haus.org> <201408220940.s7M9e6pZ008296@sdf.org> <7971D6CA-AEE3-447D-8D09-8AC0B9CC6DBE@kraus-haus.org> <201408260641.s7Q6feBc004970@sdf.org> <9588077E-1198-45AF-8C4A-606C46C6E4F8@kraus-haus.org> <201408280636.s7S6a5OZ022667@sdf.org> <25B567A0-6639-41EE-AB3E-96AFBA3F11B7@kraus-haus.org> <201408300147.s7U1leJP024616@sdf.org> <58E30C52-A12C-4D9E-95D6-5BFB7A05FE46@kraus-haus.org>

Paul Kraus <paul@kraus-haus.org> wrote:
> On Aug 29, 2014, at 21:47, Scott Bennett <bennett@sdf.org> wrote:
> > Paul Kraus <paul@kraus-haus.org> wrote:
> <snip>
> >> I have been testing with a bunch of 2TB drives (3 HGST and 1 WD). I have been using ZFS and it has not reported *any* checksum errors.
> >> 
> >     What sort of testing?  Unless the data written with errors are read back,
> > how would ZFS know about any checksum errors?  Does ZFS implement write-with-
> > verify?  Copying some humongous file and then reading it back for comparison
> > (or, with ZFS, just reading it) ought to bring the checksums into play.  Of
> > course, a scrub should do that, too.
>
> I typically run a scrub on any new drive after writing a bunch of data to it, specifically to look for infant mortality :-)

     Looks like a good idea.  Whenever I get the raidz2 set up and some
sizable amount of data loaded into it, I intend to do the same.  However,
because the capacity of the 6-drive raidz2 will be about four times the
original UFS2 capacity, I suppose I'll need to find other ways to expand
the dump file, so as to cover the misbehaving tracks on the individual
drives.
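
     For anyone following along, the scrub-after-load step Paul describes
amounts to something like the following sketch (the pool name "tank" is
only a placeholder for whatever I end up calling the raidz2 pool):

        # load a sizable amount of data onto the pool, then:
        zpool scrub tank
        # check progress and the per-device READ/WRITE/CKSUM counters:
        zpool status -v tank

A scrub reads back every allocated block and verifies its checksum, so
marginal sectors on the new drives should show up as non-zero CKSUM
counts even before any application re-reads the data.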
>
> >     I have never bought the enterprise-grade drives--though I may begin doing
> > so after having read the information you've brought up here--so the difference
> > in drive quality at the outset may explain why your results so far have been
> > so much better than mine.
>
> Don't go by what *I* say; go to the manufacturer's web sites and download and read the full specifications on the drives you are looking at. None of the sales sites (Newegg, CDW, etc.) post the full specs, yet they are all (still) available from the Seagate / Western Digital / HGST etc. web sites.

     Yes, I understood that from what you had already written.  What I meant
was that I hadn't been aware that the manufacturers were selling their drives
in two distinct reliability grades.  From now on, the issue will be a matter
of my budget vs. the price differences.
>
> I am just starting to play with a different WD Enterprise series. So far all my testing (and use) has been with the RE series, but I just got two 1TB SE series drives (which also carry a 5-year warranty and claim to be Enterprise grade, rated for 24x7 operation). I put them into service today and expect to be loading data on them tomorrow or Monday. So now I will have Seagate ES, ES.2, HGST Ultrastar (various P/N), and WD RE, SE drives in use.
>
     Okay.  Thanks again for the info.  Just out of curiosity, where do you
usually find those Hitachi drives?
> <snip>
>
> >>> If so, try copying a 1.1 TB
> >>> file to one of them, and then try comparing the copy against the original.
> >> 
> >> Hurmmm. I have not worked with individual files that large. What filesystem are you using here? 
> > 
> >     At the moment, all of my file systems on hard drives are UFS2.
>
> I wonder if it is an issue with a single file larger than 1TB ... just wondering out loud here.

     Well, all I can say is that it is not supposed to be.  After all,
support for very large file systems was a principal reason for going from
UFS1 to UFS2.
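
     For what it's worth, the copy-and-compare test I had in mind needs
nothing beyond the stock tools; for example (the paths below are only
illustrative):

        cp /vol0/huge.dump /vol1/huge.dump
        cmp /vol0/huge.dump /vol1/huge.dump
        # or compare digests instead of reading both copies side by side:
        sha256 /vol0/huge.dump /vol1/huge.dump

cmp(1) reads both files end to end, so any block a drive silently
corrupted on the way out or back in shows up as a difference at a
reported byte offset.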
>
> <snip>
>
> >     My expectation is that I will end up contacting one or more manufacturers
> > to try to replace at least two drives based on whatever ZFS detects, but I
> > would be glad to be mistaken about that for now.  If two are that bad, then
> > I hope that ZFS can keep things running until the replacements show up here.
>
> I have never had to warranty a drive for uncorrectable errors; they have been a small enough percentage that I did not worry about them, and when the error rate gets big enough, other things start going wrong as well. At least that has been my experience.
>
     I would count myself very lucky if I were you, although my previous
remark regarding the difference in reliability grades still holds.
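
     If it does come to replacing drives, my understanding is that ZFS
lets the swap happen while the pool stays in service, roughly as follows
(the device names here are made up for illustration):

        zpool status -x              # find the pool and device reporting errors
        zpool replace tank da3 da8   # resilver onto the replacement drive
        zpool status tank            # watch the resilver run to completion

With raidz2 the data should stay available throughout the resilver even
if a second drive is misbehaving, which is exactly the situation I am
worried about.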


                                  Scott Bennett, Comm. ASMELG, CFIAG
**********************************************************************
* Internet:   bennett at sdf.org   *xor*   bennett at freeshell.org  *
*--------------------------------------------------------------------*
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."                                               *
*    -- Gov. John Hancock, New York Journal, 28 January 1790         *
**********************************************************************


