Date:      Sun, 31 Aug 2014 13:12:32 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        Scott Bennett <bennett@sdf.org>
Cc:        freebsd-questions@freebsd.org, freebsd@qeng-ho.org, Trond.Endrestol@fagskolen.gjovik.no
Subject:   Re: gvinum raid5 vs. ZFS raidz
Message-ID:  <CE1C33C0-D67A-4801-B55B-74C8EEBDEAC6@kraus-haus.org>
In-Reply-To: <201408310749.s7V7nVsf025094@sdf.org>
References:  <201408020621.s726LsiA024208@sdf.org> <alpine.BSF.2.11.1408020356250.1128@wonkity.com> <53DCDBE8.8060704@qeng-ho.org> <201408060556.s765uKJA026937@sdf.org> <53E1FF5F.1050500@qeng-ho.org> <201408070831.s778VhJc015365@sdf.org> <alpine.BSF.2.11.1408071034510.64214@mail.fig.ol.no> <201408070936.s779akMv017524@sdf.org> <alpine.BSF.2.11.1408071226020.64214@mail.fig.ol.no> <201408071106.s77B6JCI005742@sdf.org> <5B99AAB4-C8CB-45A9-A6F0-1F8B08221917@kraus-haus.org> <201408220940.s7M9e6pZ008296@sdf.org> <7971D6CA-AEE3-447D-8D09-8AC0B9CC6DBE@kraus-haus.org> <201408260641.s7Q6feBc004970@sdf.org> <9588077E-1198-45AF-8C4A-606C46C6E4F8@kraus-haus.org> <201408280636.s7S6a5OZ022667@sdf.org> <25B567A0-6639-41EE-AB3E-96AFBA3F11B7@kraus-haus.org> <201408300147.s7U1leJP024616@sdf.org> <58E30C52-A12C-4D9E-95D6-5BFB7A05FE46@kraus-haus.org> <201408310749.s7V7nVsf025094@sdf.org>

On Aug 31, 2014, at 3:49, Scott Bennett <bennett@sdf.org> wrote:

> Paul Kraus <paul@kraus-haus.org> wrote:

>> I typically run a scrub on any new drive after writing a bunch of
>> data to it, specifically to look for infant mortality :-)
>
>     Looks like a good idea.  Whenever I get the raidz2 set up and some
> sizable amount of data loaded into it, I intend to do the same.  However,
> because the capacity of the 6-drive raidz2 will be about four times the
> original UFS2 capacity, I suppose I'll need to find a way to expand the
> dump file in other ways, so as to cover the misbehaving tracks on the
> individual drives.

I'm not sure I would worry about exercising the entire range of tracks
on the platters; if a platter has a problem (heads or coating), it will
likely show up all over the platter. If the problem is specific to a
region, I would expect the drive to be able to remap the bad sectors (as
we previously discussed).
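
For reference, the checks I have in mind look roughly like this (the
pool name "tank" and the device "ada0" are just examples, and smartctl
comes from the sysutils/smartmontools port):

    # scrub the pool, then look for checksum errors on any device
    zpool scrub tank
    zpool status -v tank

    # ask the drive itself how many sectors it has remapped so far
    smartctl -A /dev/ada0 | egrep 'Reallocated|Pending|Uncorrect'

A rising Reallocated_Sector_Ct is usually the first hint that one
region of a platter is going bad.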

>> Don't go by what *I* say; go to the manufacturers' web sites and
>> download and read the full specifications on the drives you are
>> looking at. None of the sales sites (Newegg, CDW, etc.) post the full
>> specs, yet they are all (still) available from the Seagate / Western
>> Digital / HGST etc. web sites.
>
>     Yes, I understood that from what you had already written.  What I meant
> was that I hadn't been aware that the manufacturers were selling the drives
> divided into two differing grades of reliability.  From now on, the issue
> will be a matter of my budget vs. the price differences.

Sorry if I was being overly descriptive; I am more of a math and science
guy than an English guy, so my writing is often not the most clear. When
I started buying Enterprise instead of Desktop drives, the price
difference was under $20 for a $100 drive. The biggest reason I started
buying the Enterprise drives is that they are RATED for 24x7 operation,
while Desktop drives are typically designed for 8x5 (though the spec
sheets rarely say so :-) While I do have my desktop and laptop systems
set up to spin down the drives when not in use (and I leave some of them
booted 24x7), my server(s) run 24x7, and THAT is where I pay for the
Enterprise drives. I treat the drives in the laptop / desktop systems as
disposable and do NOT keep any important data only on them (I rsync my
laptop to the server a couple of times per week and use Time Machine
when at the office).
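
The rsync side of that is nothing fancy; something along these lines
(the host name and paths are placeholders for whatever you use):

    # mirror the laptop's home directory to the server, removing
    # files on the server side that were deleted locally
    rsync -avz --delete ~/ backup-server:/backup/laptop/

The --delete keeps the server copy from accumulating stale files, at
the cost that a mistaken local delete propagates on the next sync.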

<snip>

>     Okay.  Thanks again for the info.  Just out of curiosity, where do you
> usually find those Hitachi drives?


Newegg … Once they learned how to ship drives without destroying them,
I started buying drives from them :-)

<snip>

>> I wonder if it is an issue with a single file larger than 1TB … just
>> wondering out loud here.
>
>     Well, all I can say is that it is not supposed to be.  After all, file
> systems that were very large were the reason for going from UFS1 to UFS2.

I realized that I proposed something ludicrous (the problem with
thinking "aloud"): if the FS did not support -files- larger than 1TB,
then the write operation would have failed when you got to that point.
Yes, I remember FSes that could not handle a -file- larger than 2GB!

Note that there is a difference between the size of a filesystem and the
size of the largest -file- that filesystem may contain.
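
If you ever want to rule out a large-file limit quickly, a sparse file
makes the test cheap (the path below is a placeholder; the file takes
almost no real space until you write to it):

    # create a sparse 2TB file, write 1MB at its far end, then clean up
    truncate -s 2T /backups/bigfile.test
    dd if=/dev/zero of=/backups/bigfile.test bs=1m oseek=2097151 count=1 conv=notrunc
    rm /backups/bigfile.test

If the filesystem could not handle files that large, the truncate or
the dd would fail outright instead of silently misbehaving.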

<snip>

>> I have never had to warranty a drive for uncorrectable errors; they
>> have been a small enough percentage that I did not worry about them,
>> and when the error rate gets big enough, other things start going
>> wrong as well. At least that has been my experience.
>>
>     I would count myself very lucky if I were you, although my previous
> remark regarding the difference in reliability grades still holds.

I have not tried to use Desktop drives in a server (either my own or a
client's) for well over a decade. I do not remember much about drive
failures before that. Back then my need for capacity was growing faster
than drives were failing, so I was upgrading before the drives failed. I
still have a pile of 9GB SCSI drives (and some 18GB and 36GB) kicking
around from those days. Not to mention the drawer full of 500MB (yes,
0.5GB) drives I harvested from an old Sun SS1000 before I sold it … I
should have left the drives in it.

--
Paul Kraus
paul@kraus-haus.org