Date:      Tue, 14 Jul 2009 09:23:06 -0700 (PDT)
From:      Richard Mahlerwein <mahlerrd@yahoo.com>
To:        Matthew Seaman <m.seaman@infracaninophile.co.uk>
Cc:        Free BSD Questions list <freebsd-questions@freebsd.org>
Subject:   Re: ZFS or UFS for 4TB hardware RAID6?
Message-ID:  <719914.10546.qm@web51006.mail.re2.yahoo.com>


--- On Tue, 7/14/09, Matthew Seaman <m.seaman@infracaninophile.co.uk> wrote:

> From: Matthew Seaman <m.seaman@infracaninophile.co.uk>
> Subject: Re: ZFS or UFS for 4TB hardware RAID6?
> To: mahlerrd@yahoo.com
> Cc: "Free BSD Questions list" <freebsd-questions@freebsd.org>
> Date: Tuesday, July 14, 2009, 4:23 AM
> Richard Mahlerwein wrote:
>
> > With 4 drives, you could get much, much higher performance out of
> > RAID10 (which is alternatively called RAID0+1 or RAID1+0 depending on
> > the manufacturer
>
> Uh -- no.  RAID10 and RAID0+1 are superficially similar but quite
> different things.  The main differentiator is resilience to disk
> failure.  RAID10 takes the raw disks in pairs, creates a mirror across
> each pair, and then stripes across all the sets of mirrors.  RAID0+1
> divides the raw disks into two equal sets, constructs a stripe across
> each set of disks, and then mirrors the two stripes.
>
> Read/write performance is similar in either case: both perform well
> for the sort of small, randomly distributed IO operations you'd get
> when e.g. running an RDBMS.  However, consider what happens if you get
> a disk failure.  In the RAID10 case *one* of your N/2 mirrors is
> degraded but the other N-1 drives in the array operate as normal.  In
> the RAID0+1 case, one of the 2 stripes is immediately out of action
> and the whole IO load is carried by the N/2 drives in the other
> stripe.
>
> Now consider what happens if a second drive should fail.  In the
> RAID10 case, you're still up and running so long as the failed drive
> is one of the N-2 disks that aren't the mirror pair of the 1st failed
> drive.  In the RAID0+1 case, you're out of action if the 2nd disk to
> fail is one of the N/2 drives from the working stripe.  Or in other
> words, if two random disks fail in a RAID10, chances are the RAID will
> still work.  If two arbitrarily selected disks fail in a RAID0+1,
> chances are basically even that the whole RAID is out of action[*].
>
> I don't think I've ever seen a manufacturer say RAID1+0 instead of
> RAID10, but I suppose all things are possible.  My impression was that
> the 0+1 terminology was specifically invented to make it more visually
> distinctive -- i.e. to prevent confusion between '01' and '10'.
>
>     Cheers,
>
>     Matthew
>
> [*] Astute students of probability will point out that this really
> only makes a difference for N > 4, and for N=4 chances are evens
> either way that failure of two drives would take out the RAID.
>
> -- Dr Matthew J Seaman MA, D.Phil.              7 Priory Courtyard
>                                                 Flat 3
> PGP: http://www.infracaninophile.co.uk/pgpkey   Ramsgate
>                                                 Kent, CT11 9PW

Sorry, you are correct.  Thanks for clearing that up.

I *have*, by the way, stumbled across them a couple of times in the
consumer/on-board market, and that's why I tend to remember that and
include it even though it's incorrect now.  IIRC (which is NOT certain
:), a major mag tested some back around 2000 and found the differences
were only in nomenclature: everything then sold as RAID10/1+0/0+1 was
actually RAID10.

And, if I recall, that was back in the PATA days.

Anyway, NP.  I could also be off my rocker.

(Oh, and thanks for the addendum, I actually was following and thinking
"...now wait a minute..." and then you clarified that last bit.  :) )


