Date: Tue, 14 Jul 2009 09:23:45 +0100
From: Matthew Seaman <m.seaman@infracaninophile.co.uk>
To: mahlerrd@yahoo.com
Cc: FreeBSD Questions list <freebsd-questions@freebsd.org>
Subject: Re: ZFS or UFS for 4TB hardware RAID6?
Message-ID: <4A5C4091.3030208@infracaninophile.co.uk>
In-Reply-To: <42310.1585.qm@web51008.mail.re2.yahoo.com>
References: <42310.1585.qm@web51008.mail.re2.yahoo.com>
Richard Mahlerwein wrote:
> With 4 drives, you could get much, much higher performance out of
> RAID10 (which is alternatively called RAID0+1 or RAID1+0 depending on
> the manufacturer
Uh -- no. RAID10 and RAID0+1 are superficially similar but quite different
things. The main differentiator is resilience to disk failure. RAID10 takes
the raw disks in pairs, creates a mirror across each pair, and then stripes
across all the sets of mirrors. RAID0+1 divides the raw disks into two equal
sets, constructs stripes across each set of disks, and then mirrors the
two stripes.
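
To make the layout difference concrete, here's a rough Python sketch. It is
purely illustrative -- the disk names and helper functions are mine, not from
any real RAID tool -- but it shows how the same set of drives gets grouped
under each scheme:

    def raid10_layout(disks):
        """Pair the disks into mirrors, then stripe across all the pairs."""
        return [tuple(disks[i:i + 2]) for i in range(0, len(disks), 2)]

    def raid01_layout(disks):
        """Split the disks into two halves, stripe each half, mirror the stripes."""
        half = len(disks) // 2
        return [tuple(disks[:half]), tuple(disks[half:])]

    disks = ["da0", "da1", "da2", "da3", "da4", "da5"]
    print(raid10_layout(disks))  # [('da0', 'da1'), ('da2', 'da3'), ('da4', 'da5')]
    print(raid01_layout(disks))  # [('da0', 'da1', 'da2'), ('da3', 'da4', 'da5')]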
Read/Write performance is similar in either case: both perform well for
the sort of small randomly distributed IO operations you'd get when e.g.
running an RDBMS. However, consider what happens if you get a disk failure.
In the RAID10 case *one* of your N/2 mirrors is degraded but the other N-1
drives in the array operate as normal. In the RAID0+1 case, one of the
2 stripes is immediately out of action and the whole IO load is carried by
the N/2 drives in the other stripe.
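
Again purely as a sketch (same made-up disk names as above, nothing that any
real controller exposes), this is the practical effect of a single failure on
which drives are left carrying the I/O:

    def raid10_active_after_failure(disks, failed):
        # Every other drive keeps serving I/O; only one mirror pair is degraded.
        return [d for d in disks if d != failed]

    def raid01_active_after_failure(disks, failed):
        # The whole stripe containing the failed drive drops out; only the
        # other stripe's N/2 drives carry the load.
        half = len(disks) // 2
        stripe_a, stripe_b = disks[:half], disks[half:]
        return stripe_b if failed in stripe_a else stripe_a

    disks = ["da0", "da1", "da2", "da3", "da4", "da5"]
    print(raid10_active_after_failure(disks, "da2"))  # ['da0', 'da1', 'da3', 'da4', 'da5']
    print(raid01_active_after_failure(disks, "da2"))  # ['da3', 'da4', 'da5']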
Now consider what happens if a second drive should fail. In the RAID10
case, you're still up and running so long as the failed drive is one of
the N-2 disks that aren't the mirror pair of the 1st failed drive.
In the RAID0+1 case, you're out of action if the 2nd disk to fail is one
of the N/2 drives from the working stripe. Or, in other words, if two
random disks fail in a RAID10, chances are the RAID will still work; if
two arbitrarily selected disks fail in a RAID0+1, chances are basically
even that the whole RAID is out of action[*].
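
As a back-of-the-envelope check on that claim -- a sketch only, assuming one
drive has already failed and the second failure is equally likely to hit any
of the remaining N-1 drives:

    from fractions import Fraction

    def raid10_second_failure_fatal(n):
        # Fatal only if the second failure hits the mirror partner of the
        # drive that already died: 1 "bad" drive out of the n - 1 remaining.
        return Fraction(1, n - 1)

    def raid01_second_failure_fatal(n):
        # One stripe of n/2 drives is already out of action; fatal if the
        # second failure lands anywhere in the surviving stripe of n/2 drives.
        return Fraction(n // 2, n - 1)

    for n in (8, 16, 32):
        print(n, raid10_second_failure_fatal(n), raid01_second_failure_fatal(n))
    # 8 1/7 4/7
    # 16 1/15 8/15
    # 32 1/31 16/31

The RAID0+1 figure tends towards 1/2 as N grows, which is where "chances are
basically even" comes from, while the RAID10 figure shrinks towards zero.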
I don't think I've ever seen a manufacturer say RAID1+0 instead of RAID10,
but I suppose all things are possible. My impression was that the 0+1
terminology was specifically invented to make it more visually distinctive
-- ie to prevent confusion between '01' and '10'.
Cheers,
Matthew
[*] Astute students of probability will point out that this really only
makes a difference for N > 4, and for N=4 chances are evens either way
that failure of two drives would take out the RAID.
-- 
Dr Matthew J Seaman MA, D.Phil.
PGP: http://www.infracaninophile.co.uk/pgpkey

7 Priory Courtyard, Flat 3
Ramsgate, Kent, CT11 9PW
