Date:      Sat, 15 May 2010 18:01:33 -0700
From:      Jeremy Chadwick <freebsd@jdc.parodius.com>
To:        Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Quick ZFS mirroring question for non-mirrored pool
Message-ID:  <20100516010133.GA52593@icarus.home.lan>
In-Reply-To: <alpine.GSO.2.01.1005151937300.12887@freddy.simplesystems.org>
References:  <4BEF2F9C.7080409@netscape.net> <4BEF3137.4080203@netscape.net> <20100516001351.GA50879@icarus.home.lan> <alpine.GSO.2.01.1005151937300.12887@freddy.simplesystems.org>

On Sat, May 15, 2010 at 07:51:17PM -0500, Bob Friesenhahn wrote:
> On Sat, 15 May 2010, Jeremy Chadwick wrote:
> >What you have here is the equivalent of RAID-10.  It might be more
> >helpful to look at the above as a "stripe of mirrors".
> >
> >In this situation, you might be better off with raidz1 (RAID-5 in
> >concept).  You should get better actual I/O performance due to ZFS
> >distributing the I/O workload across 4 disks rather than 2.  At least
> >that's how I understand it.
> 
> That would be a reasonable assumption but actual evidence suggests
> otherwise.  For sequential I/O, mirrors and raidz1 seem to offer
> roughly similar performance, except that mirrors win for reads and
> raidz1 often wins for writes.  The mirror configuration definitely
> wins as soon as there are many seeks or multi-user activity.
> 
> The reason why mirrors still do well for sequential I/O is that
> there is still load-sharing across the vdevs (smart "striping") but
> in full 128K blocks whereas the raidz1 config needs to break the
> 128K blocks into smaller blocks which are striped across the disks
> in the vdev. Breaking the data into smaller chunks for raidz
> multiplies the disk IOPS required.  Disk seeks are slow.
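
To put rough numbers on the IOPS multiplication described above (my
arithmetic, not Bob's -- assuming a 4-disk raidz1 vdev and the default
128K recordsize):

```shell
# Back-of-the-envelope: one 128 KiB ZFS record on different layouts.
# Illustrative arithmetic only; assumes a 4-disk raidz1 (3 data disks).

RECORD=131072                 # default 128 KiB recordsize, in bytes

# Mirror vdev: the whole record lands on one vdev as a single 128 KiB I/O.
mirror_io_size=$RECORD
echo "mirror: 1 I/O of ${mirror_io_size} bytes"

# raidz1 with 4 disks: the record is split across the 3 data disks,
# so each disk sees a ~43 KiB chunk, plus a write to the parity disk.
data_disks=3
chunk=$((RECORD / data_disks))
echo "raidz1: ${data_disks} data I/Os of ~${chunk} bytes each (+1 parity)"
```

So the same logical write costs one disk operation per vdev on mirrors
but several smaller operations on raidz1, which is where the extra
seeks come from.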
> 
> The main reason to choose raidz1 is for better space efficiency but
> mirrors offer more performance.
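
The space-efficiency side of that tradeoff is easy to quantify; for
example, with four equal disks (my numbers, not from the thread):

```shell
# Usable-capacity arithmetic for four 1 TB disks (illustrative only).
disks=4
size_tb=1

# Two 2-way mirrors: half the raw space is redundancy.
mirror_usable=$((disks / 2 * size_tb))

# One 4-disk raidz1 vdev: one disk's worth of space goes to parity.
raidz1_usable=$(((disks - 1) * size_tb))

echo "mirror pairs: ${mirror_usable} TB usable"
echo "raidz1:       ${raidz1_usable} TB usable"
```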
> 
> For an interesting set of results, see the results summary of "Bob's
> method" at "http://www.nedharvey.com/".
> 
> The only way to be sure for your own system is to create various
> pool configurations and test with something which represents your
> expected work load.  As long as the pool is not the boot pool, zfs
> makes such testing quite easy.
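
As a rough sketch of that testing approach, file-backed vdevs make
throwaway pools cheap to build and destroy.  The pool name and file
paths below are made up for illustration; this needs root, and file
vdevs are for testing only, never production:

```shell
# Create four 1 GB backing files to stand in for disks.
truncate -s 1g /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3

# Layout 1: stripe of two mirrors (the RAID-10 equivalent).
zpool create testpool mirror /tmp/d0 /tmp/d1 mirror /tmp/d2 /tmp/d3
# ... run a workload representative of your real use against /testpool ...
zpool destroy testpool

# Layout 2: a single raidz1 vdev across all four "disks".
zpool create testpool raidz1 /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
# ... repeat the same workload and compare ...
zpool destroy testpool
```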

Thanks Bob.  You're absolutely right.

I'd never seen those results before, nor had I read the material below
until now; quite interesting and educational.

http://blogs.sun.com/roch/entry/when_to_and_not_to

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |



