Date: Sat, 6 Jun 2009 13:16:32 -0700
From: Freddie Cash <fjwcash@gmail.com>
To: freebsd-hackers@freebsd.org
Subject: Re: Request for opinions - gvinum or ccd?
Message-ID: <b269bc570906061316g37290b5q910da0d3ec266c98@mail.gmail.com>
In-Reply-To: <h0ehhv$sic$1@ger.gmane.org>
References: <20090530175239.GA25604@logik.internal.network> <20090530144354.2255f722@bhuda.mired.org> <20090530191840.GA68514@logik.internal.network> <20090530162744.5d77e9d1@bhuda.mired.org> <A5BB2D2B836A4438B1B7BD8420FCC6A3@uk.tiscali.intl> <h0ehhv$sic$1@ger.gmane.org>
On Sat, Jun 6, 2009 at 12:54 PM, Ivan Voras <ivoras@freebsd.org> wrote:
> Sorry to come into the discussion late, but I just want to confirm
> something.
>
> The configuration below is a stripe of four components, each of which is
> RAIDZ2, right?
>
> If, as was discussed later in the thread, RAIDZ(2) is more similar to
> RAID3 than RAID5 for random performance, the given configuration can be
> (very roughly, in the non-sequential access case) expected to deliver
> performance of four drives in a RAID0 array?

According to all the Sun documentation, the I/O throughput of a single
raidz vdev is equal to that of a single drive.  Hence their
recommendation not to use more than 8 or 9 drives in a single raidz
vdev, and to use multiple raidz vdevs instead.  As you add vdevs, the
throughput increases.

We made the mistake early on of creating a 24-drive raidz2 vdev.
Performance was not very good, and when we had to replace a drive, it
spent over a week trying to resilver.  Part of the problem is that the
resilver operation has to touch every single drive in the raidz vdev. :(

We remade the pool using 3x 8-drive raidz2 vdevs, and performance has
been great (400 MBytes/s write, almost 3 GBytes/s sequential read,
800 MBytes/s random read).

-- 
Freddie Cash
fjwcash@gmail.com
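[For reference, a pool laid out as three 8-drive raidz2 vdevs, as described
above, can be created in a single zpool command.  This is only a sketch:
the pool name "tank" and the da0-da23 device names are placeholders and
would need to match the actual disks on the system.

  # zpool create tank \
        raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
        raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
        raidz2 da16 da17 da18 da19 da20 da21 da22 da23

ZFS stripes writes across the three vdevs, which is why random I/O scales
with the number of vdevs rather than the number of drives.]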