Date:        Sun, 28 Jun 2009 13:30:26 +0300
From:        Dan Naumov <dan.naumov@gmail.com>
To:          Andrew Snow <andrew@modulus.org>
Cc:          freebsd-fs@freebsd.org, freebsd-geom@freebsd.org
Subject:     Re: read/write benchmarking: UFS2 vs ZFS vs EXT3 vs ZFS RAIDZ vs Linux MDRAID
Message-ID:  <cf9b1ee00906280330s1f500266xdcbfb1462deda7f8@mail.gmail.com>
In-Reply-To: <4A4725FA.80505@modulus.org>
References:  <cf9b1ee00906261636m5d09966ag6d7e1b7557ada709@mail.gmail.com> <4A4725FA.80505@modulus.org>
> What confuses me about these results is that the '5 disk' performance was
> barely higher than the 'single disk' performance. All figures are also
> lower than I get from a single modern SATA disk.
>
> My own testing with dd from /dev/zero with FreeBSD ZFS on an Intel ICH10
> chipset motherboard with a Core2duo 2.66GHz showed RAIDZ performance
> scaling linearly with the number of disks:
>
> What               Write   Read
> --------------------------------
> 7 disk RAIDZ2      220     305
> 6 disk RAIDZ2      173     260
> 5 disk RAIDZ2      120     213

What's confusing is that your results, not mine, are actually out of place with how ZFS numbers are supposed to look :) When using ZFS RAIDZ, due to the way parity checking works in ZFS, your pool is SUPPOSED to deliver roughly the throughput of an average single disk from that pool, not numbers growing sky-high in a linear fashion.

The numbers that surprised me the most were actually the gmirror reads (results posted earlier to this list): a geom gmirror is consistently SLOWER at reading than a single disk (and it only gets progressively worse the more disks you add to the gmirror). Read performance of every other mirroring implementation pretty much scales up linearly with the number of disks in the mirror.

- Sincerely,
Dan Naumov
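P.S. For anyone who wants to repeat this kind of test, the sequential dd runs described above would look roughly like the following. The pool layout, device names and block/transfer sizes here are only an illustration on my part, not the exact commands either of us ran:

    # example only: create a 7-disk RAIDZ2 pool (device names assumed)
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6

    # sequential write: stream 8 GB of zeroes onto the pool
    dd if=/dev/zero of=/tank/testfile bs=1m count=8192

    # sequential read: read the same file back, discarding the output
    dd if=/tank/testfile of=/dev/null bs=1m

    # gmirror read comparison (mirror name gm0 assumed)
    dd if=/dev/mirror/gm0 of=/dev/null bs=1m count=8192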