Date: Thu, 08 Jan 2009 10:33:29 +1000
From: Danny Carroll <fbsd@dannysplace.net>
To: Jeremy Chadwick <koitsu@FreeBSD.org>
Cc: freebsd-fs@freebsd.org, freebsd-hardware@freebsd.org
Subject: Re: Areca vs. ZFS performance testing.
Message-ID: <496549D9.7010003@dannysplace.net>
In-Reply-To: <20081117070818.GA22231@icarus.home.lan>
References: <20081031033208.GA21220@icarus.home.lan> <490A849C.7030009@dannysplace.net> <20081031043412.GA22289@icarus.home.lan> <490A8FAD.8060009@dannysplace.net> <491BBF38.9010908@dannysplace.net> <491C5AA7.1030004@samsco.org> <491C9535.3030504@dannysplace.net> <CEDCDD3E-B908-44BF-9D00-7B73B3C15878@anduin.net> <4920E1DD.7000101@dannysplace.net> <F55CD13C-8117-4D34-9C35-618D28F9F2DE@spry.com> <20081117070818.GA22231@icarus.home.lan>
I'd like to post some results of what I have found with my tests.

I ran a few different types of tests: basically a set of 5-disk tests and a set of 12-disk tests. I did this because I only had 5 ports available on my onboard controller and I wanted to see how the Areca compared to that. I also wanted to see comparisons between JBOD, passthrough and hardware RAID5. I have not tested RAID6 or raidz2.

You can see the results here:
http://www.dannysplace.net/quickweb/filesystem%20tests.htm

An explanation of each of the tests:

ICH9_ZFS - 5-disk ZFS raidz test on the onboard SATA ports.
ARECAJBOD_ZFS - 5-disk ZFS raidz test on Areca SATA ports configured in JBOD mode.
ARECAJBOD_ZFS_NoWriteCache - 5-disk ZFS raidz test on Areca SATA ports configured in JBOD mode, with the disk caches disabled.
ARECARAID - 5-disk ZFS single-disk test on an Areca RAID5 array.
ARECAPASSTHRU - 5-disk ZFS raidz test on Areca SATA ports configured in passthrough mode, which means the onboard Areca cache is active.
ARECARAID-UFS2 - 5-disk UFS2 single-disk test on an Areca RAID5 array.
ARECARAID-BIG - 12-disk ZFS single-disk test on an Areca RAID5 array.
ARECAPASSTHRU_12 - 12-disk ZFS raidz test on Areca SATA ports configured in passthrough mode, which means the onboard Areca cache is active.

I'll probably opt for the ARECAPASSTHRU_12 configuration, mainly because I do not need amazing read speeds (the network port would be saturated anyway) and I think the raidz implementation would be more fault tolerant. By that I mean that if you hit a disk read error during a rebuild then, as I understand it, raidz will write off the affected block (and hopefully tell me which files are dead) but continue with the rest of the rebuild. This is something I'd love to test for real, just to see what happens, but I am not sure how I could do that. Perhaps removing one drive, then doing a few random writes to a remaining disk (or two) and seeing how it goes with a rebuild (a rough outline of how I might try that is in the P.S. below).

Something else worth mentioning: when I converted from JBOD to passthrough, I was able to re-import the disks without any problems. This must mean that the Areca passthrough option does not alter the on-disk contents much, perhaps not at all.

After a 21-hour rebuild I have to say I am not that keen to do many more of these tests, but if there is something someone wants to see, then I'll definitely consider it.

One thing I am at a loss to understand is why turning off the disk caches for the JBOD test produced almost identical (in fact very slightly better) results. Perhaps the ZFS internal cache makes the disks' own caches redundant? Comparing against the Areca passthrough tests (where the Areca cache is used) again shows similar results.

-D
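
P.S. In case anyone wants to try the rebuild-with-read-errors idea above, here is a rough outline of how I imagine it could be done on a scratch pool. I have not actually run this, and the pool name (tank) and device names (da1, da2, da6) are only placeholders rather than my real setup, so treat it as a sketch, not a recipe.

    # 1. Simulate a failed drive by taking one raidz member offline
    #    (or physically pulling it).
    zpool offline tank da1

    # 2. Export the pool so the remaining members are closed, then scribble
    #    over a couple of regions on one of them to fake latent read errors.
    #    Seek well past the start of the disk so the front ZFS labels survive.
    #    WARNING: this destroys data on that device - scratch pools only.
    zpool export tank
    dd if=/dev/random of=/dev/da2 bs=1m seek=2000 count=4
    dd if=/dev/random of=/dev/da2 bs=1m seek=9000 count=4

    # 3. Import the pool again, bring the "failed" disk back (or replace it
    #    with a spare) and let the resilver run.
    zpool import tank
    zpool online tank da1        # or: zpool replace tank da1 da6

    # 4. Watch the rebuild; any permanently damaged files should be listed
    #    under "errors:" in the status output.
    zpool status -v tank

If that behaves the way I hope, the resilver should finish and "zpool status -v" should name the files that were hit by the injected errors instead of aborting the whole rebuild.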