From: Nikolay Denev <ndenev@gmail.com>
To: fbsd@dannysplace.net
Cc: freebsd-fs@freebsd.org, Jeremy Chadwick, freebsd-hardware@freebsd.org
Date: Thu, 8 Jan 2009 11:19:59 +0200
Subject: Re: Areca vs. ZFS performance testing.

On 8 Jan, 2009, at 02:33, Danny Carroll wrote:

> I'd like to post some results of what I have found with my tests.
> I did a few different types of tests: basically a set of 5-disk tests
> and a set of 12-disk tests.
>
> I did this because I only had 5 ports available on my onboard controller
> and I wanted to see how the Areca compared to that. I also wanted to
> see comparisons between JBOD, passthrough and hardware RAID5.
>
> I have not tested RAID6 or raidz2.
>
> You can see the results here:
> http://www.dannysplace.net/quickweb/filesystem%20tests.htm
>
> An explanation of each of the tests:
>
> ICH9_ZFS                    5-disk ZFS raidz test with onboard SATA
>                             ports.
> ARECAJBOD_ZFS               5-disk ZFS raidz test with Areca SATA
>                             ports configured in JBOD mode.
> ARECAJBOD_ZFS_NoWriteCache  5-disk ZFS raidz test with Areca SATA
>                             ports configured in JBOD mode and with
>                             disk caches disabled.
> ARECARAID                   5-disk ZFS single-disk test with Areca
>                             RAID5 array.
> ARECAPASSTHRU               5-disk ZFS raidz test with Areca SATA
>                             ports configured in passthrough mode.
>                             This means that the onboard Areca cache
>                             is active.
> ARECARAID-UFS2              5-disk UFS2 single-disk test with Areca
>                             RAID5 array.
> ARECARAID-BIG               12-disk ZFS single-disk test with Areca
>                             RAID5 array.
> ARECAPASSTHRU_12            12-disk ZFS raidz test with Areca SATA
>                             ports configured in passthrough mode.
>                             This means that the onboard Areca cache
>                             is active.
>
> I'll probably be opting for the ARECAPASSTHRU_12 configuration, mainly
> because I do not need amazing read speeds (the network port would be
> saturated anyway) and I think that the raidz implementation would be
> more fault tolerant. By that I mean that if you have a disk read error
> during a rebuild then, as I understand it, raidz will write off that
> block (and hopefully tell me about dead files) but continue with the
> rest of the rebuild.
>
> This is something I'd love to test for real, just to see what happens,
> but I am not sure how I could do that. Perhaps removing one drive, then
> making a few random writes to a remaining disk (or two) and seeing how
> it goes with a rebuild.
>
> Something else worth mentioning: when I converted from JBOD to
> passthrough, I was able to re-import the disks without any problems.
> This must mean that the Areca passthrough option does not alter the
> disks much, perhaps not at all.
>
> After a 21-hour rebuild I have to say I am not that keen to do more of
> these tests, but if there is something someone wants to see, then I'll
> definitely consider it.
>
> One thing I am at a loss to understand is why turning off the disk
> caches when testing the JBOD performance produced almost identical
> (very slightly better) results. Perhaps it was a case of the ZFS
> internal cache making the disk caches redundant? Comparing to the
> Areca passthrough (where the Areca cache is used) shows, again,
> similar results.
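[The rebuild test described above could be sketched with ZFS's own tooling. A hypothetical sequence, assuming a raidz pool named "tank" built from da0 through da4 (pool and device names are made up for illustration); note the dd step destroys data on a live pool member, so this is strictly for scratch disks:

```shell
# Take one member offline to stand in for a failed/removed drive.
zpool offline tank da4

# Scribble random data over part of a *remaining* member to plant
# latent read errors (destructive -- scratch disks only).
dd if=/dev/urandom of=/dev/da3 bs=1m count=64 oseek=1024

# Bring the disk back and let the resilver (rebuild) run.
zpool online tank da4
zpool status tank        # watch resilver progress

# If raidz hits unrecoverable blocks during the rebuild, it should
# carry on and report the affected files when it finishes:
zpool status -v tank     # lists permanently damaged files, if any
```

If raidz behaves as hoped, the resilver completes anyway and `zpool status -v` names the dead files instead of aborting the rebuild. -Ed.]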
>
> -D

There is a big difference between hardware RAID5 and ZFS raidz with 12
disks on the get_block test; maybe it would be interesting to rerun this
test with ZFS prefetch disabled?

--
Regards,
Nikolay Denev
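[On FreeBSD, ZFS file-level prefetch can be switched off with the vfs.zfs.prefetch_disable tunable; a minimal sketch (on releases of that era it may only be honoured at boot, so the loader.conf form is the safe one):

```shell
# Runtime toggle, where the release permits it:
sysctl vfs.zfs.prefetch_disable=1

# Persistent, boot-time form -- add to /boot/loader.conf:
# vfs.zfs.prefetch_disable="1"
```

With prefetch disabled, a rerun of the get_block test would show how much of the hardware-RAID advantage comes from the prefetcher rather than from the controller cache. -Ed.]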