Date: Tue, 2 Dec 2008 06:04:27 -0600 (CST)
From: Wes Morgan <morganw@chemikals.org>
To: Jan Mikkelsen <janm@transactionware.com>
Cc: freebsd-fs@freebsd.org, freebsd-hardware@freebsd.org
Subject: RE: Areca vs. ZFS performance testing.
Message-ID: <alpine.BSF.2.00.0812020601290.96868@ibyngvyr.purzvxnyf.bet>
In-Reply-To: <C22DCA4BBFD54E3E9FE96197E64B0788@STUDYPC>
References: <C22DCA4BBFD54E3E9FE96197E64B0788@STUDYPC>
On Tue, 2 Dec 2008, Jan Mikkelsen wrote:

> Hi,
>
> Wes Morgan wrote:
>> On Sun, 16 Nov 2008, Matt Simerson wrote:
>>
>>> The Areca cards do NOT have the cache enabled by default. I ordered the
>>> optional battery and RAM upgrade for my collection of 1231ML cards. Even
>>> with the BBWC, the cache is not enabled by default. I had to go out of
>>> my way to enable it, on every single controller.
>>
>> Are you using these Areca cards successfully with large arrays? I found a
>> 1680i card for a decent price and installed it this weekend, but since
>> then I'm seeing the raidz2 pool that it's running hang so frequently that
>> I can't even trust using it. The hangs occur in both 7-stable and
>> 8-current with the new ZFS patch. The same settings that have been rock
>> solid for me before now don't want to work at all. The drives are just
>> set as JBOD -- the controller actually defaulted to this, so I didn't
>> have to make any real changes in the BIOS.
>>
>> Any tips on your setup? Did you have any similar problems?
>
> I am seeing I/O-related lockups on 7.1-PRE with an Areca ARC-1220
> controller and eight drives in a RAID-6 array. The same hardware works
> fine with 6.3.
>
> When I run gstat while it is happening, I see I/O performance drop and
> the time to service each write (ms/w) climb, then suddenly fall back to
> a sensible value. I have seen it reach about 22000 ms.
>
> The system is essentially unusable for writes, which limits its utility
> a bit. Reads seem fine.
>
> Is this similar to the behaviour you saw?

Not quite. The ZFS deadlock/hang affected both reads and writes, blocking
either of them indefinitely. They were "fixed" by the most recent set of
patches in -current.
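(For anyone who wants to reproduce the observations above, a minimal sketch
of the commands involved; the pool name "tank" and the da0-da7 device names
are assumptions -- substitute the members of your own array.)

    # create a raidz2 pool directly over the eight JBOD disks:
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

    # filter gstat to the array's member disks and watch the ms/w
    # column, which is the per-write service time Jan describes:
    gstat -f '^da[0-7]$'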