From owner-freebsd-hardware@FreeBSD.ORG Wed Jan 21 13:15:19 2009
Message-ID: <49771FEE.1070606@dannysplace.net>
Date: Wed, 21 Jan 2009 23:15:26 +1000
From: Danny Carroll <fbsd@dannysplace.net>
Reply-To: fbsd@dannysplace.net
To: Koen Smits <kgysmits@gmail.com>, freebsd-hardware@freebsd.org, freebsd-fs@freebsd.org
Subject: Re: Areca vs. ZFS performance testing.

Koen Smits wrote:
> Areca Support:
> Dear Sir,
> The only difference is:
> in JBOD mode, the controller configures all drives as passthrough disks;
> in RAID mode, you have to configure passthrough disks yourself.
>
> In other words, in RAID mode you can use RAID and passthrough disks at
> the same time, but in JBOD mode you cannot.
>
> Me:
> So does that mean that if I use passthrough, I am not protected by the
> cache/battery backup? I ask because there is an option for cache mode
> when creating a passthrough disk, i.e. Write-Back or Write-Through.
>
> So 'passthrough' means that the controller lets the OS see the physical
> disks just as they are, but with an invisible cache in between that
> buffers operations. This way there is no advantage from the onboard XOR
> engine, but you do profit from the intelligent cache, which is the most
> important part anyway, imho.

Not exactly. In JBOD mode ALL disks are passed through to the OS; you
cannot have RAID, and the cache is set to Write-Back. In RAID mode you
can mix RAID5, RAID6 and passthrough disks (which are like JBOD, but let
you choose write-through or write-back caching at your discretion).

> JBOD mode is at a disadvantage because in this mode the OS sees one
> large drive, and is not able to stripe the data to multiple disks, not
> taking advantage of the fact that you have multiple spindles available.
> Makes sense to me :).
No, in JBOD mode the OS sees all disks individually. What you are
describing is a concatenated disk set, which I don't think has a RAID
level.

> I must admit, I do like these results. Very promising.

Me too, although I am not sure I like the idea of turning off the cache
flushes in ZFS. I'd be a lot happier if the Areca card could tell me how
'full' its cache was. I'd also love to know whether there is a way for a
disk to report the status of its own cache.

> Further tests would be using an SSD for the ZIL, testing linux and NT,
> etc. But let's not go there ;).

Nope :-)

-D
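PS: For anyone who wants to experiment with the cache-flush setting
discussed above, the sketch below shows the loader tunable that the
FreeBSD ZFS port of this era used to suppress ZFS's cache-flush
commands. The tunable name is an assumption here; verify it against
your FreeBSD/ZFS version before relying on it.

```
# /boot/loader.conf -- sketch; tunable name assumed, check your version.
# Stops ZFS from issuing cache-flush (SYNCHRONIZE CACHE) commands.
# Safe ONLY when writes land in a battery-backed controller cache;
# without a BBU a power loss can corrupt the pool.
vfs.zfs.cache_flush_disable="1"
```

As for the drive's own cache, `camcontrol modepage da0 -m 0x08` on
FreeBSD should display the SCSI caching mode page, where the WCE bit
shows whether the drive's write cache is enabled (it reports the
setting, not how full the cache is).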