Date:      Mon, 3 Nov 2008 00:31:57 -0800 (PST)
From:      Simun Mikecin <numisemis@yahoo.com>
To:        Peter Schuller <peter.schuller@infidyne.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Areca vs. ZFS performance testing.
Message-ID:  <723911.11212.qm@web36603.mail.mud.yahoo.com>

Peter Schuller wrote:
> In any case, why would the actual RAID controller cache be flushed,
> unless someone explicitly configured it such? I would expect a
> regular BIO_FLUSH (that's all ZFS is doing, right?) to be satisfied by
> the data being contained in the controller cache, under the assumption
> that it is battery backed, and that the storage volume/controller has
> not been explicitly configured otherwise to not rely on the battery
> for persistence.

I'm using the amr(4) driver with a Dell PERC 4e/DC controller (a rebranded LSI 320-2E) that has a battery-backed cache and write caching configured as write-back. The controller is connected to an LED that lights up whenever there is data in the cache that has not yet been committed to the disks.
Ever since I switched from UFS2 to ZFS, that LED goes off very quickly; it no longer stays lit for longer periods of time (with UFS2 it stayed lit for up to 10 seconds - that is a controller BIOS setting). So issuing BIO_FLUSH in this case *does* flush the battery-backed cache.
I can restore the old behavior by setting vfs.zfs.cache_flush_disable=1, but I shouldn't use it in my case, since the same system also has SATA disks with ZFS on them, and turning off BIO_FLUSH for the SATA disks would be dangerous.
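
For reference, the workaround looks roughly like this (my own sketch; I'm assuming the tunable is set at boot via /boot/loader.conf - depending on the ZFS version it may also be available as a runtime sysctl):

# /boot/loader.conf - turns off ZFS cache flushes (BIO_FLUSH) globally;
# only safe when *every* pool sits behind a battery-backed write cache
vfs.zfs.cache_flush_disable="1"

And that global scope is exactly the problem here: it cannot be enabled for the amr(4) volumes without also disabling flushes for the plain SATA disks.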

> Please correct me if I'm wrong, but if synchronous writing to your
> RAID device results in actually waiting for underlying disks to commit
> the data to platters, that sounds like a driver/controller
> problem/policy issue rather than anything that should be fixed by
> tweaking ZFS.

As far as I know, BIO_FLUSH (which for now is implemented only for ata(4) and amr(4) - correct me if I'm mistaken) does just that: it actually flushes the cache and waits for its contents to be written to disk.
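
To illustrate the "flushes and waits" part (this is my own sketch, not code taken from ZFS or from any driver): a GEOM consumer can issue a BIO_FLUSH through g_io_flush(), which builds the BIO_FLUSH bio, sends it down to the provider and sleeps until the driver completes it, so the caller really does block until the cache has been written out.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/bio.h>
#include <geom/geom.h>

/*
 * Sketch only: cp is assumed to be an attached and opened GEOM
 * consumer sitting on top of the disk/volume whose write cache we
 * want committed to stable storage.
 */
static int
flush_write_cache(struct g_consumer *cp)
{
	int error;

	error = g_io_flush(cp);		/* synchronous BIO_FLUSH */
	if (error == EOPNOTSUPP)
		printf("BIO_FLUSH not supported by the driver underneath\n");
	return (error);
}

Drivers that don't handle BIO_FLUSH are generally expected to fail it with EOPNOTSUPP, which is consistent with only ata(4) and amr(4) doing any real flushing at the moment.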

> Or is it the case that ZFS does both a "regular" request to commit
> data (which I thought was the purpose of BIO_FLUSH, even though the
> "FLUSH" sounds more specific) and separately does a "flush any actual
> caches no matter what" type of request that ends up bypassing
> controller policy (because it is needed on stupid SATA drives or
> such)?

AFAIK BIO_FLUSH commits *everything* that is in the cache. It is needed for stupid SATA drives. But I'm not so happy about it being turned on for amr(4), where it flushes the entire 128 MB battery-backed cache.
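
For illustration, the driver side roughly has this shape (a hypothetical disk(9) strategy routine, *not* the real amr(4) code; mydrv_softc, mydrv_flush_cache() and mydrv_start_io() are made-up names). The point is that a driver usually has nothing finer-grained than a controller-wide "flush cache" firmware command, so a single BIO_FLUSH ends up draining the whole write-back cache:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/bio.h>
#include <geom/geom_disk.h>

struct mydrv_softc;					/* hypothetical per-controller state */
static int  mydrv_flush_cache(struct mydrv_softc *);	/* issues one "flush cache" firmware command */
static void mydrv_start_io(struct mydrv_softc *, struct bio *);

static void
mydrv_strategy(struct bio *bp)
{
	struct mydrv_softc *sc = bp->bio_disk->d_drv1;

	switch (bp->bio_cmd) {
	case BIO_FLUSH:
		/* one command, whole cache: there is no per-range flush */
		biofinish(bp, NULL, mydrv_flush_cache(sc));
		return;
	case BIO_READ:
	case BIO_WRITE:
		mydrv_start_io(sc, bp);
		return;
	default:
		biofinish(bp, NULL, EOPNOTSUPP);
		return;
	}
}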
