Date:      Sat, 03 Aug 2013 17:38:51 -0500
From:      Karl Denninger <karl@denninger.net>
To:        freebsd-fs@freebsd.org
Subject:   ZFS option vfs.zfs.cache_flush_disable=1
Message-ID:  <51FD867B.8070803@denninger.net>

Hi folks,

I'm trying to run down a rather odd bit of ugly interaction between an
ARECA 1680-IX adapter (in JBOD mode) and ZFS.

The adapter has both a fair bit of cache memory on it and a BBU
(battery backup unit).  If I turn off write-back caching, ZFS
performance goes straight into the toilet.  With it on, however, it
appears that cache flushes are being honored and, what's worse, when
they come down the pipe they force all further I/O to the adapter to
cease until the ENTIRE cache is flushed.

This can and occasionally does lead to degenerate cases where very
severe performance problems ensue.

I have no meaningful way to adjust behavior on the ARECA adapter that
appears to matter.  I could go to an LSI 2008-based HBA (no cache
memory at all, just an adapter that will do SATA-3), and I have one
in-building, but while it's quite fast when left "alone," I am using
GELI to encrypt the packs.  With the ARECA adapter the read-ahead on
the adapter keeps the pipeline full and GELI wallops the CPU -- but
this box has a lot of CPU to burn, so performance remains excellent.
With the HBA and no cache memory, and thus no read-ahead, that's not
true and performance suffers fairly severely.
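
For context, the pools sit on GELI providers, roughly like this
(device names and options are illustrative, not my exact layout):

    # one-time init of each provider (4K sectors); prompts for a passphrase
    geli init -s 4096 /dev/da1
    # attach it; this creates /dev/da1.eli
    geli attach /dev/da1
    # the pool then lives on the .eli devices
    zpool create tank da1.eli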

In addition, the LSI HBA has a habit of "jumping" disk attachment
points around -- that is, a disk may come up as "da1" on one boot and
"da2" on another.  I label all my drives, but this can still bite me
if for some reason I have a failure, have to swap a disk, and pull
the wrong one.
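
(For the curious, the labeling is just glabel, something like the
following -- label and device names illustrative:)

    # write a stable GEOM label onto the disk; it appears as /dev/label/bay0
    glabel label bay0 /dev/da1
    # reference the label in the pool instead of the daN number
    zpool create tank /dev/label/bay0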

Turning on vfs.zfs.cache_flush_disable=1 results in a *very*
significant difference in performance on this adapter and appears to
stop the bad interaction between ZFS and the adapter's cache buffer RAM.
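
For reference, I'm setting it as a loader tunable (whether it can
also be flipped at runtime via sysctl appears to depend on the
FreeBSD version):

    # /boot/loader.conf -- tell ZFS not to issue cache flushes to the vdevs
    vfs.zfs.cache_flush_disable="1"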

So this leads to the question -- if I enable
vfs.zfs.cache_flush_disable=1, *what am I risking*, given that I have
a BBU on the adapter?  It is not at all clear whether this is a safe
option to turn on *if* the adapter has battery-backed memory, or
whether a crash could leave me with a corrupt and unmountable pool.

Thanks in advance!

-- 
Karl Denninger
karl@denninger.net
/Cuda Systems LLC/


