Date:      Wed, 19 Dec 2012 18:07:42 +0100
From:      Tom <freebsdlists@bsdunix.ch>
To:        freebsd-fs@freebsd.org
Cc:        matt.churchyard@userve.net
Subject:   Re: ZFS: cache_flush_disable
Message-ID:  <50D1F45E.3010402@bsdunix.ch>
In-Reply-To: <36B5948F85049844985F24A1CE55B2CBEB980A@USDSERVER.usd.local>
References:  <36B5948F85049844985F24A1CE55B2CBEB980A@USDSERVER.usd.local>

Hi Matt,

On 19.12.12 15:52, Matt Churchyard wrote:
> Can anyone give some knowledgeable info on the
> vfs.zfs.cache_flush_disable variable? 
> There's a lot of talk going on in the FreeBSD forums about how to get
> the best performance but no-one really knows exactly what these settings
> do (searching the net doesn't really find anyone who knows either).
> There's a lot of mis-information going around as well and so I'd like to
> start trying to get some concrete information if possible.
> 
> I'd like to know
> 
> 1) What this setting actually does
> 2) If it's safe assuming you have battery backed ZIL (e.g. supercap SSD)
> 
> I am under the impression it controls whether ZFS asks disks to flush
> their caches or not.
> However, there's an entire function in zil.c (zil_add_block) that is
> skipped when this flag is set. I'm not sure what this function does but
> it seems to do more than just flush caches?


vfs.zfs.cache_flush_disable is the FreeBSD equivalent of
zfs_nocacheflush on Solaris.
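For reference, a quick way to inspect the tunable and set it persistently; it is a boot-time loader tunable, so the persistent setting goes in /boot/loader.conf (commands as shipped with FreeBSD, run as root):

```shell
# Show the current value (0 = cache flushes enabled, the default)
sysctl vfs.zfs.cache_flush_disable

# Persist a change across reboots -- it is a loader tunable:
echo 'vfs.zfs.cache_flush_disable="1"' >> /boot/loader.conf
```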

From the Oracle Documentation
(http://docs.oracle.com/cd/E26502_01/html/E29022/chapterzfs-6.html)

ZFS is designed to work with storage devices that manage a disk-level
cache. ZFS commonly asks the storage device to ensure that data is
safely placed on stable storage by requesting a cache flush. For JBOD
storage, this works as designed and without problems. For many
NVRAM-based storage arrays, a performance problem might occur if the
array takes the cache flush request and actually does something with it,
rather than ignoring it. Some storage arrays flush their large caches
despite the fact that the NVRAM protection makes those caches as good as
stable storage.

ZFS issues infrequent flushes (every 5 seconds or so) after the uberblock
updates. The flushing infrequency is fairly inconsequential so no tuning
is warranted here. ZFS also issues a flush every time an application
requests a synchronous write (O_DSYNC, fsync, NFS commit, and so on).
The completion of this type of flush is waited upon by the application
and impacts performance. Greatly so, in fact. From a performance
standpoint, this neutralizes the benefits of having NVRAM-based
storage.


AFAIK this can also affect SCSI controllers that have their own flush
algorithm: some controllers simply ignore these SCSI flush commands,
but some do not. Storage arrays are a separate case again, since they
have a battery-backed cache.


> 
> Regarding question 2, if this variable affects flushing of pool disks,
> not just the ZIL, I can imagine a scenario where ZFS flushes a
> transaction to the pool, but a power loss occurs while some of that data
> is still sat in the disk cache. Usually ZFS would flush the cache and
> only assume the write was complete at that point.
> Assuming it affects all disks, is it possible to turn off write cache
> per disk? I have seen hw.ata.wc but I'm not sure this still applies and
> if it does it'll affect the ZIL which you'd want to leave enabled if it
> was protected.

Cache flushing is commonly done as part of the ZIL operations.
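On the per-disk write cache question: for SCSI/SAS disks on FreeBSD, camcontrol can show (and with -e edit) the caching mode page, while hw.ata.wc covers ATA disks. A sketch, assuming a da0 device:

```shell
# Inspect the caching mode page (0x08); WCE=1 means the drive
# write cache is enabled
camcontrol modepage da0 -m 0x08

# For ATA disks, hw.ata.wc is a boot-time tunable set in
# /boot/loader.conf:
# hw.ata.wc="0"   # disable the drive write cache
```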

This should answer your question:
http://thr3ads.net/zfs-discuss/2011/02/582087-Understanding-directio-O_DSYNC-and-zfs_nocacheflush-on-ZFS

Regards,
Tom



