Date:      Thu, 21 Sep 2000 20:32:41 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        mbendiks@eunet.no (Marius Bendiksen)
Cc:        Stephen.Byan@quantum.com (Stephen Byan), sos@freebsd.dk ('Soren Schmidt'), fs@FreeBSD.ORG, sos@FreeBSD.ORG, freeBSD-scsi@FreeBSD.ORG
Subject:   Re: disable write caching with softupdates?
Message-ID:  <200009212032.NAA15839@usr08.primenet.com>
In-Reply-To: <Pine.BSF.4.05.10009211537460.38959-100000@login-1.eunet.no> from "Marius Bendiksen" at Sep 21, 2000 03:38:36 PM

> > > OK, I played a bit with that, the only info I can see I get from the
> > > higher levels is the BIO_ORDERED bit, so I tried to flush the cache
> > > each time I get one of those, _bad_ idea, 10% performance loss...
> 
> > That's the price of having a recoverable file system. See Seltzer, Ganger,
> 
> Not necessarily.
> 
> > Contrast this 10% performance hit versus what you get when you disable
> > caching entirely.
> 
> I think you will see that on some drives, this may have a greater
> performance impact than not caching at all.

There will always be a performance impact, since this will, of
necessity, stall the write pipeline for the synchronization, unless
there are a lot of pending I/O's that are unrelated in the
dependency graph.

At least in this way, soft updates is better than delayed ordered
writes (DOW -- patented by USL, and used without permission in
ReiserFS), in that DOW will stall all I/O when hitting a
synchronization point, whereas SU will only stall dependent I/O.
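
To make the distinction concrete, here is a toy model (not kernel
code; the "depends_on_sync" flag is just a stand-in for having an
edge in the soft updates dependency graph, and the names are made up
for illustration):

#include <stddef.h>

/*
 * Toy model only, not FreeBSD kernel code: at a synchronization
 * point, DOW-style ordering holds back every pending I/O, while
 * soft updates only holds back the I/O that actually depends on
 * the synchronized write.
 */
struct pending_io {
	struct pending_io *next;
	int depends_on_sync;	/* stand-in for a dependency-graph edge */
	int stalled;
};

/* DOW-style: everything waits at the synchronization point. */
static void
stall_all(struct pending_io *list)
{
	struct pending_io *p;

	for (p = list; p != NULL; p = p->next)
		p->stalled = 1;
}

/* Soft-updates-style: only dependent I/O is held back. */
static void
stall_dependent(struct pending_io *list)
{
	struct pending_io *p;

	for (p = list; p != NULL; p = p->next)
		p->stalled = p->depends_on_sync;
}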

That said, the question is whether the drive will flush the
cache and mark it invalid, or will merely flush the cache to
disk, and leave the cache contents intact.  If it does the former,
then there could be additional overhead for subsequent reads.

Really, the OS needs to know the drive's cache strategy, and follow
the same strategy itself, both to reduce the number of drive-to-OS
transactions and to remove the problem of the drive having to go
back to the well for a subsequent read, if the cache contents are
effectively discarded.

Frankly, I find it hard to believe that a cache flush would result
in anything other than a mere write-back, i.e. that any drive would
be so dumb as to discard the cached data.  But there might be other
consequences, since the cache on the drive may all get marked clean,
which would result in a natural disordering of the reuse of cache
buffers.  This may be more or less optimal: it depends on the usage
patterns of the OS.
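
For reference, the flush in question is just a SYNCHRONIZE CACHE
command at the SCSI level (ATA drives have an analogous flush
command), and the command itself says nothing about whether the
drive keeps or discards the cached data afterward, which is exactly
the open question.  A sketch of the CDB, with the plumbing to
actually send it omitted:

#include <stdint.h>

/*
 * Illustrative only: build a SYNCHRONIZE CACHE(10) CDB (opcode
 * 0x35).  With the LBA and block count left at zero, the drive is
 * asked to flush its entire write cache; whether it also
 * invalidates the cached data afterward is up to the drive.
 */
static void
build_sync_cache10(uint8_t cdb[10])
{
	cdb[0] = 0x35;			/* SYNCHRONIZE CACHE(10) */
	cdb[1] = 0x00;			/* IMMED=0: wait for completion */
	cdb[2] = cdb[3] = 0x00;		/* LBA = 0 */
	cdb[4] = cdb[5] = 0x00;
	cdb[6] = 0x00;			/* reserved */
	cdb[7] = cdb[8] = 0x00;		/* block count = 0: whole cache */
	cdb[9] = 0x00;			/* control */
}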

So minimally, some experimentation should be done with the drive and
the OS: the OS could use mode page 2 to obtain the drive geometry
for variable-geometry drives, apply the standard seek optimizations
that are currently disabled in FFS, and place the OS caching on a
track granularity to match the cache characteristics of most modern
drives.
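
Roughly, the mode-page read and the geometry decode would look
something like the following.  This is a sketch only: the CAM
plumbing to actually send the CDB is omitted, the function names are
made up, and the field offsets shown are those of the rigid disk
geometry style page, so which page a given drive reports useful
geometry in is left to the caller.

#include <stdint.h>

/* Build a MODE SENSE(6) CDB (opcode 0x1A) asking for current values. */
static void
build_mode_sense6(uint8_t cdb[6], uint8_t page_code, uint8_t alloc_len)
{
	cdb[0] = 0x1a;			/* MODE SENSE(6) */
	cdb[1] = 0x08;			/* DBD: skip block descriptors */
	cdb[2] = page_code & 0x3f;	/* PC=00b (current) | page code */
	cdb[3] = 0x00;
	cdb[4] = alloc_len;		/* allocation length */
	cdb[5] = 0x00;			/* control */
}

/*
 * Decode cylinders/heads from a rigid-disk-geometry-style page,
 * given the raw page bytes with the mode parameter header already
 * stripped.
 */
static void
decode_geometry(const uint8_t *pg, uint32_t *cyls, uint32_t *heads)
{
	*cyls  = ((uint32_t)pg[2] << 16) | ((uint32_t)pg[3] << 8) | pg[4];
	*heads = pg[5];
}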

---

On a semi-related note, I have done some experimentation with some
(admittedly older) code that gave ownership of the vnode to the FS,
per the SunOS and USL approaches, i.e., instead of two separate
allocations:

	,-------. <-.       ,-->,-------.
	| inode |   |       |   | vnode |
	|       |   |       |   |       |
	|       |   `-----------|       |
	|       |-----------'   |       |
	|       |               |       |
	`-------'               `-------'

Having a single allocation:

	,-------.<---.
	| vnode |    |
	|       |    |
	|       |--. |
	|       |  | |
	|-------|<-' |
	| inode |    |
	|       |    |
	|       |----'
	|       |
	`-------'

This prevents vclean() from disassociating valid cached data from
ihash objects by reclaiming the vnode out from under the inode
without the inode also being reclaimed, making the operation
idempotent in both directions.  It also allows the ihash() to be
removed entirely, since the vnode is allocated as part of allocating
the in core inode (this avoids the SVR3 inode size limitation
problem, which the Berkeley people resolved via a divorce of the two
structures).
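
In code terms, the single-allocation layout is roughly the following
(illustrative only: the structure and macro names are made up, and
trivial stand-ins are used here for the real struct vnode and struct
inode):

#include <stddef.h>
#include <stdlib.h>

/* Trivial stand-ins for the kernel's struct vnode and struct inode. */
struct vnode { void *v_data; };
struct inode { unsigned long i_number; };

/*
 * One allocation covers both objects; the vnode stays the first
 * member so either pointer can be recovered from the other without
 * an ihash lookup, and neither can be reclaimed out from under the
 * other.
 */
struct fsnode {
	struct vnode	fn_vnode;	/* must remain first */
	struct inode	fn_inode;
};

#define	VTOFSNODE(vp)	((struct fsnode *)(vp))
#define	ITOFSNODE(ip)	\
	((struct fsnode *)((char *)(ip) - offsetof(struct fsnode, fn_inode)))

static struct fsnode *
fsnode_alloc(unsigned long ino)
{
	struct fsnode *fn;

	fn = calloc(1, sizeof(*fn));
	if (fn == NULL)
		return (NULL);
	fn->fn_inode.i_number = ino;
	fn->fn_vnode.v_data = &fn->fn_inode;	/* one lifetime for both */
	return (fn);
}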

I measured a better than 30% performance increase on heavily
loaded systems by doing this (this was 3.2 code, so take that
for what it's worth, which is, I think, a lot, since things
have not changed _that_ much).


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-fs" in the body of the message



