Date:      Sat, 14 Feb 1998 03:16:48 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        ccsanady@friley585.res.iastate.edu (Chris Csanady)
Cc:        julian@whistle.com, current@FreeBSD.ORG
Subject:   Re: Working (apparently) soft-update code available.
Message-ID:  <199802140316.UAA15274@usr06.primenet.com>
In-Reply-To: <199802140239.UAA02339@friley585.res.iastate.edu> from "Chris Csanady" at Feb 13, 98 08:39:24 pm

> I have gotten the following panic many times.  It seems to occur even
> when I do not have soft updates turned on for any of my file systems. :/
> 
> panic: softdep_disk_io_initiation: read
> 
> From looking at the code, this seems to indicate that dependencies are
> being formed for read operations as well.  This seems somewhat
> disturbing..  any ideas?

Not about the panic.  But the read dependencies are real.  Consider
the case where I'm making a change to a directory, and I delete one
file then create another.  Both go in the same page, but not in the
same directory block.  Pages are read-before-write for sub-page
sized changes.
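
To make the read-before-write point concrete, here's a rough userland
sketch of a sub-block update (the block size, file name, and helper are
made up for illustration; the kernel does this through the buffer cache,
not read/write syscalls):

#include <sys/types.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define BLKSIZE 8192	/* illustrative block size */

/*
 * Patch "len" bytes at offset "off" inside block "blkno" of a file.
 * Because only part of the block changes, the whole block has to be
 * read in, modified in core, and written back out.
 */
int
patch_block(const char *path, off_t blkno, size_t off, const void *data,
    size_t len)
{
	char buf[BLKSIZE];
	int fd;

	if (off + len > BLKSIZE)
		return (-1);
	if ((fd = open(path, O_RDWR)) == -1)
		return (-1);
	/* Read-before-write: fetch the whole block... */
	if (pread(fd, buf, BLKSIZE, blkno * BLKSIZE) != BLKSIZE) {
		close(fd);
		return (-1);
	}
	/* ...patch only the bytes that changed... */
	memcpy(buf + off, data, len);
	/* ...and push the whole block back out. */
	if (pwrite(fd, buf, BLKSIZE, blkno * BLKSIZE) != BLKSIZE) {
		close(fd);
		return (-1);
	}
	return (close(fd));
}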

You really need to give more information before it's possible to
intelligently track down the bug; for instance, what exactly were you
doing when you got the panic?

I can make a guess, though:

In practice, the stub routines will not get called unless soft updates
are turned on.  Are you using the stubbed version of the calls and then
turning soft updates on with only the stubs compiled in?  That won't
work.
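
From memory (the names and message are approximate; this is not the
actual stub file), each stub is just a hook that lets the kernel link
without the real soft updates code, and it panics if it is ever reached:

#include <sys/param.h>
#include <sys/systm.h>

/*
 * Sketch of a soft updates stub entry point (kernel context; name
 * approximate).  If you mount with soft updates enabled but only the
 * stubs are compiled in, the first call into one of these panics.
 */
void
softdep_initialize(void)
{

	panic("softdep_initialize: soft updates not compiled in");
}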


> Either way, I was curious about the sync/async stats given in mount.
> It seems that some operations are still being done sync?!  Also,
> perhaps the stats should be an option to mount rather than given
> by default.. 

This is a case of ungathered writes.  There's a pool of time slots,
and items are placed in slots "some time in the future".  The effect
of the syncer is to actually "gather" writes to objects that are
scheduled to be written but have not been written yet.

If you cause vnode recycling (of vnodes with dirty buffers), or your
dependency tree depth exceeds the number of slots, then you will get
dependency-forced writes before the time limit expires.
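
Roughly, the slot mechanism looks like the following sketch (names and
sizes invented, details from memory): work items land in a wheel of
per-second slots some delay in the future, and once a second the syncer
processes one slot, so repeated updates to the same object inside the
window collapse into a single write.

#include <stddef.h>
#include <sys/queue.h>

#define NSLOTS	32		/* size of the wheel; the real global is tunable */

struct workitem {
	LIST_ENTRY(workitem)	wi_list;
	/* ... dependency state for the object to be written ... */
};

static LIST_HEAD(, workitem) slot[NSLOTS];
static int now;			/* slot currently being processed */

/* Schedule a write "delay" seconds in the future. */
static void
schedule_write(struct workitem *wi, int delay)
{

	LIST_INSERT_HEAD(&slot[(now + delay) % NSLOTS], wi, wi_list);
}

/* Called once a second: issue everything that has come due. */
static void
run_slot(void)
{
	struct workitem *wi;

	now = (now + 1) % NSLOTS;
	while ((wi = LIST_FIRST(&slot[now])) != NULL) {
		LIST_REMOVE(wi, wi_list);
		/* issue the gathered write for wi here */
	}
}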

There are actually two things you could tune to avoid this.  One is a
global (I can't remember the name without the code in front of me,
sorry) whose value you should increase.  The other is to push the
scheduler window into the future, so events still get inserted in the
same order, but the events you process are delayed until later (it's a
bit of work, but I'll probably play with it at some point).
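
In terms of the sketch above, the second option amounts to biasing the
insertion point by a fixed window (the "window" variable here is
hypothetical):

static int window = 4;		/* extra seconds of delay */

/* Same as schedule_write(), but pushed "window" seconds further out. */
static void
schedule_write_delayed(struct workitem *wi, int delay)
{

	LIST_INSERT_HEAD(&slot[(now + delay + window) % NSLOTS], wi, wi_list);
}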


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-current" in the body of the message


