Date: Thu, 09 Mar 95 14:19:28 -0800
From: Bakul Shah <bakul@netcom.com>
To: terry@cs.weber.edu (Terry Lambert)
Cc: freebsd-bugs@FreeBSD.org
Subject: Re: QIC-80 problem
Message-ID: <199503092219.OAA20388@netcom17.netcom.com>
In-Reply-To: Your message of "Thu, 09 Mar 95 10:16:06 MST." <9503091716.AA06567@cs.weber.edu>
> Simple; serialization doesn't require that the reads be in any
> particular order, only that the writes be in order. All you
> really have to do is keep track of where the reads come from
> to make sure they are written in the right order; remember that
> it is the disk I/O on the read that you are trying to avoid,
> not the system call overhead.
Maybe all that lack of sleep is making me especially slow,
but sorry, I just don't see how.  Let us look at my
example again:
    tar zcf - ... | team > /dev/tape
There is only one input and one output to team.  Now
*ideally* we'd like to say: put input block n in buffer 0,
block n+1 in buffer 1, and so on, and oh, by the way, call
me when a buffer is filled or eof is reached.  Similarly for
the output side: when we are notified that block n is filled, we
tell the output side to write it out in sequence and to tell us
when it is done so that we can recycle the buffer -- this is pretty
much what we do at the driver level if the controller supports
queueing.
But this is *not* what we can do at user level under Unix
with any number of dups and O_ASYNC in a single process
[actually O_ASYNC is a misnomer; it should really be called
O_SIGIO or O_SIGNAL_ME_WHEN_IO_IS_POSSIBLE or something].
Or have you guys added real asyncio to FreeBSD while I was
not looking? If so, that is great news!!
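Just so we are talking about the same thing, here is a rough sketch of
what I mean by real async IO, written against the POSIX.1b aio_write
interface (which, as far as I know, FreeBSD does not have).  The
blocksize, the plain double buffering and the minimal error handling
are all arbitrary; the only point is that a single process can overlap
reading the next block from the pipe with the asynchronous write of the
previous block to the tape:

    #include <aio.h>
    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BSIZE (64 * 1024)          /* blocksize -- arbitrary            */

    /* Fill buf from fd until full or EOF; returns bytes read, -1 on error. */
    static ssize_t fillbuf(int fd, char *buf, size_t want)
    {
        size_t got = 0;
        ssize_t r = 0;

        while (got < want && (r = read(fd, buf + got, want - got)) > 0)
            got += (size_t)r;
        return r < 0 ? -1 : (ssize_t)got;
    }

    int main(void)
    {
        static char buf[2][BSIZE];
        struct aiocb cb;
        const struct aiocb *list[1];
        int cur = 0, busy = 0, err;
        ssize_t n;

        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = STDOUT_FILENO;              /* e.g. > /dev/tape      */
        cb.aio_sigevent.sigev_notify = SIGEV_NONE;  /* we use aio_suspend    */
        list[0] = &cb;

        for (;;) {
            /* Read the next block while the previous one is still writing. */
            n = fillbuf(STDIN_FILENO, buf[cur], BSIZE);

            if (busy) {                             /* reap the old write    */
                while (aio_suspend(list, 1, NULL) < 0 && errno == EINTR)
                    ;
                err = aio_error(&cb);
                if (err != 0) {
                    fprintf(stderr, "aio_write: %s\n", strerror(err));
                    return 1;
                }
                (void)aio_return(&cb);      /* a real program checks for a
                                               short write here              */
            }
            if (n <= 0)
                return n < 0 ? 1 : 0;               /* EOF or read error     */

            cb.aio_buf = buf[cur];                  /* queue block for tape  */
            cb.aio_nbytes = (size_t)n;              /* offset stays 0; fine
                                                       for a tape, not for a
                                                       regular file          */
            if (aio_write(&cb) < 0) {
                perror("aio_write");
                return 1;
            }
            busy = 1;
            cur ^= 1;                               /* fill the other buffer */
        }
    }

Nothing FreeBSD can run today, of course; it is just to pin down the
kind of interface I am asking for.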
> Admittedly, the context switch issue is _relatively_ less of a
> problem (I won't say that it isn't a problem entirely). But
> there is also the issue of the token passing using pipes between
> the process, etc.. An implementation using aio in a single
> process context would avoid a lot of system call overhead as
> well as context switch overhead in the token passing, and also
> avoid things like pipe read/write latency, since team does NOT
> interleave that.
team interleaves pipe IO *if* the incoming data or outgoing data
is via pipes.  team cannot interleave the reads and writes of the control
pipes because they are used for synchronization. The core
loop for a team member is something like this:
member[i]:
    loop {
        wait for a read token from pipe[i];
        // XXX I am ignoring eof or error processing...
        read from input[i];
        send the read token to pipe[(i+1) mod n];    // n = number of members
        wait for a write token from pipe[i];
        write to output[i];
        send the write token to pipe[(i+1) mod n];
    }
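In case it helps, here is a stripped-down, self-contained imitation of
that loop in C -- not the real team source, just a reconstruction of the
idea, with two token rings (one ordering the reads, one ordering the
writes), NMEMB and BSIZE picked arbitrarily, and shutdown kept
deliberately naive (every process leaves all pipe descriptors open, so
the final token forwards at EOF never hit a closed pipe):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NMEMB 4                 /* number of team members (arbitrary)    */
    #define BSIZE (16 * 1024)       /* blocksize (arbitrary)                 */

    /* Fill buf from fd until full or EOF; returns bytes read, -1 on error.  */
    static ssize_t fillbuf(int fd, char *buf, size_t want)
    {
        size_t got = 0;
        ssize_t r = 0;

        while (got < want && (r = read(fd, buf + got, want - got)) > 0)
            got += (size_t)r;
        return r < 0 ? -1 : (ssize_t)got;
    }

    static void member(int i, int rtok[][2], int wtok[][2])
    {
        static char buf[BSIZE];
        char tok = 't';
        int next = (i + 1) % NMEMB;
        ssize_t n;

        for (;;) {
            read(rtok[i][0], &tok, 1);      /* wait for the read token       */
            n = fillbuf(0, buf, BSIZE);     /* stdin is shared by all members */
            write(rtok[next][1], &tok, 1);  /* let the next member read      */

            read(wtok[i][0], &tok, 1);      /* wait for the write token      */
            if (n > 0)
                write(1, buf, (size_t)n);   /* stdout is the tape; a real
                                               program loops on short writes */
            write(wtok[next][1], &tok, 1);  /* let the next member write     */
            if (n <= 0)
                _exit(n < 0 ? 1 : 0);       /* EOF (or read error)           */
        }
    }

    int main(void)
    {
        int rtok[NMEMB][2], wtok[NMEMB][2];
        char tok = 't';
        int i;

        for (i = 0; i < NMEMB; i++)
            if (pipe(rtok[i]) < 0 || pipe(wtok[i]) < 0) {
                perror("pipe");
                return 1;
            }

        for (i = 0; i < NMEMB; i++)
            switch (fork()) {
            case -1:
                perror("fork");
                return 1;
            case 0:
                member(i, rtok, wtok);      /* never returns                 */
            }

        /* Seed the rings: member 0 reads first and writes first.            */
        write(rtok[0][1], &tok, 1);
        write(wtok[0][1], &tok, 1);

        while (wait(NULL) > 0)
            ;
        return 0;
    }

Run it as `tar zcf - ... | a.out > /dev/tape'; block k is handled by
member k mod NMEMB, and the two tokens make sure both the reads and the
writes happen in block order.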
I grant your basic point that a single process implementation
will avoid a lot of context switch and syscall overhead. But
I just do not see how it can be done (even if we give up
on portability).
Note that all that context switching + syscall overhead is a
problem only if the read/write quantum is so small that a
tape device can do it in a few milliseconds.
On my 25 MHz 486 the overhead of team is about 1.2 ms *per* block
of data (and *regardless* of the number of team processes).
As long as your tape read/write takes at least 10 times as
long, you should be pretty safe on a lightly loaded system.
I don't know the data rate of a QIC-80, but on a 4mm DAT that
translates to a minimum blocksize of 250 KB/s * 12 ms, or about
3 Kbytes, on my machine!
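If you want to plug in the real QIC-80 rate when you find it, the
arithmetic is just this (the 1.2 ms and 250 KB/s are the figures above;
the 10x budget keeps the overhead under roughly 10%):

    #include <stdio.h>

    int main(void)
    {
        double overhead = 1.2e-3;           /* team overhead per block, sec  */
        double rate     = 250.0 * 1024.0;   /* drive data rate, bytes/sec    */
        double minblock = rate * 10.0 * overhead;   /* transfer >= 10 * ovhd */

        printf("minimum blocksize ~ %.0f bytes (%.1f Kbytes)\n",
               minblock, minblock / 1024.0);
        return 0;
    }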
> Anyway, just an idea -- I'd prefer that the ft driver become
> immune to system load, actually -- it would put it a hell of a
> lot in advance of many commercial ft driver (some commercial
> UNIX implementations still don't even have an ft driver at all,
> let alone one that is robust under heavy system load. 8-).
No disagreement here.
