Date:      Thu, 14 Feb 2002 15:43:14 -0500
From:      Carlos Ugarte <cau@cs.arizona.edu>
To:        swear@blarg.net (Gary W. Swearingen)
Cc:        freebsd-chat@FreeBSD.ORG
Subject:   Re: How do basic OS principles continue to improve?
Message-ID:  <15468.8546.298786.500178@pc-ugarte.research.att.com>
In-Reply-To: <d1vgd1szmm.gd1@localhost.localdomain>
References:  <20020213192510.A46224@dogma.freebsd-uk.eu.org> <d1vgd1szmm.gd1@localhost.localdomain>

Gary W. Swearingen writes:
 > 
 > For example, if I were a kernel hacker, I might have done something
 > with a report I read on a school project in which the guy had a
 > compiler inside his kernel and had it compile optimised code as
 > needed.  I didn't see, or don't remember, the particular techniques
 > he used, but I suppose it was able to avoid indirect addressing or
 > something.  He had several techniques, as I recall, and he reported
 > very significant speed-ups.  (But then, I'd guess that most
 > bottlenecks are hampered less by inefficient code than by
 > inefficient algorithms; I'd like to read his report again.)  Sadly,
 > I lost the URL a couple of years ago and a quick Google search
 > didn't find it.

That sounds like Henry Massalin's dissertation work on the Synthesis
kernel (early 90s, Columbia University).  If I remember correctly, it
was implemented in 68000 assembly ("a fast prototyping language"); it
would take frequently called kernel functions and replace them with
code specialized for that particular type of invocation.  You took a
hit for generating the new code, but the cost was relatively low, and
the specialized call was used so often that overall performance
improved.
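
To give a flavor of the idea, here's a toy sketch in C.  It is not how
Synthesis worked (Synthesis stamped out fresh machine code at run
time; this merely selects among precompiled variants), but it shows
the shape of the trade-off: pay a one-time specialization cost when a
file is opened, and skip a check on every subsequent call.

    /* Toy illustration of call specialization, not Synthesis itself:
     * the general read path re-checks a mode flag on every call,
     * while a specialized path is picked once, at open time. */
    #include <stddef.h>
    #include <stdio.h>

    struct file;
    typedef size_t (*read_fn)(struct file *, char *, size_t);

    struct file {
        int     seekable;   /* property known at open time */
        long    pos;
        read_fn read;       /* specialized entry point */
    };

    /* General routine: branches on every invocation. */
    static size_t read_general(struct file *f, char *buf, size_t n)
    {
        if (f->seekable)
            f->pos += (long)n;          /* position bookkeeping */
        (void)buf;                      /* ... copy n bytes ... */
        return n;
    }

    /* Specialized routine: the seekable check is "compiled out". */
    static size_t read_stream(struct file *f, char *buf, size_t n)
    {
        (void)f; (void)buf;             /* ... copy n bytes ... */
        return n;
    }

    /* "Open" pays the one-time specialization cost. */
    static void file_open(struct file *f, int seekable)
    {
        f->seekable = seekable;
        f->pos = 0;
        f->read = seekable ? read_general : read_stream;
    }

    int main(void)
    {
        struct file f;
        char buf[16];

        file_open(&f, 0);               /* stream: fast path chosen */
        printf("read %zu bytes\n", f.read(&f, buf, sizeof buf));
        return 0;
    }

Synthesis did the analogous thing one level down, emitting fresh
machine code for the specialized case instead of picking a function
pointer.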

If this wasn't it, there are a few other dynamic/run-time code
generation projects around; Dawson Engler did some work on these in
the mid-90s while he was still at MIT.

More generally, my impression is the same as that posted by Terry.
Most cutting edge research is done by small groups in experimental
environments.  It takes a while for their work to propagate to more
popular systems.  For example, I believe the KSE work is based in part
on the work done at the University of Washington in the early 90s
("Scheduler Activations").

Another example comes from an article posted today on cnn.com:
Microsoft's Farsite system (I can't tell if it's expected in 2006 or
in ten years) will make use of "experimental operating system
technology called Byzantine fault-tolerant protocols".  Though work on
such protocols continues even today, Byzantine faults were first
identified some 20 years ago.
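
The classic result in that area is that tolerating f arbitrary
("Byzantine") failures requires at least 3f + 1 replicas, and in
practical protocols of this kind (e.g., Castro and Liskov's recent
work) a client can accept a reply once f + 1 identical copies of it
arrive, since at most f of the replicas can be lying.  A trivial
sketch of that accept rule in C (just the counting argument, not a
real protocol):

    #include <stdio.h>

    #define F 1                /* faults tolerated */
    #define N (3 * F + 1)      /* replicas required */

    /* Return the value with at least F + 1 matching votes, or -1. */
    static int accept_reply(const int replies[], int count)
    {
        int i, j, matches;

        for (i = 0; i < count; i++) {
            matches = 0;
            for (j = 0; j < count; j++)
                if (replies[j] == replies[i])
                    matches++;
            if (matches >= F + 1)
                return replies[i];
        }
        return -1;
    }

    int main(void)
    {
        int replies[N] = { 42, 42, 7, 42 };  /* one faulty replica */
        printf("accepted: %d\n", accept_reply(replies, N));
        return 0;
    }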

On a different note, there seems to be less emphasis on building new
research systems from scratch; it is more and more common to see the
experimental environments I mentioned above make use of systems such
as FreeBSD and Linux (NetBSD, OpenBSD and the various Microsoft
products aren't as prominent).  In these cases the "propagation lag"
can be cut substantially, if the project leads are aware of the
research and deem it worthy of being merged into the official tree.

If you're interested in seeing what kind of stuff is considered
"cutting edge research", you might look for the proceedings of various
conferences and workshops.  SOSP, OSDI, USENIX (Technical), and HotOS
would be the ones I'd look at, though there are many others.

Carlos A. Ugarte                                    cau@cs.arizona.edu
