Date: Thu, 14 Feb 2002 15:43:14 -0500
From: Carlos Ugarte
To: swear@blarg.net (Gary W. Swearingen)
Cc: freebsd-chat@FreeBSD.ORG
Subject: Re: How do basic OS principles continue to improve?

Gary W. Swearingen writes:
>
> For example, if I were a kernel hacker, I might have done something with
> a report I read on a school project in which the guy had a compiler
> inside his kernel and had it compile optimised code as needed. I didn't
> see or don't remember the particular techniques he used, but I suppose
> it was able to avoid indirect addressing or something. He had several
> techniques, as I recall. He reported very significant speed-ups. (But
> then, I'd guess that most bottlenecks are hampered not so much by
> inefficient code as by inefficient algorithms, but I'd like to read his
> report again.) Sadly, I lost the URL a couple of years ago and a quick
> google didn't find it.

That sounds like Henry Massalin's dissertation work on the Synthesis
kernel (early 90s at Columbia University). If I remember correctly, it
was implemented in 68000 assembly code ("a fast prototyping language");
it would take frequently called kernel functions and replace them with
code specialized for that particular type of invocation. You took a hit
for generating the new code, but the cost was relatively low, and the
specialized call was used so often that overall performance improved.
(A toy sketch of the specialization idea appears below.) If this wasn't
it, there are a few other dynamic/run-time code generation projects
around; Dawson Engler did some work on these in the mid 90s while he
was still at MIT.

More generally, my impression is the same as that posted by Terry. Most
cutting-edge research is done by small groups in experimental
environments, and it takes a while for their work to propagate to more
popular systems. For example, I believe the KSE work is based in part on
work done at the University of Washington in the early 90s ("Scheduler
Activations"). Another example, found in an article posted today on
cnn.com: Microsoft's Farsite system (I can't tell whether it's expected
in 2006 or in ten years) will make use of "experimental operating system
technology called Byzantine fault-tolerant protocols." Though work on
such protocols continues even today, Byzantine faults were first
identified some 20 years ago.

On a different note, there seems to be less emphasis on building new
research systems from scratch; it is more and more common to see the
experimental environments I mentioned above make use of systems such as
FreeBSD and Linux (NetBSD, OpenBSD and the various Microsoft products
aren't as prominent).
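To make the specialization idea above concrete, here is a toy sketch in
Python. It has nothing to do with the actual Synthesis implementation
(which generated 68000 machine code inside the kernel); it only shows
the split between a generic call path and a version generated at "open"
time once the invariant parameters are known. All of the names
(generic_read, synthesize_read, and so on) are made up for this example.

    # Toy illustration of run-time code specialization, loosely in the
    # spirit of Synthesis. Not kernel code; names and structure are
    # invented for this sketch.

    def generic_read(buffers, index, blocking, offset, length):
        # Generic path: every call repeats the same checks and lookups.
        if blocking:
            data = buffers[index]
        else:
            data = buffers.get(index, b"")
        return data[offset:offset + length]

    def synthesize_read(buffers, index, blocking):
        # "Code generation" at open time: build source specialized for
        # this particular descriptor, compile it once, and return the
        # resulting function.
        source = (
            "def specialized_read(offset, length):\n"
            "    return _data[offset:offset + length]\n"
        )
        namespace = {"_data": buffers[index] if blocking
                     else buffers.get(index, b"")}
        exec(compile(source, "<synthesized>", "exec"), namespace)
        return namespace["specialized_read"]

    if __name__ == "__main__":
        buffers = {3: b"hello, synthesis kernel"}
        read3 = synthesize_read(buffers, index=3, blocking=True)  # one-time cost
        print(read3(7, 9))  # later calls skip the generic checks entirely

The real system also had to worry about invalidating generated code when
the underlying state changed; the only point here is the one-time
generation cost traded against a cheaper per-call path.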
When research is done directly on these systems, the "propagation lag"
can be cut substantially, provided the project leads are aware of the
research and deem it worthy of being merged into the official tree.

If you're interested in seeing what kind of work is considered
"cutting-edge research," you might look for the proceedings of various
conferences and workshops. SOSP, OSDI, USENIX (Technical) and HotOS
would be the ones I'd look at, though there are many others.

Carlos A. Ugarte                                    cau@cs.arizona.edu

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-chat" in the body of the message