Date:      Thu, 23 Dec 1999 12:52:36 -0800
From:      Kent Stewart <kstewart@3-cities.com>
To:        "Alexey N. Dokuchaev" <danfe@inet.ssc.nsu.ru>
Cc:        freebsd-questions@FreeBSD.ORG
Subject:   Re: when is it safe to use the 0xa0ffa0ff disk flags?
Message-ID:  <38628B94.AAA938BF@3-cities.com>
References:  <Pine.LNX.4.04.9912232003420.13730-100000@inet.ssc.nsu.ru>



"Alexey N. Dokuchaev" wrote:
> 
> Hi!
> 
> CDC (now out of the business) had a hard drive called a Hydra. It was
> a full-height 8" HD that ran 9.6MB/s in 1988. It was also very
> expensive at $250,000 US. Cray striped them together to create what
> they called a DD40. It had 16 Hydras and a total capacity of 20GB. By
> striping 4 together, Cray would get 20MB/s when they were hung on the
> 100MB/s data channel. When our benchmark was run on a system with the
> Hydras for main storage, the throughput basically doubled. The Cray
> X-MP had 2x the throughput of the CDC-990. The 990 was a dinosaur and
> represented the end of big iron. It filled the room, whereas the Cray
> looked like what was left of the volcano in "Close Encounters of
> the Third Kind". I think the 8" FH HDs were all replaced by HH 5.25"
> HDs, which looked pretty puny mounted in the center of the Hydra's
> bay. The cooling requirements probably dropped 10KVA :). The effect of
> the Hydra on the benchmark was enormous. This is the benchmark where,
> as I told you, write-behind was so important. The benchmark ran
> almost twice as fast when write-behind caching was used. I think
> SOFTUPDATES produces an improvement that helps for similar reasons.
> 
> Is there anywhere I can read more about it?

If you are asking about the benchmark, the answer is no. Setting up
for a benchmark is similar to setting up a race car for a time trial.
You want it to run as fast as possible with special settings. Those
settings may not be appropriate when it is released to the users. A
race car set up for a time trial is probably dangerous once you change
the balance by loading a full load of fuel onboard. The programs were
all designed around licensing commercial nuclear power reactors in the
US, and the actual runs were considered proprietary. Some of the
programs depended on 60-64 bit numbers. One of the programs had to be
able to AND/OR a 60-bit integer in FORTRAN. The floating point
extremes ran from 10^(+/-)2048 or 4096; the old computers ran
from ~10^(+/-)308. The mix of jobs was an average day as defined by
accounting data for a year. They depended on the batch job capability
of the computers. The benchmark started out by building the programs
and then running them. The benchmark shell script added new jobs to
the batch queue every so often. Some of them modified a module and
then built a new program. The benchmark also included 100 simulated
users doing various sorts of things on line. The user simulation ran
on the Sun front ends that were on all Crays at that point in time.
The users were a constant minor irritation to the system.

The Cray was eventually replaced by 3 really small DEC Alphas. Before
that happened, each group had a number of HP Xterms and a couple of HP
9000 servers to run their jobs interactively. The production jobs were
run on the Cray and then on the Alphas. By small I mean the desk where
the Cray monitors were hung was large enough to hold all three Alphas
and the five or six 20GB drives attached to each. Each Alpha had
several times the memory of the Cray. Fast memory was really expensive
in 1988. I think the Cray needed something like 230KVA of cooling
running at all times. I think, at this point in time, my PCs have
more memory than they did. Each Alpha (<200MHz) had 1.5 times the
throughput of the Cray. Setting up the Alphas cost less than what the
maintenance on the Cray was running.

The benchmark was considered obsolete because another single large
computer was thought highly unlikely in the future, and it was
archived onto magnetic tape(s), which were probably recycled when I
retired. If you are running in a similar environment, the things we
learned still apply. You never have a fast enough file system.
Write-behind caching to your HDs can be 50% of your system throughput.
Turn it off and you may run half as many jobs in the same amount of
time. If you are swapping too much, you don't have enough memory. If
you aren't swapping a little bit, you spent too much money on memory.
From then on it is a matter of fine-tuning your system. BTW, the
user-perceived speed of the system will be gauged by the amount of
time required for the prompt to reappear after they press the "enter"
key :).
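
As a rough sketch of what I mean by watching swap and the HDs (this
assumes a stock FreeBSD box with the usual base utilities; the job
name and input file are only placeholders), something like this run
alongside your work will tell you where you stand:

    #!/bin/sh
    # Sample memory and disk activity every 5 seconds while a job
    # runs, so the paging and HD traffic it generates can be seen.
    vmstat 5 > vmstat.log &      # watch the page-in/page-out columns
    VMPID=$!
    iostat 5 > iostat.log &      # per-disk transfer rates
    IOPID=$!
    swapinfo > swap-before.log   # swap in use before the run
    ./yourjob < input.dat > output.dat
    swapinfo > swap-after.log    # and after
    kill $VMPID $IOPID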

I haven't seen a real comparison of SOFTUPDATES in a mix of
environments. If you are trying to figure out how important it is, try
running a number of your jobs with timing turned on, then turn
SOFTUPDATES on and run them again. You can redirect input on most
programs and develop scripts based on that ability. You could time it
by having the shell script append the redirected output of "time" to a
file and automate the data collection; a rough sketch follows this
paragraph. The kinds of things most people are using FreeBSD for are
different from the stuff we were doing, and the tuning of the machines
would be different. A buildworld could represent a piece of what was
going on. A kernel build is probably closer. When I do a buildworld, I
leave Seti@Home running in the background. The CPU is always running
at 100% until I do an install. There are no other users that could
sneak things in on me. On a batch-oriented computer, for example, you
didn't do things like run a large database. Web and mail servers would
also be running on a different computer. Since our benchmark was a
purchase requirement, it was run against a stopwatch. The batch system
had to be idle in x hours.
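
Something along these lines is the kind of script I mean (the job
names and input files are only placeholders, so adjust them for your
own work load, and run it once with SOFTUPDATES off and once with it
on):

    #!/bin/sh
    # Run the same jobs several times and append the output of
    # time(1) to a log so the two configurations can be compared.
    LOG=timings.log
    echo "==== run started: `date` ====" >> $LOG
    for pass in 1 2 3; do
        echo "-- pass $pass --" >> $LOG
        # time(1) writes to stderr, so append that to the log;
        # the jobs' own stderr ends up there as well.
        /usr/bin/time ./job1 < job1.in > job1.out 2>> $LOG
        /usr/bin/time ./job2 < job2.in > job2.out 2>> $LOG
    done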

When I run setiathome, which has no real I/O requirements, on various
systems, I find it is mostly memory-limited and PC-66 memory slows it
down tremendously. When I replaced a Celeron 433a with a Celeron
300a@450, the average time per workunit dropped from a little over
54,000 seconds to a little under 34,000 seconds. The average on the
300a@450 is coming up on 100 WUs. This works out pretty close to the
ratio of PC-66/PC-100 access times. I saw a ~15% improvement with the
433 by just replacing the PC-66 memory with PC-100 memory. Setiathome
is running much faster; however, the buildworld times only dropped
~300 seconds, i.e., ~2160u to ~1890u. Memory speed is important, but
not as much as it is to setiathome. Depending on how many gaussians
they are seeing, either a P-III 450 or the Celeron 300a@450 jumps into
the lead on average processing speed. The P-III 450 is running Windows
2000 Server and the Celeron is running FreeBSD. I have a 2% variation
between motherboards and much less than that between OSes. Setiathome
running on a P-II 400 on a SuperMicro P6SBA motherboard runs 2% faster
than it does on a P-II 400 on an ABIT BX6 rev 2 motherboard. All of
the BIOS settings are at their defaults.

Kent

> 
> ./danfe

-- 
Kent Stewart
Richland, WA

mailto:kstewart@3-cities.com
http://www.3-cities.com/~kstewart/index.html
FreeBSD News http://daily.daemonnews.org/

SETI(Search for Extraterrestrial Intelligence) @ HOME
http://setiathome.ssl.berkeley.edu/

Hunting Archibald Stewart, b 1802 in Ballymena, Antrim Co., NIR
http://www.3-cities.com/~kstewart/genealogy/archibald_stewart.html





