Date:      Sun, 26 Jul 1998 22:07:33 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        dyson@iquest.net
Cc:        roberto@keltia.freenix.fr, freebsd-current@FreeBSD.ORG
Subject:   Re: New LINT options: what is VM coloring?
Message-ID:  <199807262207.PAA18306@usr01.primenet.com>
In-Reply-To: <199807261649.LAA01167@dyson.iquest.net> from "John S. Dyson" at Jul 26, 98 11:49:34 am

> > > be a big win for direct-mapped caches (e.g. most Pentium L2 caches), but
> > > loses effectiveness with set-associative caches (e.g. Pentium Pro, which
> > > has a set size of 4).
> > 
> > The K6 has a 2-way set associative cache so I guess it is not interesting
> > to use page coloring but does anyone know what kind of L2 cache an ASUS T2P4
> > use ? It is a P5-class motherboard so it is possible that the cache is
> > direct-mapped, no ?
> > 
> David is essentially right.  However, the page coloring code (that has
> been in -current for the last >1yr) goes a little too far and colors
> even the 1st level cache (I know -- I did it.)  Also, there is the issue
> of proper choice of initial color values, so I used an ad-hoc approach that
> appears to work correctly most of the time.

Actually, the L1 cache is where it's most important (see paper references
in my previous posting).  Also, the Alpha can significantly benefit from
this, per Digital UNIX:

] The Alpha EV4 CPU contains a direct mapped physical OFF chip secondary
] cache, which is organized so that if the secondary cache size is N
] pages, then every Nth page of the physical pages of memory hashes
] into the same page. Digital UNIX VM manages the physical pages of
] memory in such a way that, if an entire resident working set of a
] process can fit into the secondary cache, VM places it there. As a
] result, because VM strives to ensure that a process's entire working
] set is always in the secondary cache, the number of physical memory
] accesses is greatly reduced as a process executes.

Also, for your current project, I recommend:

	http://www-flash.stanford.edu/OS/oschar.html

Specifically, the paper:

] Scheduling and page migration for multiprocessor compute servers
] Rohit Chandra, Scott Devine, Ben Verghese, Anoop Gupta, and Mendel
] Rosenblum In Proceedings of the 6th International Conference on
] Architectural Support for Programming Languages and Operating Systems,
] October 1994

Also:

	http://www-flash.stanford.edu/OS/hive.html

] Hive: fault containment for shared-memory multiprocessors John Chapin,
] Mendel Rosenblum, Scott Devine, Tirthankar Lahiri, Dan Teodosiu, and
] Anoop Gupta In The 15th ACM Symposium on Operating Systems Principles,
] December 1995

And:

] Implementing efficient fault containment for shared-memory
] multiprocessors Mendel Rosenblum, John Chapin, Dan Teodosiu, Scott
] Devine, Tirthankar Lahiri, and Anoop Gupta To appear in Communications
] of the ACM, September 1996 

I also have the IEEE symposium papers book on scheduling and load
balancing in parallel and distributed systems; these guys figure
prominently in that book as well, and the research is much more recent.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.
