%T Cache-Affinity Scheduling for Fine Grain Multithreading
%A Kurt Debattista
%A Kevin Vella
%A Joseph Cordina
%E James S. Pascoe
%E Roger J. Loader
%E Vaidy S. Sunderam
%B Communicating Process Architectures 2002
%X Cache utilisation is often very poor in multithreaded applications, due to the loss of data access locality incurred by frequent context switching. This problem is compounded on shared memory multiprocessors when dynamic load balancing is introduced and thread migration disrupts cache content. In this paper, we present a technique, which we refer to as *batching*, for reducing the negative impact of fine grain multithreading on cache performance. Prototype schedulers running on uniprocessors and shared memory multiprocessors are described, and experimental results illustrating the improvements observed after applying our techniques are presented.