NHSE Review™
The first two techniques are closely analogous to the cyclic and block methods for data distribution in a data parallel language [53], while the third technique is loosely analogous to a scattered data distribution [49].
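The cyclic and block analogies above can be made concrete with a short sketch. The following is an illustrative example, not code from the article: a toy linear congruential generator whose single stream is carved among P processors either cyclically (the "leapfrog" pattern, where processor p takes elements p, p+P, p+2P, ...) or in contiguous blocks (sequence splitting). The LCG parameters and function names are arbitrary choices for the illustration.

```python
def lcg_stream(seed, n, a=1664525, c=1013904223, m=2**32):
    """Generate n values from a simple linear congruential generator.
    Parameters here are the common Numerical Recipes constants, used
    only for illustration."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

def leapfrog(stream, p, num_procs):
    """Cyclic distribution: processor p takes elements p, p+P, p+2P, ..."""
    return stream[p::num_procs]

def block_split(stream, p, num_procs):
    """Block distribution: processor p takes a contiguous slice of the stream."""
    block = len(stream) // num_procs
    return stream[p * block:(p + 1) * block]
```

With two processors and eight draws, leapfrog hands one processor the even-indexed values and the other the odd-indexed values, while block splitting hands each processor one contiguous half; together the subsequences reproduce the original stream in both cases.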
Finding a good parallel random number generator has proven to be a very challenging problem, and is still the subject of much research and debate. One reason good parallel random number generators are so hard to create is that any small correlations present in the sequential generator may be amplified by the method used to distribute the sequence among the processors, producing stronger correlations within the subsequence on each processor; correlations between processors may also be introduced. Moreover, the method used to initialize a parallel random number generator (i.e., the choice of seeds for each processor) is at least as important as the algorithm used for generating the random numbers, since any correlations between the seeds on different processors can produce strong inter-processor correlations.
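The seeding point can be illustrated with a modern example (not from the article): rather than assigning naive nearby seeds such as root, root+1, root+2, ..., which risks exactly the inter-processor correlations described above, one root seed can be expanded into statistically independent per-processor seeds. The sketch below assumes NumPy's SeedSequence, whose spawn mechanism is designed for this purpose; the processor count and root seed are arbitrary.

```python
import numpy as np

def make_per_processor_rngs(root_seed, num_procs):
    """Derive one independent random-number generator per processor from a
    single root seed, using SeedSequence.spawn to avoid correlated seeds."""
    children = np.random.SeedSequence(root_seed).spawn(num_procs)
    return [np.random.default_rng(child) for child in children]
```

Each processor would then draw only from its own generator, so the quality of the per-processor streams rests on the seed-derivation scheme rather than on ad hoc seed arithmetic.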
In this section we describe some of the most common parallel random number generators. More information is available in other review articles [4, 46] and in the references given in this section.