High Performance Computing and Communications Glossary 2.1

A significant part of the material of this glossary was adapted from material originally written by Gregory V. Wilson which appeared as "A Glossary of Parallel Computing Terminology" (IEEE Parallel & Distributed Technology, February 1993), and is being re-printed in the same author's "Practical Parallel Programming" (MIT Press, 1995). Several people have contributed additions to this glossary, especially Jack Dongarra, Geoffrey Fox and many of my colleagues at Edinburgh and Syracuse.

Original version is from NPAC at <URL:http://nhse.npac.syr.edu/hpccgloss/>

Original author: Ken Hawick, khawick@cs.adelaide.edu.au

See also the index of all letters and the full list of entries (very large)

Sections: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

M

M-section (n.) a table lookup algorithm for pipelined vector processors that combines features of bisection and linear vector scan.

MACH (n.) an operating system developed at Carnegie Mellon University, based on Berkeley UNIX.

macropipelining (n.) See pipelining.

macrotasking (n.) technique of dividing a computation into two or more large tasks to be executed in parallel. Typically the tasks are subroutine calls executed in parallel.

mailbox (n.) an address used as a source or destination designator in a message.

mapping (n.) often used to indicate an allocation of processes to processors; allocating work to processes is usually called scheduling. See also load balance.

marshall (v.) To compact the values of several variables, arrays, or structures into a single contiguous block of memory; copying values out of a block of memory is called unmarshalling. In most message passing systems, data must be marshalled to be sent in a single message.
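
As an illustration, a minimal C sketch of marshalling an int and a double into one contiguous buffer with memcpy; the buffer layout and function names are invented for this example.

    #include <string.h>

    /* Illustrative marshalling: pack an int and a double into one buffer. */
    void marshal(int n, double x, char *buf)
    {
        memcpy(buf, &n, sizeof n);               /* copy first value              */
        memcpy(buf + sizeof n, &x, sizeof x);    /* copy second value just after  */
    }

    /* Unmarshalling reverses the copies at the receiving end. */
    void unmarshal(const char *buf, int *n, double *x)
    {
        memcpy(n, buf, sizeof *n);
        memcpy(x, buf + sizeof *n, sizeof *x);
    }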

mask (n.) A Boolean array or array-valued expression used to control where a data parallel operation has effect; the operation is only executed where elements of the mask are true.
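
For illustration, the effect of a mask can be written in C as an element-wise loop; the arrays and the doubling operation are arbitrary examples, not part of any particular data parallel language.

    /* Apply b[i] = 2*a[i] only where mask[i] is true; other elements are
       left unchanged, as in a masked data parallel assignment. */
    void masked_scale(int n, const double *a, double *b, const int *mask)
    {
        for (int i = 0; i < n; i++)
            if (mask[i])
                b[i] = 2.0 * a[i];
    }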

MegaFLOPS (n.) 10^6 FLOPS.

memory bank conflict (n.) a condition that occurs when a memory bank receives a request to fetch or store a data item before it has completed the bank cycle time of its previous such request.

memory protection (n.) Any system that prevents one process from accessing a region of memory being used by another. Memory protection is supported both in hardware and by the operating system of most serial computers, and by the hardware kernel and service kernel of the processors in most parallel computers.

mesh (n.) A topology in which nodes form a regular acyclic d-dimensional grid, and each edge is parallel to a grid axis and joins two nodes that are adjacent along that axis. The architecture of many multicomputers is a two or three dimensional mesh; meshes are also the basis of many scientific calculations, in which each node represents a point in space, and the edges define the neighbours of a node. See also hypercube, torus.
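
A small C sketch of one common row-major node numbering for a two dimensional mesh; the numbering scheme and helper functions are illustrative, not a standard.

    /* Row-major node numbering on an nx-by-ny mesh (illustrative). */
    int node_id(int x, int y, int nx) { return y * nx + x; }

    /* Eastern neighbour of node (x,y), or -1 at the mesh boundary. */
    int east_neighbour(int x, int y, int nx)
    {
        return (x + 1 < nx) ? node_id(x + 1, y, nx) : -1;
    }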

message passing (n.) A style of interprocess communication in which processes send discrete messages to one another. Some computer architectures are called message passing architectures because they support this model in hardware, although message passing has often been used to construct operating systems and network software for uniprocessors and distributed computers. See also routing.

message typing (n.) The association of information with a message that identifies the nature of its contents. Most message passing systems automatically transfer information about a message's sender to its receiver. Many also require the sender to specify a type for the message, and let the receiver select which types of messages it is willing to receive.
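
For example, in MPI (one widely used message passing interface) the integer tag argument plays the role of a message type, and the receiver states which tag it will accept; the ranks, tag value and data below are arbitrary. Run with two processes, e.g. mpirun -np 2.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, tag = 7, value = 42, incoming;
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)            /* sender attaches the type as a tag      */
            MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
        else if (rank == 1) {     /* receiver selects messages with tag 7   */
            MPI_Recv(&incoming, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
            printf("received %d\n", incoming);
        }
        MPI_Finalize();
        return 0;
    }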

message-oriented language (n.) a programming language in which process interaction is strictly through message passing.

Metropolis routing (n.) A routing algorithm for meshes, in which an ordering is imposed on axes, and messages are sent as far along the most significant axis as they need to go, then as far along the next most significant axis, and so on until they reach their destination. See also e-cube routing, randomized routing.
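
A C sketch of the idea on a two dimensional mesh, assuming x is the most significant axis: the x coordinate is corrected first, one hop at a time, then the y coordinate. The coordinates and the route function are invented for illustration.

    #include <stdio.h>

    /* Dimension-ordered routing on a 2D mesh: correct the most
       significant axis first, then the next, printing each hop. */
    void route(int sx, int sy, int dx, int dy)
    {
        int x = sx, y = sy;
        while (x != dx) { x += (dx > x) ? 1 : -1; printf("-> (%d,%d)\n", x, y); }
        while (y != dy) { y += (dy > y) ? 1 : -1; printf("-> (%d,%d)\n", x, y); }
    }

    int main(void) { route(0, 0, 3, 2); return 0; }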

MHS (n.) Message Handling System; the CCITT X.400 series of recommendations for electronic mail transfer. MHS defines the system of message user agents, message transfer agents, message stores and access units.

MIB (n.) Management Information Base; the database of management variables for gateways running CMOT or SNMP. MIB-II refers to an extended version of the database that is not shared by CMOT and SNMP.

microtasking (n.) technique of employing parallelism at the DO-loop level. Different iterations of a loop are executed in parallel on different processors.
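
A rough C sketch of the idea using POSIX threads, with each of two workers executing an interleaved share of the loop's iterations; the array, loop body and number of workers are arbitrary, and real microtasking is normally generated by the compiler rather than written with explicit threads.

    #include <pthread.h>

    #define N 1000
    #define NWORKERS 2

    static double a[N];

    /* Each worker executes an interleaved subset of the loop iterations,
       so the whole loop runs in parallel across the workers. */
    static void *body(void *arg)
    {
        int me = *(int *)arg;
        for (int i = me; i < N; i += NWORKERS)
            a[i] = 2.0 * i;                  /* one loop iteration */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NWORKERS];
        int id[NWORKERS];
        for (int w = 0; w < NWORKERS; w++) {
            id[w] = w;
            pthread_create(&t[w], NULL, body, &id[w]);
        }
        for (int w = 0; w < NWORKERS; w++)
            pthread_join(t[w], NULL);
        return 0;
    }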

middle product method (n.) a method of matrix multiplication in which entire columns of the result are computed concurrently. See also inner product method.
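
A serial C sketch of the loop ordering, assuming n-by-n matrices in column-major storage; each pass of the outer loop builds one entire column of the result as a sum of scaled columns of A, and it is these column computations that may proceed concurrently.

    /* Column-oriented (middle product) matrix multiply C = A*B.
       Matrices are n-by-n, column-major: element (i,j) is at [i + j*n]. */
    void matmul_middle(int n, const double *A, const double *B, double *C)
    {
        for (int j = 0; j < n; j++) {        /* one column of C per pass      */
            for (int i = 0; i < n; i++)
                C[i + j*n] = 0.0;
            for (int k = 0; k < n; k++)      /* add column k of A, scaled by  */
                for (int i = 0; i < n; i++)  /* element (k,j) of B            */
                    C[i + j*n] += A[i + k*n] * B[k + j*n];
        }
    }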

MIMD (n.) Multiple Instruction, Multiple Data; a category of Flynn's taxonomy in which many instruction streams are concurrently applied to multiple data sets. A MIMD architecture is one in which heterogeneous processes may execute at different rates.

minimax (n.) an algorithm used to determine the value of a game tree.
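
A minimal recursive C sketch, assuming a game tree in which a leaf carries a score and an interior node carries children; the node structure is invented for this example and no pruning (such as alpha-beta) is attempted.

    /* Illustrative game-tree node: a leaf has a score, an interior
       node has children to be maximized or minimized over. */
    typedef struct node {
        int nchildren;            /* 0 for a leaf                     */
        int score;                /* value of the position at a leaf  */
        struct node **child;
    } node;

    /* Value of the tree when "maximizing" is non-zero for the player to move. */
    int minimax(const node *t, int maximizing)
    {
        if (t->nchildren == 0)
            return t->score;
        int best = minimax(t->child[0], !maximizing);
        for (int i = 1; i < t->nchildren; i++) {
            int v = minimax(t->child[i], !maximizing);
            if (maximizing ? v > best : v < best)
                best = v;
        }
        return best;
    }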

minimum spanning tree (n.) a spanning tree with the smallest possible weight among all spanning trees for a given graph.
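
As one standard way of constructing such a tree, here is a C sketch of Prim's algorithm on a dense graph given as an n-by-n weight matrix, assuming n <= 256 and using a large value to mean "no edge".

    #include <stdbool.h>

    #define INF 1.0e30    /* "no edge" */

    /* Prim's algorithm on a dense graph of n vertices (n <= 256 assumed).
       w[i*n + j] is the weight of edge (i,j), or INF if absent.
       parent[v] records the tree edge chosen for vertex v; the return
       value is the total weight of the minimum spanning tree. */
    double prim(int n, const double *w, int *parent)
    {
        bool   in_tree[256] = { false };
        double dist[256];
        double total = 0.0;

        for (int v = 0; v < n; v++) { dist[v] = w[0*n + v]; parent[v] = 0; }
        in_tree[0] = true;

        for (int step = 1; step < n; step++) {
            int u = -1;
            for (int v = 0; v < n; v++)      /* nearest vertex not yet in tree */
                if (!in_tree[v] && (u < 0 || dist[v] < dist[u]))
                    u = v;
            in_tree[u] = true;
            total += dist[u];
            for (int v = 0; v < n; v++)      /* update distances via u         */
                if (!in_tree[v] && w[u*n + v] < dist[v]) {
                    dist[v] = w[u*n + v];
                    parent[v] = u;
                }
        }
        return total;
    }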

MIPS (n.) one Million Instructions Per Second. A performance rating usually referring to integer or non-floating point instructions. See also MOPS.

MIR (n.) Minimum Information Rate, also known as the Committed Information Rate (CIR); the minimum transmit and receive data rate for a connection.

MISD (n.) Multiple Instruction, Single Data; a category of Flynn's taxonomy that is almost never used and whose meaning is ambiguous: a computer which applies several instructions to each datum. The closest real implementation of this category is a vector computer with an instruction pipeline.

module (n.) a memory bank, often used in the context of interleaved memory.

monitor (n.) a structure consisting of variables representing the state of some resource, procedures to implement operations on that resource, and initialization code.
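
A small C sketch of the idea using a POSIX mutex, with a counter as the resource; a fuller monitor would also include condition variables so that processes can wait for the resource to change state. The type and function names are invented for this example.

    #include <pthread.h>

    /* A tiny monitor: the state of the resource (a counter), a lock that
       serializes the operations, and the operations themselves. */
    typedef struct {
        pthread_mutex_t lock;
        int count;                          /* state of the managed resource */
    } counter_monitor;

    void monitor_init(counter_monitor *m)
    {
        pthread_mutex_init(&m->lock, NULL); /* initialization code */
        m->count = 0;
    }

    void monitor_increment(counter_monitor *m)
    {
        pthread_mutex_lock(&m->lock);       /* enter the monitor   */
        m->count++;
        pthread_mutex_unlock(&m->lock);     /* leave the monitor   */
    }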

Monte Carlo (adj.) Making use of randomness. A simulation in which many independent trials are run to gather statistics is a Monte Carlo simulation; a search algorithm that uses randomness to try to speed up convergence is a Monte Carlo algorithm.
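
A familiar small example in C (not taken from the glossary): estimating pi by generating random points in the unit square and counting the fraction that fall inside the quarter circle.

    #include <stdio.h>
    #include <stdlib.h>

    /* Monte Carlo estimate of pi from independent random trials. */
    int main(void)
    {
        const int trials = 1000000;
        int hits = 0;
        for (int i = 0; i < trials; i++) {
            double x = rand() / (double)RAND_MAX;
            double y = rand() / (double)RAND_MAX;
            if (x*x + y*y <= 1.0)            /* point falls inside the circle */
                hits++;
        }
        printf("pi is approximately %f\n", 4.0 * hits / trials);
        return 0;
    }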

MOPS (n.) one Million Operations Per Second. Usually used for general operations, whether integer, floating point or otherwise. See also MIPS, FLOPS.

MOS (n.) Metal Oxide Semiconductor: a basic technology for fabricating semiconductors. See also CMOS, BiCMOS.

motherboard (n.) A printed circuit board or card on which other boards or cards can be mounted. Motherboards will generally have a number of slots for other boards, by which means the computer system may be expanded. When all the slots are used up however, it is usually difficult to expand further, and this is the manufacturer's way of telling you to buy a bigger system.

multicast (v.) To send a message to many, but not necessarily all, possible recipient processes. See also broadcast, process group.

multicomputer (n.) A computer in which processors can execute separate instruction streams, have their own private memories and cannot directly access one another's memories. Most multicomputers are disjoint memory machines, constructed by joining nodes (each containing a microprocessor and some memory) via links. See also architecture, distributed computer, multiprocessor, processor array.

multigrid method (n.) A method for solving partial differential equations in which an approximate solution on a coarse resolution grid is used to obtain an improved solution on a finer resolution grid. The method reduces long wavelength components of the error or residual by iterating between a hierarchy of coarse and fine resolution grids.
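
A minimal C sketch of one multigrid V-cycle for the one dimensional model problem -u'' = f with zero boundary values, using Gauss-Seidel sweeps as the smoother, full weighting to restrict the residual, and linear interpolation to prolong the coarse-grid correction; grid sizes of the form 2^k - 1 are assumed, and the whole program is an illustration rather than a tuned solver.

    #include <stdio.h>
    #include <stdlib.h>

    /* Arrays hold points 0..n+1 of a uniform grid; points 1..n are unknowns. */

    static void smooth(double *u, const double *f, int n, double h)
    {
        for (int sweep = 0; sweep < 3; sweep++)   /* a few Gauss-Seidel sweeps */
            for (int i = 1; i <= n; i++)
                u[i] = 0.5 * (u[i-1] + u[i+1] + h*h*f[i]);
    }

    static void vcycle(double *u, const double *f, int n, double h)
    {
        if (n == 1) {                             /* coarsest grid: solve exactly */
            u[1] = 0.5 * h*h * f[1];
            return;
        }
        smooth(u, f, n, h);                       /* pre-smoothing                */

        int nc = (n - 1) / 2;                     /* coarse grid interior points  */
        double *r  = calloc(n + 2,  sizeof *r);
        double *rc = calloc(nc + 2, sizeof *rc);
        double *ec = calloc(nc + 2, sizeof *ec);

        for (int i = 1; i <= n; i++)              /* fine-grid residual           */
            r[i] = f[i] - (2.0*u[i] - u[i-1] - u[i+1]) / (h*h);
        for (int j = 1; j <= nc; j++)             /* restrict by full weighting   */
            rc[j] = 0.25 * (r[2*j-1] + 2.0*r[2*j] + r[2*j+1]);

        vcycle(ec, rc, nc, 2.0*h);                /* coarse-grid correction       */

        for (int j = 1; j <= nc; j++)             /* prolong and add correction   */
            u[2*j] += ec[j];
        for (int j = 0; j <= nc; j++)
            u[2*j+1] += 0.5 * (ec[j] + ec[j+1]);

        smooth(u, f, n, h);                       /* post-smoothing               */
        free(r); free(rc); free(ec);
    }

    int main(void)
    {
        int n = 63;                               /* n must be 2^k - 1            */
        double h = 1.0 / (n + 1);
        double *u = calloc(n + 2, sizeof *u);
        double *f = calloc(n + 2, sizeof *f);
        for (int i = 1; i <= n; i++)
            f[i] = 1.0;                           /* simple right-hand side       */
        for (int cycle = 0; cycle < 10; cycle++)
            vcycle(u, f, n, h);
        printf("u(1/2) = %f  (exact value 0.125)\n", u[(n + 1) / 2]);
        free(u); free(f);
        return 0;
    }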

multiprocessor (n.) A computer in which processors can execute separate instruction streams, but have access to a single address space. Most multiprocessors are shared memory machines, constructed by connecting several processors to one or more memory banks through a bus or switch. See also architecture, distributed computer, multicomputer, processor array.

multiprogramming (n.) the ability of a computer system to time-share its CPU (or CPUs) among more than one program at once. See also multitasking.

multitasking (n.) Executing many processes on a single processor. This is usually done by time-slicing the execution of individual processes and performing a context switch each time a process is swapped in or out, but is supported by special-purpose hardware in some computers. Most operating systems support multitasking, but it can be costly if the need to switch large caches or execution pipelines makes context switching expensive in time.

mutual exclusion (n.) A situation in which at most one process can be engaged in a specified activity at any time. Semaphores are often used to implement this. See also contention, deadlock, critical sections.
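
A small C sketch using a POSIX mutex (in effect a binary semaphore) so that at most one thread at a time executes the critical section that updates a shared counter; the counter and thread count are arbitrary.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;                /* shared resource */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);      /* at most one thread past this point */
            counter++;                      /* critical section                   */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter); /* always 200000 with the lock */
        return 0;
    }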