High Performance Computing and Communications Glossary 2.1

A significant part of the material of this glossary was adapted from material originally written by Gregory V. Wilson which appeared as "A Glossary of Parallel Computing Terminology" (IEEE Parallel & Distributed Technology, February 1993), and is being re-printed in the same author's "Practical Parallel Programming" (MIT Press, 1995). Several people have contributed additions to this glossary, especially Jack Dongarra, Geoffrey Fox and many of my colleagues at Edinburgh and Syracuse.

Original version is from NPAC at <URL:http://nhse.npac.syr.edu/hpccgloss/>

Original author: Ken Hawick, khawick@cs.adelaide.edu.au

See also the index of all letters and the full list of entries (very large)

Sections: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z


cache (n.) A high-speed memory, local to a single processor, whose data transfers are carried out automatically in hardware. Items are brought into a cache when they are referenced, while any changes to values in a cache are automatically written back to main memory when they are no longer needed, when the cache becomes full, or when some other process attempts to access them. Also (v.) to bring something into a cache.

cache consistency (n.) The problem of ensuring that the values associated with a particular variable in the caches of several processors are never visibly different.

cache hit (n.) a cache access that successfully finds the requested data.

cache line (n.) The unit in which data is fetched from memory to cache.

cache miss (n.) A cache access that fails to find the requested data. The cache must then be filled from main memory at the expense of time.

CAD (n.) Computer-Aided Design; a term which can encompass all facets of the use of computers in design and manufacturing, although the term CAM is also in use for the manufacturing side. See also CAE, CAM.

CAE (n.) Computer-Aided Engineering; like CAD, but usually applied to the use of computers in fields such as civil and nautical engineering.

CAM (n.) Computer-Aided Manufacturing.

CCITT (n.) The Consultative Committee for International Telephony and Telegraphy is an international organization which develops standards and defines interfaces for telecommunications.

cell relay (n.) Packet switching technique which uses packets of fixed length (cells), reducing per-packet processing overhead and allowing very high switching speeds. ATM is the best-known cell relay technique, and is the basis of BISDN.

cellular automata (n.) A system made up of many discrete cells, each of which may be in one of a finite number of states. A cell or automaton may change state only at fixed, regular intervals, and only in accordance with fixed rules that depend on the cell's own value and the values of neighbours within a certain proximity.
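
A minimal sketch of the idea in Python (the rule number, lattice size and wrap-around boundary here are illustrative assumptions, not part of the definition):

```python
def step(cells, rule=110):
    """One synchronous update of an elementary (1D, two-state) cellular
    automaton: every cell changes at the same instant, according to a
    fixed rule applied to its own value and its two nearest neighbours."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # 3-bit index, 0..7
        new.append((rule >> neighbourhood) & 1)              # table lookup
    return new

print(step([0, 0, 1, 0, 0]))   # [0, 1, 1, 0, 0] under rule 110
```

Because every cell is updated from the old state simultaneously, each generation's updates are independent and map naturally onto SIMD hardware.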

cellular computer (n.) A term sometimes used to describe fine grain systems such as neural networks, systolic arrays, and SIMD systems. Such systems tend to be well suited to the implementation of cellular automata.

CEPT (n.) Conference on European Post and Telegraph is a European organization which develops standards and defines interfaces for telecommunications.

CFD (n.) Computational fluid dynamics; the simulation or prediction of fluid flow using computers, a field which has generally required twice the computing power available at any given time.

chain (n.) A topology in which every processor is connected to two others, except for two end processors that are connected to only one other. See also Hamiltonian, ring.

chaining (n.) the ability to take the results of one vector operation and use them directly as input operands to a second vector instruction, without first storing the results of the first operation to memory or registers. Chaining (or linking, as it is sometimes called) can significantly speed up a calculation.

channel (n.) A point-to-point connection between two processes through which messages can be sent. Programming systems that rely on channels are sometimes called connection-oriented, to distinguish them from the more widespread connectionless systems in which messages are sent to named destinations rather than through named channels. See also CSP, channel mask.

channel mask (n.) A non-negative integer specifying a set of communication channels. If the numbers of the channels to be specified are I0,I1,...In-1, then the channel mask is the integer for which these bit numbers are set to 1 and all other bits are 0. For example, if the channels to be specified are numbered 0, 1 and 4, the corresponding channel mask is the binary number 10011, or 19, in which the bits numbered 0, 1 and 4 are set to 1.
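
The worked example above can be checked with a few lines of Python (a sketch; the function name is ours):

```python
def channel_mask(channels):
    """Build a channel mask: bit i is set to 1 for each specified channel i."""
    mask = 0
    for c in channels:
        mask |= 1 << c
    return mask

print(channel_mask([0, 1, 4]))       # 19
print(bin(channel_mask([0, 1, 4])))  # 0b10011
```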

chime (n.) Chained vector time, approximately equal to the vector length in a DO-loop. The number of chimes required for a loop dominates the time required for execution. A new chime begins each time a resource, such as a functional unit, vector register or memory path, must be reused.

chunksize (n.) The number of iterations of a parallel DO-loop grouped together as a single task in order to increase the granularity of the task.

CIR (n.) See MIR.

circuit switching (n.) A switching method where a dedicated path is set up between the transmitter and receiver. The connection is transparent, meaning that the switches do not try to interpret the data.

CISC (adj.) Complex instruction set computer; a computer that provides many powerful but complicated instructions. This term is also applied to software designs that give users a large number of complex basic operations. See also RISC.

clausal logic (n.) a form of logic in which all propositions are expressed in terms of AND, OR and NOT. A six-stage process transforms any predicate calculus formula into clausal form. See also clause.

clause (n.) a sentence in formal logic. See also clausal logic.

CLNP (n.) connectionless network protocol, also known as ISO-IP. This protocol provides a datagram service and is OSI's equivalent to IP.

clock cycle (n.) The fundamental period of time in a computer. With current technology this is typically measured in nanoseconds.

clock time (n.) Physical or elapsed time, as seen by an external observer; nonrelativistic time. In small computer systems where all components can be synchronized, clock time and logical time may be the same everywhere, but in large systems it may be difficult for a processor to correlate the events it sees with the clock time an external observer would see. The clock times of events define a total order on those events.

CLTP (n.) connectionless transport protocol, which provides end-to-end transport data addressing and is OSI's equivalent to UDP.

CMIP (n.) The Common Management Information Protocol is the OSI protocol for network management.

CMIS (n.) Common Management Information Service; the service provided by the common management information protocol (CMIP).

CMOS (n.) Complementary Metal Oxide Semiconductor. A widely used chip technology. See also BiCMOS.

CMOT (n.) Common Management Information Protocol (CMIP) over TCP; a mapping of the OSI network management protocols (CMIP/CMIS) onto the internet suite of protocols.

co-processor (n.) an additional processor attached to a main processor, to accelerate arithmetic, I/O or graphics operations.

coarse grain (adj.) See granularity.

Cobegin/Coend (n.) a structured way of indicating a set of statements that can be executed in parallel.

combining (v.) Joining messages together as they traverse a network. Combining may be done to reduce the total traffic in the network, to reduce the number of times the start-up penalty of messaging is incurred, or to reduce the number of messages reaching a particular destination.

combining switch (n.) an element of an interconnection network that can combine certain types of requests into one request and produce a response that mimics serial execution of the requests.

common subexpression (n.) a combination of operations and operands that is repeated, especially in a loop. A good compiler will not recompute the common subexpressions but will save them in a register for reuse.

communicating sequential processes (n.) See CSP.

communication channel (n.) See channel.

communication overhead (n.) A measure of the additional workload incurred in a parallel algorithm due to communication between the nodes of the parallel system. See also latency, startup cost.

communication width (n.) the size of shared memory in an SIMD model assuming a global memory.

comparator (n.) a device that performs the compare-exchange operation.

compare-exchange (n.) the fundamental operation of the bitonic merge algorithm. Two numbers are brought together, compared, and then exchanged, if necessary, so that they are in the proper order.
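
As a sketch (assuming a power-of-two bitonic sequence held in one array, rather than values held on separate processors), the operation and its use in a bitonic merge look like:

```python
def compare_exchange(a, i, j):
    """Ensure a[i] <= a[j], swapping the two elements if necessary."""
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def bitonic_merge(a, lo=0, n=None):
    """Sort a bitonic sequence into ascending order using only
    compare-exchanges. The n/2 compare-exchanges in each stage are
    independent, which is what makes the algorithm attractive on
    parallel hardware."""
    if n is None:
        n = len(a)
    if n > 1:
        half = n // 2
        for i in range(lo, lo + half):
            compare_exchange(a, i, i + half)
        bitonic_merge(a, lo, half)
        bitonic_merge(a, lo + half, half)

seq = [1, 3, 5, 7, 6, 4, 2, 0]   # ascending then descending: bitonic
bitonic_merge(seq)
print(seq)                        # [0, 1, 2, 3, 4, 5, 6, 7]
```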

compiler directives (n.) special keywords often specified as comments in the source code, but recognised by the compiler as providing additional information from the user for use in optimization.

compiler optimization (n.) Rearranging or eliminating sections of a program during compilation to achieve higher performance. Compiler optimization is usually applied only within basic blocks and must account for the possible dependence of one section of a program on another.

complex instruction set computer (adj.) See CISC.

complexity (n.) a measure of the time or space used by an algorithm. Without a qualifying adjective, this refers to time complexity.

compress/index (n.) a vector operation used to deal with the nonzeroes of a large vector with relatively few nonzeroes. The location of the nonzeroes is indicated by an index vector (usually a bit vector of the same length in bits as the full vector in words). The compress operation uses the index vector to gather the nonzeroes into a dense vector where they are operated on with a vector instruction. See also gather/scatter.
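
A sketch of the operation with plain Python lists (a real vector machine would do this with gather/scatter hardware; the function names here are ours):

```python
def compress(full, index):
    """Gather the elements of `full` whose index-vector bit is set."""
    return [x for x, bit in zip(full, index) if bit]

def scatter(dense, index, fill=0.0):
    """Inverse of compress: spread dense values back into the flagged slots."""
    it = iter(dense)
    return [next(it) if bit else fill for bit in index]

full  = [0.0, 2.5, 0.0, 0.0, -1.0, 4.0]
index = [0, 1, 0, 0, 1, 1]           # marks the nonzero positions
dense = compress(full, index)        # [2.5, -1.0, 4.0]
result = scatter([2 * x for x in dense], index)  # operate densely, expand back
print(result)                        # [0.0, 5.0, 0.0, 0.0, -2.0, 8.0]
```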

computation-to-communication ratio (n.) The ratio of the number of calculations a process does to the total size of the messages it sends. A process that performs a few calculations and then sends a single short message may have the same computation-to-communication ratio as a process that performs millions of calculations and then sends many large messages. The ratio may also be measured by the ratio of the time spent calculating to the time spent communicating, in which case the ratio's value depends on the relative speeds of the processor and communications medium, and on the startup cost and latency of communication. See also granularity.

concurrent computer (n.) A generic category, often used synonymously with parallel computer, to include both multicomputers and multiprocessors.

concurrent processing (n.) simultaneous execution of instructions by two or more processors within a computer.

condition synchronization (n.) process of delaying the continued execution of a process until some data object it shares with another process is in an appropriate state.
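
A minimal sketch with Python threads (the buffer and thread structure are illustrative): the consumer delays until the shared object is in the required state, non-empty here, before continuing.

```python
import threading

buffer, cond = [], threading.Condition()

def producer():
    with cond:
        buffer.append("item")
        cond.notify()          # wake any thread waiting on the condition

def consumer(results):
    with cond:
        while not buffer:      # re-check the shared state after each wakeup
            cond.wait()        # delay until the producer changes the state
        results.append(buffer.pop())

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer()
t.join()
```

The `while` loop (rather than a bare `if`) is the standard pattern: a woken process must re-test the condition, since another process may have changed the state first.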

configuration (n.) A particular selection of the types of processes that could make up a parallel program. Configuration is trivial in the SPMD model, in which every processor runs a single identical process, but can be complicated in the general MIMD case, particularly if user-level processes rely on libraries that may themselves require extra processes. See also mapping.

conjugate gradient method (n.) A technique for solving systems of linear algebraic equations, which proceeds by minimizing a quadratic residual error function. The method is iterative but quite powerful: in the absence of roundoff error, it will converge in at most M steps, where M is the order of the system in question.
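
A compact sketch of the method for a small symmetric positive definite system (pure Python, illustrative sizes, no preconditioning):

```python
def cg(A, b, iters=50, tol=1e-12):
    """Solve A x = b for symmetric positive definite A by the
    conjugate gradient method, starting from x = 0."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (x = 0 initially)
    p = r[:]                      # first search direction
    rr = dot(r, r)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)   # step length minimizing the error along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rr_new = dot(r, r)
        if rr_new < tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)      # exact solution is [1/11, 7/11]
```

For this 2x2 system the method converges in at most two iterations, illustrating the at-most-M-steps property.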

content addressable (adj.) See associative memory.

contention (n.) Conflict that arises when two or more requests are made concurrently for a resource that cannot be shared. Processes running on a single processor may contend for CPU time, or a network may suffer from contention if several messages attempt to traverse the same link at the same time.

context switching (n.) Saving the state of one process and replacing it with that of another that is time sharing the same processor. If little time is required to switch contexts, processor overloading can be an effective way to hide latency in a message passing system.

control process (n.) A process which controls the execution of a program on a concurrent computer. The major tasks performed by the control process are to initiate execution of the necessary code on each node and to provide I/O and other service facilities for the nodes.

control-driven (adj.) refers to an architecture with one or more program counters that determine the order in which instructions are executed.

cost (n.) complexity of an algorithm multiplied by the number of processors used.

cpu (n.) central processing unit or processor unit of a sequential computer system. Sometimes used to mean one processor element of a concurrent computer system.

CRCW (n.) See PRAM.

CREW (n.) See PRAM.

critical node (n.) when a node is inserted into an AVL tree, the critical node is the root of the subtree about which a rebalancing is going to take place.

critical section (n.) a section of program that can be executed by at most one process at a time.
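
A sketch using a lock to enforce the at-most-one-process rule (Python threads; the thread and iteration counts are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # only one thread may hold the lock at a time
            counter += 1        # the critical section: a read-modify-write

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                  # 40000: no updates were lost
```

Without the lock, two threads could read the same old value of `counter` and one increment would be lost; the lock serializes execution of the critical section.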

CSP (n.) Communicating sequential processes; an approach to parallelism in which anonymous processes communicate by sending messages through named point-to-point channels. CSP was first described by Hoare in 1978. All communication is synchronous, in that the process that reaches the communication operation first is blocked until a complementary process reaches the same operation. See also guard.

cube-connected cycles network (n.) a processor organization that is a variant of a hypercube. Each hypercube node becomes a cycle of nodes, and no node has more than three connections to other nodes.

cycle (n.) a cycle of the computer clock is an electronic signal that counts a single unit of time within a computer.

cycle time (n.) the length of a single cycle of a computer function such as a memory cycle or processor cycle. See also clock cycle.

cyclic reduction (n.) a parallel algorithm to solve general first order linear recurrence relations.
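
A sketch of the idea for the recurrence x[0] = d[0], x[i] = a[i]*x[i-1] + d[i] (pure Python; this doubling formulation is one common way to organise the reduction, and every combination within a round is independent, hence parallelizable):

```python
def solve_recurrence(a, d):
    """Solve x[0] = d[0], x[i] = a[i]*x[i-1] + d[i] by repeatedly
    combining each term with the term `step` places earlier, doubling
    `step` each round; all combinations in a round are independent."""
    n = len(d)
    A, D = list(a), list(d)
    A[0] = 0.0                      # x[0] does not depend on any earlier term
    step = 1
    while step < n:
        A2, D2 = A[:], D[:]
        for i in range(step, n):    # independent updates: safe in parallel
            A2[i] = A[i] * A[i - step]
            D2[i] = A[i] * D[i - step] + D[i]
        A, D = A2, D2
        step *= 2
    return D                        # all dependences eliminated: D[i] == x[i]

a = [0.0, 2.0, 3.0, 1.0]
d = [1.0, 1.0, 1.0, 2.0]
print(solve_recurrence(a, d))       # [1.0, 3.0, 10.0, 12.0]
```

A sequential loop needs n dependent steps; this scheme needs only about log2(n) rounds when each round's combinations are done in parallel.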