Commercial Packages                          Vendor
Codine - Computing in Distributed            GENIAS GmbH, Germany
         Network Environment
Connect:Queue                                Sterling Corp., USA
CS1/JP1                                      Hitachi & Cummings Group, USA
Load Balancer                                Unison Software, USA
LoadLeveler                                  IBM Corp., USA
LSF - Load Sharing Facility                  Platform Computing, Canada
NQE - Network Queuing Environment            Craysoft Corp., USA
Task Broker                                  Hewlett-Packard Corp.

Research Packages                            Institution
Batch                                        UCSF, USA
CCS - Computing Centre Software              Paderborn, Germany
Condor                                       University of Wisconsin, USA
DJM - Distributed Job Manager                Minnesota Supercomputing Center
DQS 3.x                                      Florida State University, USA
EASY                                         Argonne National Lab, USA
far                                          University of Liverpool, UK
Generic NQS                                  University of Sheffield, UK
MDQS                                         ARL, USA
PBS - Portable Batch System                  NASA Ames & LLNL, USA
PRM - Prospero Resource Manager              University of S. California
QBATCH                                       Vita Services Ltd., USA
At least two other CMS packages are known of: Balens (VXM Technologies Inc., USA) and JP1 (Hitachi Inc.), formerly known as the NC Toolset. Some difficulty has been found in obtaining further information about these packages; when found, it will be added to this review.
URL: http://www.genias.de/genias/english/codine/Welcome.html
Codine [12] is a software package targeted at utilising heterogeneous networked environments, in particular large workstation clusters with integrated compute servers such as vector and parallel computers. Codine provides a batch queuing framework for a large variety of architectures via a GUI-based administration tool. Codine also provides dynamic and static load balancing and checkpointing, and supports batch, interactive and parallel jobs.
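To give a flavour of the batch side, jobs can be queued from the command line as well as from the GUI. The sketch below assumes Codine's DQS-style qsub interface; the exact flags may differ between releases:

    $ qsub -o run.out -e run.err run.sh   # queue a batch script (flags assumed)
    $ qstat                               # query the status of queued jobs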
Main Features of Codine
This package is a commercial variant of NQS (until recently known as Sterling NQS/Exec) that is marketed and supported by Sterling Software Inc. Its features and functionality are very similar to those of GNQS.
The package provides a Unix batch and device queuing facility capable of supporting a wide range of Unix-based platforms. It provides three queue types:
To be completed.(1)
Load Balancer attempts to optimise the use of computer resources by distributing workloads to available UNIX systems across the network. Load Balancer determines availability based on the following:
Major Features
LoadLeveler is a job scheduler that distributes jobs to a cluster of workstations and/or to nodes of a multi-processor machine [13]. LoadLeveler decides when and how a batch job is run based on preferences set up by the user and system administrator. Users communicate with LoadLeveler using a few simple LoadLeveler commands, or by using the LoadLeveler GUI. Jobs are submitted to LoadLeveler via a command file, which is much like a UNIX script file. Jobs are held in the queue until LoadLeveler can allocate the required resources to run the job. Once the job has completed, LoadLeveler will (optionally) notify the user. The user does not have to specify what machines to run on, as LoadLeveler chooses the appropriate machines.
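As an illustration, a command file and its submission might look like the following sketch; the keywords are standard LoadLeveler job-command-file statements, but the class name and paths are purely illustrative:

    $ cat sample.cmd
    # @ executable   = /u/user/bin/myprog
    # @ output       = sample.out
    # @ error        = sample.err
    # @ class        = short            # class names are site-specific
    # @ notification = complete         # notify the user on completion
    # @ queue
    $ llsubmit sample.cmd               # place the job in the queue
    $ llq                               # list queued and running jobs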
Features
Interactive session support - LoadLeveler's interactive session support feature allows remote TCP/IP applications to connect to the least loaded cluster resource.
Individual control - Users can specify to LoadLeveler when their workstation resources are available and how they are to be used.
Central control - From a system management perspective, LoadLeveler allows a system administrator to control all the jobs running on a cluster. Job and machine status are always available, providing administrators with the information needed to make adjustments to job classes and changes to LoadLeveler controlled resources.
Scalability - As workstations are added, LoadLeveler automatically scales upward so the additional resources are transparent to the user.
LoadLeveler has a command line interface and a Motif-based GUI. Users can:
LSF is a distributed load sharing and batch queuing software package for heterogeneous Unix environments. LSF manages job processing by providing a transparent, single view of all hardware and software resources, regardless of which system in the cluster they reside on. LSF supports batch, interactive and parallel jobs, and manages these jobs across the cluster, making use of idle workstations and servers.
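In practice, work is driven through LSF's command-line tools (or the GUI); for example, assuming a site-defined queue called 'normal':

    $ bsub -q normal -o run.out myprog   # submit a batch job to queue 'normal'
    $ bjobs                              # list the user's batch jobs
    $ lsrun myprog                       # run a task interactively on the best host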
Specifically, LSF provides the following:
NQE [14] provides a job management environment for the most popular workstation platforms, distributing jobs to the most appropriate Unix systems available in a heterogeneous network and allowing users to share resources. NQE is compatible with Network Queuing System (NQS) software, but its functionality exceeds the basic NQS capability. NQE has the following features:
Task Broker is a software tool that attempts to distribute computational tasks among heterogeneous UNIX-system-based computer systems. Task Broker performs this distribution without requiring any changes to the application. It will relocate a job and its data according to rules set up at initialisation. Other capabilities provided by Task Broker include:
Task Broker automates the distribution of computations (a minimal shell sketch of this sequence follows the list) by:
- Gathering machine-specific knowledge from the user.
- Analysing machine-specific information and selecting the most available server.
- Connecting to a selected server via telnet, rsh (remote shell), or crp (create remote process).
- Copying program and data files to a selected server via ftp or NFS.
- Invoking applications over the network.
- Copying the resulting data files back from the server via ftp or NFS.
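The following minimal shell sketch illustrates that sequence. It is not Task Broker itself: the host names are placeholders, and rsh/rcp stand in for the connection (telnet/rsh/crp) and file-copy (ftp/NFS) steps:

    # pick the host with the lowest 1-minute load average
    best=$(for h in hostA hostB hostC; do
             echo "$(rsh $h uptime | sed 's/.*average: //' | cut -d, -f1) $h"
           done | sort -n | awk 'NR==1 {print $2}')
    rcp myprog data.in $best:/tmp                          # stage program and data
    rsh $best "/tmp/myprog /tmp/data.in > /tmp/data.out"   # invoke remotely
    rcp $best:/tmp/data.out .                              # copy the results back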
Task Broker Features:
This software [15] is designed to manage multiple batch job queues under the control of a daemon process. The daemon controls batch jobs through the use of the BSD job control signals, while the client programs batch, baq, and barm provide for the submission, examination, and removal of jobs, respectively.
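Taking the command names above at face value, a session might look like this; the arguments shown are assumptions, since the package's exact syntax is not given here:

    $ batch myjob.sh      # submit a job for batch execution (arguments assumed)
    $ baq                 # examine the queue
    $ barm <job-id>       # remove a job from the queue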
The capabilities include:
The Computing Center Software [16 & 17] is itself a distributed software package running on the front-end of an MPP system. The Mastershell (MS), which runs on a front-end workstation, is the only user interface to CCS. It offers a limited environment for creating Virtual Hardware Environments (VHE) and running applications in interactive or batch mode.
The Port-Manager (PM) is a daemon that connects the MS, Queue-Manager (QM) and the Machine-Managers (MM) together. The MS can be started manually by the user or automatically by the surrounding Unix system as the user's default login shell. In either case a connection to the PM is established and data identifying the user is transferred. The PM uses this information to initiate a first authorisation; on failure, the user session is aborted immediately. Otherwise, the user has the whole command language of the MS at his/her disposal.
A user might, for example, request a VHE consisting of a number of processors in a certain configuration, with exclusive usage for one hour. The number of VHEs a user can handle simultaneously is restricted only by the limitations of the metacomputer and the restrictions set up by the administrator or given by the operating system. When a VHE is ordered from the MS side, the PM checks the user's limitations first, i.e. the maximum number and kind of resources allowed for the requesting user or project. If the request validation is successful, the VHE is sent to the QM.
The QM administers several queues. Depending on priority, time, resource requirements, and the application mode (batch or interactive), a queue for the request is chosen. If the scheduler of the QM decides that a certain VHE should be created, it is sent to the PM. The PM configures the request in co-operation with appropriate MMs and supervises the time limits. In addition, the PM generates operating and accounting data.
The user is allowed to start arbitrary applications within his/her VHE. The upper level of the three-level optimisation policy used by CCS corresponds to efficient hardware request scheduling (QM). The mid-level maps requests onto the metacomputer (PM), and the third level handles the system-dependent configuration software, optimising request placement onto the system architectures (MMs). This three-level policy leads to a high degree of load balancing within the metacomputer.
Condor [18 & 19] is a software package for executing batch-type jobs on workstations which would otherwise be idle. Major features of Condor are automatic location and allocation of idle machines, checkpointing, and the migration of processes. All of these features are achieved without any modification to the underlying Unix kernel. It is not necessary for users to change their source code to run with Condor, although programs must be specially linked with the Condor libraries.
The Condor software monitors the activity on all the participating workstations in the local network. Those machines which are determined to be idle are placed into a resource pool. Machines are then allocated from the pool for the execution of jobs. The pool is a dynamic entity -- workstations enter when they become idle, and leave again when they get busy.
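In outline, a program is relinked against the Condor libraries and then described in a small submit file. The sketch below uses the condor_compile, condor_submit and condor_q tools; the file names are illustrative:

    $ condor_compile cc -o myprog myprog.c   # relink with the Condor libraries
    $ cat myprog.sub
    executable = myprog
    input      = myprog.in
    output     = myprog.out
    error      = myprog.err
    log        = myprog.log
    queue
    $ condor_submit myprog.sub               # place the job in the pool
    $ condor_q                               # monitor its progress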
Design Features
DJM is a job scheduling system designed to allow massively parallel processor (MPP) systems to be used more efficiently. DJM provides a comprehensive set of management tools to help administrators utilise MPP systems effectively and efficiently.
Main features:
DQS 3.1 [20] is an experimental Unix-based queuing system being developed at the Supercomputer Computations Research Institute (SCRI) at Florida State University. DQS development is sponsored by the United States Department of Energy. DQS is designed as a management tool to aid in the distribution of computational resources across a network, and provides architecture transparency for both users and administrators in a heterogeneous environment.
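A DQS job is typically an ordinary shell script submitted with qsub. The sketch below assumes the '#$' embedded-flag prefix used by DQS's descendants, so the details may differ in DQS 3.1:

    $ cat job.sh
    #!/bin/sh
    #$ -o job.out        # file for stdout ('#$' prefix assumed)
    #$ -e job.err        # file for stderr
    ./myprog
    $ qsub job.sh        # submit the script
    $ qstat              # query queue status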
Some features of DQS
Qusage - an Xt-based accounting package provided with DQS. Accounting information in a variety of forms can be retrieved via Qusage; it features on-line help and PostScript output. All accounting information is stored in one place, making its retrieval quick and easy.
Dmake - Distributed Make, a generic parallel make utility designed to speed up the compilation of large packages. Dmake is designed for use with DQS, but can also easily be used as a standalone parallel make utility (separate from DQS). Dmake was developed with simplicity in mind: no daemons or other modifications to network configurations are required.
[Editor's Note: the next article update will include the new EASYLL, a combination of EASY and LoadLeveler.]
The goals of EASY [21], Argonne National Laboratory's job scheduler, are fairness, simplicity, and efficient use of the available resources. These goals conflict, so the scheduler is designed as a compromise. Users can request a set of nodes for any type of use. In order to maintain the quality of machine access, the scheduler provides a single point of access, submit. This program allows users to queue both interactive and batch jobs. When resources are available, the user is notified by the scheduler and at that time has exclusive access to the number of nodes requested. Exclusive access to the nodes gives the user optimum cache performance and use of all available memory and /tmp disk space. This type of access allows users to run benchmarks at any time, and it is essential so that users can predict the wall-clock run time of their jobs when they submit them to the scheduler.
While there are currently no limits to the number or size of jobs that can be submitted, the scheduler uses a public algorithm to determine when batch or interactive time is actually provided. Any modifications to this algorithm will be made public. Argonne has also implemented an allocation policy as a separate part of the scheduler. The intent of the policy is to ensure all users some set amount of resource time and to prevent people from using more than their share of resources.
This project [22] is being carried out by the Computing Services Department of the University of Liverpool and is funded by the JISC New Technologies Initiative. The far project has developed a software tool to facilitate the exploitation of the spare processing capacity of Unix workstations. The initial aims of the project are to develop a system which would:
The Networked, Unix based queuing system, NQS [23 & 24], was developed under a US government contract by the National Aeronautics and Space Administration (NASA). NQS was designed and written with the following goals in mind:
The Multiple Device Queuing System (MDQS) [25] is designed to provide Unix with a fully functional, modular, and consistent queuing system. The MDQS system was designed with portability, expandability, robustness, and data integrity as key goals.
MDQS is designed around a central queue which is managed by a single privileged daemon. Requests, delayed or immediate, are queued by non-privileged programs. Once queued, requests can be listed, modified or deleted. When the requested device or job stream becomes available, the daemon executes an appropriate server process to handle the request. Once activated, the request can still be canceled or restarted if needed.
MDQS can serve as a delayed-execution/batch system. MDQS provides the system manager with a number of tools for managing the queuing system. Queues can be created, modified, or deleted without the loss of requests. MDQS recognises and supports both multiple devices per queue and multiple queues per device by mapping input for a logical device to an appropriate physical output device. Anticipating the inevitable, MDQS also provides for crash recovery.
The MDQS system has been developed at the U.S. Army Ballistic Research Laboratory to support the work of the laboratory and is available to other Unix sites upon request.
The Portable Batch System (PBS) project [26] was initiated to create a flexible, extensible batch processing system to meet the unique demands of heterogeneous computing networks. The purpose of PBS is to provide additional controls over initiating or scheduling execution of batch jobs, and to allow routing of those jobs between different hosts.
PBS's independent scheduling module allows the system administrator to define what types of resources, and how much of each resource, can be used by each job. The scheduling module has full knowledge of the available queued jobs, running jobs, and system resource usage. Using one of several procedural languages, the scheduling policies can easily be modified to suit the computing requirements and goals of any site. The batch system allows a site to define and implement policy as to what types of resources and how much of each resource can be used by different jobs. PBS also provides a mechanism which allows users to specify unique resources required for a job to complete.
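A typical PBS job is a shell script carrying embedded '#PBS' directives. In the hedged sketch below, the job name, resource limit and file names are illustrative:

    $ cat job.sh
    #!/bin/sh
    #PBS -N sample             # job name
    #PBS -l cput=00:10:00      # request ten minutes of CPU time
    #PBS -o sample.out         # file for stdout
    #PBS -e sample.err         # file for stderr
    cd $PBS_O_WORKDIR          # start where qsub was invoked
    ./myprog
    $ qsub job.sh              # submit the job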
A forerunner of PBS was Cosmic NQS, which was also developed by the NAS program and became the early standard for batch systems under Unix. However, Cosmic NQS had several limitations and was difficult to maintain and enhance.
PBS provides:
The Prospero Resource Manager (PRM) [27] supports the allocation of processing resources in large distributed systems, enabling users to run sequential and parallel applications on processors connected by local or wide-area networks. PRM has been developed as part of the Distributed Virtual Systems Project at the Information Sciences Institute of the University of Southern California.
PRM enables users to run sequential or parallel jobs on a network of workstations. Sequential jobs may be off-loaded to lightly loaded workstations, while parallel jobs can make use of a collection of workstations. PRM supports the CMMD message-passing library and a PVM interface (V3.3.5). PRM also supports terminal and file I/O activity by its tasks, such as keyboard input, printing to a terminal, or access to files on a filesystem not mounted by the host on which the task is running. Furthermore, the components of an application may span multiple administrative domains and hardware platforms, without imposing on the user the responsibility of mapping individual components to nodes.
PRM selects the processors on which the jobs will run, starts the job, supports communication between the tasks that make up the job, and directs input and output to and from the terminal and files on the user's workstation. At the job level, location transparency is achieved through a dynamic address translation mechanism that translates task identifiers into physical workstation addresses.
PRM's resource allocation functions are distributed across three entities: the system manager, the job manager, and the node manager. The system manager controls access to a collection of processing resources and allocates them to jobs as requested by job managers. Large systems may employ multiple system managers, each managing a subset of resources. The job manager is the principal entity through which a job acquires processing resources to execute its tasks. The job manager acquires resources from one or more system managers and initiates tasks on these workstations through the node manager. A node manager runs on each workstation in the PRM environment. It initiates and monitors tasks on the workstation on which it is running.
QBATCH is a queued batch processing system for Unix. Each queue consists of a file containing information about the queue itself and about all jobs currently present in the queue. When the program qp is run for a given queue, it forks a child process for each job in the queue in turn and waits for it to complete. If there are no jobs present in the queue, qp will wait for a signal from one of the support programs telling it that another job has joined the queue or that it should terminate.
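Conceptually, qp behaves like the shell sketch below. This is an illustration of the logic just described, not QBATCH source; next_queued_job and pause_until_signalled are hypothetical helpers:

    while :; do
        job=$(next_queued_job)        # hypothetical: path of the next job file
        if [ -n "$job" ]; then
            sh "$job" &               # fork a child process for the job
            wait $!                   # wait for it to complete
        else
            pause_until_signalled     # hypothetical: sleep until a support
        fi                            # program signals a new job or termination
    done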
Features:
There can be as many queues as the system can support. Queues are named and run asynchronously. The processing of jobs running in one queue is totally independent of that in any other (subject, of course, to the independence of potentially shared resources such as data files and devices).
Facilities:
- Inform user when job is complete.
- Cancel jobs in queues, or kill running jobs.
- Change the running order of jobs in queues.
- Suspend and resume processing of jobs.
- Start and stop the queue process engine at any time.
- Disable submissions to a queue, and re-enable them.
- Stop a queue, and wait until the current job has finished.
- List the status and contents of a queue.
- Repeat (or kill and repeat) the currently running job in a queue.
- Examine or edit jobs in a queue, and examine the monitors.
- Monitor (and perhaps account) the times taken by jobs in the queue.
- Prevent CPU hogging jobs from swamping the system.
- Separate queue for large jobs that is started and stopped by cron out of working hours.
- Protect sensitive applications and data with file and directory permissions.