Newest entries are first. Older changes can be found here.

26th January 1996

/parallel/tools/editors/folding-editors/winf.zip
F for Windows by Julian Wilson <julian@bristol.st.com>. Shareware version of the F folding editor for Windows.
/parallel/standards/hippi/hippifc.zip
PKZIPped PostScript file hippifc.ps (6.1 MB unzipped). There are no comments or other indication of its contents without printing or previewing it.
/parallel/standards/hippi/sgi_hippi-6400.ps.gz
HIPPI-6400 (SuperHIPPI) Summary. Details of the SuperHIPPI (HIPPI-6400) proposal to extend HIPPI with a new 6400 Mbit/s physical layer. 20 slides.
/parallel/teaching/hpctec/epcc/tech-watch/hpf-course-notes.tar.gz
HPF Course: EPIC Student Notes package. Authors: A K Ewing; H Richardson; A D Simpson and R Kulkarni.
/parallel/teaching/hpctec/epcc/tech-watch/EPIC_package.tar.Z
EPCC-TEC Interactive Courses. This is a package that allows you to follow a selection of EPCC-TEC's courses interactively, combining both on-line exercises and hypertext student notes. Courses include HPF and MPI.
/parallel/teaching/hpctec/epcc/tech-watch/EPCC-HPFcourse-PS.tar.Z
"EPCC HPC Postscript Course Notes and Slides and HPC Examples sources"

19th January 1996

/parallel/standards/mpi/anl/using/errata.dvi
/parallel/standards/mpi/anl/using/errata.ps.Z
Updated: "Using MPI" book, by Gropp, Lusk, and Skjellum errata.
/parallel/events/selhpc-adv-avs
Advanced AVS by Chris Howarth <CZIUSCH@vmsfe.ulcc.ac.uk> Details of workshop being held at University of London Computer Centre, 20 Guilford St, London WC1N 1DZ, UK. Organised by: SEL-HPC.

AVS is a system which allows users to visualize their data by constructing applications from a series of software components called modules. This course is designed for the user with some basic experience of AVS. It covers in depth the creation of AVS modules to incorporate your own application code into the system; this can range from embedding simulation codes to implementing visualization techniques, and each class of module is covered in the course. The course was developed by MAN-TEC.

Topics: The AVS datatypes and module structure; Using the module generator to build modules; Compiling and debugging modules; Importing and manipulating arrays of data and Creating geometric objects.

/parallel/events/selhpc-remote-hpc
Remote Interaction with HPC Facilities by Chris Howarth <CZIUSCH@vmsfe.ulcc.ac.uk> Call for participation in workshop being held at University of London Computer Centre, 20 Guilford St, London WC1N 1DZ, UK. Organised by SEL-HPC.

There is an ever-increasing demand for High Performance Computing facilities. Unfortunately the costs of these facilities are prohibitively high, so the facilities need to be shared. To make this arrangement work it is necessary to interact efficiently with high performance computers, workstation clusters and large data repositories. This one day course introduces the methods used to interact with remote computers. Many instances of seamless integration of HPC facilities are presented and their effectiveness is discussed.

Topics: Real-time quantum chemistry; Storing information in a remote data repository and The MIDAS national datasets archive.

15th January 1996

/parallel/events/advanced-par-comp
Advanced Parallel Computation by Thierry Cornu <cornu@mantrasun5.epfl.ch> Call for participation in course being held from 4-8th March 1996 at Lausanne, Switzerland.

This course is part of the COSMASE course Programme (COmputation in Sciences, Methods and Algorithms, on Supercomputing for Engineering).

Theme: Parallel algorithms, implementation, optimization and tools.

See also http://lmfwww.epfl.ch/lmf/COSMASE/Courses/1995-96/AdParaComp

/parallel/simulation/architectures/mint/
Updated: MINT (MIPS Interpreter) fast program-driven simulator for multiprocessor systems.
/parallel/simulation/architectures/mint/mint-2.8.tar.Z
MINT v2.8 simulator sources and documentation by Jack Veenstra <veenstra@cs.rochester.edu>, http://reality.sgi.com/employees/veenstra_mti
/parallel/standards/mpi/anl/userguide.ps.Z
Updated: "Users' Guide to mpich, a Portable Implementation of MPI" by William Gropp <gropp@mcs.anl.gov> and Ewing Lusk <lusk@mcs.anl.gov>. Mathematics and Computer Science Division, Argonne National Laboratory, USA. ABSTRACT: MPI (Message-Passing Interface) is a standard specification for message-passing libraries. mpich is a portable implementation of the full MPI specification for a wide variety of parallel computing environments. This paper describes how to build and run MPI programs using the MPICH implementation of MPI.
/parallel/standards/mpi/anl/install.ps.Z
Updated: "Installation Guide to mpich, a Portable Implementation of MPI" by William Gropp <gropp@mcs.anl.gov> and Ewing Lusk <lusk@mcs.anl.gov>. Mathematics and Computer Science Division, Argonne National Laboratory, USA. ABSTRACT: MPI (Message-Passing Interface) is a standard specification for message-passing libraries. mpich is a portable implementation of the full MPI specification for a wide variety of parallel computing environments, including workstation clusters and massively parallel processors (MPPs). mpich contains, along with the MPI library itself, a programming environment for working with MPI programs. The programming environment includes a portable startup mechanism, several profiling libraries for studying the performance of MPI programs, and an X interface to all of the tools. This guide explains how to compile, test, and install mpich and its related tools.
/parallel/standards/mpi/anl/mpi2/mpi2-draft.ps.Z
MPI-2: Extensions to the Message-Passing Interface by Message Passing Interface Forum 12th January 1996 ABSTRACT: This document describes the MPI-2 effort to add extensions to the original MPI standard. The MPI-2 effort began in March 1995 and the additions to the standard are scheduled to be released for public comment at Supercomputing '96. Topics being explored for possible inclusion in MPI-2 are dynamic processes, one-sided communications, extended collective operations, external interfaces, additional language bindings, real-time communications, and miscellaneous topics. NOTE: This is the current state of some of the chapters being drafted for possible inclusion in the MPI-2 standard document. It represents the ongoing work of the MPI Forum in an incomplete and tentative form, but is being distributed with the intention of disseminating the work of the MPI Forum and allowing people attending the MPI European Workshop to understand the topics under consideration. The proposals for MPI-2 are still very much under development. It is very likely that the final MPI-2 standard will differ in important ways from this draft. As a result, this draft should not be taken as a promise of the final form of the MPI-2 standard.
/parallel/standards/mpi/anl/mpi2/mpi2-draft-oompi.ps.Z
Annex B - C++ Class Library: Object Oriented MPI-2 (OOMPI) by Andrew Lumsdaine; Jeff Squyres and Brian McCandless.
/parallel/standards/hippi/hippi-serial_2.1.ps.gz
/parallel/standards/hippi/hippi-serial_2.1.pdf
High-Performance Parallel Interface - Serial Specification V2.1 (HIPPI-Serial) by Roger Cummings <Roger_Cummings@Stortek.com>, Storage Technology Corporation, 2270 South 88th Street, Louisville, CO 80028-0268, USA; Tel: +1 303 661-6357; FAX: +1 303 684-8196 and Don Tolmie <det@lanl.gov>, Los Alamos National Laboratory, CIC-5, MS-B255, Los Alamos, NM 87545, USA; Tel: +1 505 667-5502; FAX: +1 505 665-7793. ABSTRACT: This standard specifies a physical-level interface for transmitting digital data at 800 Mbit/s or 1600 Mbit/s serially over fiber-optic or coaxial cables across distances of up to 10 km. The signalling sequences and protocol used are compatible with HIPPI-PH, ANSI X3.183-1991, which is limited to 25m distances. HIPPI-Serial may be used as an external extender for HIPPI-PH ports, or may be integrated as a host's native interface without HIPPI-PH.
/parallel/standards/hippi/hippi-serial_2.1_changes.ps.gz
/parallel/standards/hippi/hippi-serial_2.1_changes.pdf
Changes between HIPPI-Serial Rev 2.0 and Rev 2.1. All of the changes were essentially editorial in nature, although some of them changed the technical content as well. The changes clarify and correct the document.

12th January 1996

/parallel/vendors/elcom/
Updated Elcom Ltd Home Page, now at http://www.ecsc.mipt.ru/Elcom/
/parallel/vendors/elcom/patches/originfo.txt
Origami release history and notes
/parallel/vendors/elcom/patches/c01to110.zip
Patch for Origami for Windows from 1st December 1995 release to 10th January 1996 release.
/parallel/standards/mpi/anl/mpi2/mpi-draft.super95.ps.Z
MPI-2: Extensions to the Message-Passing Interface by Message Passing Interface Forum 8th January 1996 ABSTRACT: This document describes the MPI-2 effort to add extensions to the original MPI standard. The MPI-2 effort began in March 1995 and the additions to the standard are scheduled to be released for public comment at Supercomputing '96. Topics being explored for possible inclusion in MPI-2 are dynamic processes, one-sided communications, extended collective operations, external interfaces, additional language bindings, real-time communications, and miscellaneous topics. NOTE: This is the current state of some of the chapters being drafted for possible inclusion in the MPI-2 standard document. It represents the ongoing work of the MPI Forum in an incomplete and tentative form, but is being distributed with the intention of disseminating the work of the MPI Forum and allowing people attending the BOF at Supercomputing '95 to understand the topics under consideration. The proposals for MPI-2 are still very much under development. It is very likely that the final MPI-2 standard will differ in important ways from this draft. As a result, this draft should not be taken as a promise of the final form of the MPI-2 standard.
/parallel/standards/mpi/anl/mpi2/mpi-draft-oompi.ps.Z
Annex B - C++ Class Library: Object Oriented MPI (OOMPI) by Andrew Lumsdaine; Jeff Squyres and Brian McCandless.
/parallel/architecture/processing/process-migration/CoCheck/
CoCheck is an environment that allows both process migration and creating checkpoints of parallel applications. Currently, only PVM applications are supported, but support for MPI is under construction.

See also http://wwwbode.informatik.tu-muenchen.de/CoCheck/

/parallel/architecture/processing/process-migration/CoCheck/announcement
Announcement of CoCheck Author: Georg Stellner <stellner@informatik.tu-muenchen.de>
/parallel/architecture/processing/process-migration/CoCheck/CoCheck-V1.0.tgz
CoCheck - Consistent Checkpoints V1.0 by Prof. Dr. A. Bode <bode@informatik.tu-muenchen.de>; Georg Stellner <stellner@informatik.tu-muenchen.de>, http://wwwbode.informatik.tu-muenchen.de/~stellner/ and J. Pruyne. Lehrstuhl für Rechnertechnik und Rechnerorganisation, Institut für Informatik, Technische Universität München, 80290 München, Germany. CoCheck distribution. This package is under the GNU Library General Public License and the GNU General Public License.

CoCheck has been tested on SunOS 4.1.x and DEC OSF/1. It requires PVM version 3.3.7 or higher.

See also http://wwwbode.informatik.tu-muenchen.de/CoCheck/

/parallel/architecture/processing/process-migration/CoCheck/Ug-V1.0.ps.gz
CoCheck Users' Guide V1.0 (PVM Version) by Georg Stellner <stellner@informatik.tu-muenchen.de>, http://wwwbode.informatik.tu-muenchen.de/~stellner/ and Jim Pruyne <pruyne@cs.wisc.edu>, http://cs.wisc.edu/~pruyne/.
/parallel/architecture/processing/process-migration/CoCheck/CkptLib-ALPHA-V1.0.tgz
Condor V1.0 binary library (DEC Alpha)
/parallel/architecture/processing/process-migration/CoCheck/CkptLib-SUN4-V1.0.tgz
Condor V1.0 binary library (Sun Sparc + SunOS 4)
/parallel/environments/pvm3/tape-pvm/tape0.9pl8.tgz
Tape/PVM 0.9 patch level 8 sources, including instructions on setting up, building and installing the distribution. Changes: now measures the overhead incurred each time an event is traced. Events now contain a new field (alpha) giving the overhead (in microseconds). This information is used by the intrusion compensation tool "tico". Caution: the trace format has changed! Author: Eric Maillet <maillet@imag.fr>, LMC-IMAG, Grenoble, France
/parallel/teaching/hpctec/epcc/tech-watch/EPCC-HPFcourse-PS.tar.Z
"EPCC HPC Postscript Course Notes and Slides and HPC Examples sources"
/parallel/events/pc96
8th Joint EPS - APS International Conference on Physics Computing (Physics Computing'96) by Zofia Mosurska <pc96@cyf-kr.edu.pl> Call for papers for conference being held from 17-21st September 1996 at Krakow, Poland. Sponsored by the European and American Physical Societies.

Topics: computer simulation in statistical physics; simulation of specific materials; surface phenomena; percolation; critical phenomena; computational fluid dynamics; classical and quantum molecular dynamics; chaos, dynamical systems; self-organization and growth; neural networks and their applications; complex optimization; as well as contemporary trends in hardware and software development: recent developments in computer architectures; modern programming techniques (parallel programming, object-oriented approach); symbolic computations; graphics, visualization and animation; together with industrial applications, teaching of computational physics and others.

Deadlines: Camera-ready Papers: 30th April 1996; Notification: 31st May 1996.

See also http://www.cyf-kr.edu.pl/pc96/ and ftp://ftp.cyf-kr.edu.pl/pc96/

/parallel/events/ica3pp96-par-prog-tools
Tools for Parallel Programming mini-track at ICA3PP, Singapore Call for papers for mini-track at ICA3PP-96 being held from 11th-13th June 1996 at Singapore. Sponsored by IEEE.

Topics: visual parallel programming; program visualization and animation; novel monitoring and debugging techniques; performance tuning of parallel programs; performance modeling and prediction; automatic parallelization techniques; tools for parallel high level languages; scheduling and load balancing; support for heterogeneous computing; case studies and applications and others.

Deadlines: Papers (hard copy): 5th February 1996; Papers (email): 15th February 1996; Notification: 31st March 1996; Camera-ready papers: 30th April 1996.

See ICA3PP information at http://www.iscs.nus.sg/ica3pp/.

9th January 1996

/parallel/events/ipps96-hetro-comp-workshop
Heterogeneous Computing Workshop at IPPS96 by V.S.Sunderam <vss@mathcs.emory.edu> Call for papers and participation in workshop being held from 15-16th April 1996 at Sheraton Waikiki, Honolulu, Hawaii, USA. Sponsored by the IEEE Computer Society Technical Committee on Parallel Processing.

Topics: Basic Models and Performance Measures; Efficient Resource Management Strategies; Transparent Mechanisms for Storing and Handling Data; System Interfaces and Programming Tools; Failure Resilience Strategies; "Proof of Concept" Application Implementations and others.

Deadlines: Papers: 12th January 1996; Notification: 20th February 1996; Camera-ready papers: 1st March 1996.

See also http://www-cse.ucsd.edu/users/berman/hcw.html and IPPS'96 information at http://www.usc.edu/dept/ceng/prasanna/meetings/ipps/ippshome.html

/parallel/events/supeur96
SUP'EUR 96 - High Performance Computing in Europe on IBM Platforms by Zofia Mosurska <supeur96@cyf-kr.edu.pl> Call for papers, participation and advanced program for conference being held from 8-11th September 1996 at Continental Hotel, Krakow, Poland.

The conference is particularly intended to address the problems and needs of high performance computing on IBM machines, and to offer up-to-date information on IBM's products and plans in HPC.

Held jointly with the conference is the Sup'Prize contest, worth $10,000, for the development of parallel applications on IBM platforms.

Topics: Parallel and distributed computing; IBM trends in high performance computing; New IBM products; Experience with SP2; High performance storage systems; Environments, languages and tools; Applications; Graphics and visualization; Education and training and others.

Deadlines: Papers: 30th May 1996; Notification: 30th June 1996.

See also http://www.cyf-kr.edu.pl/supeur96 and ftp://ftp.cyf-kr.edu.pl/supeur96

/parallel/standards/mpi/mpimap/MPIMap.tar.Z
Updated: "MPIMap Distribution" by John May <johnmay@llnl.gov> A tool for visualizing MPI datatypes. Must run on the target parallel machine for which the MPI code is being developed, since it calls MPI to determine datatype layouts and sizes.

Designed to work only with the MPICH V1.0.10 and V1.0.11 implementations of MPI at present. It also requires Tcl 7.4 and Tk 4.0 and works best with a colour display.
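
For readers unfamiliar with MPI derived datatypes (the structures that MPIMap displays), the fragment below builds a simple one using standard MPI-1 calls. It is a generic, hypothetical illustration and is not taken from MPIMap itself.

    #include <mpi.h>

    /* Build a derived datatype describing one column of a 10x10 row-major
     * array of doubles: 10 blocks of 1 element each, with a stride of 10
     * elements between block starts.  A tool such as MPIMap can then
     * display the memory layout this datatype describes. */
    void build_column_type(MPI_Datatype *column)
    {
        MPI_Type_vector(10,          /* count: number of blocks          */
                        1,           /* blocklength: elements per block  */
                        10,          /* stride between block starts      */
                        MPI_DOUBLE,
                        column);
        MPI_Type_commit(column);     /* make it usable for communication */
    }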

/parallel/applications/numerical/aztec
"Aztec: A parallel iterative package for the solving linear systems arising in Newton-Krylov Methods" by Ray S. Tuminaro <tuminaro@cs.sandia.gov>; Tel: +1 505 845-7298; John N. Shadid <jnshadi@cs.sandia.gov>; Tel: +1 505 845-7876 and Scott A. Hutchinson <sahutch@cs.sandia.gov>, http://www.cs.sandia.gov/~sahutch; Tel: +1 505 845-7996; FAX: +1 505 845-7442. Sandia National Laboratories, Department 9221, Parallel Computational Sciences, MS 1111 P.O. Box 5800, Albuquerque, NM 87185-1111, USA. Announcement of iterative library that greatly simplifies the parallelization process when solving a sparse linear system of equations Ax = b where A is a user supplied nxn sparse matrix, b is a user supplied vector of length n and x is a vector of length n to be computed.

Available publicly through a research license from the authors.

See also http://www.cs.sandia.gov/HPCCIT/aztec.html for more details including Postscript papers.
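
For context, the kind of iteration such a package performs can be sketched as below: a plain sequential conjugate gradient solve of Ax = b, assuming A is symmetric positive definite and stored densely. This is only an illustration of the iterative approach; it does not use Aztec's actual interface or its distributed sparse matrix formats.

    #include <stdlib.h>
    #include <math.h>

    static double dot(int n, const double *u, const double *v)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += u[i] * v[i];
        return s;
    }

    /* y = A*x for a dense n-by-n matrix A stored by rows. */
    static void matvec(int n, const double *A, const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {
            double s = 0.0;
            for (int j = 0; j < n; j++)
                s += A[i * n + j] * x[j];
            y[i] = s;
        }
    }

    /* Conjugate gradient for Ax = b, A symmetric positive definite.
     * Sequential and dense purely to illustrate the iteration. */
    void cg(int n, const double *A, const double *b, double *x,
            int max_iter, double tol)
    {
        double *r  = malloc(n * sizeof *r);
        double *p  = malloc(n * sizeof *p);
        double *Ap = malloc(n * sizeof *Ap);

        for (int i = 0; i < n; i++) {       /* start from x = 0, r = p = b */
            x[i] = 0.0;
            r[i] = b[i];
            p[i] = b[i];
        }
        double rr = dot(n, r, r);

        for (int k = 0; k < max_iter && sqrt(rr) > tol; k++) {
            matvec(n, A, p, Ap);
            double alpha = rr / dot(n, p, Ap);      /* step length          */
            for (int i = 0; i < n; i++) {
                x[i] += alpha * p[i];               /* update solution      */
                r[i] -= alpha * Ap[i];              /* update residual      */
            }
            double rr_new = dot(n, r, r);
            double beta = rr_new / rr;              /* new search direction */
            for (int i = 0; i < n; i++)
                p[i] = r[i] + beta * p[i];
            rr = rr_new;
        }
        free(r); free(p); free(Ap);
    }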

8th January 1996

/parallel/languages/fortran/f90/f90-FAQ
"Fortran 90 information file, on compilers, tools, books, courses, tutorials and the standard." by Michael METCALF <Michael.METCALF@cern.ch> See also http://www.fortran.com/fortran/
/parallel/books/mcgraw-hill/par-dist-comp-handbook
Parallel and Distributed Computing Handbook (1996) by Albert Y. Zomaya <zomaya@ee.uwa.edu.au>, The University of Western Australia, Australia Featuring contributions from more than sixty of the world's top experts in the field, this state-of-the-art handbook offers engineers and scientists the most comprehensive treatment available of the theory and applications of parallel and distributed computing.

Readers will find forty-one well-organized chapters covering the full range of key issues relating to systems design and operation, including models and algorithms; architectures and technologies; development tools; and current and future uses of this exciting technology in science and industry.

See also http://www.ee.uwa.edu.au/~paracomp/home.html

1232 pages. 600 illustrations. ISBN 0-07-073020-2. $99.50.

/parallel/books/prentice-hall/in-search-of-clusters
In Search of Clusters by Gregory F. Pfister <pfister@austin.ibm.com> Topics covered include: how clusters are an invisible, multi-billion dollar segment of the computer industry; why clusters are growing in importance and visibility; hardware and software elements of clusters, including the first serious discussion of "single system image"; key incompatibilities in programming clusters and symmetric multiprocessors; comparison between clusters and symmetric multiprocessors; and how popular benchmarks mislead users and designers of clusters.

ISBN 0-13-437625-0 published by Prentice-Hall, 415 pages, $42.

See also http://www.prenhall.com/ or the book at http://www.prenhall.com/~ray/013/437624/ptr/43762-4.html

/parallel/libraries/memory/global-array/
GA Toolkit developed at the Molecular Science Research Center, Pacific Northwest Laboratory, USA. It provides a portable and efficient shared-memory programming interface through which each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed matrices, without the need for explicit cooperation by other processes. Platforms: SP1, iPSC, Delta, Paragon, KSR-2, workstations.

The toolkit contains global arrays (GA), memory allocator (MA), TCGMSG, and TCGMSG-MPI packages bundled together.

Global Arrays is a portable shared Non-Uniform Memory Access (NUMA) programming environment for distributed and shared memory computers.

TCGMSG is a simple and efficient message passing library.

TCGMSG-MPI is a TCGMSG library implementation on top of MPI and in some cases architecture-specific resources.

MA is a dynamic memory allocator for Fortran (and also C) programs.

/parallel/libraries/memory/global-array/global2.1.tar.Z
Global Array (GA) Toolkit V2.1 by Jarek Nieplocha <j_nieplocha@pnl.gov>, Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, MSIN: K1-87, Richland, WA 99352, USA; Tel: +1 (509) 372-4469. Requires GNU make and IPC (semaphores).
/parallel/applications/numerical/peigs/siam_6th.ps.Z
Parallel Inverse Iteration with Reorthogonalization by George I. Fann <gi_fann@pnl.gov> and Richard J. Littlefield <rj_littlefield@pnl.gov>. Pacific Northwest Laboratory, PO Box 999, Richland, WA 99352, USA. ABSTRACT: A parallel method for finding orthogonal eigenvectors of real symmetric tridiagonal matrices is described. The method uses inverse iteration with repeated Modified Gram-Schmidt (MGS) reorthogonalization of the unconverged iterates for clustered eigenvalues. This approach is more parallelizable than reorthogonalizing against fully converged eigenvectors, as is done by LAPACK's current DSTEIN routine. The new method is found to provide accuracy and speed comparable to DSTEIN's and to have good parallel scalability even for matrices with large clusters of eigenvalues. We present empirical results for residual and orthogonality tests, plus times on IBM RS/6000 (sequential) and Intel Touchstone DELTA (parallel) computers.
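
The reorthogonalization step at the heart of the method can be sketched as follows: a sequential C illustration of modified Gram-Schmidt applied to one iterate against previously accepted vectors in a cluster. This is a generic sketch, not the authors' parallel code.

    #include <math.h>

    /* Modified Gram-Schmidt: orthogonalize vector v (length n) against m
     * already-accepted vectors stored as the rows of Q (m x n), then
     * normalize v.  In the paper's method this is applied to the
     * unconverged inverse-iteration iterates within a cluster of close
     * eigenvalues. */
    void mgs_reorthogonalize(int n, int m, const double *Q, double *v)
    {
        for (int j = 0; j < m; j++) {
            const double *q = &Q[j * n];
            double proj = 0.0;
            for (int i = 0; i < n; i++)
                proj += q[i] * v[i];        /* <q_j, v> using the updated v */
            for (int i = 0; i < n; i++)
                v[i] -= proj * q[i];        /* subtract the projection      */
        }
        double norm = 0.0;
        for (int i = 0; i < n; i++)
            norm += v[i] * v[i];
        norm = sqrt(norm);
        if (norm > 0.0)
            for (int i = 0; i < n; i++)
                v[i] /= norm;
    }
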
/parallel/events/ipps96-irregular-workshop
"Second Workshop on Solving Irregular Problems on Distributed Memory Machines at IPPS96" by Eugene N. Miya <eugene@ames.arc.nasa.gov> Call for papers and participation for workshop being held at IPPS96 from 15th-19th April 1996 at Sheraton Waikiki Hotel, Honolulu, Hawaii, USA. Sponsored by IEEE Computer Society Technical Committee on Parallel Processing in cooperation with ACM SIGARCH.

Deadlines: Submissions: 31st January 1996; Notification: 1st March 1996; Camera-ready manuscripts: 20th March 1996.

See also http://www.usc.edu/dept/ceng/prasanna/home.html

/parallel/standards/mpi/anl/pxman.ps.Z
PEXEC Reference Manual (Draft) by William Gropp <gropp@mcs.anl.gov> and Ewing Lusk <lusk@mcs.anl.gov>. Mathematics and Computer Science Division, Argonne National Laboratory, USA. ABSTRACT: PEXEC is a system for writing lightweight, fault-tolerant, client/server programs. It is organized around an event-driven model, and provides for highly modular definition of different services to be provided by a program. This document contains detailed documentation on the routines that are part of the PEXEC implementation. These include the basic event-driven driver routines, as well as a variety of convenience routines for creating and using network sockets and child processes. As an alternative to this manual, the reader should consider using the script pxman, which uses xman to provide an X11 Window System interface to the data in this manual.
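
As background, the event-driven model the manual refers to is essentially a loop that waits for events on a set of descriptors and dispatches handlers. The sketch below shows that model with a plain select() loop in C; it is a generic illustration and does not use PEXEC's actual routines.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/select.h>

    /* A bare-bones event loop: wait for activity on a descriptor, then
     * dispatch a handler.  Here the only "service" is reporting how much
     * data arrived on standard input; a real server would register many
     * descriptors and handlers. */
    int main(void)
    {
        char buf[256];
        fd_set readfds;

        for (;;) {
            FD_ZERO(&readfds);
            FD_SET(STDIN_FILENO, &readfds);          /* watch one descriptor */

            /* Block until an event (a readable descriptor) occurs. */
            if (select(STDIN_FILENO + 1, &readfds, NULL, NULL, NULL) < 0) {
                perror("select");
                return 1;
            }
            if (FD_ISSET(STDIN_FILENO, &readfds)) {  /* dispatch the handler */
                long n = (long) read(STDIN_FILENO, buf, sizeof buf);
                if (n <= 0)
                    break;                           /* end of input: stop   */
                printf("event: %ld bytes read\n", n);
            }
        }
        return 0;
    }
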
/parallel/events/
At http://www.cs.reading.ac.uk/cs/research/pedal/rwpc/rwpc.html Reading Workshops on Parallel Computing Call for papers for workshops being held from 28-29th March 1996 at Reading, UK.

The Parallel, Emergent, & Distributed Architectures Laboratory (PEDAL) in conjunction with the Dept of Computer Science at Reading University is presenting a series of workshops on the broad theme of parallel computing.

The intention of these workshops is to provide a forum for both academia and industry to meet and discuss key issues in the area of Parallel & Distributed processing and the related field of emergent computing structures, which often relies on implicit recognition or direct use of parallelism.

Deadlines: Extended Abstracts: 25th January 1996; Notification: 15th February 1996; Full papers: 15th March 1996.

See also Call for Papers details at http://www.cs.reading.ac.uk/cs/research/pedal/rwpc/rwcfp1.html, and Best Student Paper details at http://www.cs.reading.ac.uk/cs/research/pedal/rwpc/bpaward.html

5th January 1996

/parallel/events/mpidc96
MPI Developers Conference and Users Group Meeting by Andrew Lumsdaine <lums@owl.cse.nd.edu> Call for participation for conference being held from 1st-2nd July 1996 at University of Notre Dame, Notre Dame, IN, USA.

Topics: Message Passing Interface (MPI) standard; applications, implementations, experiences, extensions, comparisons, future, software, ... about MPI.

Deadlines: Abstracts for Papers: 26th April 1996; Abstracts for Posters, Panel discussions and Software Demos: 24th May 1996; Papers for proceedings: 31st May 1996.

See also http://www.cse.nd.edu/mpidc96/

/parallel/events/ibm-sp-workshop
IBM Parallel Programming Workshop for the RS/6000 SP System by Mike Carline <mike@vnet.ibm.com> Details of workshop being held from 12th-16th February 1996 at Southlake, Texas, USA.

This 4.5-day workshop is designed to help information technology professionals such as system designers and programmers gain sufficient knowledge to develop parallel applications on the IBM RS/6000 Scalable POWERparallel System (RS/6000 SP).

Open to IBM US Customers and IBM employees.

/parallel/events/iopads96
Fourth Annual Workshop on I/O in Parallel and Distributed Systems (IOPADS) by David Kotz <dfk@wildcat.cs.dartmouth.edu> Call for participation and programme for workshop being held on 27th May 1996 at Philadelphia, USA. Held in conjunction with the ACM 1996 Federated Computing Research Conference (FCRC'96). Sponsored by ACM SIGACT, ACM SIGARCH, ACM SIGOPS and IEEE TCOS.

See also http://www.cs.dartmouth.edu/iopads/

/parallel/events/europar96
EURO-PAR'96 (merging of CONPAR-VAPP and PARLE) by Yves Robert <yrobert@cri.ens-lyon.fr> Call for papers for conference being held at ENS Lyon, France.

Workshops: Programming environment and tools; Routing and communication in interconnection networks; Automatic parallelization and high performance compilers; Distributed systems and algorithms; Parallel languages, programming and semantics; Parallel non numerical algorithms; Parallel numerical algorithms; Parallel DSP and image processing; VLSI design automation; Computer arithmetic; High performance computing and applications; Theory and models of parallel computing; Parallel computer architecture; Networks and ATM; Optics and other new technologies for parallel computation; Neural networks; Scheduling and load balancing; Critical systems; Performance evaluation; Instruction level parallelism; High-level and meta-level control in parallel symbolic programs; Parallel and distributed databases.

Deadlines: Paper submission: 4th February 1996; Electronic submissions: 18th February 1996; Notification of acceptance: 10th May 1996; Final papers: 10th June 1996.

See also http://www.ens-lyon.fr/LIP/europar96/

/parallel/events/pooma96
Parallel Object-Oriented Methods and Applications (POOMA'96) by Mary Dell Tholburn <marydell@> Call for papers for conference being held from 28th February-1st March 1996 at the Eldorado Hotel, Santa Fe, New Mexico, USA.

Topics: interoperability of distributed and parallel computing with object-oriented methods.

Deadlines: Abstracts: 22nd January 1996.

See also http://www.acl.lanl.gov/Pooma96/

/parallel/events/acpc96
"Third International Conference of the ACPC with special emphasis on Parallel Databases and Parallel I/O" (ACPC'96) Call for papers for conference being held from 22-25th September 1996 at Klagenfurt, Austria.

Topics: Databases; I/O; Algorithms; Applications; Architectures; Languages; Compilers; Programming Environments and others.

Deadlines: Papers: 16th February 1996; Tutorials and Posters: 12th April 1996; Notification: 6th May 1996; Camera-ready papers and posters: 21st June 1996; Camera-ready tutorials: 26th August 1996.

See also http://www.ifi.uni-klu.ac.at/Conferences/ACPC96/

/parallel/events/europar96-routing-intercon-nets
"Euro-Par'96 Workshop #2: Routing and Communications in Interconnection Networks" by Robert Cypher <cypher@maldives.cs.jhu.edu> Call for papers for workshop being held at ENS Lyon, France.

Topics: all aspects of communication, including routing and communication algorithms, the design and packaging of interconnection networks, and the communication costs of parallel algorithms.

Deadlines: Paper submission: 4th February 1996; Electronic submissions: 18th February 1996; Notification of acceptance: 10th May 1996; Final papers: 10th June 1996.

See also http://www.ens-lyon.fr/LIP/europar96/

/parallel/events/europar96-instruction-level-par
Euro-Par'96 Workshop #20: Instruction-Level Parallelism by Christine Eisenbeis <eisenbei@hector.inria.fr> Call for papers for workshop being held from 27th-29th August 1996 at ENS Lyon, France.

Deadlines: Paper submission: 4th February 1996; Electronic submissions: 18th February 1996; Notification of acceptance: 10th May 1996; Final papers: 10th June 1996.

See also http://www.ens-lyon.fr/LIP/europar96/

/parallel/events/suif
"1st Stanford University Intermediate Format (SUIF) Compiler Workshop" by Robert S. French <rfrench@Xenon.Stanford.EDU> Call for participation for workshop being held from 11-13th January 1996 at Stanford University, USA.

SUIF (Stanford University Intermediate Format) is a compiler system designed to support collaborative research in optimizing and parallelizing compilers. The system is based on the concept of having different independent compiler passes cooperate via a common program representation.

See also http://suif.stanford.edu/

/parallel/events/islip96
"Ninth International Symposium on Languages for Intensional Programming" (ISLIP'96) by Ed Ashcroft <ed.ashcroft@asu.edu> Call for papers for symposium being held from 13-15th May 1996 at Arizona State University, Tempe, Arizona, USA.

Topics: Programming paradigms: dataflow computation; connectionist models; logic programming; real-time programming and languages such as Lucid and GLU. Semantics: non-determinism; extended Kahn principle; intensional concepts and termination issues. Software Engineering: version control; visual user interfaces; parallel programming; fault-tolerant systems and program verification. Applications: signal processing; image processing; hardware synthesis; graphics and data models.

Deadlines: Full Papers / Extended Abstract: 15th February 1996; Notification: 20th March 1996; Camera-ready papers: 12th April 1996.

See also http://lu.eas.asu.edu/islip96.html

/parallel/events/europar96-new-hpc-apps
"Euro-Par'96 Workshop #11: New Applications in High-Performance Computing" Call for papers for workshop being held at ENS Lyon, France.

Topics: Multimedia support; Data compression; Geographical information systems; Cognitive recognition; Parallel systems in the entertainment industry and arts; Embedded systems; Dynamic distribution; Computational steering and others.

Deadlines: Paper submission: 4th February 1996; Electronic submissions: 18th February 1996; Notification of acceptance: 10th May 1996; Final papers: 10th June 1996.

See also http://www.ens-lyon.fr/LIP/europar96/

/parallel/jobs/usa-virginia-hpc-researchers
The Legion group at the University of Virginia, USA is seeking researchers in HPC. Research Assistant: MS in CS, C++ & Unix. Senior Scientist: distributed object systems, PhD in CS and experience.

See also http://www.cs.virginia.edu/~legion Author: Andrew Grimshaw <grimshaw@archive.cs.virginia.edu>

/parallel/jobs/uk-hw-par-dbase-researcher
Heriot-Watt University, Edinburgh, Scotland, UK is seeking a research associate to develop analytic performance modelling tools for parallel database systems. A good CS degree or postgraduate experience is required.

See also http://www.cee.hw.ac.uk/Databases/ Author: Hamish Taylor <hamish@cee.hw.ac.uk>


Copyright © 1993-2000 Dave Beckett & WoTUG