From: "Klaus E. Schauser" <schauser@cs.ucsb.edu>
Newsgroups: comp.parallel
Subject: ACM 1999 Java Grande Tutorials
Date: 30 May 1999 23:54:03 GMT
Organization: University of California, Santa Barbara
Approved: bigrigg@cs.cmu.edu
Message-Id: <7isj2r$r72$1@goldenapple.srv.cs.cmu.edu>
Originator: bigrigg@ux6.sp.cs.cmu.edu
Xref: ukc comp.parallel:15638


The following tutorials are being offered at the ACM Java Grande 
conference.  Please register now!


Tutorial 1: Mobile Agents -- Java based mobile code for 
            scientific applications
            David Walker and Omer Rana
            University of Wales

Tutorial 2: JIT Compilation of Java for Intel Architecture
            Aart J.C. Bik, Milind Girkar, and Mohammad R. Haghighat
            Intel Corporation

Tutorial 3: Use of Java in Computational Science
            T. Haupt and G. Fox
            Syracuse University

Tutorial 4: Use of Java in Collaboration and its integration 
            with JavaScript and Dynamic HTML
            M. Podgorny and G. Fox
            Syracuse University

Tutorial 5: Introduction to High Performance Java Computing:
            Abstractions for Concurrent, Parallel, and 
            Distributed Coordination using Java Threads
            George K. Thiruvathukal and Thomas W. Christopher
            Loyola University and Illinois Institute of Technology


-------------------------------------------------------------------- 


   ACM 1999 Java Grande Conference Registration Form
     San Francisco, California, June 12-14, 1999 
      

To plan catering we need a precise estimate of attendees. Even if you 
can't pre-register now, please let us know if you intend to participate: 
e-mail your name, affiliation, and dietary needs to java99@cs.ucsb.edu. 

Name (title, first, last): _________________________________________

Name tag should read: ______________________________________________

Affiliation: _______________________________________________________

Address: ___________________________________________________________

____________________________________________________________________

Phone: _____________________________________________________________

Fax: _______________________________________________________________

E-mail: ____________________________________________________________

Special needs (including dietary): _________________________________


Please mark selections with X:

CONFERENCE (June 12&13, 1999)
                        EARLY        LATE (after May 15 1999) 
  ACM member            $190 ___     $220 ___
  Non-Member            $220 ___     $250 ___
  Full-Time Student      $60 ___      $60 ___

  ACM membership number: ______________________

TUTORIALS (June 14, 1999)
  1. Java Based Mobile Code, Walker & Rana         (morning)    $75 ___
  2. JIT Compilation, Bik, Girkar & Haghighat      (afternoon)  $75 ___
  3. Java in Computational Science, Haupt & Fox    (morning)    $75 ___
  4. Java in Collaboration, Podgorny & Fox         (afternoon)  $75 ___
  5. Java Concurrency, Thiruvathukal & Christopher (full day)  $150 ___

TOTAL FEE (Conference & Tutorials): US$ ______

  Fees must be paid in U.S. dollars.  Please mark method of payment:

  Check (from U.S. bank, made payable to ACM Java'99) __  Money Order __

  Credit Card:  Visa __  MasterCard __

  Credit Card Number: ______________________ Expiration Date: _______

  Signature: ____________________________ Date: _____________________

Please complete and return this form with your remittance to:

       Klaus Schauser
       Attn: ACM Java Grande'99 Conference Registration
       Department of Computer Science
       University of California at Santa Barbara
       Santa Barbara, CA 93106
       USA

       E-mail: java99@cs.ucsb.edu
       Phone: (805) 893-4321
       FAX:   (805) 893-8553 

-------------------------------------------------------------------- 



                 ACM 1999 Java Grande Conference

(formerly ACM Workshop on Java for High-Performance Network Computing)
                
                  Sponsored by ACM SIGPLAN

        Student Participation Supported by NSF and DARPA

           San Francisco, California, June 12-14, 1999 

            http://www.cs.ucsb.edu/conferences/java99 


The Java Grande conference focuses on the use of Java in the broad
area of high-performance computing; including engineering and 
scientific applications, simulations, data-intensive applications, 
and other emerging application areas that exploit parallel and 
distributed computing or combine communication and computing.  
The conference includes a day of tutorials and will be followed by 
the JavaOne conference, which will enable attendees to follow the 
Java Grande conference with exposure to the latest in basic 
Java technology. 

The program features many interesting talks, papers, and posters, 
including invited talks by Bill Joy (Sun), Donald F. Ferguson (IBM), 
and Matt Welsh (Berkeley).



-------------------------------------------------------------------- 



     ACM 1999 Java Grande Conference

                 TUTORIALS


  Tutorial 1: Mobile Agents -- Java based
  mobile code for scientific applications

 Monday, June 14, 1999, 8:30am-12:30pm (half-day, morning)

             David Walker and Omer Rana

                  University of Wales

As distributed systems grow more complex and deal with increasing
quantities of data, newer and more sophisticated methods for information
search and retrieval are needed. Traditional methods for information
processing have required the data to be moved from the data source
to the point of processing. An approach gaining favour recently is
the concept of moving the computation to the data, in the form of
mobile code, generally referred to as a ``mobile agent''. This
is particularly beneficial when dealing with large data sets, or when
the exact sequence of service provision is not known in advance and
depends on the order in which processing is performed. Hence, in
a system involving mobile agents, the agent can visit a number of
hosts, which offer part of the complete service required by the
agent. The agent does not need to know the complete itinerary in
advance, and may change its routing tables based on information
gathered at intermediate sites. This gives mobile agents the
autonomy to make decisions based on interactions with local hosts
as they move from one site to another, and negotiate services with
hosts as they travel across a communication network. Furthermore,
the mobile agent does not need a permanent connection to the
originating host, making it ideal for handling temporary network
connections, as available on numerous mobile devices. 

Approaches such as Remote Procedure Call (RPC) have generally
been the most common mechanism employed in distributed systems
for invoking services offered at a remote location. RPC makes the
calling of remote procedures the same as calling local procedures,
by offering local stubs for services available on a remote host. The
sender would call a local procedure stub with the appropriate
arguments, which would then be packed as a message, encoded,
and sent to the remote site, where a reverse sequence of events
would take place. The required service would be executed and a
response sent back to the sender. The main characteristic of such a
process is that each remote call between the requester and the
provider entails two communications, one to ask for the service, and
the other to acknowledge that the service was accepted. 
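The request/reply flow described above can be sketched in miniature. The class names (`AddStub`, `RemoteHost`) and the comma-separated message format are illustrative inventions, and the network hop is reduced to a local method call:

```java
// Toy sketch of the RPC pattern: a local stub packs arguments into a
// message, "sends" it, and the remote side unpacks, executes, and replies.
// AddStub and RemoteHost are illustrative names only.
class RemoteHost {
    // The remote side decodes the request, executes the service, and
    // encodes a reply -- the second of the two communications.
    static String dispatch(String message) {
        String[] parts = message.split(",");
        int result = Integer.parseInt(parts[1]) + Integer.parseInt(parts[2]);
        return "reply," + result;
    }
}

public class AddStub {
    // The caller uses this like a local procedure; underneath, two
    // messages flow: the encoded request and the encoded reply.
    public static int add(int a, int b) {
        String request = "add," + a + "," + b;        // pack arguments
        String reply = RemoteHost.dispatch(request);  // simulated send/receive
        return Integer.parseInt(reply.split(",")[1]); // unpack result
    }

    public static void main(String[] args) {
        System.out.println(AddStub.add(2, 3)); // prints 5
    }
}
```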

The alternative approach of Remote Programming (RP), as
employed in mobile agents, views the interaction between the client
requesting the service and the server providing it as not only a
call to a procedure with the necessary arguments, but also the
supplying of the code for the procedures to be performed
remotely. In the RP model, each message sent by a client requesting
service on another computer carries both code, which specifies the
procedures to run, and data, which holds the current state of the
interaction. 
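The contrast with RPC can be sketched as follows: each hop hands a host a unit of code plus the agent's carried state, and no round trip to the originator occurs between hops. The names (`Host`, `AgentDemo`) are hypothetical, and modern Java lambdas stand in for mobile code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.UnaryOperator;

// Illustrative sketch of Remote Programming: the agent carries its state
// from host to host, and each host applies its local service to it.
class Host {
    private final UnaryOperator<Integer> service;
    Host(UnaryOperator<Integer> service) { this.service = service; }
    // The host runs its local service on the visiting agent's state.
    int visit(int state) { return service.apply(state); }
}

public class AgentDemo {
    // The agent follows an itinerary, updating its state at each stop;
    // no round trip to the originating host occurs between hops.
    static int runItinerary(int initialState, List<Host> itinerary) {
        int state = initialState;
        for (Host h : itinerary) state = h.visit(state);
        return state;
    }

    public static void main(String[] args) {
        List<Host> hosts = Arrays.asList(
            new Host(x -> x + 10),  // first host offers "add 10"
            new Host(x -> x * 2));  // second host offers "double"
        System.out.println(runItinerary(1, hosts)); // prints 22
    }
}
```

A real agent system would also migrate the code itself and let the agent revise its itinerary at each host; this sketch shows only the carried-state idea.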

In this tutorial we compare and contrast thread based code
migration with agent based remote programming. We look at the
differences between mobile agents and CORBA based services,
and identify areas of overlap between the two approaches. We look
at the Java based ``Aglets'' mobile agents library, from IBM
Research (Japan), and show how it may be combined with
Java-MPI. 

Presenters:

Professor David Walker is head of the Parallel and Scientific
Computing group in the Department of Computer Science at the
University of Wales Cardiff, UK. He has conducted research into
parallel software for scientific computation for the past 13 years, the
first ten years of which were spent in the United States, and has
published over 60 papers on the subject. Professor Walker teaches
various courses on parallel computing and algorithms to
undergraduate and postgraduate students. He has also been actively
involved in special interest groups in parallel computing and high
performance Java. 

Dr. Omer Rana is an assistant professor in the Department of
Computer Science at the University of Wales Cardiff, UK. He has
been actively involved in the public understanding of science,
working with BBC radio and television, Channel 4 and the
Discovery Channel as part of ScienceLine. He has been a visiting
research fellow at BT Labs, where he worked with Aglets to
implement parallel data mining applications in telecommunications.
He teaches distributed computing and multi-agent systems to both
undergraduates and postgraduates, and has also presented these
courses to local companies, such as BT, Nortel and Hyder. He has
also been actively involved in special interest groups in high
performance Java. 

Benefits to participants:

The participants will gain an insight into the use of mobile agents with
conventional parallel computing libraries, such as MPI. The
difference between thread mobility, active messages and mobile
agents will be highlighted using a data mining algorithm. Extensions
to other application domains will also be discussed, in particular load
balancing and automatic software configuration. The extension to
dynamic resource discovery, using KQML/KIF libraries, such as
JKQML from IBM Research will also be demonstrated, with
example programs. The use of KQML based messaging, built over
MPI, will be demonstrated for building a large society of agents,
some of which may be mobile. A brief overview of other application
areas will be provided towards the end of the tutorial, with reference
to Globus and DoD's HLA. 



 Tutorial 2: JIT Compilation of Java for Intel Architecture

  Monday, June 14, 1999, 1:30pm-5:30pm (half-day, afternoon)

 Aart J.C. Bik, Milind Girkar, and Mohammad R. Haghighat

             Microcomputer Research Labs
                   Intel Corporation

The Java model achieves security and platform independence by
translating and distributing applications as Java bytecodes, the
instruction set of the Java Virtual Machine (JVM). In the early
releases of the Java Development Kit, bytecodes were interpreted by
the JVM. As Just-In-Time (JIT) compilation technology has matured,
JIT compilers have now become integral parts of JVM implementations. By
compiling bytecodes into the host-machine instruction set and
optimizing the code on the fly, the performance of Java applications
is significantly improved, even matching the performance of statically
compiled languages. Given that compilation takes place during the
execution time of an application, JIT-compiler designers are faced
with a major challenge of identifying cost effective optimizations and
implementing them in an efficient manner. 

In this tutorial, a team from Intel's Microcomputer Research Labs
that has developed a Java JIT-compiler for Intel Architecture
presents their experiences and results. We start with an overview of
the Java language and the components of a Java environment, and
give a comprehensive and in-depth presentation on the structure and
optimizations of this state-of-the-art JIT compiler. Covered issues
include JVM-JIT and JIT-GC interactions, intermediate languages
for translating bytecodes for Intel Architecture, optimization
techniques at various levels, exception handling, and garbage
collection. We further discuss a spectrum of high-level optimizations
such as CSE, loop-invariant code-motion, elimination of range
checks and cast checks, inlining, speculative inlining, method
specialization, multiversion code, optimizations enabled due to
strong typing, and hardware-assisted exception-handling. We also
present the techniques and heuristics that we used for low-level
optimizations such as local and global register allocation, instruction
selection and scheduling, and code and data alignment. Moreover,
we share our experiences with advanced vectorization techniques in
support of Intel's MMX Technology, as well as a demonstration of
VTune, Intel's performance analysis tool that assisted us in our
optimizations design. 

Presenters:

Aart J.C. Bik received his M.S. degree in computer science from
Utrecht University, The Netherlands, in 1992, and his Ph.D. degree
from Leiden University, The Netherlands, in 1996. He was a
postdoctoral research fellow at Indiana University, where he was
involved in the design and implementation of a high-performance Java
system under the supervision of Prof. D.B. Gannon. Currently, he is
a senior software engineer at Intel's
Microcomputer Research Labs. His research interests are in sparse
matrix computations, optimizing and parallelizing compilers and Java.
His email address is aart.bik@intel.com. 

Milind Girkar received his Ph.D. from the University of Illinois at
Urbana-Champaign. Currently, he is a senior software engineer in
Intel's Microcomputer Research Labs. Before joining Intel, he worked
on the compiler for the UltraSPARC architecture at Sun. His research
interests are in parallelizing and optimizing compilers. His email
address is Milind.Girkar@intel.com. 

Mohammad R. Haghighat is a senior software engineer at Intel's
Microcomputer Research Labs. His research interests are in
optimizing and parallelizing compilers and multithreaded
architectures. He holds a B.S. in Computer Science and Engineering
from Shiraz University, and an M.S. and a Ph.D. in Computer
Science from the University of Illinois at Urbana-Champaign.
Mohammad is the author of a book on symbolic analysis for
parallelizing compilers. His email address is
Mohammad.R.Haghighat@intel.com. 



          Tutorial 3: Use of Java in Computational Science

 Monday, June 14, 1999, 8:30am-12:30pm (half-day, morning)

                 T. Haupt and G. Fox

                  Syracuse University

This tutorial assumes a basic familiarity with the Java language and
describes frameworks and applications useful in different areas of
computational science.

Java can be used in many aspects of engineering and scientific
computing, including user interfaces, servers providing web hosting
of resources, and as a basic coding language. We cover these
different aspects, describing where Java can be used today and
where it still has performance or functionality disadvantages. In the
computing arena, we discuss both sequential and parallel computing.
We also discuss the use of Java as an educational tool for building
novel interactive learning sites. 

We review the status of Java Grande and its initiatives aimed at
providing the best possible programming environment in Java. We
describe the role of Java and distributed objects in middleware;
specialized capabilities such as matrix libraries; Java for
visualization; Java and MPI for parallel computing; and Java in the
building of seamless interfaces and problem solving environments. We
contrast classic HPCC (High Performance Computing and Communication)
applications with those from distributed simulation. Using Java and
distributed objects provides a unified approach. 

This tutorial should be of interest to both computer scientists and
those application programmers interested in using Java in different
aspects of their research and education. 

Presenters:

Geoffrey Fox obtained his Undergraduate and Graduate degrees at
Cambridge University. Most of his professional career has been
spent at Caltech (Physics) and Syracuse University
(Physics/Computer Science). Fox is an expert in the use of parallel
architectures and the development of concurrent algorithms. He
leads a major project to develop prototype high performance Java
and Fortran compilers and their runtime support. His group has
pioneered use of CORBA and Java for both collaboration and
distributed computing and Fox helped initiate the international Java
Grande activity. Fox is a proponent for the development of
computational science and its follow on "Internetics" as an academic
discipline and a scientific method. He has established at Syracuse
University both graduate and undergraduate programs in these
areas. All courses have been made available on the Web and his
research includes HPCC technology to support education at both
K-12 and University level. His research on parallel computing has
focused on development and use of this technology to solve
large-scale computational problems -- such as numerical relativity
and earthquake prediction. A recent set of activities center on Web
collaboration technology and its application to synchronous distance
education. His email address is gcf@npac.syr.edu. 

Tomasz Haupt graduated from Jagiellonian University in Krakow
(Poland), where he obtained his Ph.D. in Physics. For 10 years he was
involved in the analysis of data from numerous High Energy Physics
(elementary particle) experiments at CERN and CESR (Cornell).
Since 1990 he has been at Syracuse University, where his initial
activity was development of the Fortran 90D compiler - the first
data-parallel flavor of Fortran targeted at distributed memory
systems, which soon became the first, pilot implementation of an HPF
compiler. Later, the technology developed by the Fortran 90D group
was licensed to PGI, Inc. In addition to developing the language
(Haupt participated in the HPF Forum) and developing the compiler,
including its runtime support, Haupt was involved in numerous
activities to transfer the new technologies to application
developers. These include his participation in the Grand Challenge
Binary Black Hole project. Currently, Haupt's research concentrates
on employing Web-based technologies for high performance distributed
computing.
Haupt is applying the WebFlow system to many different classes of
applications. These include development of a nanomaterials problem
solving environment (within NCSA Alliance), where WebFlow
provides a high level visual user interface. As part of the DoD high
performance computing modernization program, Haupt developed
the Landscape Management System, which implements a "navigate and
choose" paradigm to provide seamless access to the remote data and
computational resources needed to solve the problem at hand. Currently he
is working on the Gateway system (under development for ASC) to
integrate application domain specific Problem Solving Environments
(PSE) with a seamless and secure remote access to resources
available at the ASC computer center. 



 Tutorial 4: Use of Java in Collaboration and its integration 
         with JavaScript and Dynamic HTML

  Monday, June 14, 1999, 1:30pm-5:30pm (half-day, afternoon)

                M. Podgorny and G. Fox

                  Syracuse University

We illustrate the value of integrating Java and JavaScript to build
collaborative systems built around synchronously shared web
objects (defined as any entity -- coming from a web server,
web-linked database or CGI script -- invoked by the user from
HTML). We describe the architecture and use of TangoInteractive
which can share web-objects as well as Java and C++ applications.
We do this in the context of a general framework of shared event
collaborative systems. We show the power of this system to provide
shared Dynamic HTML and contrast its use with interactive
environments built around Java applets. We present the application of
this technology to distance education, along with lessons from its
use. We comment on the implications of this type of application for the
proposed W3C Document Object Model at both XML and HTML
levels. We describe how collaborative technology naturally provides
universal interfaces accessible across disabilities. Shared web
objects are natural in many areas including engineering design,
value-added commerce, and education. This tutorial should be of
interest both to those working in these areas and to developers
of Java and Dynamic HTML systems. 

Presenters:

Geoffrey Fox (see bio for Tutorial 3). 

Marek Podgorny holds a PhD in physics and an 18-year record of
research in solid state physics. In 1991 he shifted his research
interests to distributed computing and distributed information systems.
He has led several successful projects, such as the early deployment
of ATM-based wide area networks, including the statewide activity
NYNET. He developed applications of parallel database systems in
data fusion processes, video on demand technology, Command and
Control network-centric solutions, and Web multimedia
collaborative and distance learning environments. The latter is his
major focus today with the TangoInteractive system being used in
several collaboration and distance education areas. Podgorny has
broad expertise in databases, web technologies including
sophisticated Java clients and servers, video and audio compression
and all aspects of networking. 



     Tutorial 5: Introduction to High Performance Java Computing:
  Abstractions for Concurrent, Parallel, and Distributed 
               Coordination using Java Threads

 Monday, June 14, 1999, 8:30am-12:30pm and 1:30pm-5:30pm (full day)

    George K. Thiruvathukal and Thomas W. Christopher

    Loyola University and Illinois Institute of Technology

This course provides an intermediate to advanced treatment of the
Java multithreading framework, which is included with all versions of
the Java Development Kit. This course is primarily aimed at industry
practitioners, technical staff, and scientists who are already familiar
with the ideas of concurrency and have some experience with
multithreading and multiprogramming in another language, such as C
or C++. 

Due to the length of this course and expected enrollment, it will not
be practical to provide hands-on experience; however, students will
be provided with notes and a CD containing working example
programs, reusable packages, and documentation. A longer version
of this course is available commercially (and to academics) from
Tools of Computing LLC. This course is also the subject of a
forthcoming book by the authors, tentatively scheduled for release
during 1Q2000. 

The following topics will be covered during this one-day intensive
course, subject to available time. 

Introduction to Multithreaded Programming: Know Your
Enemy 

This section provides a high-level overview of processes and
threads, the differences between the two, and a summary of general
operating system support for threads. Programming with processes
and threads continues to be an error prone activity, and this section
will detail the pitfalls of working with threads and motivate the need
for the rest of the course, where much time is spent on higher-level
approaches to concurrency. 

Classic Synchronization Mechanisms: Locks, Semaphores,
Conditions, Barriers 

Java objects closely support the notion of a monitor, as defined by
C.A.R. Hoare and P. Brinch Hansen. Programmers unfamiliar with
monitors often find the abstraction unwieldy and lacking the precise
control found in other synchronization mechanisms. This section will
be of great interest to those who have worked with Posix and
Win32 threads environments and parallel environments, where such
control is taken for granted. 
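For readers new to Java monitors, here is a minimal sketch of the abstraction this section discusses: a one-slot buffer built from `synchronized`, `wait`, and `notifyAll`. The class and method names are illustrative, not taken from the tutorial materials.

```java
// Illustrative one-slot buffer built as a Java monitor: synchronized
// methods give mutual exclusion, while wait/notifyAll coordinate
// producers and consumers on the object's single wait set.
public class Slot {
    private Integer value = null;

    public synchronized void put(int v) throws InterruptedException {
        while (value != null) wait();  // guard: block until the slot empties
        value = v;
        notifyAll();                   // wake any waiting consumer
    }

    public synchronized int take() throws InterruptedException {
        while (value == null) wait();  // guard: block until the slot fills
        int v = value;
        value = null;
        notifyAll();                   // wake any waiting producer
        return v;
    }

    // Produce 1..n in a second thread and sum what the caller consumes.
    public static int demoSum(int n) {
        Slot s = new Slot();
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= n; i++) s.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 0; i < n; i++) sum += s.take();
            producer.join();
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(demoSum(3)); // prints 6 (1+2+3)
    }
}
```

Note that each `wait()` sits inside a `while` loop, not an `if`: a woken thread must re-check its guard, since another thread may have claimed the slot first.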

Concurrent Problems Revisited: Shared*, ProCon*,
Monitors*, Diners*, Readers/Writers* 

This section begins by introducing the notion of a race condition,
which remains a little-understood troublemaker in concurrent
programming. A number of classic synchronization problems are
presented (fairly quickly) with exciting examples that have been
graphically animated to better teach the actual problem. We also
show how to apply these problems, which are often of purely
intellectual interest, to real world situations. 
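The race condition mentioned above can be made concrete: `count++` is a read-modify-write, so unsynchronized increments from several threads may be lost. The `Counter` class here is an illustrative sketch, not code from the tutorial:

```java
// Illustrative fix for the race on count++: the increment is a
// read-modify-write, so without `synchronized` two threads can read the
// same value and one of the two updates is silently lost.
public class Counter {
    private int count = 0;
    public synchronized void increment() { count++; } // one thread at a time
    public synchronized int get() { return count; }

    // Run `threads` threads, each incrementing `perThread` times.
    public static int countWithThreads(int threads, int perThread) {
        Counter c = new Counter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment();
            });
            ts[i].start();
        }
        try { for (Thread t : ts) t.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return c.get();
    }

    public static void main(String[] args) {
        // With synchronization no updates are lost; drop `synchronized`
        // from increment() and this total may come up short.
        System.out.println(countWithThreads(4, 100_000)); // prints 400000
    }
}
```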

Higher-Level Abstractions for Concurrent Programming:
Futures, Incremental Structures, FutureQueue, FutureTable 

Low-level synchronization mechanisms have the key disadvantage of
requiring constant discipline to guard against race conditions and
other pitfalls; programming bugs almost inevitably occur. As a first step
toward eliminating these problems, higher-level abstractions have
been proposed (some dating back almost 20 years in work on
functional programming) to express concurrent computations. This
section will introduce the notion of a future and show how to extend
Java's built-in Vector and Table classes to make use of futures. 
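A minimal future of the kind this section describes might look like the following sketch, built directly on Java's monitor primitives. The `SimpleFuture` class is a hypothetical illustration, not the tutorial's actual code:

```java
// Illustrative future built on Java's monitor primitives: setValue fills
// the future exactly once; getValue blocks until a value is available.
public class SimpleFuture {
    private Object value;
    private boolean isSet = false;

    public synchronized void setValue(Object v) {
        if (isSet) throw new IllegalStateException("future already set");
        value = v;
        isSet = true;
        notifyAll();            // release every thread blocked in getValue
    }

    public synchronized Object getValue() throws InterruptedException {
        while (!isSet) wait();  // block until a producer calls setValue
        return value;
    }

    // Start a producer thread, then read the result from the caller.
    public static Object demo(Object v) {
        SimpleFuture f = new SimpleFuture();
        new Thread(() -> f.setValue(v)).start();
        try { return f.getValue(); }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        // The consumer can be written before the producer has finished.
        System.out.println(demo(42)); // prints 42
    }
}
```

The future decouples the request for a value from its use, which is exactly what lets the higher-level structures in the next section hide synchronization from the programmer.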

Applications of Higher-Level Abstractions: Shared Table of
Queues, Macro-Dataflow, Task Graphs, Work Pools,
Concurrent Aggregates 

This final section applies the abstractions derived from futures to
provide some useful classes to hide most details (especially the
details that cause programming errors) of concurrent computing
from the programmer. A number of case studies are presented,
beginning with the notion of a shared table of queues (a key building
block for a tuple space), macro-dataflow (a version of dataflow that
supports coarse-grain parallelism), task graphs, work pools (a
convenient abstraction for distributed batch processing), and
concurrent aggregates (useful for parallel array and table
operations). 

Dr. Thomas Christopher is an Associate Professor of Computer
Science at Illinois Institute of Technology. He has over two decades
of experience as a teacher and researcher. Professor Christopher's
research interests are in the areas of programming languages and
distributed-memory parallel processing. In the
programming-language area, he is the author of several compilers
and parser generators. His most recent parser generator is a
Strong-LL(k) parser generator that transforms the LL(k) grammar
into an LL(1) grammar with back-up actions. In the parallel
processing area, he has designed and implemented a number of
parallel processing systems based on message-driven designs. The
most recent, in Java, passes active messages among the nodes
(distributed processes). The active messages themselves specify
code to be executed upon the arrival of the message at the
destination node. He also designed the Memo system, which provides
inter-node communication through a shared directory of queues. 

Dr. George K. Thiruvathukal is a Visiting Assistant Professor at
Loyola University in Chicago, Illinois. His doctoral dissertation was
entitled An Enhanced Actors Model and Programming Environment
for Parallel and Distributed Processing. Currently, he is a
co-director of the Java and High-Performance Research Group
(JHPC) and Secretary General of the Java Grande Forum. While at
Argonne National Laboratory in the Mathematics and Computer
Science Division, he was a member of the Globus project, where he
worked on a wide-area implementation of the Message Passing
Interface (MPI) and on Java-based visualization tools for
metacomputing systems. His current research is on a message
passing system called Active Java and a computational sharing
framework called the Computational Neighborhood. He recently
participated in the ACM Java '98 (held at Stanford University)
workshop, PDPTA 1998 and 1999, and IPPS/SPDP Workshop
on Java as a program committee member. Dr. Thiruvathukal has
also served as an Adjunct Associate Professor and Lecturer,
Computer Science, at I.I.T. from 1997-98. He has also held senior
technical staff and management industry positions at Tellabs and
R.R. Donnelley and Sons Companies from 1991-1995. 



-------------------------------------------------------------------- 
 Klaus E. Schauser                 E-mail: schauser@cs.ucsb.edu 
 Associate Professor               Phone: (805) 893-3926 
 Department of Computer Science           (805) 893-4321 
 University of California          FAX:   (805) 893-8553 
 Santa Barbara, CA 93106           http://www.cs.ucsb.edu/~schauser 
--------------------------------------------------------------------

--
Articles to bigrigg+parallel@cs.cmu.edu (Admin: bigrigg@cs.cmu.edu)
Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel

