From: Matthias Brune <mabus@flamenco.uni-paderborn.de>
Newsgroups: comp.parallel
Subject: SCI Europe 1998
Date: 22 Sep 1998 20:00:19 GMT
Organization: University of Paderborn, Germany
Approved: bigrigg@cs.cmu.edu
Message-Id: <6u8vkj$19c$1@encore.ece.cmu.edu>
Originator: bigrigg@ece.cmu.edu


--------------------------------------------------------------------------------

                             Call for Participation

                            SCI Europe 1998 Tutorial
                            ------------------------
                               September 28, 1998
                                Bordeaux, France

--------------------------------------------------------------------------------
Please accept our sincere apologies if you received this message more
than once.
--------------------------------------------------------------------------------

Title: A Tutorial on ccNUMA Using The Scalable Coherent Interface (SCI)
Time:  Monday, September 28, 1998, 9 a.m. to 5 p.m.
Location: Bordeaux, France
  (held in conjunction with SCI Europe'98 and EMMSEC'98; see official
program:
   http://www.uni-paderborn.de/pc2/SCI-Europe98/ and
   http://www.emmsec98.archimedia.fr/)

The most interesting and important aspects of the SCI cache coherence
specification are documented in C code. Although most people
retrospectively find that this code is exceptionally comprehensive, it
is time-consuming and difficult to get through alone. On Monday,
September 28, 1998, SCI Europe '98 will offer a C-code tutorial which
will uncover some of the obscurities; the session will be presented by
B. Mitchell Loebel, President of MultiNode Microsystems Company and
Director of The PARALLEL Processing Connection. Whatever your
objectives with SCI, this offering will definitely shorten your "time
to market" by several man-years.

Who should attend?  The agenda of the upcoming tutorial is specifically 
tuned to assist those attendees who are planning to implement cache 
coherence between multiple compute nodes using the SCI protocol in the 
near future; this will be a very practical session. And members of the 
instructing team will be available to provide support after the seminar 
concludes. Given the evidence that Intel is doing development in this 
area, this seminar is especially timely!

Some of you have asked about the possibility of an all-day seminar at
your site. That is definitely available. As for the agenda, it can be
tailored to your needs; of course, we would need input from you as to
how far along you have progressed and what you are attempting to do.
We think that attendance should be limited to about 10-15 people, but
we're flexible on that point. We'll certainly cover more material,
more deeply and interactively, than we will be able to at the SCI
Europe '98 meeting. As regards background, it is safe to say that
there is very little of the code that we haven't explored over the
past three years, and I am grateful for the able assistance of Dave
Gustavson and a number of exceptional people who are members of The
PARALLEL Processing Connection. Beyond merely sorting through the
C code, our team has also fully abstracted out and documented the
state machine for the "Full Set" SCI protocol. The exact cost will
depend on your needs - please inquire at 408 732-9869 (USA).

Details of the SCI Europe '98 C-code seminar are as follows:

Prerequisites -
   1. A familiarity with the C language is assumed.

   2. A familiarity with the SCI specification (IEEE 1596) is assumed.
      If you have a copy, please bring it with you. If you need to
      purchase a copy, please contact IEEE standards sales
      immediately, at 1-800-678-4333 (US) or ++1-908-562-3800
      (fax ++1-908-981-9667); Or, through the Computer Society, at
      1-800-272-6657 (outside California), ++1-714-821-8380 (inside
      California or international), fax ++1-714-821-4641.

   3. Completion of Dave Gustavson's Introductory tutorial or equivalent
      experience is recommended but not essential.

Agenda -
THE TUTORIAL WILL EXTENSIVELY COVER THE SCI STATE MACHINE (WITH RELEVANT 
STATE DIAGRAMS) IN ADDITION TO THE FOLLOWING MATERIAL. IN THE PAST, THE 
INTENT WAS MAINLY TO GUIDE ATTENDEES IN GETTING THROUGH THE CODE. THIS 
TIME AROUND, WE WILL SPEND MUCH MORE TIME DISCUSSING THE SCI STATE 
MACHINE AND ITS INTERFACE TO SMP BUSSES. THIS WILL BE THE BEST ONE WE'VE 
EVER DONE.

   1. Simulation architecture -
      The SCI spec is written in C and is encapsulated in a
      comprehensive multithreaded simulation. In order to examine and
      understand the spec, it is essential to understand the overall
      structure of the simulation.

   2. Data structures -
      Here we will describe the various data structures that are
      allocated when the simulation starts up.

   3. Packet movement between "chips" -
      As an SCI packet moves in the physical system which is described
      by the simulation, threads are started, suspended and restarted.
      Certain very important insights will be presented to help the
      attendee get through the code.
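The hand-off between "chips" can be pictured as a queue on each link.
The minimal C sketch below is strictly illustrative: the real
simulator uses one thread per packet in flight, suspending and
resuming it, and none of these names are taken from the IEEE 1596
C code.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical model: a packet "suspends" when parked on a link queue
 * and "resumes" when the receiving chip picks it up. */
enum { QMAX = 8 };

typedef struct { int src, dst; char type[16]; } packet_t;

typedef struct {              /* one link queue between two "chips" */
    packet_t q[QMAX];
    int head, tail;           /* monotonically increasing indices */
} link_t;

/* "Suspend": the sending chip parks the packet on the link and returns. */
int link_send(link_t *lk, packet_t p) {
    if (lk->tail - lk->head >= QMAX) return -1;   /* link full */
    lk->q[lk->tail++ % QMAX] = p;
    return 0;
}

/* "Resume": the receiving chip picks the packet up later. */
int link_recv(link_t *lk, packet_t *out) {
    if (lk->head == lk->tail) return -1;          /* nothing in flight */
    *out = lk->q[lk->head++ % QMAX];
    return 0;
}
```

A send followed by a later receive is exactly the suspend/restart
pattern the tutorial will walk through in the multithreaded code.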

   4. Glossary of terms and functions -
      The SCI spec is brilliantly done and the code is good. However,
      the naming of functions and variables often leaves much to be
      desired and is sometimes downright misleading. This seminar will
      definitely shorten an attendee's path to overall understanding by
      defining clearly the functionality of key routines.

   5. "Double action" cache state machine -
      SCI cache tag updates do not happen atomically; i.e., a cache
      state is set to an intermediate value when a packet is sent out
      and is set to its final value when the response to that packet
      is received. We will describe this important behavior in detail.
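As a concrete picture of the "double action", here is a minimal C
sketch of a two-phase tag update; the state and function names are
illustrative only and are not the identifiers used in the IEEE 1596
C code.

```c
#include <assert.h>

/* Illustrative states only -- not the names from the SCI spec code. */
typedef enum {
    CS_INVALID,      /* line not cached */
    CS_PENDING,      /* intermediate: request packet is in flight */
    CS_ONLY_DIRTY    /* final: sole, modified copy */
} cache_state_t;

typedef struct { cache_state_t state; } cache_line_t;

/* Phase 1: the request packet goes out; the tag is parked in an
 * intermediate state so later accesses see the line as busy. */
void send_ownership_request(cache_line_t *line) {
    line->state = CS_PENDING;
}

/* Phase 2: the response arrives; only now is the final state set. */
void receive_ownership_response(cache_line_t *line) {
    line->state = CS_ONLY_DIRTY;
}
```

Between the two phases, any other access can test for the
intermediate state and stall - which is where the conflict handling
of item 6 comes in.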

   6. Conflict between concurrent transactions -
      A. Within a node
         In the case of several CPUs in a node (which seems to be the
         way current system designs are going), certain actions by
         _some_ of the CPUs must be blocked. We will describe what
         blocking is necessary and _why_.
      B. External to a node
         Similar to the first part of this section, but this will deal
         with _inter_-node transaction conflicts (multiple processors
         in a node) and interoperability considerations between
         dissimilar nodes.
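For part A, the kind of intra-node blocking involved can be sketched
in a few lines of C: while one CPU has a coherence transaction
outstanding on a line, the other CPUs in the same node must not issue
a conflicting one. Again, all names here are illustrative and not
taken from the spec code.

```c
#include <assert.h>

/* Hypothetical per-line transaction slot for the CPUs of one node. */
typedef struct {
    int busy;        /* nonzero while a transaction is outstanding */
    int owner_cpu;   /* which CPU in the node holds the slot */
} line_lock_t;

/* Returns 1 if the CPU may proceed, 0 if it must block and retry. */
int try_begin_transaction(line_lock_t *l, int cpu) {
    if (l->busy) return 0;
    l->busy = 1;
    l->owner_cpu = cpu;
    return 1;
}

void end_transaction(line_lock_t *l) {
    l->busy = 0;
}
```

With this scheme, a second CPU's request on the same line is held off
until the first CPU's transaction completes.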

   7. Optimizing the SCI protocol within the specification.

   8. Demonstration and interactive use of the SCI simulator.

Date: September 28, 1998

Time: 9:00 a.m to 5:00 p.m.

Location: Bordeaux, France (held in conjunction with SCI Europe'98)

Cost:
      US $475 per person,
         or
      US $350 with a Student ID or Professorial ID from an accredited
      university, for those who register by Friday, September 11, 1998.
      Registrations received after September 11, 1998, will be at the
      undiscounted price. Additionally, handout materials can only be
      guaranteed for pre-registrants; others may have to be mailed
      later. All registrations must be paid in full in advance of
      attendance.

Registration form ----

Name:

Organization name:

Snail mail address:

Email address:

Voice telephone:

Fax telephone:

B. Mitchell Loebel
CEO, Chief Technical Officer
MultiNode Microsystems Corporation              408 732-9869

Executive Director
The PARALLEL Processing Connection              408 732-9869

--
Articles to bigrigg+parallel@cs.cmu.edu (Admin: bigrigg@cs.cmu.edu)
Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel

