WoTUG - The place for concurrent processes

Communicating Process Architectures 2015: Programme

The CPA 2015 programme comprises two full days and one half day of sessions (keynote presentations, refereed papers and, depending on proposals, mini-workshops). There will also be two open Fringe sessions on Sunday and Monday evenings. The conference dinner will be on Tuesday evening.

The list of keynote talks, accepted papers, workshops and fringe presentations is given below. Within each category, items are listed in alphabetical order of the first author's or proposer's surname.

This list is subject to revision (especially the fringe programme, which remains open for proposals right up to the relevant sessions).

Group photo (g2)

More photographs are at the bottom of this page.

Schedule

Sunday, August 23rd, 2015
14:00 Room keys available (Keynes College, Reception)
18:00 Bar opens
19:00 Dinner (Keynes College, Dolche Vita)
20:30 Fringe Session 1 (Keynes College, K-Bar)
Note: because of the nature of the Fringe, the following items are provisional; more may be added and the ordering may change.
Not that Blocking! (Abstract) (Slides PDF)
Øyvind Teig, Autronica Fire and Security AS, Trondheim, Norway
T42 in FPGA (year one design status report) (Abstract) (Slides PDF)
Uwe Mielke, Electronics Engineer, Dresden, Germany
OLL Compiler Project (status at 2015/08/16) (Abstract) (Slides PDF)
Barry M. Cook, Independent, UK
23:00 Bar closes or ...
24:00 Bar closes (at manager's discretion)
Monday, August 24th, 2015
08:30 Registration (Keynes College)
Session 1 (Keynes Lecture Theatre 2)
09:20 Welcome (Peter Welch)
09:30 Keynote Address 1
occam's Rule Applied: Separation of Concerns as a Key to Trustworthy Systems Engineering for Complex Systems (Abstract) (Slides PDF)
Eric Verhulst, CEO/CTO, Altreonic NV, Belgium
10:30 Tea/coffee
Session 2 (Keynes Lecture Theatre 2)
11:00 OpenTransputer: Reinventing a Parallel Machine from the Past (Abstract) (Slides PDF)
Andres Amaya-Garcia, David Keller and David May, Department of Computer Science, University of Bristol, UK
11:30 Discrete Event-based Neural Simulation using the SpiNNaker System (Abstract) (Slides PDF) (Slides PPT)
Andrew Brown (a), Jeff Reeve (a) and Steve Furber (b)
(a) Department of Electronics & Computer Science, University of Southampton, UK
(b) School of Computer Science, The University of Manchester, UK
12:00 Adding CSPm Functions and Data Types to CSP++ (Abstract) (Slides PDF)
Daniel Garner (a), Markus Roggenbach (b) and William B. Gardner (c)
(a) BT, Adastral Park, Ipswich, UK
(b) Department of Computer Science, Swansea University, UK
(c) School of Computer Science, University of Guelph, Canada
12:30 Lunch (Keynes College, Atrium)
Session 3 (Keynes Lecture Theatre 2)
14:00 Towards Lightweight Formal Development of MPI Applications (Abstract) (Slides PDF)
Nelson Souto Rosa (a), Humaira Kamal (b) and Alan Wagner (b)
(a) Universidade Federal de Pernambuco, Centro de Informatica, Recife, Brazil
(b) Department of Computer Science, University of British Columbia, Canada
14:30 A Model-driven Methodology for Generating and Verifying CSP-based Java Code (Abstract) (Slides PDF)
Julio Marino (a) and Raul N.N. Alborodo (b)
(a) Babel Group, Universidad Politecnica de Madrid, Spain
(b) IMDEA Software Institute, Madrid, Spain
15:00 A Super-Simple Run-Time for CSP-Based Concurrent Systems (Abstract) (Slides PDF) (Slides PPT)
Michael E. Goldsby, Sandia National Laboratories, Livermore, California, USA
15:30 Tea/coffee
Session 4 (Keynes Lecture Theatre 2)
16:00 Workshop Session 1
Dealing with (Real)-Time in Real-World Hybrid Systems (Abstract) (Slides PDF)
Pieter Van Schaik (a) and Eric Verhulst (b)
(a) Altreonic NV, Belgium
(b) CEO/CTO, Altreonic NV, Belgium
17:30 Workshop ends
19:00 Dinner (Keynes College, Dolche Vita)
20:30 Fringe Session 2 (Keynes College, K-Bar)
Note: because of the nature of the Fringe, the following items are provisional; more may be added and the ordering may change.
C++11 CSP (Abstract) (Slides PDF)
Kevin Chalmers, School of Computing, Edinburgh Napier University, UK
Not that Concurrent! (Abstract) (Slides PDF)
Øyvind Teig, Autronica Fire and Security AS, Trondheim, Norway
Protected Mode RTOS: what does it mean? (Abstract) (Slides PDF)
Bernhard Sputh, Altreonic NV, Belgium
Managing Hard Real Times (28 Years Later) (Abstract) (Slides PDF) (Slides PPT)
Peter H. Welch, School of Computing, University of Kent, UK
23:00 Bar closes or ...
24:00 Bar closes (at manager's discretion)
Tuesday, August 25th, 2015
Session 5 (Keynes Lecture Theatre 2)
09:00 A Design for Interchangeable Simulation and Implementation (Abstract) (Slides PDF)
Klaus Birkelund Jensen and Brian Vinter, Niels Bohr Institute, University of Copenhagen, Denmark
09:30 Lambda Calculus in Core Aldwych (Abstract) (Slides PDF)
Matthew Huntbach, School of Electronic Engineering and Computer Science, Queen Mary University of London, UK
10:00 CoCoL: Concurrent Communications Library (Abstract) (Slides PDF)
Kenneth Skovhede and Brian Vinter, Niels Bohr Institute, University of Copenhagen, Denmark
10:30 Tea/coffee
Session 6 (Keynes Lecture Theatre 2)
11:00 Workshop Session 2
The Role of Concurrency in the Modern HPC Center (Abstract) (Slides PDF)
Brian Vinter, Niels Bohr Institute, University of Copenhagen, Denmark
12:30 Lunch (Keynes College, Atrium)
Session 7 (Keynes Lecture Theatre 2)
14:00 Process-Based Aho-Corasick Failure Function Construction (Abstract) (Slides PDF)
Tinus Strauss (a), Derrick G. Kourie (b), Bruce W. Watson (b) and Loek Cleophas (c)
(a) FASTAR Research Group, University of Pretoria, South Africa
(b) FASTAR Research Group, Stellenbosch University, South Africa
(c) Department of Computer Science, Umeå University, Sweden
14:30 Guppy: Process-Oriented Programming on Embedded Devices (Abstract) (Slides PDF)
Frederick R.M. Barnes, School of Computing, University of Kent, UK
15:00 Communicating Process Architectures in Light of Parallel Design Patterns and Skeletons (Abstract) (Slides PDF)
Kevin Chalmers, School of Computing, Edinburgh Napier University, UK
15:30 Tea/coffee
Session 8 (Keynes Lecture Theatre 2)
16:00 Keynote Address 2
Communicating Processes and Processors 1975-2025 (Abstract) (Slides PDF)
David May, Department of Computer Science, University of Bristol, UK
17:00 Panel session
18:45 Sparkling wine reception (Darwin College, Conference Suite)
19:30 Conference dinner (Darwin College, Conference Suite)
Wednesday, August 26th, 2015
08:00 Reminder: check out today by 10:00am (for those with rooms in college). Ask at college Reception for luggage storage area.
Session 9 (Keynes Lecture Theatre 2)
09:00 Bus Centric Synchronous Message Exchange for Hardware Designs (Abstract) (Slides PDF)
Brian Vinter and Kenneth Skovhede, Niels Bohr Institute, University of Copenhagen, Denmark
09:30 Model-Driven Design of Simulation Support for the TERRA Robot-Software Tool Suite (Abstract) (Slides PDF)
Zhou Lu, Maarten M. Bezemer and Jan F. Broenink, Robotics and Mechatronics, CTIT Institute, University of Twente, The Netherlands
10:00 Code Specialisation of Auto Generated GPU Kernels (Abstract) (Slides PDF)
Troels Blum and Brian Vinter, Niels Bohr Institute, University of Copenhagen, Denmark
10:30 Tea/coffee
Session 10 (Keynes Lecture Theatre 2)
11:00 Workshop Session 3
Message Passing Concurrency Shootout (Abstract) (Slides PDF)
Kevin Chalmers, School of Computing, Edinburgh Napier University, UK
12:30 CPA AGM and awards
13:00 Lunch (Keynes College, Atrium)
14:00 End of CPA 2015

Keynote Presentations

Communicating Processes and Processors 1975-2025
David MAY, Department of Computer Science, University of Bristol, UK

Abstract. The ideas that gave rise to CSP, occam and transputers originated in the UK around 1975; occam and the Inmos transputer were launched around 1985. Thousands of computer scientists and engineers learned about concurrency and parallel computing using the occam-transputer platform and the influence is evident in many current architectures and programming tools. I will reflect on the relevance of these ideas today — after 30 years — and talk about communicating processes and processors in the age of clouds, things and robots.

Brief Background

David May is Professor of Computer Science at Bristol University, UK. He graduated in CS from Cambridge University in 1972 and then spent several years working on architectures and languages for distributed processing. In 1979 he joined Inmos and then spent 16 years in the semiconductor industry. David was the architect of the Inmos Transputer — the first microprocessor designed to support multiprocessing — and the designer of the occam concurrent programming language. David joined Bristol University as Head of Computer Science in 1995 and has continued an active involvement with Bristol's growing microelectronics cluster and its investors. His most recent venture is XMOS, which he co-founded in 2005. David holds 40 granted patents, with many more pending, centred on microprocessor technology. He was elected a Fellow of the Royal Society of London in 1990 for his contributions to computer architecture and parallel computing, and a Fellow of the UK Royal Academy of Engineering in 2010.

Slides (PDF)  

occam's Rule Applied: Separation of Concerns as a Key to Trustworthy Systems Engineering for Complex Systems
Eric VERHULST, CEO/CTO, Altreonic NV, Belgium

Abstract. 

"Keep it simple but not too simple" means that a complex solution is really a problem that's not very well understood. In formal methods, this is reflected not only in the size of the state space, but also in the dependencies between these states. This is the main reason why Formal Modelling is not delivering as expected: the state space explosion would require an infinite amount of resources. If an automated tool cannot handle the state space, how can we expect engineers to do so? This is where CSP comes in: it divides the state space in small manageable chunks, making it easier to reason about the behaviour. There are however a few pre-conditions for this to work: one must take a step back, dividing the complex state space before conquering it, hence thinking about functionalities and how they are related before thinking about the punctual states in space and time.

Extrapolating the CSP abstract process algebra leads to a generic concept of describing systems as a set of Interacting Entities, whereby the Interactions are seen as first class citizens, at the same level as the Entities, decoupling the Entities' states by explicit information exchanges. We hereby enter the domain of modelling. One major issue with modelling approaches is that, while we need different and complementary models to develop a real system, these often have different semantics (if the semantics are properly defined at all). By being able to hide the internal semantics, one can focus on the interactions and use these as standardised interfaces.

It is clear that for this to work in the software domain, the natural programming model should be concurrent and execute on hardware that is compatible with it — a design feature of the transputer that has not been matched since. This opens the door to multi-domain modelling where, for example, parts of the system are continuous and other parts are discrete (as in executing a clocked logic). This gives us an interesting new domain of hybrid logic, a topic we want to explore further in a workshop at the conference.

This lecture will be guided by my own personal journey, starting with a spreadsheet to program a parallel machine, covering Peter Welch's courses in occam and the formal development of our distributed RTOS.

Brief Background

Eric led the development of the Virtuoso multi-board RTOS, used in ESA's Rosetta space mission to comet 67P/Churyumov-Gerasimenko. Virtuoso was the first distributed RTOS on the transputer and its successor developments — such as OpenComRTOS, a formal redevelopment from scratch, and VirtuosoNext, featuring fine-grain space partitioning — all apply the valuable principles and lessons learned from CSP, the transputer and occam.

Slides (PDF)  

Accepted Papers

OpenTransputer: Reinventing a Parallel Machine from the Past
Andres AMAYA-GARCIA, David KELLER and David MAY, Department of Computer Science, University of Bristol, UK

Abstract. The OpenTransputer is a new implementation of the transputer first launched by Inmos in 1985. It supports the same instruction set, but uses a different micro-architecture that takes advantage of today's manufacturing technology; this results in a reduced cycle count for many of the instructions. Our new transputer includes support for channels that connect to input-output ports, enabling direct connection to external devices such as sensors, actuators and standard communication interfaces. We have also generalised the channel communications with support for virtual channels and the design of a message routing component based on a Beneš switch, to enable the construction of networks with scalable throughput and low latency. We aim to make the transputer and switch components available as open-source designs.

Slides (PDF)   Michael D. Poole prize for widening access to CPA ideas.   Prize photo  

Guppy: Process-Oriented Programming on Embedded Devices
Frederick R.M. BARNES, School of Computing, University of Kent, UK

Abstract. Guppy is a new and experimental process-oriented programming language, taking much inspiration (and some code-base) from the existing occam-π language. This paper reports on a variety of aspects related to this, specifically language, compiler and run-time system development, enabling Guppy programs to run on desktop and embedded systems. A native code-generation approach is taken, using C as the intermediate language, and with stack-space requirements determined at compile-time.

Slides (PDF)  

Code Specialisation of Auto Generated GPU Kernels
Troels BLUM and Brian VINTER, Niels Bohr Institute, University of Copenhagen, Denmark

Abstract. This paper explores and evaluates the effect of automatic code specialization on auto generated GPU kernels. When combining the high productivity coding environment of computational science with the Just-In-Time compilation nature of many GPU runtime systems, there is a clear-cut opportunity for code optimization and specialization. We have developed a hybrid kernel generation method which is shown to be useful and competitive across very different use cases, and requires minimal knowledge of the overall structure of the program. Stencil codes, which are commonly found at the core of computer simulations, are ideal candidates for this type of code specialization. For exactly this type of application we are able to achieve speedups of up to 2.5 times with the implemented strategy.

Slides (PDF)  

Discrete Event-based Neural Simulation using the SpiNNaker System
Andrew BROWN (a), Jeff REEVE (a) and Steve FURBER (b)
(a) Department of Electronics & Computer Science, University of Southampton, UK
(b) School of Computer Science, The University of Manchester, UK

Abstract. SpiNNaker is a computing system composed of over a million ARM cores, embedded in a bespoke asynchronous communication fabric. The physical realization of the system consists of 57,600 nodes (a node is a silicon die), each node containing 18 ARM cores and a routing engine. The communication infrastructure allows the cores to communicate via short, fixed-length (40- or 72-bit), hardware-brokered packets. The packets find their way through the network in a sequence of hops, and the specifics of each route are held (distributed) in the routing engines, not unlike internet routing. On arrival at a target core, a hardware-triggered interrupt invokes code to handle the incoming packet. Within this computing model, the state of the system-under-simulation is distributed, held in memory local to the cores, and the topology is also distributed, held in the routing engines' internal tables. The message passing is non-deterministic and non-transitive, there is no memory coherence between the core-local memories, and there is no global synchronization. This paper shows how such a system can be used to simulate large systems of neurons using discrete event-based techniques. More notably, the solution time remains approximately constant with neural system size as long as sufficient hardware cores are available.

Slides (PDF)   Slides (PPT)  

Communicating Process Architectures in Light of Parallel Design Patterns and Skeletons
Kevin CHALMERS, School of Computing, Edinburgh Napier University, UK

Abstract. This work presents thoughts on linking Communicating Process Architectures (CPA) and parallel design patterns. The work considers where CPA ideas align in the parallel application design world. Expansion of CPA to support current trends in parallel application development is also presented.

Slides (PDF)  

Adding CSPm Functions and Data Types to CSP++
Daniel GARNER (a), Markus ROGGENBACH (b) and William B. GARDNER (c)
(a) BT, Adastral Park, Ipswich, UK
(b) Department of Computer Science, Swansea University, UK
(c) School of Computer Science, University of Guelph, Canada

Abstract. This work extends the subset of CSPm that can be translated to C++ using the tool CSP++. Specifications can now contain user defined functions, make limited use of set and sequence data, and utilise built-in CSPm functions that operate on such data. An extended development paradigm is suggested, based on applying transformational methods from UML to C++. All techniques are demonstrated over three case-study examples.

Slides (PDF)  

A Super-Simple Run-Time for CSP-Based Concurrent Systems
Michael E. GOLDSBY, Sandia National Laboratories, Livermore, California, USA

Abstract. MicroCSP is a run-time system written in C supporting process-oriented concurrency. It features CSP-style synchronous communication over point-to-point channels and alternation (including timeouts), and does preemptive priority process scheduling. The MicroCSP programmer writes the logic of a process as an ordinary function which, barring preemption, runs to completion every time the process is scheduled. To make such an approach feasible requires that the programmer cast each process's logic in normal form: a single choice of guarded events, with event-free computation following each event. A context switch requires a mere function call, with the exception of those that occur as the result of an interrupt. The system is memory-efficient and fast, and has a narrow hardware interface. The intended target of MicroCSP is bare microcontroller hardware, and its efficient use of the stack makes it particularly suitable for microcontrollers with restricted memory. The current version runs on a single processor over a Linux implementation of the hardware interface and serves as a prototype for the microcontroller implementations. A multicore implementation appears to be possible.
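
To make the "normal form" referred to above concrete, here is a minimal sketch in Go (not MicroCSP's C API; the process and channel names are illustrative only): each activation of the process is a single choice over guarded events, and the computation that follows each event performs no further events, so it runs to completion.

    package main

    import (
        "fmt"
        "time"
    )

    // A rough analogue of a process in "normal form": the body is a single
    // choice over guarded events (two channel inputs, a timeout and a shutdown
    // signal), and the computation following each event contains no further
    // events, so each activation runs to completion.
    func normalFormProcess(in1, in2 <-chan int, done <-chan struct{}) {
        total := 0 // process state carried across activations
        for {
            select {
            case x := <-in1: // guarded event
                total += x // event-free computation
            case y := <-in2: // guarded event
                total -= y // event-free computation
            case <-time.After(100 * time.Millisecond): // timeout guard
                fmt.Println("total so far:", total)
            case <-done:
                return
            }
        }
    }

    func main() {
        in1, in2 := make(chan int), make(chan int)
        done := make(chan struct{})
        go normalFormProcess(in1, in2, done)
        in1 <- 5
        in2 <- 2
        time.Sleep(150 * time.Millisecond) // let the timeout guard fire once
        close(done)
    }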

Slides (PDF)   Slides (PPT)  

Lambda Calculus in Core Aldwych
Matthew HUNTBACH, School of Electronic Engineering and Computer Science, Queen Mary University of London, UK

Abstract. Core Aldwych is a simple model for concurrent computation, involving the concept of agents that communicate through shared variables. Each variable has exactly one agent that can write to it, and its value can never be changed once written, but a value can contain further variables which are written to later. A key aspect is that the reader of a value may become the writer of variables in it. In this paper we show how this model can be used to encode the lambda calculus. Individual function applications can be explicitly encoded as lazy or not, as required. We then show how this encoding can be extended to cover functions which manipulate mutable variables, but with the underlying Core Aldwych implementation still using only immutable variables. The ordering of function applications then becomes an issue, with Core Aldwych able to model either the enforcement of an ordering or the decision to retain an indeterminate ordering that allows for parallel execution.

Slides (PDF)  

A Design for Interchangeable Simulation and Implementation
Klaus Birkelund JENSEN and Brian VINTER, Niels Bohr Institute, University of Copenhagen, Denmark

Abstract. Research on concurrent systems requires modeling, simulation and evaluation in many settings. We describe a design for Interchangeable Simulation and Implementation (ISI): an approach that enables the simultaneous modeling, simulation and evaluation of highly concurrent process-based systems. We describe how ISI can be used to design and evaluate techniques used in storage systems. Our key insight is that, while the simulation of such systems could be implemented as regular discrete event simulation, designing it as the highly concurrent process-based system that it is – with the full data and control flow in mind – enables seamless interchangeability between discrete and real-time simulation.

Slides (PDF)  

Model-Driven Design of Simulation Support for the TERRA Robot-Software Tool Suite
Zhou LU, Maarten M. BEZEMER and Jan F. BROENINK, Robotics and Mechatronics, CTIT Institute, University of Twente, The Netherlands

Abstract. Model-Driven Development (MDD), based on the concepts of model, meta-model and model transformation, is an approach to developing predictable and reliable software for Cyber-Physical Systems (CPS). The work presented here is on a methodology to design simulation software based on MDD techniques, supporting the TERRA tool suite to describe and simulate process communication flows. TERRA is implemented using MDD techniques and the Communicating Sequential Processes (CSP) algebra. Simulation support for TERRA helps the designer to understand the semantics of the designed model, and hence increases the probability of first-time-right software implementations. A new simulation meta-model is proposed, abstracting the simulation process of a TERRA model. With this new meta-model and our previously designed CSP meta-model, a simulation model can be transformed from its TERRA source. The Eclipse Modelling Framework (EMF) is used to implement the meta-model. The Epsilon Transformation Language (ETL) and the Epsilon Generation Language (EGL), both part of the Eclipse Epsilon framework, are used for model-to-model and model-to-text transformation respectively. The simulation support is shown using an example, in which the generated trace text is shown as well. Further work is to implement an animation facility to show the trace text in the TERRA graphical model using colours.

Slides (PDF)  

A Model-driven Methodology for Generating and Verifying CSP-based Java Code
Julio MARINO (a) and Raul N.N. ALBORODO (b)
(a) Babel Group, Universidad Politecnica de Madrid, Spain
(b) IMDEA Software Institute, Madrid, Spain

Abstract. Model-driven methodologies can help developers create more reliable software. In previous work, we have advocated a model-driven approach for the analysis and design of concurrent, safety-critical systems. However, to take full advantage of those techniques, they must be supported by code generation schemes for concrete programming languages. Ideally, this translation should be traceable and automated, and should support verification of the generated code. In this paper we consider the problem of generating concurrent Java code from high-level interaction models. Our proposal includes an extension of JML that allows shared resources to be specified as Java interfaces, and several translation patterns for (partial) generation of CSP-based Java code. The template code thus obtained is JML-annotated Java using the JCSP library, with proof obligations that help with both traceability and verification. Finally, we present an experiment in code verification using the KeY tool.

Slides (PDF)  

Towards Lightweight Formal Development of MPI Applications
Nelson Souto ROSA (a), Humaira KAMAL (b) and Alan WAGNER (b)
(a) Universidade Federal de Pernambuco, Centro de Informatica, Recife, Brazil
(b) Department of Computer Science, University of British Columbia, Canada

Abstract. A significant number of parallel applications are implemented using MPI (Message Passing Interface) and several existing approaches focus on their verification. However, these approaches typically work with complete applications, and fixing any undesired behaviour at this late stage of application development is difficult and time consuming. To address this problem, we present a lightweight formal approach that helps developers build safety into MPI applications from the early stages of program development. Our approach consists of a methodology that includes verification during the program development process. We provide tools that hide the more difficult formal aspects from developers, making it possible to verify properties such as freedom from deadlock and to generate partial skeletons of the code automatically. We evaluate our approach with respect to its ability and efficiency in detecting deadlocks.

Slides (PDF)  

CoCoL: Concurrent Communications Library
Kenneth SKOVHEDE and Brian VINTER, Niels Bohr Institute, University of Copenhagen, Denmark

Abstract. In this paper we examine a new CSP-inspired library for the Common Intermediate Language (CIL), dubbed CoCoL: Concurrent Communications Library. The use of CIL makes the library accessible from a number of languages, including C#, F#, Visual Basic and IronPython. The processes are based on tasks and continuation callbacks, rather than threads, which enables networks with millions of running processes on a single machine. The channels are based on request queues with two-phase commit tickets, which enables external choice without coordination among channels. We evaluate the performance of the library on different operating systems, and compare the performance with JCSP and C++CSP.
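
CoCoL itself targets CIL and builds on tasks and continuation callbacks; the following much-simplified Go sketch (not CoCoL code, and with the two-phase commit collapsed into a single take on a shared ticket) illustrates how a per-choice ticket lets several channels resolve an external choice without coordinating with each other.

    package main

    import (
        "fmt"
        "sync"
    )

    // A ticket is shared by all offers made in one external choice; whichever
    // channel takes it first wins, so the channels never need to coordinate.
    type ticket struct {
        mu    sync.Mutex
        taken bool
    }

    func (t *ticket) tryTake() bool {
        t.mu.Lock()
        defer t.mu.Unlock()
        if t.taken {
            return false
        }
        t.taken = true
        return true
    }

    // An offer is a pending read request: a ticket plus a continuation to run
    // when a writer commits to it (continuation-callback style).
    type offer struct {
        tk   *ticket
        cont func(v int)
    }

    // A channel keeps a queue of pending read offers.
    type channel struct {
        mu      sync.Mutex
        pending []*offer
    }

    func (c *channel) readOffer(o *offer) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.pending = append(c.pending, o)
    }

    // write commits to the first pending offer whose ticket it can take;
    // offers whose ticket was already taken elsewhere are simply discarded.
    func (c *channel) write(v int) bool {
        c.mu.Lock()
        defer c.mu.Unlock()
        for len(c.pending) > 0 {
            o := c.pending[0]
            c.pending = c.pending[1:]
            if o.tk.tryTake() {
                o.cont(v)
                return true
            }
        }
        return false // no live reader was waiting
    }

    func main() {
        a, b := &channel{}, &channel{}

        // One external choice over channels a and b: both offers share a single
        // ticket, so at most one of them can ever be committed.
        tk := &ticket{}
        a.readOffer(&offer{tk, func(v int) { fmt.Println("chose a:", v) }})
        b.readOffer(&offer{tk, func(v int) { fmt.Println("chose b:", v) }})

        fmt.Println(a.write(1)) // true: commits the choice on channel a
        fmt.Println(b.write(2)) // false: the shared ticket is already taken
    }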

Slides (PDF)   Best Paper Prize.   Prize photo  

Process-Based Aho-Corasick Failure Function Construction
Tinus STRAUSS (a), Derrick G. KOURIE (b), Bruce W. WATSON (b) and Loek CLEOPHAS (c)
(a) FASTAR Research Group, University of Pretoria, South Africa
(b) FASTAR Research Group, Stellenbosch University, South Africa
(c) Department of Computer Science, Umeå University, Sweden

Abstract. This case study is embedded in a wider project aimed at investigating process-based software development to better utilise the multiple cores on contemporary hardware platforms. Three alternative process-based architectures for the classical Aho-Corasick failure function construction algorithm are proposed, described in CSP and implemented in Go. Empirical results show that these process-based implementations attain significant speedups over the conventional sequential implementation of the algorithm for significantly-sized data sets. Evidence is also presented to demonstrate that the process-based performances are comparable to the performance of a more conventional concurrent implementation in which the input data is simply partitioned over several concurrent processes.
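
For orientation, the conventional sequential construction that the process-based versions are measured against can be sketched in Go as follows (a generic textbook rendering, not the authors' code): build the keyword trie, then compute the failure links by a breadth-first traversal.

    package main

    import "fmt"

    // node is one state in the Aho-Corasick goto graph (a trie over the keywords).
    type node struct {
        label string         // keyword prefix this state represents (for the demo output)
        next  map[byte]*node // goto transitions
        fail  *node          // failure link
    }

    // buildTrie inserts every keyword into the goto graph rooted at the returned node.
    func buildTrie(keywords []string) *node {
        root := &node{next: map[byte]*node{}}
        for _, w := range keywords {
            cur := root
            for i := 0; i < len(w); i++ {
                c := w[i]
                if cur.next[c] == nil {
                    cur.next[c] = &node{label: cur.label + string(c), next: map[byte]*node{}}
                }
                cur = cur.next[c]
            }
        }
        return root
    }

    // buildFailure computes the failure function breadth-first: the failure target
    // of the state reached by symbol c from state s is found by following s's
    // failure chain until a state with a c-transition appears (or the root is reached).
    func buildFailure(root *node) {
        var queue []*node
        for _, child := range root.next {
            child.fail = root // depth-1 states fail back to the root
            queue = append(queue, child)
        }
        for len(queue) > 0 {
            s := queue[0]
            queue = queue[1:]
            for c, child := range s.next {
                f := s.fail
                for f != root && f.next[c] == nil {
                    f = f.fail
                }
                if t := f.next[c]; t != nil {
                    child.fail = t
                } else {
                    child.fail = root
                }
                queue = append(queue, child)
            }
        }
    }

    func main() {
        root := buildTrie([]string{"he", "she", "his", "hers"})
        buildFailure(root)
        // Print the failure function in breadth-first order.
        queue := []*node{root}
        for len(queue) > 0 {
            s := queue[0]
            queue = queue[1:]
            for _, child := range s.next {
                fmt.Printf("fail(%q) = %q\n", child.label, child.fail.label)
                queue = append(queue, child)
            }
        }
    }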

Slides (PDF)   Best Student Paper Prize.   Prize photo  

Bus Centric Synchronous Message Exchange for Hardware Designs
Brian VINTER and Kenneth SKOVHEDE, Niels Bohr Institute, University of Copenhagen, Denmark

Abstract. A new design and implementation of the Synchronous Message Exchange (SME) model is presented. The new version uses explicit busses, which may include multiple fields, and where components may use a bus for both reading and writing. The original version allowed only reading from or writing to a bus, which triggered a need for some busses to exist in two versions for different directions. In addition to the new and improved bus-model, the new version of SME produces traces that may be used for validating a later VHDL implementation of the designed component. Further, it can produce a graphical representation of a design to help with debugging.

Slides (PDF)  

Workshops

Message Passing Concurrency Shootout
Kevin CHALMERS, School of Computing, Edinburgh Napier University, UK

Background

In the last few years there have been a number of new programming languages that incorporate message passing concurrency. Examples such as Google's Go and Mozilla's Rust have shown an increased industry and academic interest in the ideas of message passing concurrency as a first-order concern. These languages have joined existing ones, such as occam-pi, Erlang, and Ada, with strong communication-based concurrency. It is therefore argued that the concurrent systems programmer has a number of options for exploiting message-based concurrency.

The Communicating Process Architectures (CPA) community and others have for a number of years developed libraries to support message passing concurrency within existing programming languages and runtimes. This support is normally built upon the thread support libraries of the host language. JCSP and PyCSP are commonly discussed, but support for CPA ideas has also been implemented in Haskell, C++, Lua, and other languages.

The languages and libraries supporting message passing concurrency are normally inspired by one or more process algebras. Hoare's Communicating Sequential Processes (CSP) and Milner's pi-calculus are the two main inspirations for message passing work. It is questionable, however, how well these process algebras are supported in the languages and libraries they inspire.

Action

The aim of this workshop is the development of a message passing language and library shootout, in a similar manner to that seen at The Computer Language Benchmarks Game, although looking at more than just performance. The metrics produced will be augmented by a discussion on support for the principles of CSP, CCS, and the pi-calculus. The work undertaken, and further development, will lead to the production of a journal publication focusing on CPA languages and libraries, providing a state-of-the-art review. The paper produced will summarise the results and provide further dissemination of CPA ideas to a wider audience. The test applications built will be made available in a repository and the results published via arXiv for accessibility outside the paper. The workshop is looking for people interested in producing the test applications in the various languages and libraries.

There are two questions that the work undertaken will attempt to answer:

  1. How well supported are the primitives and ideas of CSP, CCS, and the pi-calculus in the range of languages and libraries supporting message passing concurrency?
  2. What are the metrics of the languages and libraries supporting message passing concurrency?

For the first question a set of criteria is required to define what we mean by process algebra support. This requires a definition of what a process algebra provides. The key ideas are message passing and event selection, with the ability to spawn off processes (execute in parallel). As a starting point, the current list of properties of interest is as follows (a short Go sketch after the list illustrates the higher-order channel and higher-order process criteria):

  • Message passing support (this is the minimum criterion)
  • Type of message passing support - synchronous and/or asynchronous
  • First-order channels (not all languages provide a channel construct)
  • Higher-order channels (channels that can send channels)
  • First-order processes (also a minimum criterion)
  • Higher-order processes (channels can send processes)
  • Parallel execution statement
  • Process ownership (e.g. a parallel cannot complete until all its child processes have)
  • Selection on incoming messages
  • Other selection types? (e.g. skip, timeout)
  • Selection on outgoing messages
  • Multiway synchronisation
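
To make the higher-order criteria concrete, here is a short Go sketch (illustrative only, not part of the workshop materials) of a channel carrying channels and a channel carrying processes:

    package main

    import "fmt"

    func main() {
        // Higher-order channel: a channel whose messages are themselves channels.
        requests := make(chan chan int)

        // Server process: receives a reply channel and answers on it.
        go func() {
            for reply := range requests {
                reply <- 42
            }
        }()

        // Client: sends a fresh reply channel, then reads the answer from it.
        reply := make(chan int)
        requests <- reply
        fmt.Println("reply:", <-reply)

        // Higher-order process: a channel whose messages are processes
        // (closures) that the receiver then executes.
        work := make(chan func(), 1)
        work <- func() { fmt.Println("mobile process executed") }
        (<-work)()
    }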

For the second question, a look at the metrics that help define the performance and usefulness of the languages is required. CPA typically uses the CommsTime and StressedAlt benchmarks. These two benchmarks allow analysis of the two key properties of a communication-based language or library - communication time and selection time. The memory usage of a process is also required, or an equivalent measure. The aim would be to understand the number of processes the language or library can support. Knowing whether the language or library exploits parallel hardware (e.g. multicore) is also important. Typical software metrics such as Lines of Code (LoC), time, speedup, efficiency, memory usage, and CPU usage for the following applications are a possible starting point (a minimal Go sketch of CommsTime follows the list):

  • CommsTime - channel communication time
  • StressedAlt - selection time and maximum process support
  • Dining Philosophers - LoC
  • Vector Addition - simple data parallel problem
  • Matrix Multiplication - complex data parallel problem
  • Mandelbrot - data parallel with large data output
  • Monte Carlo Pi - data parallel with low data output and reduction
  • N-body simulation - shared data
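
For readers who have not met it, CommsTime is a tiny process network (prefix, delta and successor processes in a ring, with a consumer tapping the ring) that does essentially nothing but communicate, so the elapsed time divided by the number of channel communications approximates the per-communication overhead. A minimal Go rendering (an illustrative sketch of mine, not a workshop artefact) looks like this:

    package main

    import (
        "fmt"
        "time"
    )

    // prefix outputs an initial value, then copies its input to its output.
    func prefix(n int, in <-chan int, out chan<- int) {
        out <- n
        for v := range in {
            out <- v
        }
    }

    // delta copies each input value to both of its outputs.
    func delta(in <-chan int, out1, out2 chan<- int) {
        for v := range in {
            out1 <- v
            out2 <- v
        }
    }

    // succ adds one to each value passing through.
    func succ(in <-chan int, out chan<- int) {
        for v := range in {
            out <- v + 1
        }
    }

    func main() {
        const iterations = 1000000
        a, b, c, d := make(chan int), make(chan int), make(chan int), make(chan int)

        // The classic CommsTime ring: prefix -> delta -> succ -> prefix,
        // with delta also feeding the consumer on channel d.
        go prefix(0, c, a)
        go delta(a, b, d)
        go succ(b, c)

        start := time.Now()
        for i := 0; i < iterations; i++ {
            <-d // the consumer
        }
        elapsed := time.Since(start)
        // Four channel communications happen per consumed value (a, b, d and c).
        fmt.Printf("time per communication: %v\n", elapsed/(4*iterations))
    }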

The current set of languages I have come across that reportedly support message passing concurrency are:

Ada, Ateji PX, Clojure, D, Elixir, Erlang, Go, Guppy, Hume, Kilim, Limbo, Nim, occam-pi, Oz, Perl, ProcessJ, Rust, Unicon

As for the set of libraries:

JCSP, CTJ, C++CSP, PyCSP, C Haskell P, C Scala P, CCSP, LuaCSP

A discussion on whether to include all actor-based languages and libraries is also required, since including these would increase the number of languages and libraries considerably.

Slides (PDF)   Workshop Prize.   Prize photo  

Dealing with (Real)-Time in Real-World Hybrid Systems
Pieter Van SCHAIK (a) and Eric VERHULST (b)
(a) Altreonic NV, Belgium
(b) CEO/CTO, Altreonic NV, Belgium

Background

One of the issues that has been bothering embedded systems engineers is how to deal with time. Some approaches have attempted to make time part of the modelling language, other approaches turn it into a partial order of events, while most programmers ignore it completely, equating QoS with real-time (most of the time, but not guaranteed). While in the discrete domain time is considered to progress as a number of clock cycles, in the continuous domain time has an infinitesimal granularity. If we now want to prove the correctness of a hybrid system, people traditionally use time-dependent ordinary or partial differential equations in the continuous domain and model checkers or provers for the discrete domain.

Action

How can we combine the two? Following the Keynote theme, we remember to separate the concerns. Hence we need time-independent models that, when executed on a given platform or in a given system context, result in specific time properties (like meeting a deadline, or stability). In the discrete domain, this can be achieved by using priority as a scheduling parameter. In the continuous domain, engineers routinely transform the model into the frequency domain to prove desired properties using Nyquist or Laplace techniques. The workshop will look for answers on how such hybrid models can be formally verified (as opposed to relying on simulation and testing only).

Slides (PDF)  

The Role of Concurrency in the Modern HPC Center
Brian VINTER, Niels Bohr Institute, University of Copenhagen, Denmark

Background

The modern HPC center is a complex entity. A center easily hosts tens of thousands of processors in the smallest cases, and several million processors in the largest facilities. Before 2020 this number is expected to pass 100 million processors in a single computer alone. Thus massively parallel processing is indeed everyday business today; the major challenges of running such facilities are well established, and solutions exist for most of these challenges.

In the following we will make a strong distinction between parallelism, i.e. Single Program Multiple Data or Single Instruction Multiple Data, and concurrency, i.e. multiple processes, both identical and different, that may run in parallel or interleaved, depending on timing and available hardware.

While concurrency mechanisms were commonly used to express applications for parallel computers two or three decades ago, this use of concurrency has completely died out. There are several explanations for this, but the most important is that the cost of an HPC installation today is so high that users must document that they use the facility efficiently. All applications must stay above a specified CPU utilization threshold, typically 70%, and even with SPMD-type programming, asynchrony amongst processors is a common challenge when trying to stay above that threshold.

This does not mean that concurrency is without its use in the modern HPC center. While the actual compute nodes do not use concurrency, the underlying fabric that allows the compute nodes to operate has a large potential for concurrency. In many cases these elements could benefit from a formal approach to concurrency control.

Action

In this workshop we will present the challenges in the HPC center that do indeed involve concurrency: storage systems, schedulers, backup systems, archiving, and network bulk transfers, to name a few. The interesting challenge is that while all these elements require concurrency control to operate correctly and efficiently, they are also highly interdependent, i.e. the concurrency aspects must cover the full set of infrastructure components for optimal efficiency. We will seek to describe scenarios for real HPC centers and sketch solutions that are built on a structured concurrency approach.

Slides (PDF)  

Fringe Presentations

C++11 CSP
Kevin CHALMERS, School of Computing, Edinburgh Napier University, UK

Abstract. This fringe presentation focuses on development undertaken on a library to support CSP ideas within the new C++11 standard. The library has been developed to support the development of an MPI layer for CPA Networking, with C and C++ being simpler languages in which to build up such a framework. This fringe presentation will look at how we can write applications using C++11 CSP.

Slides (PDF)  

OLL Compiler Project (status at 2015/08/16)
Barry M. COOK, Independent, UK

Abstract. OLL is the latest iteration of a series of compiler projects I've written over the past 20+ years. During that time I've targeted C, VHDL (hardware), ARM and Java's JVM. Each has concentrated on a different aspect of the challenge.

I've also designed, by hand, a lot of high-performance test equipment hardware/firmware and control/interface software; also a little bit of a flight system for a satellite. All have required care to (try to) ensure reliable operation, eliminating run-time failures. CSP has been a critical framework that has worked very well.

I am more convinced than ever that I, and maybe others, could use a design language that equally well describes algorithms regardless of whether they are to be run as hardware or software — not Hardware/Software Co-Design but Hardware/Software Same-Design.

Support for eliminating run-time errors is essential but it is theoretically impossible to adequately analyse arbitrary programs - and relatively easy to create pathological cases that would break the compiler. The interesting question, however, is whether it is possible to handle a large-enough set of real (not contrived) programs to be useful. This project aims to explore the bounds of practicality.

I will give a brief overview of the aims of this ongoing project and report its current status.

Slides (PDF)   Fringe Prize.   Prize photo  

T42 in FPGA (year one design status report)
Uwe MIELKE, Electronics Engineer, Dresden, Germany

Abstract. This fringe session will present the current status of our ongoing IMS-T425-compatible Transputer design in FPGA. Data path and control path are in a stable working state. Fetch unit and a basic system control unit are almost functional. Small instruction sequences can already be executed from 8 KByte of memory. Some design details around the scheduler micro-code will be discussed.

Slides (PDF)  

Protected Mode RTOS: what does it mean?
Bernhard SPUTH, Altreonic NV, Belgium

Abstract. Now that we can formally verify software models, why do we still need protection? Protection from programming errors and protection from hardware errors. The key reason is that formal models are abstractions and programmers are humans with an illogical brain using illogical and error-prone dynamic programming languages. In addition, software runs on a shared resource, called a processor, and that processor exists in the real physical world, where external influences like cosmic rays can change its state. Hence, protection has to be seen in the context of increasing the trustworthiness (as defined by the ARRL criterion) of the system. The key is to do it in such a way that we don't jeopardise the properties we expect from a system in the absence of the errors mentioned above. This was the rationale for developing VirtuosoNext, offering fine-grain space partitioning with some help from the hardware.

Slides (PDF)  

Not that Blocking!
Øyvind TEIG, Autronica Fire and Security AS, Trondheim, Norway

Abstract. Communicating to fellow programmers that the concept of "blocking" in process-oriented design is perfectly acceptable, while using a word with basically negative connotations, is difficult. The bare need to do it often means that misunderstanding this concept is harmful. The first contender on a "blocking channel" has also correctly been said (by several people) to "wait" on the channel. A better correspondence between the negative meaning and the semantics is when "blocking" describes serious side effects on the temporal properties of other concurrent components. This is the correctly feared blocking. This fringe presentation will explore this further and invite discussion.

Slides (PDF)  

Not that Concurrent!
Øyvind TEIG, Autronica Fire and Security AS, Trondheim, Norway

Abstract. Concurrency is, in the literature, often used as a noun with a range of strengths: there is more or less concurrency; it is more or less limited; it may even be described as complete. In trying to discuss multi-threaded programming with programmers who state that they program single-threaded, it is important to communicate that they may program less concurrently, but probably not as non-concurrently as they believe. What are the factors that increase concurrency, and which factors are orthogonal to the degree of concurrency? Does a Go language goroutine increase it, and is a C++ object orthogonal? Will the CSP paradigm generally enable increased concurrency? Is the CSP paradigm of communication-with-synchronisation itself orthogonal to the degree of concurrency? It is also important to understand the term parallel slackness: does it introduce or enable more concurrency? And what about atomicity? This fringe presentation aims to raise more questions than it is able to answer. However, some lines of reasoning are suggested. Finally: is it at all meaningful to raise awareness of concurrent as an adjective?

Slides (PDF)  

Managing Hard Real Times (28 Years Later)
Peter H. WELCH, School of Computing, University of Kent, UK

Abstract. In some quarters, received wisdom about hard real-time systems (where lateness in response means system failure) is that a currently running process must be pre-empted by a higher priority process that becomes runnable (e.g. by the assertion of a processor event pin or timeout) — otherwise worst-case response times cannot be guaranteed. Further, if a higher priority process needs to synchronise with one of lower priority, the latter must automatically inherit the priority of the former. If this does not happen, the opposite happens and the former effectively inherits the lower priority of the latter as it waits for it to be scheduled (priority inversion) — again, worst-case response times fail.

The CCSP multicore scheduler for occam-pi (part of the KRoC package) is, possibly, the fastest and most scalable (with respect to processor cores) such beast on the planet. However, its scheduling is cooperative (not pre-emptive) and it does not implement priority inheritance (and cannot do so, given the nature of CSP synchronisation, where processes do not know the identities of the other processes involved). Therefore, despite its performance, received wisdom would seem to rule it out for hard real-time applications.

This talk reviews a paper, [1], from the OUG-7 proceedings (1987) that discusses these ideas with respect to Transputers. One minor improvement to the reported techniques will be described. Otherwise, no change is needed for modern multicore architectures. Conclusions are that (a) pre-emptive scheduling is not required, (b) priority inheritance is a design error (dealt with by correct design, not the run-time system) and (c) the occam-pi/CCSP scheduler can be made to work even more efficiently for hard real-time systems than it presently does for soft real-time (e.g. complex system modelling).

[1]   Peter H. Welch. Managing Hard Real-Time Demands on Transputers.
        In: Traian Muntean, ed., Proceedings of OUG-7 Conference and International Workshop
        on Parallel Programming of Transputer Based Machines.
        LGI-IMAG, Grenoble, France: IOS Press, Netherlands, 1987.

Slides (PDF)   Slides (PPT)  

Group Photos

Group photo (j)

Group photo (d2)


Pages © WoTUG, or the indicated author. All Rights Reserved.
Comments on these web pages should be addressed to: www at wotug.org