Motivation: A calculus with few, orthogonal mechanisms, able to represent all the relevant concepts of concurrent computations.

- To support the understanding, reasoning about, and development of formal tools in concurrency.
- To play, for concurrency, a role analogous to that of the lambda calculus for sequential computation.

In concurrency the interaction possibilities are much richer. As an example, consider the following two program fragments:

A: x := 1
B: x := 0; x := x+1

In a sequential computation model, A and B are equivalent (i.e. they induce the same state transformation) in any context. In a concurrent computation model, on the contrary, there are contexts which distinguish them. Consider for instance the composition with the following fragment:

C: x := 2

We have that A | C and B | C (where | stands for parallel composition) are not equivalent. In fact, the first can produce only the states where x is 1 or 2, while the latter can also produce the state where x is 3.
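The claim above can be checked mechanically by enumerating all interleavings of the atomic assignments. The following is a small illustrative sketch (it assumes each assignment is atomic and that x starts at 0; the helper names are ours, not part of any standard treatment):

```python
def interleavings(p, q):
    """Yield every interleaving of instruction sequences p and q,
    preserving the internal order of each sequence."""
    if not p:
        yield q
    elif not q:
        yield p
    else:
        for rest in interleavings(p[1:], q):
            yield [p[0]] + rest
        for rest in interleavings(p, q[1:]):
            yield [q[0]] + rest

# Each atomic instruction maps the old value of x to the new one.
A = [lambda x: 1]                     # x := 1
B = [lambda x: 0, lambda x: x + 1]    # x := 0; x := x+1
C = [lambda x: 2]                     # x := 2

def outcomes(p, q):
    """Final values of x reachable by running p | q from x = 0."""
    results = set()
    for run in interleavings(p, q):
        x = 0
        for instr in run:
            x = instr(x)
        results.add(x)
    return results

print(outcomes(A, C))  # {1, 2}
print(outcomes(B, C))  # {1, 2, 3}
```

The interleaving x := 0; x := 2; x := x+1 is the one that produces 3 in B | C and has no counterpart in A | C.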

Nondeterminism arises also in models of sequential computation:

- Nondeterministic Turing machines
- (The operational semantics of) logic languages like Prolog and Lambda Prolog

In sequential computation, nondeterminism has the following characteristics:

- It could be eliminated without loss of computational power (by using backtracking).
- Failures don't matter: all we are interested in is the existence of successful computations. A failure is reported only if all possible alternatives fail.

In concurrency, on the contrary:

- It cannot be avoided. At least, not without losing essential parts of the expressive power. All interesting models of concurrency and interaction have to cope with nondeterminism.
- Failures do matter. Choosing the wrong branch might lead to an "undesirable situation". Backtracking is usually not applicable (or very costly) in this context, because the control is distributed: not only one process, but all processes would have to be restarted.

To illustrate what these "undesirable situations" are, consider the example of the dining philosophers:

n philosophers are sitting at a circular table. Between each pair of adjacent philosophers there is a fork (hence there are n forks on the table). Each philosopher can either think or eat. In order to eat, he needs two forks, and he can take only one fork at a time. All philosophers are the same, in the sense that they follow "the same attitude about thinking and eating". Also all forks are the same. Hence the situation is completely symmetric, i.e. there are no privileges, no preestablished ordering, etc.

This example is paradigmatic of a situation in which processes compete for shared and distributed resources. The "bad situations" are deadlock (each philosopher holds a fork, nobody eats) and starvation (some philosopher never eats because his neighbours are quicker in getting the forks).

The problem of the dining philosophers is to guarantee maximal independence (hence avoid having a scheduler or a monitor who decides whose turn it is to eat) while avoiding those bad situations. Note that, even if we "convince" each philosopher to give back his fork in case of deadlock, it is not so easy to avoid starvation, because we could enter a loop in which all philosophers take one fork each, detect deadlock, put back the fork, take one fork again, etc.

This example was proposed by Dijkstra in the 70s as a benchmark to test the expressiveness of concurrent languages. It was observed by Rabin in the 80s that a completely distributed, symmetric solution must rely on probabilistic methods. In this solution the possibility of starvation is not ruled out, but it has probability 0.
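As an illustration only (a simulation sketch of the coin-flipping idea, not Rabin's actual algorithm), consider a protocol where each philosopher randomly chooses which fork to try first and, if the second fork is taken, backs off by releasing the first. The random scheduler, the step budget, and the back-off policy below are all assumptions of this sketch:

```python
import random

def dine(n, steps, seed=0):
    """Simulate n symmetric philosophers; each randomly picks which
    fork to try first and releases it if the second one is taken.
    Returns how many times each philosopher ate."""
    rng = random.Random(seed)
    forks = [None] * n               # forks[i] = holder of fork i, or None
    holding = [[] for _ in range(n)] # forks currently held by each philosopher
    meals = [0] * n
    for _ in range(steps):
        i = rng.randrange(n)         # a random scheduler picks who moves
        left, right = i, (i + 1) % n
        if not holding[i]:
            first = rng.choice([left, right])   # the coin flip
            if forks[first] is None:
                forks[first] = i
                holding[i] = [first]
        else:
            second = right if holding[i] == [left] else left
            if forks[second] is None:           # second fork free: eat
                meals[i] += 1
                forks[left] = forks[right] = None
                holding[i] = []
            else:                               # back off: release the fork
                forks[holding[i][0]] = None
                holding[i] = []
    return meals

print(dine(5, 20000, seed=1))
```

In a long enough run, every philosopher eats many times; the symmetric deterministic protocol (everybody grabs the left fork first) would instead admit the deadlocking run described above.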

Concurrent systems offer several kinds of communication, depending on the medium. Examples are:

- Communication via ether.
- Communication via channel.
- Communication via shared memory.

These forms of communication can be further classified according to:

- Broadcasting / point-to-point
- Ordered / unordered (i.e. queues / bags)
- Bounded / unbounded

Thus the fundamental kind of interaction is not the one between two processes P and Q communicating via a buffer B, but rather between P and B, and between Q and B. In Milner's view, the fundamental model of interaction is *synchronous* and *symmetric*, i.e. the partners act at the same time, performing complementary actions. This kind of interaction is called *handshaking*: the partners agree simultaneously on performing the two (complementary) actions.

In the following, the complement of an action *a* will be denoted by *^a*. Usually we will regard *a* as the action of "receiving along channel (or interface, or port) *a*", and *^a* as the action of "sending along channel (interface, port) *a*". But let us not forget that this terminology is purely a convention: the two actions really have the same status from every possible point of view. We will also use the terms "input" and "output" to denote the same distinction between the two counterparts of an action.

If we name *in* the interface of the buffer B where it receives data, and *out* the interface where its data are made available, then the buffer can be specified as follows (assuming for simplicity that it has only one cell, i.e. that it can store only one datum at a time):

B = in(x).B'(x)
B'(x) = ^out(x).B

The "." here is called "action prefixing" and denotes sequentialization; i.e. B'(x) becomes active only after the action in(x) has been performed. The sending and the receiving processes are then specified as follows (assuming that P sends the datum d):

P = ^in(d).P'
Q = out(x).Q'(x)

As explained above, the complementary actions ^in(d) and in(x) must take place at the same time (and cause the instantiation of x with d). The same holds for ^out(x) (by then instantiated to ^out(d)) and out(x). In other words, we want the system P | B | Q to evolve as follows:

P | B | Q --> P' | B'(d) | Q --> P' | B | Q'(d)
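The two steps of the buffer in this evolution can be replayed as a tiny state machine. The following Python sketch is illustrative only: the state representation and the example value 42 are our assumptions, not CCS syntax.

```python
# States of the one-cell buffer: "B" is empty, ("B'", v) holds the value v.
def buffer_step(state, action, value=None):
    """One transition of the buffer: 'in' fills it, 'out' empties it.
    Returns the new state, or raises if the action is not enabled."""
    if state == "B" and action == "in":
        return ("B'", value)                       # B = in(x).B'(x)
    if isinstance(state, tuple) and state[0] == "B'" and action == "out":
        return "B"                                 # B'(x) = ^out(x).B
    raise ValueError("action %r not enabled in state %r" % (action, state))

# Replay:  P | B | Q  -->  P' | B'(d) | Q  -->  P' | B | Q'(d)
d = 42
s = buffer_step("B", "in", d)   # handshake of ^in(d) with in(x)
assert s == ("B'", d)
s = buffer_step(s, "out")       # handshake of ^out(d) with out(x)
assert s == "B"
```

Note that an "out" action is not enabled in the empty state "B": the buffer can only alternate between receiving and sending, exactly as the recursive specification prescribes.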

In concurrency, in order to achieve a structural definition, we must add some information to the transition relation (specifying the behaviour of processes). In particular, to model interaction, we have to specify the action that is being performed during a transition. Transitions will then be formalized as a relation between two processes (or configurations) and one action.

A process with an input prefix can make a transition by performing the corresponding input action:

a.P -a-> P

Analogously, a process with an output prefix can make a transition by performing the corresponding output action:

^a.P -^a-> P

Finally, the interaction between two parallel processes is captured by the following rule:

P -^a-> P'    Q -a-> Q'
-----------------------
P | Q -tau-> P' | Q'

Here the label tau in the conclusion represents a "silent action", and is the only action which does not have a complement. This expresses the fact that if P and Q are interacting with each other, they cannot (at the same time) interact with anybody else (interaction is two-way).

Two parallel processes should not be obliged to interact at every step. For this reason, we also need another rule for parallel composition, which models the situation in which one process makes a step while the other does nothing (is idle). The rule is the following:

P -a-> P'
-----------------
P | Q -a-> P' | Q

Of course there is also the symmetric rule (where the roles of P and Q are exchanged).

In some formalisms for concurrency there are also other rules, to represent the fact that two processes can be active at the same time independently, i.e. without interacting. These are called "true concurrency models". In CCS, however, the above two rules (and the symmetric version of the second) are all we have for the parallel construct. Such a model of concurrent computation is called "interleaving": the actions of the processes are interleaved so that at each moment only one action (at most) is observed.
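The rules above (prefix, handshake, and the idle parallel step together with its symmetric version) can be sketched as a transition function. This is an illustrative Python encoding, under the assumption that processes are nested tuples and actions are strings with `^` marking the complement (not CCS syntax):

```python
# Processes: ("nil",), ("act", a, P) for the prefix a.P,
# and ("par", P, Q) for the parallel composition P | Q.

def comp(a):
    """Complement of an action: a <-> ^a."""
    return a[1:] if a.startswith("^") else "^" + a

def transitions(p):
    """All pairs (action, successor) derivable with the rules above."""
    if p[0] == "act":                        # a.P -a-> P
        return [(p[1], p[2])]
    if p[0] == "par":
        _, l, r = p
        ts = []
        for a, l2 in transitions(l):         # left moves, right is idle
            ts.append((a, ("par", l2, r)))
        for a, r2 in transitions(r):         # right moves, left is idle
            ts.append((a, ("par", l, r2)))
        for a, l2 in transitions(l):         # handshake on complements
            for b, r2 in transitions(r):
                if a != "tau" and b == comp(a):
                    ts.append(("tau", ("par", l2, r2)))
        return ts
    return []                                # nil has no transitions

nil = ("nil",)
p = ("par", ("act", "^a", nil), ("act", "a", nil))
print(transitions(p))
```

For ^a.0 | a.0 the function yields exactly three transitions: the two idle-partner steps labelled ^a and a, and the handshake labelled tau, as the interleaving rules prescribe.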

We leave it to the interested reader to apply these rules to prove the two transitions of the system above (P | B | Q). Actually, the rule for interaction needs to be modified so as to cope with parameter-passing. A natural definition is:

P -^a(d)-> P'    Q -a(x)-> Q'(x)
--------------------------------
P | Q -tau-> P' | Q'(d)

CCS, however, does not deal explicitly with parameter-passing. We will see how parameter-passing can be modeled in CCS.
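One standard way to model parameter-passing in pure CCS is to restrict to a finite data domain and replace each parameterized action by a family of indexed actions: an input in(x).P(x) becomes a sum (choice) over the domain, and an output ^in(d).Q picks out one member of that family. A minimal Python sketch of this encoding, with an assumed domain D = {0, 1, 2} and helper names of our own choosing:

```python
# Encoding value-passing into pure CCS over a finite data domain D:
#   in(x).P(x)   ~>   in_0.P(0) + in_1.P(1) + in_2.P(2)
#   ^in(d).Q     ~>   ^in_d.Q

D = [0, 1, 2]  # an assumed finite data domain

def encode_input(channel, cont):
    """Return the branches of the sum encoding channel(x).cont(x):
    one pair (indexed action, continuation) per datum in D."""
    return [("%s_%d" % (channel, d), cont(d)) for d in D]

def encode_output(channel, d, cont):
    """Return the single prefix encoding ^channel(d).cont."""
    return ("^%s_%d" % (channel, d), cont)

# The buffer input in(x).B'(x) becomes a three-way choice:
branches = encode_input("in", lambda d: ("B'", d))
print(branches)
# The sender ^in(2).P' synchronizes with exactly one branch of that sum:
print(encode_output("in", 2, "P'"))
```

A handshake on in_2 then instantiates the continuation with 2, exactly as the parameter-passing rule above instantiates x with d; the price of the encoding is that the sum grows with the size of the data domain.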