Our purpose here is to investigate the computational models of (the functional part of) programming languages, and therefore we are mainly interested in the operational semantics of PCF. We will consider the two main kinds of semantics, corresponding to the eager (call-by-value) and to the lazy (call-by-name) evaluation strategies. The former is the basis for languages like Lisp, Scheme and ML. The latter is the basis for languages like Miranda, Haskell, and Orwell.

The original PCF is a very tiny language, in the sense that it only contains a few operators more than the lambda-calculus: successor, predecessor, the test is-zero, conditional and fixpoint. We will consider a slightly richer language, as it is more suitable to our purpose. Also, the original PCF is typed à la Church, in the sense that every variable is explicitly decorated with a type. In order to be more general (so as to also capture implicitly typed languages like ML), we will not make this assumption.

Term ::= Var | \Var.Term | Term Term         % lambda terms
       | Num | true | false                  % numerical and boolean constants
       | Term Op Term                        % numerical ops and comparison
       | if Term then Term else Term         % conditional
       | (Term,Term) | fst Term | snd Term   % pairs and projections
       | let Var = Term in Term              % term with a local declaration
       | fix                                 % the fixpoint operator (Y)

Var and Num are syntactical categories which generate the set of term variables and the set of (representations of) integer numbers. Op is a syntactical category which generates the numerical operations +, *, - and /, and the comparison operation =.

The following grammar defines the type expressions for the above language.

Type ::= int           % integers
       | bool          % booleans
       | TVar          % type variable
       | Type -> Type  % functional type
       | Type * Type   % cartesian product type

TVar is a syntactical category which generates the set of type variables (distinguished from the term variables above).

The rules of the type system are those of the system of Curry (for term variables, abstraction and application) plus the following ones:

--------------  for any numerical constant n
G |- n : int

----------------      -----------------
G |- true : bool      G |- false : bool

G |- M : int   G |- N : int
---------------------------  op = +, *, - or /
G |- M op N : int

G |- M : int   G |- N : int
---------------------------
G |- M = N : bool

G |- M : bool   G |- N : A   G |- P : A
---------------------------------------
G |- if M then N else P : A

G |- M : A   G |- N : B
-----------------------
G |- (M,N) : A * B

G |- M : A * B      G |- M : A * B
--------------      --------------
G |- fst M : A      G |- snd M : B

G |- M : A   G, x : A |- N : B
------------------------------
G |- let x = M in N : B

------------------------
G |- fix : (A -> A) -> A

Note that the presence of the rule for the fixpoint breaks the analogy between inhabited types and intuitionistic validity. In fact, by using the rules for fix and for application we get that fix(\x.x) has generic type A, for any A. Namely, any A is inhabited in this system; and clearly, not any A is intuitionistically valid.
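As a concrete (if partial) reading of these rules, the fragment without variables, abstraction, let and fix can be turned into a small type checker. The following is an illustrative sketch in Python, not part of PCF itself: the tagged-tuple encoding of terms, the representation of types as 'int', 'bool' or ('*', A, B), and the name typeof are all our own assumptions. (We exclude fix because its generic type would require type variables.)

```python
# A checker for a fragment of the rules above: constants, numerical
# ops, comparison, conditional, pairs.  Terms are ad-hoc tagged tuples.

def typeof(t):
    if isinstance(t, bool):                  # check bool before int:
        return 'bool'                        # in Python, bool is a subtype of int
    if isinstance(t, int):                   # G |- n : int
        return 'int'
    tag = t[0]
    if tag == 'op':                          # both arguments must be int
        _, op, m, n = t
        assert typeof(m) == 'int' and typeof(n) == 'int'
        return 'bool' if op == '=' else 'int'
    if tag == 'if':                          # both branches share one type A
        _, m, n, p = t
        assert typeof(m) == 'bool'
        a = typeof(n)
        assert typeof(p) == a
        return a
    if tag == 'pair':                        # G |- (M,N) : A * B
        _, m, n = t
        return ('*', typeof(m), typeof(n))
    if tag in ('fst', 'snd'):                # projections out of A * B
        star, a, b = typeof(t[1])
        assert star == '*'
        return a if tag == 'fst' else b
    raise ValueError('unknown term')

# if 1 = 2 then (3, true) else (4, false)  :  int * bool
print(typeof(('if', ('op', '=', 1, 2),
              ('pair', 3, True), ('pair', 4, False))))
```

Note how the rule for the conditional forces the two branches to have the same type, exactly as in the typing rule above.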

Note: The rule for the let construct given above is the one which is usually considered in (the extensions of) PCF, but it is only an approximation of the real rule used, for instance, in ML. The real one is more complicated and requires notions that we have not introduced yet. We might see it in future lectures.

The notion of evaluation is based essentially on beta reduction. The way beta-reduction is formalized in the lambda calculus, however, is "too liberal", in the sense that a given term usually gives rise to several possible reductions, and it is not established (except for the terms in normal form) when the reduction process should end.

When specifying an operational semantics, we should therefore fix:

- what reduction to allow, if more than one is possible from a given term (reduction strategy), and
- what terms we want to consider as representing values. These will be called "canonical terms".

We will consider a "big-step" semantics, in the sense that we will consider statements of the form

M eval N

meaning: M reduces to the "value" N according to the given evaluation strategy. In contrast, in a "small-step" semantics N is not forced to be a value, and usually we would need a sequence of (small) steps to reach a value.

Independently of the above choices, we want the definition of evaluation to be sound with respect to beta reduction, in the sense that, for a lambda calculus term M, if M eval N then M ->> N (i.e. N can be obtained from M by beta reduction). As for the reverse (completeness), we do not necessarily want to impose it. We will see in fact that the eager and the lazy strategies differ on this issue: the lazy semantics is complete while the eager one is not.

In the eager semantics, the canonical terms are defined as follows:

- the numerical and boolean constants are canonical
- if M, N are canonical, then the pair (M,N) is canonical
- the lambda abstractions are canonical (they represent functional values)

- The canonical terms evaluate to themselves:
----------  for any number n
n eval n

--------------      ----------------
true eval true      false eval false

--------------
\x.M eval \x.M

- The numerical operators and comparison
evaluate to their semantic counterpart
M eval P   N eval Q
-----------------------  op is +, -, *, / or =
(M op N) eval (P sop Q)

In the above, sop stands for the semantic counterpart of op. For instance, if op is the symbol +, then 5 sop 3 is 8.

- The conditional statement has the usual behavior of
evaluating to the first branch or to the second depending on the value of the
condition:
M eval true   N eval Q            M eval false   P eval Q
---------------------------       ---------------------------
(if M then N else P) eval Q       (if M then N else P) eval Q

- The semantics of the pairing and the projections are as follows
M eval P   N eval Q      M eval (N,P)       M eval (N,P)
-------------------      --------------     --------------
(M,N) eval (P,Q)         (fst M) eval N     (snd M) eval P

- The application (M N) follows the call-by-value discipline, i.e.
the argument N is first evaluated, and then its value is replaced in the
body of the function:
M eval (\x.P)   N eval Q   P[Q/x] eval R
----------------------------------------
(M N) eval R

- The value of a construct of the form let x = M in N is the value of N,
where x is replaced by the value of M
M eval Q   N[Q/x] eval P
------------------------
(let x = M in N) eval P

- Finally, the fixpoint rule is based on the idea that recursively
defined functions can be evaluated by unfolding, i.e. by substituting
the function call by its body. In other words: in order to evaluate
(fix M), we should evaluate M (fix M). However, this does not work under
the call-by-value discipline, because the evaluation of the latter would
lead to the evaluation of (fix M) again, and we would end up in a loop.
To fix this problem, we require in the premise that
(fix M) is replaced in the body of M without being evaluated first.
By generalizing this principle, we get the following rule:
M eval (\x.P)   P[(fix M)/x] eval Q
-----------------------------------
(fix M) eval Q
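To make the eager rules concrete, they can be implemented directly as a substitution-based interpreter. The following is a sketch in Python, with an ad-hoc term representation of our own (tagged tuples); it is illustrative, not a definitive implementation, and it assumes all bound variables are distinct so that naive substitution needs no alpha-renaming.

```python
# Substitution-based interpreter for the eager semantics above.
# Terms: numbers/booleans, ('var',x), ('lam',x,body), ('app',m,n),
# ('op',sym,m,n), ('if',m,n,p), and the constant 'fix'.

def subst(t, x, v):                       # t[v/x]
    if isinstance(t, int) or t == 'fix':
        return t
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    if tag == 'op':                       # do not substitute into the op symbol
        return ('op', t[1], subst(t[2], x, v), subst(t[3], x, v))
    return (tag,) + tuple(subst(s, x, v) for s in t[1:])

def ev(t):
    if isinstance(t, int) or t == 'fix' or t[0] == 'lam':
        return t                          # canonical terms evaluate to themselves
    tag = t[0]
    if tag == 'op':
        p, q = ev(t[2]), ev(t[3])         # M eval P   N eval Q
        return {'+': p + q, '-': p - q, '*': p * q, '=': p == q}[t[1]]
    if tag == 'if':
        return ev(t[2]) if ev(t[1]) else ev(t[3])
    if tag == 'app':
        if t[1] == 'fix':                 # (fix M): substitute (fix M) unevaluated
            _, x, p = ev(t[2])            # M eval (\x.P)
            return ev(subst(p, x, t))     # P[(fix M)/x] eval Q
        _, x, p = ev(t[1])                # M eval (\x.P)
        q = ev(t[2])                      # N eval Q (argument is evaluated first)
        return ev(subst(p, x, q))         # P[Q/x] eval R

# factorial = fix (\f.\n. if n = 0 then 1 else n * f (n - 1))
fact = ('app', 'fix', ('lam', 'f', ('lam', 'n',
         ('if', ('op', '=', ('var', 'n'), 0),
                1,
                ('op', '*', ('var', 'n'),
                  ('app', ('var', 'f'), ('op', '-', ('var', 'n'), 1)))))))
print(ev(('app', fact, 5)))               # 120
```

Each clause of ev corresponds to one of the rules listed above; in particular, the special case for fix substitutes (fix M) into the body without evaluating it, as required.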

In the lazy semantics, the canonical terms are defined as follows:

- the numerical and boolean constants are canonical
- the pairs (M,N) are canonical
- the lambda abstractions are canonical (they represent functional values)

The rules of the lazy semantics are the following (we list only those which differ from the corresponding rules in the eager semantics):

- Pairs and projections:
----------------
(M,N) eval (M,N)

M eval (N,P)   N eval Q      M eval (N,P)   P eval Q
-----------------------      -----------------------
(fst M) eval Q               (snd M) eval Q

Note that, since M and N are not evaluated in (M,N), the rules for fst and snd have to be modified so as to force the evaluation of the selected component.

- The application (M N) follows the call-by-name discipline, i.e.
the argument N is replaced in the body of the function without being
evaluated first:
M eval (\x.P)   P[N/x] eval R
-----------------------------
(M N) eval R

- The rule for the let construct:
some authors (see, for instance, [Winskel93])
specify a lazy rule for this construct.
We prefer to leave the let rule as in the eager semantics,
following the practice of lazy programming languages.
(Having an eager semantics for let allows for more flexibility,
i.e. for expressing eager functions when desired.)
- The fixpoint rule in the lazy semantics can equivalently be formulated as
in the eager semantics, or can be simplified as follows:
(M (fix M)) eval N
------------------
(fix M) eval N
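The call-by-name discipline can likewise be sketched as a small interpreter. The following self-contained Python fragment (same ad-hoc tagged-tuple representation as before, bound variables assumed distinct, only abstraction, application and fix covered) differs from an eager interpreter in exactly one place: the argument is substituted unevaluated.

```python
# Call-by-name interpreter sketch for the lazy rules above.

def subst(t, x, v):                       # t[v/x]
    if isinstance(t, int) or t == 'fix':
        return t
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return (tag,) + tuple(subst(s, x, v) for s in t[1:])

def ev(t):
    if isinstance(t, int) or t == 'fix' or t[0] == 'lam':
        return t                          # canonical terms
    m, n = t[1], t[2]                     # t is an application (M N)
    if m == 'fix':                        # simplified rule: (M (fix M)) eval N
        return ev(('app', n, ('app', 'fix', n)))
    _, x, p = ev(m)                       # M eval (\x.P)
    return ev(subst(p, x, n))             # P[N/x] eval R: N is NOT evaluated first

# (\x.\y.y) Omega 7 has value 7 lazily, although Omega has no value
# (under the eager rules this evaluation would not terminate):
omega = ('app', ('lam', 'z', ('app', ('var', 'z'), ('var', 'z'))),
                ('lam', 'w', ('app', ('var', 'w'), ('var', 'w'))))
k2 = ('lam', 'x', ('lam', 'y', ('var', 'y')))
print(ev(('app', ('app', k2, omega), 7)))     # 7
```

The final example is precisely the kind of term that separates the two strategies: the diverging argument is discarded before it is ever evaluated.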

Streams (infinite sequences of values) come with the following constructors and selectors:

- the cons constructor (denoted by ::): (a :: s) is the stream whose first element is a and whose rest is s.
- the head selector (denoted by hd): (hd s) is the first element of the stream s.
- the tail selector (denoted by tl): (tl s) is the rest of the stream s (i.e. what we get by removing from s the first element).

M eval P               M eval (N::P)      M eval (N::P)   P eval Q
------------------     --------------     -----------------------
(M::N) eval (P::N)     (hd M) eval N      (tl M) eval Q

fun nats n = n :: (nats (n+1));

Note that in an eager language like ML a call of nats would end up in a loop (since recursion is unguarded). On the contrary, in Haskell, if we write for instance an expression like

hd (tl (nats 0));

its evaluation terminates and gives the value 1.
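The same terminating behavior can be reproduced even in an eager host language by hiding the tail of the stream behind a thunk. Here is a small Python sketch; the names cons/hd/tl mirror the operators in the text, but the encoding (a pair of a head and a parameterless function for the tail) is our own.

```python
# Streams as (head, thunk-for-tail) pairs: a lazy cons via lambdas.

def cons(a, tail_thunk):          # tail_thunk : () -> stream
    return (a, tail_thunk)

def hd(s):
    return s[0]

def tl(s):
    return s[1]()                 # the tail is computed only when selected

def nats(n):                      # n :: nats (n+1), recursive call delayed
    return cons(n, lambda: nats(n + 1))

print(hd(tl(nats(0))))            # 1, as in the Haskell example
```

The lambda delays the recursive call to nats, so only the prefix of the stream that is actually selected ever gets computed.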

fun facts = 1 :: (times (nats 2) facts);

where times is a function which takes two input streams of numbers and outputs the stream of the pairwise products:

fun times (a::r) (b::s) = (a*b) :: (times r s);

fun primes = sieve (nats 2);

where sieve is a function which outputs the first element a of the input stream, and then creates a filter that will let pass only those elements, in the rest of the stream, which are not divisible by a. Sieve can be defined as follows:

fun sieve (a::s) = a :: (sieve (filter a s));

And filter can be defined as follows:

fun filter a (b::s) = if (b mod a) = 0 then filter a s else b :: (filter a s);
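The whole sieve can be run with the same thunk-based encoding of streams; the following self-contained Python sketch is our own transcription (filter_ is renamed to avoid clashing with Python's built-in filter, and take is a helper added only for inspecting a prefix).

```python
# The sieve of primes over thunk-based streams (head, thunk-for-tail).

def cons(a, t): return (a, t)
def hd(s): return s[0]
def tl(s): return s[1]()          # force the tail on demand

def nats(n):
    return cons(n, lambda: nats(n + 1))

def filter_(a, s):                # drop the multiples of a
    while hd(s) % a == 0:
        s = tl(s)
    return cons(hd(s), lambda: filter_(a, tl(s)))

def sieve(s):
    a = hd(s)
    return cons(a, lambda: sieve(filter_(a, tl(s))))

primes = sieve(nats(2))

def take(n, s):                   # first n elements, for inspection
    return [] if n == 0 else [hd(s)] + take(n - 1, tl(s))

print(take(5, primes))            # [2, 3, 5, 7, 11]
```

As with nats, each layer of filtering is built lazily, so asking for the first five primes only forces a finite amount of work.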

M eval P   N eval Q
-------------------
(M::N) eval (P::Q)

------------
nil eval nil

M eval (N::P)      M eval (N::P)
--------------     --------------
(hd M) eval N      (tl M) eval P

**Theorem** (Soundness of the eager semantics)
If M is a lambda term, and M eval N holds in the eager
semantics, then M ->> N (M beta-reduces to N).

The converse does not hold. Take for instance the term M = (\x y. y) P N, where P is any term whose evaluation does not terminate (for example, Omega = (\x.x x)(\x.x x)). We have that M beta-reduces to N, but, if the evaluation of P does not terminate, then the evaluation of M does not terminate either.

For the lazy semantics, on the contrary, the other direction also holds, at least in a weak form:

**Theorem** (Soundness and weak completeness of the lazy semantics)
If M is a lambda term, and M eval N holds in the lazy
semantics, then M ->> N holds.
Vice versa, if M ->> N holds, then M eval P holds in the lazy semantics
for some P such that
P ->> N.

In the completeness part of the above theorem, in general N is different from P. Consider for instance M = \x. P, where P ->> Q. We have that M ->> \x. Q, but we cannot evaluate M to (\x. Q), since M is already in canonical form.

For instance, we can encode streams and their constructors/selectors in ML as follows:

datatype 'a stream = empty | cons of 'a * (unit -> 'a stream);
fun head (cons(a,f)) = a;
fun tail (cons(a,f)) = f();

(we are obliged to use names different from ::, hd and tl, because the latter are reserved for the (eager) lists, which in ML are predefined). Now we can define functions on streams in the usual way. For instance, the functions nats, times and facts of the previous lecture can be defined as follows:

fun nats n = cons(n, fn() => nats(n+1));
fun times r s = cons((head r)*(head s), fn() => times (tail r) (tail s));
fun facts () = cons(1, fn () => times (facts ()) (nats 2));

(here we need to give an argument to facts only to satisfy the ML constraint on the syntax of functions.) Now, we can define, for instance

val s = facts();

The following are examples of interactions with ML (after giving the above definitions):

- head(tail s);
val it = 2 : int
- head(tail(tail(s)));
val it = 6 : int
- head (tail(tail(tail(tail s))));
val it = 120 : int
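For readers without an ML system at hand, the same encoding can be transcribed into Python, where fn () => ... becomes a lambda; this is a sketch of our own, with the names carried over from the ML code.

```python
# The ML thunk encoding of streams, transcribed to Python:
# a stream is a pair of a head and a function returning the tail.

def cons(a, f): return (a, f)
def head(s): return s[0]
def tail(s): return s[1]()        # apply the stored function, like f() in ML

def nats(n):
    return cons(n, lambda: nats(n + 1))

def times(r, s):                  # pairwise products of two streams
    return cons(head(r) * head(s), lambda: times(tail(r), tail(s)))

def facts():                      # 1 :: times (facts ()) (nats 2)
    return cons(1, lambda: times(facts(), nats(2)))

s = facts()
print(head(tail(s)))                          # 2
print(head(tail(tail(s))))                    # 6
print(head(tail(tail(tail(tail(s))))))        # 120
```

The three printed values reproduce the ML interactions above: the stream of factorials 1, 2, 6, 24, 120, ...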