Our purpose here is to investigate the computational models of (the functional part of) programming languages, and therefore we are mainly interested in the operational semantics of PCF. We will consider the two main kinds of semantics, corresponding to the eager (call-by-value) and to the lazy (call-by-name) evaluation strategies. The first is the basis for languages like Lisp, Scheme and ML. The latter is the basis for languages like Miranda, Haskell, and Orwell.
The original PCF is a very tiny language, in the sense that it only contains a few operators more than the lambda-calculus: successor, predecessor, the is-zero test, the conditional and the fixpoint. We will consider a slightly richer language, as it is more suitable to our purpose. Also, the original PCF is typed à la Church, in the sense that every variable is explicitly decorated with a type. In order to be more general (so as to capture also implicitly typed languages like ML) we will not make this assumption.
Term ::= Var % variables
| Num | true | false % numerical and boolean constants
| Term Op Term % numerical ops and comparison
| if Term then Term else Term % conditional
| (Term,Term) | fst Term | snd Term % pairs and projections
| \Var.Term | Term Term % abstraction and application
| let Var = Term in Term % term with a local declaration
| fix % the fixpoint operator (Y)
Var and Num are syntactical categories which generate
the set of term variables and the set of
(representations of) integer numbers.
Op is a syntactical category which generates the numerical
operations +, *, - and /, and the comparison operation =.
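For instance, the grammar allows us to write the following term, which (under the evaluation rules given below) computes the factorial of 5:

   let f = fix (\g. \n. if n = 0 then 1 else n * (g (n - 1)))
   in f 5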
G |- M : A G, x : A |- N : B
---------------------------------
G |- let x = M in N : B
This rule is the one which is usually considered in
(the extensions of) PCF, but it is only an approximation of
the real rule used, for instance, in ML.
The real one is more complicated and requires
notions that we have not introduced yet.
We will see it in future lectures.
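As a small instance of the rule above: from G |- 3 : num and G, x : num |- x + 1 : num (writing num here for the type of numerical constants) the rule lets us conclude G |- let x = 3 in x + 1 : num.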
The notion of evaluation is based essentially on beta reduction. The way beta-reduction is formalized in the lambda calculus, however, is "too liberal", in the sense that a given term usually gives rise to several possible reductions, and it is not established (except for the terms in normal form) when the reduction process should end.
When specifying an operational semantics, we should therefore fix a strategy (which reduction to perform at each step) and a notion of value (when to stop).
We will adopt a "big-step" semantics, in the sense that we will consider statements of the form

   M eval N

meaning: M reduces to the "value" N according to the given evaluation strategy. In contrast, in a "small-step" semantics N is not forced to be a value, and usually we would need a sequence of (small) steps to reach a value.
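For instance, for the term (\x. x + 1) 2 a small-step semantics would produce the sequence (\x. x + 1) 2 -> 2 + 1 -> 3, while in a big-step semantics we directly derive (\x. x + 1) 2 eval 3.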
Independently of the above choices, we want the definition of evaluation to be sound with respect to beta reduction, in the sense that, for a lambda calculus term M, if M eval N then M ->> N (i.e. N can be obtained from M by beta reduction). As for the converse (completeness), we do not necessarily want to impose it. We will see in fact that the eager and the lazy strategies differ on this issue: the lazy semantics is complete and the eager one is not.
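For instance, let W be a divergent term such as (\x. x x)(\x. x x). Then (\y. 5) W beta-reduces to 5, and the lazy semantics indeed derives (\y. 5) W eval 5; the eager semantics, instead, tries to evaluate the argument W first, and therefore diverges.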
----------   for any number n
 n eval n
----------------       ------------------
 true eval true          false eval false
----------------
 \x.M eval \x.M
  M eval P    N eval Q
-------------------------   op is +,-,*,/,=
 (M op N) eval (P sop Q)

In the above, sop stands for the semantic counterpart of op. For instance, if op is the symbol +, then 5 sop 3 is 8.
 M eval true    N eval Q           M eval false    P eval Q
-----------------------------     ------------------------------
 (if M then N else P) eval Q       (if M then N else P) eval Q
 M eval P    N eval Q       M eval (N,P)        M eval (N,P)
---------------------      ----------------    ----------------
   (M,N) eval (P,Q)          (fst M) eval N      (snd M) eval P
M eval (\x.P) N eval Q P[Q/x] eval R
------------------------------------------
(M N) eval R
 M eval Q    N[Q/x] eval P
--------------------------
 (let x = M in N) eval P
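Note that with this rule a term let x = M in N is evaluated exactly like the application ((\x.N) M).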
M eval (\x.P) P[(fix M)/x] eval Q
-------------------------------------
(fix M) eval Q
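To make the rules concrete, here is a minimal Haskell sketch of an evaluator following the eager rules above, one clause per rule. The datatype, the constructor names and the helper applyOp are not part of PCF; they are just one possible representation chosen for this illustration, and substitution is kept naive, which is harmless as long as we only evaluate closed programs.

data Term = Var String
          | Num Integer
          | BoolLit Bool
          | Op String Term Term
          | If Term Term Term
          | Pair Term Term | Fst Term | Snd Term
          | Lam String Term | App Term Term
          | Let String Term Term
          | Fix Term
          deriving Show

-- Naive substitution: adequate here because we only substitute
-- terms arising from the evaluation of closed programs.
subst :: String -> Term -> Term -> Term
subst x v t = case t of
  Var y      -> if x == y then v else t
  Num n      -> Num n
  BoolLit b  -> BoolLit b
  Op o a b   -> Op o (subst x v a) (subst x v b)
  If a b c   -> If (subst x v a) (subst x v b) (subst x v c)
  Pair a b   -> Pair (subst x v a) (subst x v b)
  Fst a      -> Fst (subst x v a)
  Snd a      -> Snd (subst x v a)
  Lam y b    -> if x == y then t else Lam y (subst x v b)
  App a b    -> App (subst x v a) (subst x v b)
  Let y a b  -> Let y (subst x v a) (if x == y then b else subst x v b)
  Fix a      -> Fix (subst x v a)

-- One clause per eager rule.
evalEager :: Term -> Term
evalEager t = case t of
  Num n      -> Num n                                 -- numbers evaluate to themselves
  BoolLit b  -> BoolLit b                             -- booleans evaluate to themselves
  Lam x b    -> Lam x b                               -- abstractions evaluate to themselves
  Op o a b   -> applyOp o (evalEager a) (evalEager b)
  If c a b   -> case evalEager c of
                  BoolLit True  -> evalEager a
                  BoolLit False -> evalEager b
                  _             -> error "if: guard is not a boolean"
  Pair a b   -> Pair (evalEager a) (evalEager b)      -- both components are evaluated
  Fst a      -> case evalEager a of
                  Pair p _ -> p
                  _        -> error "fst: not a pair"
  Snd a      -> case evalEager a of
                  Pair _ q -> q
                  _        -> error "snd: not a pair"
  App m n    -> case evalEager m of
                  Lam x p -> evalEager (subst x (evalEager n) p)  -- the argument is evaluated first
                  _       -> error "application of a non-function"
  Let x m n  -> evalEager (subst x (evalEager m) n)
  Fix m      -> case evalEager m of
                  Lam x p -> evalEager (subst x (Fix m) p)        -- unfold the fixpoint once
                  _       -> error "fix of a non-function"
  Var x      -> error ("free variable " ++ x)

applyOp :: String -> Term -> Term -> Term
applyOp "+" (Num a) (Num b) = Num (a + b)
applyOp "-" (Num a) (Num b) = Num (a - b)
applyOp "*" (Num a) (Num b) = Num (a * b)
applyOp "/" (Num a) (Num b) = Num (a `div` b)
applyOp "=" (Num a) (Num b) = BoolLit (a == b)
applyOp o   _       _       = error ("bad operands for " ++ o)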
The rules of the lazy semantics are the following (we list only those which differ from the corresponding rules in the eager semantics):
                      M eval (N,P)    N eval Q      M eval (N,P)    P eval Q
------------------    -------------------------     -------------------------
 (M,N) eval (M,N)          (fst M) eval Q               (snd M) eval Q
Note that, since M and N are not evaluated in (M,N), the rules for fst and snd have to be modified so as to force the evaluation of the selected component.
M eval (\x.P) P[N/x] eval R
-------------------------------
(M N) eval R
 (M (fix M)) eval N
--------------------
   (fix M) eval N
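In the same spirit, a sketch of a lazy evaluator can reuse Term, subst and applyOp from the previous sketch; as in the rules just given, only the clauses for pairs, projections, application and fix change (again, the names are only illustrative).

evalLazy :: Term -> Term
evalLazy t = case t of
  Num n      -> Num n
  BoolLit b  -> BoolLit b
  Lam x b    -> Lam x b
  Op o a b   -> applyOp o (evalLazy a) (evalLazy b)
  If c a b   -> case evalLazy c of
                  BoolLit True  -> evalLazy a
                  BoolLit False -> evalLazy b
                  _             -> error "if: guard is not a boolean"
  Pair a b   -> Pair a b                              -- the components are NOT evaluated
  Fst a      -> case evalLazy a of
                  Pair p _ -> evalLazy p              -- force the selected component
                  _        -> error "fst: not a pair"
  Snd a      -> case evalLazy a of
                  Pair _ q -> evalLazy q
                  _        -> error "snd: not a pair"
  App m n    -> case evalLazy m of
                  Lam x p -> evalLazy (subst x n p)   -- the argument is NOT evaluated
                  _       -> error "application of a non-function"
  Let x m n  -> evalLazy (subst x (evalLazy m) n)     -- as in the eager sketch: no separate lazy rule for let is given
  Fix m      -> evalLazy (App m (Fix m))              -- unfold without evaluating M first
  Var x      -> error ("free variable " ++ x)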
We also consider (lazy) lists, built with the constructor :: and inspected with the operators hd and tl. Their rules are the following (note that only the head of a cons is evaluated):

      M eval P             M eval (N::P)       M eval (N::P)    P eval Q
--------------------      ---------------     --------------------------
 (M::N) eval (P::N)        (hd M) eval N            (tl M) eval Q
In a lazy language we can define and manipulate infinite lists (streams). For instance, the stream of all natural numbers starting from n can be defined as:

fun nats n = n :: (nats (n+1));
Note that in an eager language like ML a call of nats would loop forever, since the recursive call nats (n+1) is evaluated immediately.
In Haskell, on the contrary, if we write for instance an expression like
hd (tl (nats 0));
its evaluation terminates and gives the value 1.
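For the record, the same example in actual Haskell syntax (with the standard functions head and tail in place of hd and tl) is:

nats :: Integer -> [Integer]
nats n = n : nats (n + 1)

-- head (tail (nats 0)) evaluates to 1, even though nats 0 denotes an infinite list.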
Similarly, the stream of the factorials can be defined as:

fun facts = 1 :: (times (nats 2) facts);
where times is a function which takes two input streams of numbers and
outputs the stream of the pairwise products:
fun times (a::r) (b::s) = (a*b) :: (times r s);
As a last example, the stream of the prime numbers can be obtained with the sieve of Eratosthenes:

fun primes = sieve (nats 2);
where sieve is a function which outputs the first element a of the input stream, and then creates a filter that lets pass only those elements of the rest of the stream which are not divisible by a.
Sieve can be defined as follows:
fun sieve (a::s) = a :: (sieve (filter a s));
And filter can be defined as follows:
fun filter a (b::s) = if (b mod a) = 0 then filter a s
else b :: (filter a s);
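For reference, the stream examples above can be written in actual Haskell as follows; filter is renamed filterOut to avoid clashing with the Prelude function of the same name.

nats :: Integer -> [Integer]
nats n = n : nats (n + 1)

times :: [Integer] -> [Integer] -> [Integer]
times (a:r) (b:s) = (a * b) : times r s

facts :: [Integer]
facts = 1 : times (nats 2) facts

filterOut :: Integer -> [Integer] -> [Integer]
filterOut a (b:s) = if b `mod` a == 0 then filterOut a s
                                      else b : filterOut a s

sieve :: [Integer] -> [Integer]
sieve (a:s) = a : sieve (filterOut a s)

primes :: [Integer]
primes = sieve (nats 2)

-- For instance, take 5 facts gives [1,2,6,24,120] and take 5 primes gives [2,3,5,7,11].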
The corresponding rules for lists in the eager semantics, where both the head and the tail of a cons are evaluated, would be the following:

 M eval P    N eval Q      M eval (N::P)      M eval (N::P)
---------------------     ---------------    ---------------
  (M::N) eval (P::Q)       (hd M) eval N      (tl M) eval P