- Readability
- Static correctness
- Efficiency of implementation
- Modularity (interfaces of modules, inheritance,...)

There are two main type systems for the simply typed Lambda Calculus: The system of Curry (1934) and the system of Church (1940).

The system of Curry is the basis of typed languages like ML, where variables do not require explicit type annotation. The type system of Church is the basis of languages like Pascal and C, where the type of every variable must be declared.

In the system of Curry, the syntax of lambda terms is the same as in the untyped version.

The types are defined by the following grammar:

Type ::= TVar (type variable) | (Type -> Type) (function)

Convention: in order to eliminate some parentheses, we assume `->` to be right associative. So `(A_{1} -> (A_{2} -> (A_{3} -> ... (A_{n-1} -> A_{n}) ... )))` will be written simply as `A_{1} -> A_{2} -> A_{3} -> ... -> A_{n-1} -> A_{n}`.
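The convention can be made concrete with a small pretty-printer. Here is a sketch in Python (the tuple representation of types is my own assumption, not part of the notes): a type is a string (type variable) or a pair `(left, right)` standing for `left -> right`.

```python
def show(t):
    """Render a type, omitting parentheses by right associativity of ->."""
    if isinstance(t, str):        # a type variable
        return t
    left, right = t
    # Only the LEFT operand of -> ever needs parentheses, and only
    # when it is itself an arrow type.
    left_s = "(" + show(left) + ")" if isinstance(left, tuple) else show(left)
    return left_s + " -> " + show(right)
```

For example, `show(("A", ("B", "C")))` renders as `A -> B -> C`, while `show((("A", "B"), "C"))` keeps the parentheses: `(A -> B) -> C`.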

The system allows us to derive statements of the following form:

G |- M : A

meaning: under the assumptions G (associations between lambda variables and types), the lambda term M has type A.

The rules for the Type System are the following:

(var) ---------------  if x : A is in G
       G |- x : A

       G |- M : A -> B    G |- N : A
(app) ----------------------------------
       G |- M N : B

       G , x : A |- M : B
(abs) ------------------------
       G |- \x. M : A -> B

In the following, the statement "`M` has type `A`"
(or equivalently, "`A` is a type of `M`"),
notation `|- M : A ` or simply `M : A`, stands for
"the statement `{} |- M : A` has a proof in the type system of Curry".
Note that the set of assumptions must be empty.

(var) ----------------
       x : A |- x : A
(abs) ------------------
       |- \x.x : A -> A

- Does a term always have a type?
- If a term has a type, is this type unique?
- If the type is not unique, is there a sort of "most natural type" that one can choose?
- We have seen how to use the system to prove that a given term has a given type (type checking). Is it also possible to use the system to infer a type for a term (type inference)?
- Given a type T, does there always exist a term which has type T?
- What is the relation between types and reduction? Namely, is the type of a term preserved under reduction?

(var) --------------------------
       x : B -> B |- x : B -> B
(abs) --------------------------------
       |- \x.x : (B -> B) -> (B -> B)

In a sense, however, the previous type `A -> A` is more general: the type `(B -> B) -> (B -> B)` is an instance of it, obtained by substituting `B -> B` for `A`.

**Theorem** If a lambda term has a type, then
it has a principal type, unique modulo renaming.

We don't give a formal proof of this theorem, but we show
how to construct the principal type. Let us start with
the example of the term `\x.x`. Intuitively,
any proof of a type statement for `\x.x` must have the
following form:

(var) ----------------  A = B
       x : A |- x : B
(abs) ----------------  C = A -> B
       |- \x.x : C

The equations A = B and C = A -> B are solvable; a most general solution is theta = [A/B , A -> A/C]. Hence the principal type of `\x.x` is C theta = A -> A. In general, if the generic proof can be built and the resulting equations have a most general solution theta, then the principal type of M is A theta, where A is the type in the conclusion of the generic proof. If, on the contrary, the generic proof cannot be built, or the equations are not solvable, then M is not typeable.

- `[True]`, i.e. `\x\y.x`, has principal type `A -> B -> A`.
- `[False]`, i.e. `\x\y.y`, has principal type `A -> B -> B`.
- `[1]`, i.e. `\x\y.x y`, has principal type `(A -> B) -> A -> B`.
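The construction above (build the generic proof, collect one equation per rule, solve) can be sketched as a small Curry-style inference procedure. This is an illustrative sketch in Python, not the notes' algorithm: the representation is my own (a type is a string or `('->', a, b)`; a term is `('var', x)`, `('abs', x, body)` or `('app', m, n)`), and the solver is a compact recursive unifier rather than a full Martelli-Montanari presentation.

```python
import itertools

def infer(term):
    """Return the principal type of a closed term, or None if untypeable."""
    fresh = ("T%d" % i for i in itertools.count())
    eqs = []                                  # equations collected from the proof

    def gen(term, env):
        # One case per typing rule, as in the generic proof.
        kind = term[0]
        if kind == 'var':                     # (var): look up the assumption
            return env[term[1]]
        if kind == 'abs':                     # (abs): fresh variable for the argument
            a = next(fresh)
            b = gen(term[2], {**env, term[1]: a})
            return ('->', a, b)
        if kind == 'app':                     # (app): force M : A -> B, N : A
            f = gen(term[1], env)
            a = gen(term[2], env)
            b = next(fresh)
            eqs.append((f, ('->', a, b)))
            return b

    def walk(t, s):
        while isinstance(t, str) and t in s:
            t = s[t]
        return t

    def occurs(v, t, s):
        t = walk(t, s)
        return t == v if isinstance(t, str) else any(occurs(v, x, s) for x in t[1:])

    def unify(t, u, s):
        t, u = walk(t, s), walk(u, s)
        if t == u:
            return s
        if isinstance(t, str):
            return None if occurs(t, u, s) else {**s, t: u}
        if isinstance(u, str):
            return unify(u, t, s)
        s = unify(t[1], u[1], s)              # only one function symbol: ->
        return None if s is None else unify(t[2], u[2], s)

    def resolve(t, s):
        t = walk(t, s)
        return t if isinstance(t, str) else ('->', resolve(t[1], s), resolve(t[2], s))

    ty = gen(term, {})
    s = {}
    for t, u in eqs:
        s = unify(t, u, s)
        if s is None:                         # equations not solvable: not typeable
            return None
    return resolve(ty, s)
```

On `[True] = \x\y.x` this yields a type of the shape `A -> B -> A` (with machine-generated variable names), and on the self-application `\x. x x` it returns `None`, since the equation `T0 = T0 -> T1` fails the occur-check.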

- To find a proof for `|- M : A`, where `A` is a given type. This process is called *Type Checking*.
- To derive a (principal) type for `M`, by constructing a proof and imposing the conditions as explained above. This process is called *Type Inference*.

For example, ML performs type inference automatically (remember that ML is based on the system of Curry). The declaration

- val f = fn x => x;

gets the answer

val f = fn : 'a -> 'a

(where `'a` plays the role of the type variable `A` above).

Note that
`x x` represents
the application of a generic function
`x` to itself.
There are functions
for which it makes sense to
be applied to themselves (for instance
the identity function), but this is not
the case for all functions.

From a semantic point of view, note that if
`A -> B` is to be interpreted
as the set of functions from a set `A` to a set
`B`, then there is no non-trivial set `A`
which can be equal to the set `A -> B`, for cardinality reasons.
In fact, if `A` has cardinality n, and `B`
has cardinality m,
then `A -> B` has cardinality m^{n}.
In domain theory, where it is desirable that such equations have
a solution, `A -> B` is assumed to represent not all functions
from `A` to
`B`, but only a particular class of them.
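The counting argument can be checked mechanically. A small sketch in Python (the set names are my own): a function from a finite set `A` to a finite set `B` is a choice of an element of `B` for each element of `A`, so there are m^n of them.

```python
from itertools import product

A = ["a1", "a2"]             # cardinality n = 2
B = ["b1", "b2", "b3"]       # cardinality m = 3

# Enumerate every function A -> B as a dictionary mapping each element
# of A to some element of B.
functions = [dict(zip(A, image)) for image in product(B, repeat=len(A))]

assert len(functions) == len(B) ** len(A)   # m^n = 3^2 = 9
```

In particular no finite set `A` with more than one element can equal `A -> B`, since m^n grows strictly faster than n.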

The following types are not inhabited, i.e. there is no closed term having these types:

- any type variable `A`
- `A -> B`, for any distinct type variables `A` and `B`
- `A -> A -> B`, for any distinct type variables `A` and `B`

In general, the inhabited types are exactly the formulas which are valid in intuitionistic propositional logic, where the arrow is interpreted as logical implication.

**Definition**
Given a set of variables Var, and a set of function symbols Fun
(possibly including constant symbols) the
first-order terms are defined by the following grammar:

Term ::= Var | Fun(Term,...,Term)

In the case of type expressions, we have only one binary function symbol (represented in infix notation): the arrow ->.

**Definition**
A substitution theta is any mapping
theta : Var -> Term.

A substitution theta
will be denoted by listing explicitly
the result of its application to every variable (usually
we are interested only in finite substitutions, i.e. substitutions
which affect only a finite number of variables). More precisely, a substitution theta such that
theta(x_{1}) = t_{1},..., theta(x_{n}) = t_{n},
will be denoted by [t_{1}/x_{1},..., t_{n}/x_{n}].

The application of a substitution theta to a term t, denoted by t theta, is the term obtained from t by replacing simultaneously each variable x by theta(x).

The composition of two substitutions sigma and theta is the substitution sigma theta s.t. for every variable x, (sigma theta)(x) = (x sigma)theta.
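These two definitions can be sketched directly in Python (the representation is my own assumption: a first-order term is a string, i.e. a variable, or a tuple `(fun, arg1, ..., argn)`; a substitution is a dict from variables to terms).

```python
def apply_subst(t, theta):
    """t theta: replace every variable of t simultaneously by theta(x)."""
    if isinstance(t, str):                       # a variable
        return theta.get(t, t)                   # unaffected variables stay put
    return (t[0],) + tuple(apply_subst(a, theta) for a in t[1:])

def compose(sigma, theta):
    """The composition sigma theta, i.e. x |-> (x sigma) theta."""
    out = {x: apply_subst(t, theta) for x, t in sigma.items()}
    for x, t in theta.items():                   # theta acts on what sigma leaves
        out.setdefault(x, t)
    return out
```

As a quick sanity check, with `sigma = {"x": ("f", "y")}` and `theta = {"y": "z"}`, applying sigma then theta to `x` gives `("f", "z")`, the same as applying `compose(sigma, theta)` in one step.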

**Definition**
Given a set of equations on terms
E = {t_{1} = u_{1}, ... , t_{n} = u_{n}},
a substitution theta is a *unifier*
(solution) for E
iff for each i we have that
t_{i}theta is identical to
u_{i}theta.
A substitution theta is the
*most general unifier* of E
if it is a unifier for E
and, for any other unifier sigma,
there exists sigma`'` s.t.
sigma = theta sigma`'` (i.e.
`sigma` can be obtained by
instantiating theta).

**Example**
Consider the set of equations

E = {A = B->C , B->C = C->B}

We have that:

- theta = `[B->B/A , B/C]` is a unifier of E.
- theta`'` = `[B->C/A , B/C]` is **not** a unifier of E (the substitution must be applied simultaneously to all the equations).
- sigma = `[(D->D)->(D->D)/A , D->D/B , D->D/C]` is also a unifier of E.
- theta is more general than sigma; in fact, `sigma = theta [D->D/B]`.
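These claims can be verified mechanically. A sketch in Python (representation is my own: a type is a string or `('->', left, right)`, and a substitution is a dict mapping variables to terms):

```python
def apply_subst(t, s):
    """Apply substitution s simultaneously to every variable of t."""
    if isinstance(t, str):
        return s.get(t, t)
    return ('->', apply_subst(t[1], s), apply_subst(t[2], s))

# E = {A = B->C , B->C = C->B}
E = [('A', ('->', 'B', 'C')),
     (('->', 'B', 'C'), ('->', 'C', 'B'))]

def is_unifier(s):
    return all(apply_subst(t, s) == apply_subst(u, s) for t, u in E)

theta  = {'A': ('->', 'B', 'B'), 'C': 'B'}       # [B->B/A , B/C]
theta1 = {'A': ('->', 'B', 'C'), 'C': 'B'}       # [B->C/A , B/C]
dd = ('->', 'D', 'D')
sigma  = {'A': ('->', dd, dd), 'B': dd, 'C': dd}

assert is_unifier(theta)
assert not is_unifier(theta1)    # simultaneous application: C in B->C is NOT rewritten
assert is_unifier(sigma)
# sigma = theta [D->D/B]: instantiating theta recovers sigma on every variable.
assert all(apply_subst(apply_subst(v, theta), {'B': dd}) == apply_subst(v, sigma)
           for v in ('A', 'B', 'C'))
```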

There are various algorithms to find the most general unifier of a set of first-order equations; for instance, the algorithm of Martelli-Montanari (see [Apt_Pellegrini, Section 2]; note that this reference uses a reversed notation for substitutions: x/t instead of t/x).

**Proposition**
The algorithm of Martelli-Montanari always terminates
and it gives a most general unifier
if the set of equations is solvable, failure otherwise.

**Corollary** Given a set of first order equations E,
it is decidable whether E is solvable or not, and, if it is
solvable, then it has a most general unifier (which is
unique modulo renaming).

- ML uses:
- First order unification for type inference at compile time (transparent to the user)
- Pattern matching (a sort of one-way unification) for evaluating function calls, at run time

- Prolog uses first order unification for evaluating goals, at run time. For efficiency reasons the occur-check is usually not implemented.
- Lambda Prolog uses:
- First order unification for type checking at compile time (transparent to the user)
- Higher Order Unification (unification of Higher-Order
terms modulo alpha, beta and eta conversion) for evaluating goals,
at run time.

HO unification is a conservative extension of FO unification, in the sense that if we give lambda Prolog a FO equation, then we get back the FO most general unifier. Those who are interested can find the algorithm for HO unification in [G. Huet, A Unification Algorithm for Typed lambda-Calculus, *Theoretical Computer Science* 1:27-57, 1975].

**Proposition**
For any term M, it is decidable whether M is typable or not,
and in case it is, then it has a principal type.

This proposition is very important because it states that type inference/checking can be done effectively and without risk of looping.

Another important property is the following, which states that type inference/checking can be done once and for all at compile time, in the sense that if a program typechecks correctly then there is no risk of getting type errors at execution time.

**Theorem (Subject reduction)** If M : A and M ->> M`'` then M`'` : A.

Note that the converse (subject expansion) of this theorem does not
hold. Namely, there are terms M and M`'` such that
M ->> M`'` and M`'`: A, but M is not typeable.
Take for instance M = (\y x. x) Y and M`'` = \x. x, where Y is the fixed point combinator, which is not typeable. Note that the reason why subject expansion does not hold is related to lazy evaluation: the reduction discards the untypeable subterm Y without ever evaluating it.

The language of Simply Typed lambda terms is defined by the following grammar:

Term ::= Var | (\Var : Type . Term) | (Term Term)

Type expressions are defined as usual:

Type ::= TVar | (Type -> Type)

The rules are identical to those of Curry; the only difference is the abs rule, which must introduce the argument type declared in the abstraction:

       G , x : A |- M : B
(abs) -------------------------
       G |- (\x:A. M) : A -> B
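Since the Church rules are syntax-directed (one rule per term construct) and the abstraction carries its argument type, type checking is a simple structural recursion. A hedged sketch in Python (the representation is my own: terms are `('var', x)`, `('abs', x, A, body)` with the declared type `A`, or `('app', m, n)`; types are strings or `('->', a, b)`):

```python
def typeof(term, env=None):
    """Return the unique type of term under env, or None if ill-typed."""
    env = env or {}
    kind = term[0]
    if kind == 'var':                          # (var): look up the assumption
        return env.get(term[1])
    if kind == 'abs':                          # (abs): use the declared type
        _, x, a, body = term
        b = typeof(body, {**env, x: a})
        return None if b is None else ('->', a, b)
    if kind == 'app':                          # (app): function and argument must fit
        f = typeof(term[1], env)
        a = typeof(term[2], env)
        if f is None or a is None or not (isinstance(f, tuple) and f[1] == a):
            return None
        return f[2]
```

For example, `typeof(('abs', 'x', 'A', ('var', 'x')))` is `('->', 'A', 'A')`, i.e. `\x:A. x : A -> A`; applying that term to a variable of type `B` is rejected. The recursion visits each subterm once, which illustrates why typing in Church's system is decidable.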

- Church-Rosser
- Consistency (not all terms are convertible to each other)
- Decidability of typing
- Subject reduction

**Proposition**

- If M : A and M : B in Church, then A and B are identical type expressions.
- If M and M`'` are convertible, and they are both typable in Church, then they have the same type.

**Proposition**

- If M : A in Church, then |M| : A in Curry
- If |M| : A in Curry, and A is its principal type, then M : B in Church for some instance B of A.
- If |M| : A in Curry, then there exists N such that N : A in Church and |M|=|N|.