- Types correspond to formulas
- A term M of type T corresponds to a proof of the formula T (M is a representation, or encoding, of the proof)
- Beta-reduction corresponds to proof normalization (i.e. elimination of redundancy from a proof)

Let us first observe that there is a one-to-one correspondence between the proofs in the type system of the simply typed lambda calculus and the lambda terms (modulo alpha-renaming). In fact, given a term M with type T, the proof of M : T is isomorphic to the parse tree of M. This is because each rule of the type system corresponds uniquely to a production in the grammar generating the lambda terms. We can therefore see a typeable lambda term as the encoding of the corresponding proof of typeability.

The correspondence with intuitionistic validity comes from the observation that the rules of the type system, when we consider only the type part, are exactly the rules of the (implicational fragment of) intuitionistic logic. Namely, the following rules, where G represents a set of formulas:

```
                  -------- if A is in G
(initial)         G |- A

                  G, A |- B
(->introduction)  -----------
                  G |- A -> B

                  G |- A    G |- A -> B
(->elimination)   ----------------------
                  G |- B
```

Note: the above is a presentation of intuitionistic logic in sequent calculus style. Usually the correspondence is illustrated by using the so-called "natural deduction" presentation. We have followed the above approach because it is more similar to the rules we have seen for the typed lambda calculus.

**Definition** A formula T is intuitionistically valid if and only if {} |- T
has a proof using the above rules initial, ->introduction, and ->elimination.
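As an illustration, the three rules can be turned into a checker for candidate derivations. This is a sketch in Python with our own encodings (atomic formulas as strings, an implication A -> B as the tuple `('->', A, B)`, a sequent G |- A as a pair, and derivations as nested tuples); none of these names come from the text above.

```python
# A checker for derivations built from the rules initial, ->introduction
# and ->elimination. Encodings are our own: atoms are strings, A -> B is
# ('->', A, B), and a sequent G |- A is (frozenset_of_formulas, A).

def check(deriv):
    """Return True iff deriv is a correct derivation of its end sequent."""
    rule, (gamma, concl) = deriv[0], deriv[1]
    if rule == 'init':            # G |- A   if A is in G
        return concl in gamma
    if rule == 'intro':           # from G, A |- B  infer  G |- A -> B
        sub = deriv[2]
        if not (isinstance(concl, tuple) and concl[0] == '->'):
            return False
        a, b = concl[1], concl[2]
        return sub[1] == (gamma | {a}, b) and check(sub)
    if rule == 'elim':            # from G |- A and G |- A -> B  infer  G |- B
        d1, d2 = deriv[2], deriv[3]
        (g1, a), (g2, ab) = d1[1], d2[1]
        return (g1 == gamma and g2 == gamma
                and ab == ('->', a, concl)
                and check(d1) and check(d2))
    return False

# A |- A by the initial rule, then |- A -> A by ->introduction:
leaf  = ('init', (frozenset({'A'}), 'A'))
proof = ('intro', (frozenset(), ('->', 'A', 'A')), leaf)
print(check(proof))   # True
```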

As a consequence of the correspondence between the systems for the typed lambda calculus and for intuitionistic logic, we have:

**Proposition** A type T is inhabited (i.e. there exists a term M with type T) iff,
seen as a formula, T is intuitionistically valid.

**Corollary** A type T is inhabited only if, seen as a formula, it is classically valid.

**Example**
Consider the type T =def= (A -> B) -> (B -> A). Seen as a formula, T is not classically
valid, because for A = false and B = true the result is false. Hence T is not inhabited.

The if-part of the above corollary does not hold. For instance, ((A -> B) -> A) -> A is classically valid, but not intuitionistically. Hence, as a type, it is not inhabited. This formula is called "Peirce's formula".

- A -> B -> A
- (A -> B -> C) -> (A -> B) -> (A -> C)
- ((A -> B) -> A ) -> A
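The classical-validity claims above can be checked mechanically by truth tables. Below is a sketch in Python; the formula encoding (atoms as strings, `('->', A, B)` for implication) is our own choice, not part of the text.

```python
# Classical validity by truth tables: a formula is classically valid iff
# it evaluates to true under every assignment of truth values to its atoms.
from itertools import product

def atoms(f):
    return {f} if isinstance(f, str) else atoms(f[1]) | atoms(f[2])

def ev(f, env):
    if isinstance(f, str):
        return env[f]
    return (not ev(f[1], env)) or ev(f[2], env)   # truth table of ->

def classically_valid(f):
    vs = sorted(atoms(f))
    return all(ev(f, dict(zip(vs, bs)))
               for bs in product([False, True], repeat=len(vs)))

imp = lambda a, b: ('->', a, b)
t      = imp(imp('A', 'B'), imp('B', 'A'))    # (A -> B) -> (B -> A)
peirce = imp(imp(imp('A', 'B'), 'A'), 'A')    # ((A -> B) -> A) -> A
print(classically_valid(t))        # False (fails for A = false, B = true)
print(classically_valid(peirce))   # True (though not intuitionistically valid)
```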

Beta redexes correspond to the presence of a redundancy in the proof, namely the application of a ->introduction rule followed by the application of a ->elimination rule. The elimination of such redundancy transforms the proof into a new proof, in a way which corresponds to making a step of beta-reduction. I.e. if M is the term corresponding to the first proof, and N is the term corresponding to the second proof, then M -> N (the "->" here represents beta-reduction).

More precisely: consider the beta redex (\x.M)N. Assume (\x.M):A->B and N:A.
Then N represents a proof P_{1} for A.
On the other hand, (\x.M) represents a proof P_{2} for A->B,
with a ->introduction at the bottom, and premise x:A |- M:B.
Let P_{3} be the part of P_{2} without the last rule,
i.e. the proof of x:A |- M:B.
The application (\x.M)N corresponds to a proof having a ->elimination at the
bottom, and then continuing with P_{1} and P_{2}.
Now, the proof P_{3} may make use of the assumption A in some
applications of the initial rule, to prove A.
Let P_{4} be the proof obtained from P_{3}
by replacing the proofs for A via the initial rule
with P_{1}. We have that P_{4} is a proof for
B, and it is easy to check that the corresponding lambda term is
M[N/x], i.e. the term obtained by applying beta-reduction to
the redex (\x.M)N.

**Example**
Consider the formula A -> A. The following is a proof for it:

```
------------------ init            -------- init
 A -> A |- A -> A                   A |- A
------------------------- ->intro  ----------- ->intro
|- (A -> A) -> (A -> A)            |- A -> A
------------------------------------------------ ->elim
                   |- A -> A
```

This proof corresponds to the term (\x.x)(\y.y). We can eliminate the leftmost ->introduction and the bottom ->elimination, by the transformation illustrated above. We obtain:

```
 -------- init
  A |- A
 ----------- ->intro
 |- A -> A
```

which corresponds to the lambda term \y.y, i.e. the term obtained by applying beta-reduction to (\x.x)(\y.y).

Note: In general the proof normalization does not reduce the size of a proof. On the contrary, it usually increases it.

The following theorem states an important property of typeable terms. We do not give the proof because it is rather complicated.

**Theorem**
If M is typeable then M is strongly normalizing.

**Corollary**
If T is intuitionistically valid then T has a normalized proof (i.e. a proof without the
redundancies of the kind ->introduction followed by a ->elimination).

We illustrate the idea by using the logical connectives AND and OR. We will see that, as types, they correspond to the cartesian product and to the disjoint union, respectively.

In intuitionistic logic, the rules for AND and OR are:

```
          G |- A   G |- B
AND-intro -----------------
          G |- A AND B

           G |- A AND B               G |- A AND B
AND-elim-l --------------   AND-elim-r ----------------
           G |- A                     G |- B

           G |- A                     G |- B
OR-intro-l -------------    OR-intro-r -------------
           G |- A OR B                G |- A OR B

        G |- A OR B   G, A |- C   G, B |- C
OR-elim --------------------------------------
        G |- C
```

The language of types is enriched correspondingly:

```
Type ::= TVar | Type -> Type | Type * Type | Type + Type
```

We will use the following conventions to minimize the use of parentheses:

- + and * are left associative, -> is right associative
- * has precedence over +, and + has precedence over ->

In general, the constructs corresponding to introduction rules are called constructors, and those corresponding to the elimination rules are called selectors (or destructors). The names come from the fact that the constructors "construct" a new object from its components, in a unique way: two terms resulting from the application of the same constructor to different terms are distinct; in other words, a constructor is injective. A selector "destroys" a structure created by a constructor and selects a certain component. Below we give an example of the definition of an ML datatype corresponding to the disjoint union.

Certain programming languages, like ML, offer the possibility of defining new data types. The constructors correspond to the ML constructors for the data type. The selectors can be defined by pattern matching on the constructors. (The fact that constructors are injective allows us to use them to build patterns, and therefore to use them in definitions given by pattern matching.)

We will call "pairing" the constructor corresponding to the AND-intro rule, and "projections" (left and right) the selectors corresponding to the AND-elim rules. We will call "injections" (left and right) the constructors corresponding to the OR-intro rules, and "case" the selector corresponding to the OR-elim rule. The type of each of these constructs is of course determined by the logical rules.

The enriched language is described by the following grammar:

```
Term ::= Var | \Var. Term | Term Term           pure lambda terms
       | (Term,Term) | fst Term | snd Term      pairs and projections
       | inl Term | inr Term                    injections
       | case(Term, \Var.Term, \Var.Term)       case construct
```

```
       G |- M : A   G |- N : B
*intro ------------------------
       G |- (M,N) : A * B

        G |- M : A * B            G |- M : A * B
*elim-l ----------------   *elim-r ---------------
        G |- fst M : A            G |- snd M : B

         G |- M : A                   G |- M : B
+intro-l --------------------  +intro-r --------------------
         G |- inl M : A + B            G |- inr M : A + B

      G |- M : A + B   G, x:A |- N : C   G, y:B |- P : C
+elim ---------------------------------------------------
      G |- case(M, \x.N, \y.P) : C
```

Obviously, the correspondence between inhabited types and valid formulas still holds (it should be no surprise: we are designing things so that it holds).

**Proposition** A type T is inhabited iff (seen as a formula) T is intuitionistically valid.

**Corollary** If a type T is inhabited then (seen as a formula) T is classically valid.
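To make the typing rules concrete, here is a sketch of a checker for the enriched language in Python. The term and type encodings are our own; in addition, we annotate lambda binders and injections with types, which the rules above leave implicit.

```python
# Types (our encoding): atoms are strings, plus ('->',A,B), ('*',A,B), ('+',A,B).
# Terms: ('var',x), ('lam',x,A,M), ('app',M,N), ('pair',M,N), ('fst',M),
# ('snd',M), ('inl',M,SumType), ('inr',M,SumType), ('case',M,BranchL,BranchR),
# where the case branches are annotated lambdas.

def typeof(term, env):
    """Return the type of term under environment env, or raise TypeError."""
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':                              # \x:A. M  :  A -> B
        _, x, a, m = term
        return ('->', a, typeof(m, {**env, x: a}))
    if tag == 'app':                              # ->elimination
        f, arg = typeof(term[1], env), typeof(term[2], env)
        if f[0] == '->' and f[1] == arg:
            return f[2]
    if tag == 'pair':                             # *intro
        return ('*', typeof(term[1], env), typeof(term[2], env))
    if tag in ('fst', 'snd'):                     # *elim-l / *elim-r
        p = typeof(term[1], env)
        if p[0] == '*':
            return p[1] if tag == 'fst' else p[2]
    if tag in ('inl', 'inr'):                     # +intro-l / +intro-r
        _, m, s = term
        side = s[1] if tag == 'inl' else s[2]
        if s[0] == '+' and typeof(m, env) == side:
            return s
    if tag == 'case':                             # +elim
        s = typeof(term[1], env)
        l, r = typeof(term[2], env), typeof(term[3], env)
        if (s[0] == '+' and l[0] == '->' and r[0] == '->'
                and l[1] == s[1] and r[1] == s[2] and l[2] == r[2]):
            return l[2]
    raise TypeError('ill-typed: %r' % (term,))

# \p:A*B. (snd p, fst p)  :  A * B -> B * A  (the commutativity of AND)
swap = ('lam', 'p', ('*', 'A', 'B'),
        ('pair', ('snd', ('var', 'p')), ('fst', ('var', 'p'))))
print(typeof(swap, {}))   # ('->', ('*', 'A', 'B'), ('*', 'B', 'A'))
```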

The redundancies that we want to eliminate are those which come from an application of an op-intro rule followed by an op-elim rule, where op is either * or + (same as we saw for ->). This criterion leads to the design of the following rules (where the arrow means reduction):

```
fst(M,N) -> M
snd(M,N) -> N
case(inl M, \x.N, \y.P) -> N[M/x]
case(inr M, \x.N, \y.P) -> P[M/y]
```

Plus, of course, the structural (congruence) rules.
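The four reduction rules can be mirrored directly in Python over a tagged representation of pairs and injections (the encoding is our own choice):

```python
# Tagged representations of the constructors.
def pair(m, n): return ('pair', m, n)
def inl(m): return ('inl', m)
def inr(m): return ('inr', m)

def fst(p):                      # fst(M,N) -> M
    assert p[0] == 'pair'
    return p[1]

def snd(p):                      # snd(M,N) -> N
    assert p[0] == 'pair'
    return p[2]

def case(s, f, g):               # case(inl M,...) -> N[M/x];  case(inr M,...) -> P[M/y]
    tag, m = s
    return f(m) if tag == 'inl' else g(m)

print(fst(pair(1, 2)))                                   # 1
print(case(inl(3), lambda x: x + 1, lambda y: y - 1))    # 4
```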

Thanks to the fact that the rules for reduction are designed on the basis of the logical properties of the system, we have that the subject reduction theorem still holds in this extended language:

**Theorem (Subject reduction)**
If M ->> N and M : T, then N : T.

```
- datatype ('a,'b) disjoint_union = inl of 'a | inr of 'b;
datatype ('a,'b) disjoint_union = inl of 'a | inr of 'b

- fun my_case((inl M),N,_) = N M
    | my_case((inr M),_,P) = P M;
val my_case = fn : ('a,'b) disjoint_union * ('a -> 'c) * ('b -> 'c) -> 'c
```

We can instantiate the parameters of the disjoint union so as to create specific types. For instance, we can define the type number as the disjoint union of integers and reals. Then, we can define the operation half : number -> number (division by two, either integer division or real division, depending on the argument) by using the function case.

```
- type number = (int, real) disjoint_union;
type number = (int,real) disjoint_union

- val half : number -> number =
    fn x => my_case(x, fn y => inl(y div 2), fn z => inr(z / 2.0));
val half = fn : number -> number

- half (inl 3);
val it = inl 1 : number

- half (inr 3.0);
val it = inr 1.5 : number
```

Note: Some languages, like Pascal, offer the "disjoint union" type in the form of variant records. However, unlike the types above, variant records do not have rigorous (logically based) foundations. One consequence is that type checking on variant records cannot be done statically; it can only be done during execution.
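For comparison, the same `half` function can be sketched in Python, with the disjoint union represented by tagged tuples (the representation is our own choice, not ML's):

```python
# Injections into the disjoint union, as tagged tuples.
def inl(x): return ('inl', x)
def inr(x): return ('inr', x)

def my_case(s, f, g):
    """Apply f to the value if it came in via inl, g if via inr."""
    tag, v = s
    return f(v) if tag == 'inl' else g(v)

# number = int + real: integer division on the left, real division on the right.
def half(x):
    return my_case(x, lambda y: inl(y // 2), lambda z: inr(z / 2.0))

print(half(inl(3)))    # ('inl', 1)
print(half(inr(3.0)))  # ('inr', 1.5)
```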