# The Curry-Howard isomorphism

There is an interesting correspondence between the simply typed lambda calculus and intuitionistic propositional logic, known as the Curry-Howard isomorphism. This correspondence operates at the following three levels:
1. Types correspond to formulas
2. A term M of type T corresponds to a proof of the formula T (M is a representation, or encoding, of the proof)
3. Beta-reduction corresponds to proof normalization (i.e. elimination of redundancy from a proof)

## Types as formulas

The correspondence comes from interpreting the type variables as propositional letters, and the arrow -> as implication. Thus A -> B, for instance, corresponds to "A implies B".

## Programs as proofs

The existence, for a given type T, of a lambda term (program) M with type T corresponds to the fact that T, seen as a formula, is intuitionistically valid. Furthermore, M can be seen as a representation of a proof of T.

Let us first observe that there is a one-to-one correspondence between the proofs in the type system of the simply typed lambda calculus and the lambda terms (modulo alpha-renaming). In fact, given a term M with type T, the proof of M : T is isomorphic to the parse tree of M, because each rule of the type system corresponds uniquely to a production in the grammar generating the lambda terms. We can therefore see a typeable lambda term as an encoding of the corresponding proof of typeability.
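To make "terms as proofs" concrete, here is a small sketch in TypeScript rather than the ML used later in these notes (our choice of language and names): a well-typed total function is a proof of its type read as a formula.

```typescript
// A -> A : the identity term, a proof of "A implies A".
const id = <A>(x: A): A => x;

// (A -> B) -> ((B -> C) -> (A -> C)) : function composition,
// a proof of the transitivity of implication.
const compose =
  <A, B, C>(f: (a: A) => B) =>
  (g: (b: B) => C) =>
  (a: A): C =>
    g(f(a));

console.log(id(42)); // 42
```

The type checker plays the role of a proof checker: if the definitions are accepted, the corresponding formulas are (intuitionistically) provable.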

The correspondence with intuitionistic validity comes from the observation that the rules of the type system, when we consider only the type part, are exactly the rules of the implicational fragment of intuitionistic logic. Namely, the following rules, where G represents a set of formulas:

```
(initial)           ---------   if A is in G
                     G |- A

                     G, A |- B
(->introduction)   ------------
                    G |- A -> B

                   G |- A    G |- A -> B
(->elimination)    ---------------------
                          G |- B
```
Note: the above is a presentation of intuitionistic logic in sequent calculus style. Usually the correspondence is illustrated by using the so-called "natural deduction" presentation. We have followed the above approach because it is more similar to the rules we have seen for the typed lambda calculus.

Definition A formula T is intuitionistically valid if and only if {} |- T has a proof using the above rules initial, ->introduction, and ->elimination.

As a consequence of the correspondence between the systems for the typed lambda calculus and for intuitionistic logic, we have:

Proposition A type T is inhabited (i.e. there exists a term M with type T) iff, seen as a formula, T is intuitionistically valid.

It is well known that intuitionistic validity is stronger than classical validity (every intuitionistically valid formula is classically valid, but not vice versa), hence we also have:

Corollary A type T is inhabited only if, seen as a formula, it is classically valid.

Classical validity is easy to check using truth tables; therefore the above corollary offers a simple method for proving that a type is not inhabited.

Example Consider the type T =def= (A -> B) -> (B -> A). Seen as a formula, T is not classically valid: for A = false and B = true it evaluates to false. Hence T is not inhabited.
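The truth-table check of this example can be mechanized. A small sketch in TypeScript (our own helper names, not part of the notes), enumerating all four assignments:

```typescript
// Classical implication as a boolean function.
const implies = (p: boolean, q: boolean): boolean => !p || q;

// Brute-force check of classical validity of (A -> B) -> (B -> A).
let valid = true;
for (const a of [false, true]) {
  for (const b of [false, true]) {
    if (!implies(implies(a, b), implies(b, a))) {
      valid = false;
      console.log(`counterexample: A=${a}, B=${b}`);
    }
  }
}
console.log(valid ? "classically valid" : "not classically valid");
```

The only counterexample found is A = false, B = true, exactly the assignment used in the example.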

The converse of the above corollary does not hold. For instance, ((A -> B) -> A) -> A is classically valid, but not intuitionistically valid. Hence, as a type, it is not inhabited. This formula is called "Peirce's formula".

### Axiomatic presentation

In the axiomatic presentation of classical propositional logic, we have the ->elimination rule (aka modus ponens) and the following three axioms, of which Peirce's formula is the third:
1. A -> B -> A
2. (A -> B -> C) -> (A -> B) -> (A -> C)
3. ((A -> B) -> A ) -> A
In the axiomatic presentation of intuitionistic logic, we have the ->elimination rule and the first two axioms only. The corresponding lambda terms are \xy.x and \xyz.(xz)(yz) respectively (the combinators K and S).
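The two intuitionistic axioms are inhabited by the combinators just mentioned. In TypeScript (a sketch in our notation), their types read off the axioms directly:

```typescript
// Axiom 1, A -> B -> A : the combinator K = \xy.x
const K = <A, B>(x: A) => (_y: B): A => x;

// Axiom 2, (A -> B -> C) -> (A -> B) -> (A -> C) : the combinator S = \xyz.(xz)(yz)
const S =
  <A, B, C>(x: (a: A) => (b: B) => C) =>
  (y: (a: A) => B) =>
  (z: A): C =>
    x(z)(y(z));
```

No such closed term exists for Peirce's formula ((A -> B) -> A) -> A: there is no uniform way to produce an A from a function that merely expects an argument of type A -> B.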

## Beta-reduction as proof normalization

We have seen that there is a one-to-one correspondence between typeable terms and proofs. However, a given inhabited type (i.e. valid formula) T has several (infinitely many, in fact) proofs. We will now see that some proofs of the same formula are related, in the sense that one can be obtained from the other.

Beta redexes correspond to a redundancy in the proof, namely an application of the ->introduction rule immediately followed by an application of the ->elimination rule. Eliminating such a redundancy transforms the proof into a new proof, in a way that corresponds to performing a step of beta-reduction: if M is the term corresponding to the first proof, and N the term corresponding to the second, then M -> N (where "->" here denotes beta-reduction).

More precisely: consider the beta redex (\x.M)N. Assume (\x.M) : A -> B and N : A. Then N represents a proof P1 of A. On the other hand, (\x.M) represents a proof P2 of A -> B, with a ->introduction at the bottom and premise x:A |- M:B. Let P3 be the part of P2 without the last rule, i.e. the proof of x:A |- M:B. The application (\x.M)N corresponds to a proof having a ->elimination at the bottom, and then continuing with P1 and P2. Now, the proof P3 may use the assumption A in some applications of the initial rule. Let P4 be the proof obtained from P3 by replacing those applications of the initial rule with P1. Then P4 is a proof of B, and it is easy to check that the corresponding lambda term is M[N/x], i.e. the term obtained by applying beta-reduction to the redex (\x.M)N.

Example Consider the formula A -> A. The following is a proof for it:

```
------------------ init                 -------- init
 A -> A |- A -> A                        A |- A
------------------------- ->intro     ----------- ->intro
 |- (A -> A) -> (A -> A)               |- A -> A
------------------------------------------------- ->elim
                   |- A -> A
```
This proof corresponds to the term (\x.x)(\y.y). We can eliminate the leftmost ->introduction together with the bottom ->elimination, by the transformation illustrated above. We obtain:
```
 -------- init
  A |- A
----------- ->intro
 |- A -> A
```
which corresponds to the lambda term \y.y, i.e. the term obtained by applying beta-reduction to (\x.x)(\y.y).
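The reduction step above can be replayed mechanically. A minimal sketch in TypeScript (our own encoding, not part of the notes), with a naive substitution that is adequate for closed terms like these, where no variable capture can occur:

```typescript
// Untyped lambda terms as a tagged union.
type Term =
  | { tag: "var"; name: string }
  | { tag: "lam"; param: string; body: Term }
  | { tag: "app"; fun: Term; arg: Term };

const v = (name: string): Term => ({ tag: "var", name });
const lam = (param: string, body: Term): Term => ({ tag: "lam", param, body });
const app = (fun: Term, arg: Term): Term => ({ tag: "app", fun, arg });

// M[N/x]: replace free occurrences of x in M by N (no capture avoidance).
function subst(m: Term, x: string, n: Term): Term {
  switch (m.tag) {
    case "var": return m.name === x ? n : m;
    case "lam": return m.param === x ? m : lam(m.param, subst(m.body, x, n));
    case "app": return app(subst(m.fun, x, n), subst(m.arg, x, n));
  }
}

// One beta step at the root: (\x.M)N -> M[N/x].
function betaStep(t: Term): Term {
  if (t.tag === "app" && t.fun.tag === "lam") {
    return subst(t.fun.body, t.fun.param, t.arg);
  }
  return t;
}

// (\x.x)(\y.y)  ->  \y.y
const redex = app(lam("x", v("x")), lam("y", v("y")));
console.log(JSON.stringify(betaStep(redex)));
```

On the proof side, `betaStep` is exactly the normalization step: it removes one ->introduction / ->elimination pair.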

Note: In general, proof normalization does not reduce the size of a proof. On the contrary, it usually increases it.

The following theorem states an important property of typeable terms. We do not give the proof because it is rather complicated.

Theorem If M is typeable then M is strongly normalizing.

This theorem is not a consequence of what we have seen above; we mention it here only because from the correspondence between terms and proofs we can derive the following corollary, which is relevant in proof theory:

Corollary If T is intuitionistically valid then T has a normalized proof (i.e. a proof without redundancies of the kind ->introduction immediately followed by ->elimination).

# Extending the isomorphism

The Curry-Howard isomorphism is a guide to the design of types and constructs in programming languages. The relation with intuitionistic logic ensures a solid foundation and the possibility of using the properties of a well known theory.

We illustrate the idea using the logical connectives AND and OR. We will see that, as types, they correspond to the cartesian product and the disjoint union, respectively.

In intuitionistic logic, the rules for AND and OR are:

```
              G |- A   G |- B
AND-intro    -----------------
               G |- A AND B

              G |- A AND B                  G |- A AND B
AND-elim-l   --------------   AND-elim-r   --------------
                 G |- A                        G |- B

               G |- A                        G |- B
OR-intro-l   -------------    OR-intro-r   -------------
              G |- A OR B                   G |- A OR B

           G |- A OR B   G, A |- C   G, B |- C
OR-elim   -------------------------------------
                         G |- C
```

## The types

We introduce two constructors on types corresponding to the logical connectives AND and OR. As constructors on types, they are usually denoted by * and +. Thus our new language of types (or formulas) is:
```
Type ::= TVar | Type -> Type | Type * Type | Type + Type
```
We will use the following conventions to minimize the use of parentheses:
• + and * are left associative, -> is right associative
• * has precedence over +, and + has precedence over ->

## The terms

In order to maintain the analogy between proofs and programs, we need to enrich the language of terms with constructs corresponding to each of the rules above.

In general, the constructs corresponding to introduction rules are called constructors, and those corresponding to elimination rules are called selectors (or destructors). The names come from the fact that a constructor "constructs" a new object from its components, and does so in a unique way: two terms resulting from applying the same constructor to different terms are different; in other words, a constructor is injective. A selector "destroys" a structure created by a constructor and selects a certain component. Below we give an example of the definition of an ML datatype corresponding to the disjoint union.

Certain programming languages, like ML, offer the possibility of defining new data types. The constructors correspond to the ML constructors of the data type. The selectors can be defined by pattern matching on the constructors. (The fact that constructors are injective allows us to use them to build patterns, and therefore to use them in definitions given by pattern matching.)

We will call "pairing" the constructor corresponding to the AND-intro rule, and "projections" (left and right) the selectors corresponding to the AND-elim rules. We will call "injections" (left and right) the constructors corresponding to the OR-intro rules, and "case" the selector corresponding to the OR-elim rule. The type of each of these constructs is of course determined by the logical rules.

The enriched language is described by the following grammar:

```
Term ::= Var | \Var. Term | Term Term            pure lambda terms
       | (Term,Term) | fst Term | snd Term       pairs and projections
       | inl Term | inr Term                     injections
       | case(Term, \Var.Term, \Var.Term)        case construct
```

## The type system

We can add the rules for * and + to the type system by following those of intuitionistic logic:
```
            G |- M : A   G |- N : B
*intro     -------------------------
               G |- (M,N) : A * B

             G |- M : A * B               G |- M : A * B
*elim-l     ----------------   *elim-r   ----------------
             G |- fst M : A               G |- snd M : B

                G |- A                        G |- B
+intro-l   --------------------  +intro-r  --------------------
            G |- inl M : A + B              G |- inr M : A + B

           G |- M : A + B   G, x:A |- N : C   G, y:B |- P : C
+elim     ----------------------------------------------------
                    G |- case(M, \x.N, \y.P) : C
```
Obviously, the correspondence between inhabited types and valid formulas still holds (it should be no surprise: we are designing things precisely so that it holds).

Proposition A type T is inhabited iff (seen as a formula) T is intuitionistically valid.

With AND and OR as well, intuitionistic validity is stronger than classical validity, hence we have:

Corollary If a type T is inhabited then (seen as a formula) T is classically valid.

## The reduction rules

The rules for reduction can be obtained by following the analogy with proof-normalization.

The redundancies that we want to eliminate are those arising from an application of an op-intro rule immediately followed by an op-elim rule, where op is either * or + (just as we saw for ->). This criterion leads to the design of the following rules (where the arrow means reduction):

```
fst(M,N) -> M
snd(M,N) -> N
case(inl M, \x.N, \y.P) -> N[M/x]
case(inr M, \x.N, \y.P) -> P[M/y]
```
Plus, of course, the structural (congruence) rules.
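In a typed programming language these four rules hold by computation. A sketch in TypeScript (our encoding: * as a tuple, + as a tagged union; the selector is named `cases` because `case` is reserved):

```typescript
// A * B as a pair, with pairing and projections.
type Pair<A, B> = [A, B];
const pair = <A, B>(m: A, n: B): Pair<A, B> => [m, n];
const fst = <A, B>(p: Pair<A, B>): A => p[0];
const snd = <A, B>(p: Pair<A, B>): B => p[1];

// A + B as a tagged union, with injections and the case selector.
type Sum<A, B> = { tag: "inl"; value: A } | { tag: "inr"; value: B };
const inl = <A, B>(value: A): Sum<A, B> => ({ tag: "inl", value });
const inr = <A, B>(value: B): Sum<A, B> => ({ tag: "inr", value });
const cases = <A, B, C>(m: Sum<A, B>, n: (x: A) => C, p: (y: B) => C): C =>
  m.tag === "inl" ? n(m.value) : p(m.value);

// The reduction rules hold by computation:
console.log(fst(pair(1, "a")));                                        // 1
console.log(snd(pair(1, "a")));                                        // "a"
console.log(cases(inl<number, string>(3), x => x + 1, y => y.length)); // 4
console.log(cases(inr<number, string>("hi"), x => x + 1, y => y.length)); // 2
```

Each printed result is one instance of the corresponding rule: fst(M,N) -> M, snd(M,N) -> N, and the two case reductions.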

Thanks to the fact that the rules for reduction are designed on the basis of the logical properties of the system, we have that the subject reduction theorem still holds in this extended language:

Theorem (Subject reduction) If M ->> N and M : T, then N : T.

Subject reduction is an important property because it allows us to do type checking statically (i.e. at compile time).

## Example: the disjoint union in ML

We give here an example which illustrates how we can use the datatype declaration in ML to construct new types. We define the disjoint union of two arbitrary types 'a and 'b, and the selector function case (we will call it my_case because "case" is a reserved word in ML).
```
- datatype ('a,'b) disjoint_union = inl of 'a | inr of 'b;

datatype ('a,'b) disjoint_union = inl of 'a | inr of 'b

- fun my_case((inl M),N,_) = N M
  | my_case((inr M),_,P) = P M;

val my_case = fn : ('a,'b) disjoint_union * ('a -> 'c) * ('b -> 'c) -> 'c
```
We can instantiate the parameters of the disjoint union so as to create specific types. For instance, we can define the type number as the disjoint union of integers and reals. Then, we can define the operation half : number -> number (division by two, either integer division or real division depending on the argument) by using the function my_case.
```
- type number = (int, real) disjoint_union;

type number = (int,real) disjoint_union

- val half : number -> number =
    fn x => my_case(x, fn y => inl(y div 2), fn z => inr(z / 2.0));

val half = fn : number -> number

- half (inl 3);

val it = inl 1 : number

- half (inr 3.0);

val it = inr 1.5 : number
```
Note: Some languages, like Pascal, offer the "disjoint union" type in the form of variant records. However, unlike the types above, variant records do not have rigorous (logically based) foundations. One consequence is that type checking on variant records cannot be done statically; it can only be done at run time.