## Exercise 1

Note that M N P = (N M) (M P) holds if M satisfies the following equality:
```
M = \np.(n M)(M p)
```
This latter equation is equivalent to the following fixpoint equation:
```
M = (\mnp.(n m)(m p)) M
```
Hence a solution can be obtained by applying the fixpoint operator:
```
M = Y (\mnp.(n m)(m p))
```
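To see the fixpoint trick at work in an executable setting, here is a small Python sketch using Z, the call-by-value variant of the fixpoint operator (Y itself loops under eager evaluation). The functional G and the factorial example are illustrative stand-ins, not part of the exercise:

```python
# Z: call-by-value fixpoint combinator, the eager analogue of Y
# (Y itself diverges under Python's eager evaluation)
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A hypothetical functional G; its fixpoint is the factorial function
G = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

fact = Z(G)      # fact = Z G satisfies fact = G fact, just as M = Y F solves M = F M
print(fact(5))   # 120
```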

## Exercise 2

It is sufficient to define
```
Iterate = \fzx.zfx
```
In fact, we then have
```
Iterate F [n] X = (\fzx.zfx) F [n] X = [n] F X = F^n X
```
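As an executable sanity check (a Python sketch; `church` and `succ` are illustrative encodings, not part of the exercise), Iterate F [n] X indeed applies F to X exactly n times:

```python
# Church numeral [n] = \f x. f^n x, built recursively for testing
church = lambda n: lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

# Iterate = \fzx.zfx, exactly the term defined above
Iterate = lambda f: lambda z: lambda x: z(f)(x)

succ = lambda k: k + 1              # an ordinary function to iterate
print(Iterate(succ)(church(4))(0))  # succ^4 0 = 4
```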

## Exercise 3

It is sufficient to define
```
[minus] = \xy.Iterate [Predecessor] y x
```
where Predecessor is the predecessor function defined in Lecture 4. In fact, we then have
```
[minus] [m] [n] = (\xy.Iterate [Predecessor] y x) [m] [n]
                = Iterate [Predecessor] [n] [m]
                = [Predecessor]^n [m]
```
and it is easy to see that
```
[Predecessor]^n [m] = [m - n] if m >= n
[Predecessor]^n [m] = [0]     if m < n
```
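The same check can be run in Python (a sketch; `church`, `to_int` and the concrete predecessor term are illustrative encodings): [minus] computes truncated subtraction.

```python
# Church numerals and helpers (as in Exercise 2; Iterate = \fzx.zfx)
church = lambda n: lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))
to_int = lambda c: c(lambda k: k + 1)(0)
Iterate = lambda f: lambda z: lambda x: z(f)(x)

# Predecessor on Church numerals (the standard term from Lecture 4):
# \n f x. n (\g h. h (g f)) (\u. x) (\u. u)
pred = lambda n: lambda f: lambda x: (
    n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)
)

# [minus] = \xy.Iterate [Predecessor] y x  -- truncated subtraction
minus = lambda x: lambda y: Iterate(pred)(y)(x)

print(to_int(minus(church(5))(church(2))))  # 5 - 2 = 3
print(to_int(minus(church(2))(church(5))))  # 0, since the result truncates at 0
```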

## Exercise 4

We can use the minimalization operator and define
```
smallest_prime_factor n = 2 + min k. n mod (k+2) = 0
```
where n mod (k+2) represents the remainder of the integer division of n by k+2. Note that the result of this function is guaranteed to be a prime number, because the smallest divisor of n greater than 1 is necessarily prime.

We can also simplify the computation by starting the iteration from 2 instead of 0:

```
smallest_prime_factor n = min k > 1. n mod k = 0
```
where min k > 1. n mod k = 0 represents the minimum number k greater than 1 (hence greater than or equal to 2) such that n mod k = 0.
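A direct Python transcription of this minimalization (a sketch, assuming n >= 2 so that the search always terminates):

```python
def smallest_prime_factor(n):
    """min k > 1 such that n mod k = 0 (assumes n >= 2)."""
    k = 2                  # start the search at 2 instead of 0
    while n % k != 0:      # unbounded minimalization on k
        k += 1
    return k

print(smallest_prime_factor(91))  # 7, since 91 = 7 * 13
print(smallest_prime_factor(97))  # 97: for a prime, the result is n itself
```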

Following the standard technique illustrated in Lecture 5, we can therefore define:

```
Spf = \z.(Y F) z [2]
```
where
```
F = \f z k. [if_then_else] ([is_zero] ([mod] z k)) k (f z ([S] k))
```
It remains to define the lambda term [mod]. Note that mod (in curried version) can be defined recursively as follows:
```
mod = \z k. if z < k then z else mod (z-k) k
```
and, in fixpoint version:
```
mod = (\f z k. if z < k then z else f (z-k) k) mod
```
From which, using the fixpoint operator, we can define:
```
[mod] = Y (\f z k. [if_then_else] ([smaller_than] z k) z (f ([minus] z k) k))
```
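The recursive definition of mod above translates directly into Python (a sketch for non-negative z and positive k):

```python
def mod(z, k):
    # mod = \z k. if z < k then z else mod (z-k) k
    return z if z < k else mod(z - k, k)

print(mod(17, 5))  # 2
print(mod(4, 5))   # 4: base case z < k
```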
We know already how to lambda define [minus] (Exercise 3). As for the smaller_than function, we have:
```
smaller_than = \z k. if k = 0
               then false
               else if z = 0
                    then true
                    else smaller_than (z-1) (k-1)
```
Hence we can define:
```
[smaller_than] = Y (\f z k. [if_then_else]
                       ([is_zero] k)
                       [false]
                       ([if_then_else]
                           ([is_zero] z)
                           [true]
                           (f ([Predecessor] z) ([Predecessor] k)))
                   )
```
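The smaller_than recursion can likewise be checked in Python (a sketch for non-negative integers, mirroring the simultaneous decrement by Predecessor):

```python
def smaller_than(z, k):
    if k == 0:
        return False      # no number is smaller than 0
    if z == 0:
        return True       # 0 < k whenever k > 0
    return smaller_than(z - 1, k - 1)  # decrement both, as with Predecessor

print(smaller_than(3, 5))  # True
print(smaller_than(5, 3))  # False
print(smaller_than(4, 4))  # False
```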

## Exercise 5

1. Not normalizing. In fact the only possible reductions are:
```
((\x.xxx)(\x.xx))(\yz.z) -> ((\x.xx)(\x.xx)(\x.xx))(\yz.z)
                         -> ((\x.xx)(\x.xx)(\x.xx))(\yz.z)
                         -> ...
```

2. Weakly normalizing, but not strongly normalizing. In fact we have the infinite reduction chain:
```
(\yz.z)((\x.xxx)(\x.xx)) -> (\yz.z)((\x.xx)(\x.xx)(\x.xx))
                         -> (\yz.z)((\x.xx)(\x.xx)(\x.xx))
                         -> ...
```
but also the following finite reduction chain:
```
(\yz.z)((\x.xxx)(\x.xx)) -> \z.z (normal form)
```

3. Strongly normalizing. In fact we have only one possible reduction chain, and it is finite:
```
(\yz.z)(\x.xxx)(\x.xx) -> (\z.z)(\x.xx) -> \x.xx (normal form)
```

## Exercise 6

1. False. For instance, take M = \x.xx. Then M is in normal form (hence strongly normalizing), but M M = (\x.xx)(\x.xx) is the term Omega (see Lecture 6), which we know is not normalizing.

2. True. In fact the only possible reduction chains from \x.M are those of the form
```
\x.M -> \x.M1 -> \x.M2 -> \x.M3 -> ...
```
which are obtained via the xi rule from reduction chains from M of the form:
```
M -> M1 -> M2 -> M3 -> ...
```
If the former were infinite, so would be the latter.

3. True. In fact for every reduction chain from N of the form
```
N -> N1 -> N2 -> N3 -> ...
```
we have, via the Congruence 1 rule, a corresponding reduction chain from M of the form
```
M = N P -> N1 P -> N2 P -> N3 P -> ...
```
If the former were infinite, so would be the latter.

We can prove in an analogous way that also P must be strongly normalizing.

## Exercise 7

1. Not possible. Y M cannot be strongly normalizing. In fact we always have the infinite reduction chain:
```
Y M = (\f.(\x.f(xx))(\x.f(xx))) M -> (\x.M(xx))(\x.M(xx))
                                  -> M((\x.M(xx))(\x.M(xx)))
                                  -> M(M((\x.M(xx))(\x.M(xx))))
                                  -> ...
```

2. Possible. For example, M = \xy.y satisfies the requirement. In fact we have:
```
Y M = (\f.(\x.f(xx))(\x.f(xx))) M -> (\x.M(xx))(\x.M(xx))
                                  -> M((\x.M(xx))(\x.M(xx)))
                                  =  (\xy.y)((\x.M(xx))(\x.M(xx)))
                                  -> \y.y (normal form)
```
So Y M is weakly normalizing. And clearly it is not strongly normalizing for the argument given in previous point.
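The same behaviour can be observed in Python with Z, the call-by-value variant of Y (an illustrative encoding: Python's eager evaluation requires the eta-expanded form): Z M immediately reaches the identity, because M discards its first argument and so never triggers the diverging self-application.

```python
# Z: call-by-value fixpoint combinator, the eager analogue of Y
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

M = lambda x: lambda y: y   # M = \xy.y discards its first argument

# Z M -> M (\v. ...) -> \y.y: the self-application is thrown away,
# so evaluation terminates
print(Z(M)("hello"))  # hello
```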

3. Possible. For example, M = \x.x satisfies the requirement. In fact contracting the redexes involving M does not help in this case:
```
Y M = (\f.(\x.f(xx))(\x.f(xx))) M -> (\x.M(xx))(\x.M(xx))
                                  -> M((\x.M(xx))(\x.M(xx)))
                                  =  (\x.x)((\x.M(xx))(\x.M(xx)))
                                  -> (\x.M(xx))(\x.M(xx))
                                  -> ...
```

## Exercise 8

1. Convertible. In fact both M and N have normal form [1].

2. Not convertible. In fact M has normal form [1] and N has normal form [0]. If they were convertible they would have the same normal form.