M = \np.(n M)(M p)

This latter equation is equivalent to the following fixpoint equation:
M = (\mnp.(n m)(m p)) M

Hence the solution can be obtained by applying the fixpoint operator:
M = Y (\mnp.(n m)(m p))
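This use of the fixpoint operator can be illustrated in Python (a sketch, not part of the lecture). Since Python is strict, the call-by-value variant Z of Y is used, with the self-application eta-expanded so it is not evaluated too early; as a hypothetical stand-in for the equation above, the example solves the factorial equation:

```python
# Call-by-value fixpoint combinator: Z = \f.(\x.f(\v.xxv))(\x.f(\v.xxv))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Any recursive equation g = F g is solved by g = Z F.
# Hypothetical illustration: the factorial equation fact = F fact.
F = lambda g: lambda n: 1 if n == 0 else n * g(n - 1)
fact = Z(F)

print(fact(5))  # 120
```

The eta-expansion `lambda v: x(x)(v)` delays the self-application `x(x)`, which under strict evaluation would otherwise loop before F is ever applied.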
Iterate = \fzx.zfx

In fact, we then have
Iterate F [n] X = (\fzx.zfx) F [n] X = [n] F X = F^n X
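This behaviour of Iterate can be checked with a small Python sketch (an assumption on my part; the lecture works in the pure lambda calculus):

```python
# Church numeral [n] = \f x. f^n x, built by recursion on n.
church = lambda n: lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

# Iterate = \f z x. z f x
Iterate = lambda f: lambda z: lambda x: z(f)(x)

# Iterate F [3] X applies F three times to X:
assert Iterate(lambda k: k + 1)(church(3))(0) == 3
assert Iterate(lambda s: s + "*")(church(4))("") == "****"
```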
[minus] = \xy.Iterate [Predecessor] y x

where Predecessor is the predecessor function defined in Lecture 4. In fact, we then have
[minus] [m] [n] = (\xy.Iterate [Predecessor] y x) [m] [n] = Iterate [Predecessor] [n] [m] = [Predecessor]^n [m]

and it is easy to see that
[Predecessor]^n [m] = [m - n]   if m >= n
[Predecessor]^n [m] = [0]       if m < n
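The definition of [minus] can be sketched in Python using the standard Church encodings (Predecessor is the usual "pair-shift" predecessor; the concrete encoding below is an assumption, since Lecture 4 is not reproduced here):

```python
church = lambda n: lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))
to_int = lambda n: n(lambda k: k + 1)(0)   # decode a Church numeral

# Predecessor = \n f x. n (\g h. h (g f)) (\u. x) (\u. u)
Pred = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

# [minus] = \x y. Iterate [Predecessor] y x = \x y. y [Predecessor] x
minus = lambda x: lambda y: y(Pred)(x)

assert to_int(minus(church(5))(church(2))) == 3
assert to_int(minus(church(2))(church(5))) == 0  # subtraction truncated at zero
```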
smallest_prime_factor n = 2 + min k. n mod (k+2) = 0

where n mod (k+2) represents the remainder of the integer division of n by k+2. Note that the result of this function is guaranteed to be a prime number: the minimum divisor of n greater than 1 must itself be prime, since any proper factor of it would be a smaller divisor of n.
We can also simplify the computation by starting the iteration from 2 rather than from 0:
smallest_prime_factor n = min k > 1. n mod k = 0

where min k > 1. n mod k = 0 represents the minimum number k greater than 1 (hence greater than or equal to 2) such that n mod k = 0.
Following the standard technique illustrated in Lecture 5, we can therefore define:
Spf = \z.(Y F) z [2]

where
F = \f z k. [if_then_else] ([is_zero] ([mod] z k)) k (f z ([S] k))

It remains to define the lambda term [mod]. Note that mod (in curried version) can be defined recursively as follows:
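The structure of this fixpoint definition can be sketched in Python, using the call-by-value combinator Z in place of Y (Python is strict), and Python's built-in `%` and `== 0` in place of [mod] and [is_zero]:

```python
# Z = \f.(\x.f(\v.xxv))(\x.f(\v.xxv)), the strict-order fixpoint combinator
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# F = \f z k. if_then_else (is_zero (mod z k)) k (f z (S k))
F = lambda f: lambda z: lambda k: k if z % k == 0 else f(z)(k + 1)

# Spf = \z. (Z F) z [2]
Spf = lambda z: Z(F)(z)(2)

assert Spf(35) == 5
assert Spf(12) == 2
```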
mod = \z k. if z < k then z else mod (z-k) k

and, in fixpoint version:
mod = (\f z k. if z < k then z else f (z-k) k) mod

from which, using the fixpoint operator, we can define:
[mod] = Y (\f z k. [if_then_else] ([smaller_than] z k) z (f ([minus] z k) k))

We already know how to lambda define [minus] (Exercise 3). As for the smaller_than function, we have:
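The recursive scheme for mod can be sketched in Python by repeated subtraction, mirroring the lambda definition rather than encoding it:

```python
def mod(z, k):
    # mod z k = z            if z < k
    #         = mod (z-k) k  otherwise
    return z if z < k else mod(z - k, k)

assert mod(17, 5) == 2
assert mod(6, 3) == 0
```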
smaller_than = \z k. if k = 0 then false else if z = 0 then true else smaller_than (z-1)(k-1)

Hence we can define:
[smaller_than] = Y (\f z k. [if_then_else] ([is_zero] k) [false] ([if_then_else] ([is_zero] z) [true] (f ([Predecessor] z)([Predecessor] k))) )
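The same double-decrement scheme reads as follows in Python (a sketch over non-negative integers, mirroring the recursive definition rather than the Church-encoded term):

```python
def smaller_than(z, k):
    if k == 0:
        return False   # nothing is smaller than 0
    if z == 0:
        return True    # 0 is smaller than any k >= 1
    return smaller_than(z - 1, k - 1)

assert smaller_than(3, 5) is True
assert smaller_than(5, 3) is False
assert smaller_than(4, 4) is False
```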
((\x.xxx)(\x.xx))(\yz.z) -> ((\x.xx)(\x.xx)(\x.xx))(\yz.z) -> ((\x.xx)(\x.xx)(\x.xx))(\yz.z) -> ...
(\yz.z)((\x.xxx)(\x.xx)) -> (\yz.z)((\x.xx)(\x.xx)(\x.xx)) -> (\yz.z)((\x.xx)(\x.xx)(\x.xx)) -> ...

and also we have the following finite reduction chain:
(\yz.z)((\x.xxx)(\x.xx)) -> \z.z (normal form)
(\yz.z)(\x.xxx)(\x.xx) -> (\z.z)(\x.xx) -> (\x.xx) (normal form)
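The contrast between the two strategies can be mimicked in Python (a strict language) by passing the diverging term as an unevaluated thunk; this is only a sketch of the idea, not a faithful reduction-order semantics:

```python
K2 = lambda y: lambda z: z      # \y z. z  -- discards its first argument

def omega():
    # a thunk whose evaluation would loop forever, like (\x.xxx)(\x.xx)
    while True:
        pass

# Normal order never forces the discarded argument, so this terminates:
assert K2(omega)(42) == 42
# Applicative order would correspond to K2(omega())(42), which loops forever.
```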
\x.M -> \x.M1 -> \x.M2 -> \x.M3 -> ...

which are obtained via the xi rule from reduction chains from M of the form:
M -> M1 -> M2 -> M3 -> ...

If the latter were infinite, then so would be the former.
N -> N1 -> N2 -> N3 ->...we have, via the Congruence 1 rule, a corresponding reduction chain from M of the form
M = N P -> N1 P -> N2 P -> N3 P -> ...

If the former were infinite, then so would be the latter.
We can prove in an analogous way that also P must be strongly normalizing.
Y M = (\f.(\x.f(xx))(\x.f(xx))) M -> (\x.M(xx))(\x.M(xx)) -> M((\x.M(xx))(\x.M(xx))) -> M(M((\x.M(xx))(\x.M(xx)))) -> ...
Y M = (\f.(\x.f(xx))(\x.f(xx))) M -> (\x.M(xx))(\x.M(xx)) -> M((\x.M(xx))(\x.M(xx))) = (\xy.y)((\x.M(xx))(\x.M(xx))) -> \y.y (normal form)

So Y M is weakly normalizing. And clearly it is not strongly normalizing, by the argument given in the previous point.
Y M = (\f.(\x.f(xx))(\x.f(xx))) M -> (\x.M(xx))(\x.M(xx)) -> M((\x.M(xx))(\x.M(xx))) = (\x.x)((\x.M(xx))(\x.M(xx))) -> (\x.M(xx))(\x.M(xx)) -> ...