# News

Talk by Prof. A. Mitsos: «Global Optimization of Embedded Programs» 2017-03-08

A LIX alumnus breaks the SHA1 cryptographic algorithm 2017-02-24

Sémin'Ouvert : « Communicating and trusting formal proofs », by Dale Miller 2017-02-23

Sémin'Ouvert : « Graph degeneracy for citation networks exploration », by Michalis Vazirgiannis 2017-01-19

Talk by Emily Gunawan: «Cluster algebraic interpretation of infinite friezes» 2017-01-18

Talk by Christian Johansen: «Sculptures for Higher Dimensional Automata» 2017-01-12

Sémin'Ouvert : « Fast multiplication of integers » 2016-12-15

GT Combi: «Random generation of "interesting" non-deterministic automata» by Julien David 2016-12-14

Talk by Panayotis Mertikopoulos: «Learning under uncertainty in games with continuous action sets» 2016-12-09

# The LIX laboratory

The abbreviation LIX stands for *Laboratoire d'Informatique de l'École polytechnique* (Computer Science Laboratory of the École polytechnique); by French tradition, the letter X denotes the École polytechnique itself. LIX is a *mixed research unit* (UMR 7161) with two *tutelles* (supervising institutions): the École polytechnique and the French National Center for Scientific Research (CNRS). Within CNRS, LIX belongs to the INS2I division (the National Institute for the Sciences of Information and their Interactions).

# Sémin'Ouvert : « Communicating and trusting formal proofs », by Dale Miller

Faced with concerns over the correctness of safety-critical systems, we should be able to take comfort that some parts of them can be proved correct: that is, we should be able to trust proofs. However, the methods by which formal proofs are built today make trusting them difficult. While proof assistants are used to build formal proofs, those proofs are often locked to the technology behind that assistant. Formal proofs produced by one proof assistant usually cannot be checked and trusted by another proof assistant, nor even by a future version of the same assistant. Thus reproducibility, one of the components of the scientific method that ensures trust, is missing from proof assistants today. Building on recent developments in the theory of proof, I will present the Foundational Proof Certificate framework for defining the semantics of proof evidence. Since such definitions are executable, it is possible to build proof checkers that can check a wide range of formal proofs in both classical and intuitionistic versions of logic and arithmetic. In this scheme, the logic engine that executes such definitions is the only component that needs to be trusted. Since such a logic engine is based on well-known computational logic techniques, anyone can write one in their choice of programming language and hardware in order to build a checker they might trust. Possible consequences of employing this framework include much richer networking of existing provers, as well as enabling both globe-spanning libraries and marketplaces of proofs.
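The idea that only a small engine replaying proof evidence needs to be trusted can be illustrated with a toy example. The sketch below is not the FPC framework itself: it is a minimal Python checker, with hypothetical names, for one simple kind of certificate (a propositional resolution refutation). The certificate carries no authority of its own; the checker recomputes every inference.

```python
from typing import FrozenSet, List, Tuple

# A clause is a set of literals: a positive int is a variable,
# a negative int is its negation.
Clause = FrozenSet[int]

def check_refutation(axioms: List[Clause],
                     cert: List[Tuple[int, int, int]]) -> bool:
    """Replay a resolution certificate against the axioms.

    Each certificate step (i, j, pivot) names two previously available
    clauses and a pivot literal.  The checker recomputes the resolvent
    itself, so a bogus certificate is simply rejected.
    """
    derived: List[Clause] = list(axioms)
    for i, j, pivot in cert:
        if not (0 <= i < len(derived) and 0 <= j < len(derived)):
            return False  # step refers to a clause that does not exist
        c1, c2 = derived[i], derived[j]
        if pivot not in c1 or -pivot not in c2:
            return False  # pivot does not occur with opposite signs
        derived.append((c1 - {pivot}) | (c2 - {-pivot}))
    # the refutation succeeds iff the empty clause was derived
    return frozenset() in derived

# Refuting {p v q, not p, not q}: resolve on p, then on q.
unsat = [frozenset({1, 2}), frozenset({-1}), frozenset({-2})]
proof = [(0, 1, 1),   # yields {2} at index 3
         (3, 2, 2)]   # yields the empty clause
```

Here `check_refutation(unsat, proof)` returns `True`, while any tampered step makes it return `False`: trust rests entirely on these few lines, not on whichever prover produced the certificate.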