PhD positions on Formal Methods for the Safety of Artificial Intelligence
Context
LIX (Laboratoire d'Informatique de l'École Polytechnique) is a joint research unit with two supervisory institutions, École Polytechnique, a member of the Institut Polytechnique de Paris (a cluster of universities comprising École Polytechnique, Télécom Paris, ENSTA Paris, Télécom Sud Paris, and ENSAE), and the Centre National de la Recherche Scientifique (CNRS), and one partner, Inria Saclay, with shared buildings and mixed teams.
LIX is organized in five poles: “Computer Mathematics”, “Data Analytics and Machine Learning”, “Efficient and Secure Communications”, “Modeling, Simulation and Learning”, and “Proofs and Algorithms”. The Ph.D. student will be part of the Cosynus team in the “Proofs and Algorithms” pole. The members of the Cosynus team work on the semantics and static analysis of software systems (sequential, concurrent, or distributed), hybrid/control systems, and cyber-physical systems.
The Ph.D. student will benefit from the exciting environment of LIX, in particular the Computer Science Department of École Polytechnique (DIX), in which he or she will be able to give courses, and the Department of Computer Science, Data and Artificial Intelligence of Institut Polytechnique de Paris (IDIA). He or she will also interact with members of the SAIF project (Safe Artificial Intelligence through Formal Methods) of the French National Research Programme on Artificial Intelligence, PEPR IA.
Topic: Verification of probabilistic properties for AI
AI is now embedded in many everyday applications. More and more, we rely on neural networks to make decisions, even in critical situations such as control and motion planning for autonomous cars, and it is of primary importance to be able to verify their correct behavior.
For some applications, e.g. perception through classifiers in autonomous systems, we can only hope for probabilistic safety. Moreover, in real-world systems, precise models representative of the data are not always available: several probabilistic models may be plausible descriptions of a problem, or a probabilistic model may be known but with uncertain parameters. We therefore need to consider both probabilistic information and epistemic uncertainty. An approach for the quantitative verification of neural networks based on imprecise probabilities, or probability boxes, which capture both probabilistic and epistemic uncertainty by defining sets of probability distributions, was recently introduced in [1]. This approach provides quantitative measures of how likely the outputs of a neural network are to exhibit a certain behavior, given assumptions on the inputs specified as imprecise probabilities. By abstracting and propagating sets of probability distributions through the network, the probability of satisfying a property can be bounded, and the regions of the input space most likely to lead to a violation can be identified.
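To give a flavour of the computations involved, the following minimal Python sketch propagates a Dempster-Shafer discretization of an input probability box through a toy ReLU network using plain interval arithmetic, and derives lower/upper bounds (belief/plausibility) on the probability that the output exceeds a threshold. The network weights, input discretization, and threshold are invented for illustration; this is not the method of [1], which relies on more precise zonotopic abstractions.

    import numpy as np

    def interval_affine(lo, hi, W, b):
        """Interval propagation through an affine layer y = W x + b."""
        center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
        c = W @ center + b
        r = np.abs(W) @ radius
        return c - r, c + r

    def interval_relu(lo, hi):
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    # Input p-box discretized into focal elements: (interval, mass) pairs;
    # here a single scalar input with mass 1/4 on each quarter of [0, 1].
    focal = [((np.array([i / 4.0]), np.array([(i + 1) / 4.0])), 0.25)
             for i in range(4)]

    # Tiny 1-2-1 ReLU network with invented weights (illustrative only).
    W1, b1 = np.array([[2.0], [-1.0]]), np.array([0.1, 0.2])
    W2, b2 = np.array([[1.0, 1.0]]), np.array([-0.3])

    threshold = 0.5
    belief, plausibility = 0.0, 0.0  # bounds on P(output > threshold)
    for (lo, hi), mass in focal:
        lo, hi = interval_affine(lo, hi, W1, b1)
        lo, hi = interval_relu(lo, hi)
        lo, hi = interval_affine(lo, hi, W2, b2)
        if lo[0] > threshold:   # the whole focal element satisfies the property
            belief += mass
        if hi[0] > threshold:   # the focal element may satisfy the property
            plausibility += mass

    print(f"P(output > {threshold}) lies in [{belief:.2f}, {plausibility:.2f}]")

The width of the resulting probability interval reflects both the coarseness of the input discretization and the imprecision of the interval abstraction; tighter set-based abstractions, such as zonotopes, narrow it.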
The approach of [1] proved to be both more general and more computationally efficient than the state of the art. However, many challenges remain before it can be applied to real-world problems. As a starting point, the following extensions can be studied:
- the current abstraction of probability boxes relies on a constant-stepsize staircase discretization, which will hardly scale with the input dimension of the network; we would like to investigate other abstractions (see the sketch after this list);
- independence is currently assumed between the inputs of the network; we would also like to handle multivariate input distributions, for instance using copulas [4] as in, e.g., [2];
- application to the safety of autonomous systems, such as the robustness of perception and decision making in drone vision, with imprecise probabilistic information on trajectories;
- contribution to fairness analysis and, more generally, to the explainability of the network's behavior [3].
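As an illustration of the staircase discretization mentioned in the first item above, the following hypothetical sketch builds constant-mass focal elements from a probability box obtained by letting the mean of a Gaussian range over an interval; all parameters are invented for the example.

    import numpy as np
    from scipy.stats import norm

    def staircase_focal_elements(n, mu_lo=0.0, mu_hi=0.5, sigma=1.0):
        """Outer discretization of the p-box {N(mu, sigma) : mu in [mu_lo, mu_hi]}.

        Focal element i covers probability levels [i/n, (i+1)/n]; its left end
        is the quantile of the *upper* CDF bound (mean mu_lo) at level i/n, its
        right end the quantile of the *lower* CDF bound (mean mu_hi) at level
        (i+1)/n, so each element carries mass 1/n.
        """
        eps = 1e-6  # avoid infinite quantiles at levels 0 and 1
        levels = np.linspace(0.0, 1.0, n + 1)
        los = norm.ppf(np.clip(levels[:-1], eps, 1 - eps), loc=mu_lo, scale=sigma)
        his = norm.ppf(np.clip(levels[1:], eps, 1 - eps), loc=mu_hi, scale=sigma)
        return [((lo, hi), 1.0 / n) for lo, hi in zip(los, his)]

    for (lo, hi), mass in staircase_focal_elements(5):
        print(f"[{lo:+.3f}, {hi:+.3f}]  mass={mass:.2f}")

Each such univariate structure must then be combined across input dimensions, which is where the independence assumption, and the copula-based extension mentioned above, come into play.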
[1] Eric Goubault and Sylvie Putot. A zonotopic Dempster-Shafer approach to the quantitative verification of neural networks. In Formal Methods: 26th International Symposium, FM 2024, Milan, Italy, September 9–13, 2024, Proceedings, Part I, pages 324–342, Berlin, Heidelberg, 2024. Springer-Verlag.
[2] Ander Gray, Marcelo Forets, Christian Schilling, Scott Ferson, and Luis Benet. Verified propagation of imprecise probabilities in non-linear ODEs. International Journal of Approximate Reasoning, 164, 2024.
[3] Rabia Saleem, Bo Yuan, Fatih Kurugollu, Ashiq Anjum, and Lu Liu. Explaining deep neural networks: A survey on the global interpretation methods. Neurocomputing, 513:165–180, 2022.
[4] Bernhard Schmelzer. Random sets, copulas and related sets of probability measures. International Journal of Approximate Reasoning, 160:108952, 2023.
Competences
Candidates should have a strong background in computer science with a particular focus on formal methods and artificial intelligence.
The working languages are English and French.
Conditions
Gross salary of between 2200 and 2300 euros per month, plus an extra 232 euros in case of teaching (64 hours per year).
Extra perks include full social security coverage, partial reimbursement of public transport costs and subsidized meals.
How to apply
The application should contain a cover letter, an academic CV with transcripts, and the contact information of two academic references (name, e-mail, and connection to the applicant), and should be sent by e-mail to sylvie.putot@polytechnique.edu and eric.goubault@polytechnique.edu.
Contacts and general information
- Keywords: Formal methods, artificial intelligence, robustness and explainability, set-based methods, imprecise probabilities
- Laboratory: Laboratoire d'Informatique de l'École Polytechnique (LIX), Institut Polytechnique de Paris, France
- Starting date: 2025-10-01
- Duration: 3 years
- Application deadline: 2025-06-25
- Team: Cosynus
- PhD advisors: Eric Goubault and Sylvie Putot