PhD positions on Formal Methods for the Safety of Artificial Intelligence
Context
LIX (Laboratoire d'Informatique de l'École Polytechnique) is a joint research unit with two supervisory institutions, École Polytechnique, a member of the Institut Polytechnique de Paris (a cluster of schools composed of École Polytechnique, Télécom Paris, ENSTA Paris, Télécom SudParis, and ENSAE), and the Centre National de la Recherche Scientifique (CNRS), and one partner, Inria Saclay, with shared buildings and mixed teams.
LIX is organized in five poles: “Computer Mathematics”, “Data Analytics and Machine Learning”, “Efficient and Secure Communications”, “Modeling, Simulation and Learning”, and “Proofs and Algorithms”. The Ph.D. student will be part of the Cosynus team in the “Proofs and Algorithms” pole. The members of the Cosynus team work on the semantics and static analysis of software systems (sequential, concurrent, or distributed), as well as hybrid/control systems and cyber-physical systems.
The Ph.D. student will benefit from the exciting environment of LIX, in particular the Computer Science Department of École Polytechnique (DIX), in which he or she will be able to teach, and the Department of Computer Science, Data and Artificial Intelligence of Institut Polytechnique de Paris (IDIA). He or she will also interact with the members of the SAIF project (Safe Artificial Intelligence through Formal Methods) of the French National Research Programme on Artificial Intelligence (PEPR IA).
1st position: Probabilistic analysis of neural networks
AI is now embedded in a number of everyday applications. More and more, we depend on neural networks to make decisions, even in critical settings such as control and motion planning for autonomous cars, and it is of primary importance to be able to verify their correct behavior. But for some applications, e.g. perception in autonomous systems through classifiers, we can only hope for weaker properties such as probabilistic safety.
The goal of this Ph.D. proposal is to provide quantitative measures of how likely a neural network classifier, or a neural network used in a control system, is to exhibit a certain behavior. A classical property for such classifiers is some form of robustness: for instance, if an input image is slightly perturbed, the classifier should classify the perturbed image the same way as the original one. The aim of this work is to tackle more general properties than local robustness, providing some explainability of the network behavior [RSBY+22].
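For concreteness, local robustness and a probabilistic relaxation of it can be stated as follows (standard formulations, not a specification fixed by the project; f is the network, cl the predicted class, and the perturbation model is generic):

    % Local robustness around an input x_0, for perturbations of radius \varepsilon:
    \forall x.\; \|x - x_0\|_\infty \le \varepsilon \;\Longrightarrow\; \mathrm{cl}(f(x)) = \mathrm{cl}(f(x_0))
    % Probabilistic safety: under an (imprecisely known) perturbation distribution P,
    % misclassification has probability at most \alpha:
    \Pr_{x \sim P}\big[\mathrm{cl}(f(x)) \neq \mathrm{cl}(f(x_0))\big] \le \alpha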
The techniques explored in the first part of the thesis will be based on imprecise probabilities, generalizing probabilistic and epistemic uncertainties (p-boxes and Dempster-Shafer structures [AAOB+13], polynomial forms [SSYC+20]), in the style of existing work such as [GFS+24] (for hybrid systems) and [EGSP22] (for neural-network-based control systems). As a first step, this requires defining good models to efficiently handle and propagate correlations in the Dempster-Shafer framework, possibly using copulas [Sch23].
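As a purely illustrative sketch (our own toy example, not code from the project), the following Python fragment propagates a one-dimensional Dempster-Shafer structure, i.e. a finite list of focal intervals with masses, through a function, and derives lower/upper probability bounds (belief/plausibility) for an output event; the function, intervals, and masses are made up:

    # Toy propagation of a Dempster-Shafer structure through f(x) = x^2 - x.

    def f_interval(lo, hi):
        # Interval extension of f by dense sampling: good enough for a toy
        # example; a real analyzer would use a sound interval/zonotope extension.
        xs = [lo + (hi - lo) * k / 1000 for k in range(1001)]
        vals = [x * x - x for x in xs]
        return min(vals), max(vals)

    def propagate(ds):
        # Map each focal interval through f; masses are preserved.
        return [(*f_interval(lo, hi), m) for lo, hi, m in ds]

    def belief(ds, a, b):
        # Lower probability of the event [a, b]: mass of focal sets included in it.
        return sum(m for lo, hi, m in ds if a <= lo and hi <= b)

    def plausibility(ds, a, b):
        # Upper probability of [a, b]: mass of focal sets intersecting it.
        return sum(m for lo, hi, m in ds if hi >= a and lo <= b)

    # Input uncertainty: x in [0, 2], split into four focal intervals of mass 0.25.
    ds_in = [(0.0, 0.5, 0.25), (0.5, 1.0, 0.25), (1.0, 1.5, 0.25), (1.5, 2.0, 0.25)]
    ds_out = propagate(ds_in)
    print(belief(ds_out, -1.0, 1.0), plausibility(ds_out, -1.0, 1.0))  # 0.75 1.0

Handling correlations between several such structures, rather than assuming independence, is precisely where copulas enter the picture.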
[EGSP22] Eric Goubault, Sylvie Putot. “RINO: Robust INner and Outer Approximated Reachability of Neural Networks Controlled Systems”. CAV 2022.
[GFS+24] Ander Gray, Marcelo Forets, Christian Schilling, Scott Ferson, Luis Benet. “Verified Propagation of Imprecise Probabilities in Non-linear ODEs”. International Journal of Approximate Reasoning, 2024.
[RSBY+22] Rabia Saleem, Bo Yuan, Fatih Kurugollu, Ashiq Anjum, Lu Liu. “Explaining Deep Neural Networks: A Survey on the Global Interpretation Methods”. Neurocomputing, 2022.
[Sch23] Bernhard Schmelzer. “Random Sets, Copulas and Related Sets of Probability Measures”. International Journal of Approximate Reasoning, 2023.
[SSYC+20] Sriram Sankaranarayanan, Yi Chou, Eric Goubault, Sylvie Putot. “Reasoning about Uncertainties in Discrete-Time Dynamical Systems Using Polynomial Forms”. NeurIPS 2020.
2nd position: Set-based verification of quantified properties of neural networks
Abstraction-based safety verification for neural networks has received considerable attention recently, in particular range analysis of neural networks using polyhedral abstractions (e.g. DeepZ and DeepPoly [SGMPV2018,SGPV2019]) and reachability analysis of closed-loop systems with neural network controllers (e.g. NNV [TYMMN+2020]).
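To make the idea of range analysis concrete, here is a minimal sketch (ours, with made-up weights; interval propagation is the coarsest such abstraction, which DeepPoly and DeepZ refine) that bounds the output range of a tiny ReLU network over an input box:

    # Interval bound propagation through a 2-layer ReLU network.

    def affine_bounds(W, b, lows, highs):
        # Sound interval bounds for W x + b when each x[j] lies in [lows[j], highs[j]].
        out_lo, out_hi = [], []
        for row, bi in zip(W, b):
            lo = hi = bi
            for w, l, h in zip(row, lows, highs):
                lo += w * l if w >= 0 else w * h
                hi += w * h if w >= 0 else w * l
            out_lo.append(lo)
            out_hi.append(hi)
        return out_lo, out_hi

    def relu_bounds(lows, highs):
        # ReLU is monotone, so it maps interval bounds componentwise.
        return [max(0.0, l) for l in lows], [max(0.0, h) for h in highs]

    # Hypothetical weights for a 2-2-1 network; input box [-0.1, 0.1]^2.
    W1, b1 = [[1.0, -2.0], [0.5, 1.0]], [0.1, -0.2]
    W2, b2 = [[1.0, 1.0]], [0.0]
    lo, hi = affine_bounds(W1, b1, [-0.1, -0.1], [0.1, 0.1])
    lo, hi = relu_bounds(lo, hi)
    lo, hi = affine_bounds(W2, b2, lo, hi)
    print(lo, hi)  # a guaranteed enclosure of the network's output over the input box

DeepPoly and DeepZ replace these componentwise intervals with relational domains (polyhedral and zonotopic, respectively) that keep track of dependencies between neurons and thus compute much tighter bounds.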
The objective of the thesis is to design sound abstractions to prove or falsify generalized robustness properties of neural-network-enabled systems. This includes in particular temporal properties of neural-network-controlled cyber-physical systems and hyperproperties of neural networks, with applications in e.g. fairness analysis, or in comparing the quality of neural networks obtained by sparsification and quantization to the baseline.
The tractable inner and outer approximations of ranges of functions and neural networks proposed in [GP2020,GP2022] are a building block for proving very general quantified reachability problems [GP2024]. These constitute a basis from which the Ph.D. student is expected to design new set-based methods to tackle these difficult properties of neural-network-controlled systems.
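The complementary roles of the two approximations can be summarized as follows (standard set-based reasoning, with f(X) the exact reachable set and B a set of bad outputs):

    % Outer approximation O \supseteq f(X), inner approximation I \subseteq f(X):
    O \cap B = \emptyset \;\Longrightarrow\; f(X) \cap B = \emptyset \quad \text{(safety is proved)}
    I \cap B \neq \emptyset \;\Longrightarrow\; f(X) \cap B \neq \emptyset \quad \text{(a violation surely exists)}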
Another possible starting point is to explore classes of polyhedra such as tropical abstractions [GPPRS2021], which are well suited to abstracting ReLU-activated neural networks.
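The reason tropical polyhedra fit ReLU networks is that ReLU is an affine map in the max-plus (tropical) semiring, an observation underlying [GPPRS2021]:

    % In the max-plus semiring (\mathbb{R} \cup \{-\infty\}, \oplus, \otimes),
    % where a \oplus b = \max(a, b) and a \otimes b = a + b:
    \mathrm{ReLU}(x) = \max(x, 0) = x \oplus 0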
[GP2022] Eric Goubault, Sylvie Putot: RINO: Robust INner and Outer Approximated Reachability of Neural Networks Controlled Systems. CAV (1) 2022: 511-523
[GPPRS2021] Eric Goubault, Sébastien Palumby, Sylvie Putot, Louis Rustenholz, Sriram Sankaranarayanan: Static Analysis of ReLU Neural Networks with Tropical Polyhedra. SAS 2021: 166-190
[TYMMN+2020] Tran, H.D., Yang, X., Manzanas Lopez, D., Musau, P., Nguyen, L.V., Xiang, W., Bak, S., Johnson, T.T.: NNV: The neural network verification tool for deep neural networks and learning-enabled cyber-physical systems. In: Lahiri, S.K., Wang, C. (eds.) Computer Aided Verification. pp. 3–17 (2020)
[SGMPV2018] Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, and Martin Vechev: Fast and Effective Robustness Certification. NeurIPS 2018.
[SGPV2019] Gagandeep Singh, Timon Gehr, Markus Püschel, and Martin Vechev: An Abstract Domain for Certifying Neural Networks. POPL 2019.
[GP2024] Eric Goubault, Sylvie Putot: Inner and Outer Approximate Quantifier Elimination for General Reachability Problems. HSCC 2024.
Competences
Candidates should have a strong background in computer science, with an interest in formal methods, specifically abstract interpretation and set-based methods, in artificial intelligence, and possibly in their application to cyber-physical and control systems.
The working languages are English and French.
Conditions
Gross salary between 2100 and 2300 euros per month, plus 232 euros per month in case of teaching (64 hours per year).
Extra perks include full social security coverage, partial reimbursement of public transport costs and subsidized meals.
How to apply
The application should contain a motivation letter, an academic CV with transcripts, and the contact information of two academic references (name, e-mail, and connection to the applicant), and should be sent by e-mail to sylvie.putot@polytechnique.edu and eric.goubault@polytechnique.edu.
Contacts and general information
- Keywords: Formal methods, artificial intelligence, robustness and explainability, set-based methods, cyber-physical systems
- Laboratory: Laboratoire d'Informatique de l'École Polytechnique (LIX), Institut Polytechnique de Paris, France
- Starting date: 2024-10-01
- Duration: 3 years
- Application deadline: 2024-06-25
- Team: Cosynus
- PhD advisors: Eric Goubault and Sylvie Putot