Seminar: Ethical issues, law & novel applications of AI (MIE 630)

On Tuesdays, 2pm-3:30pm, Room Gilles Kahn or Sophie Germain, Alan Turing Building.

  • October 2 - Toward Responsible and Safe AI, Nozha Boujemaa, Research Director at Inria, director of the DATAIA institute (Data Sciences, Artificial Intelligence & Society) (room Sophie Germain).

Abstract: AI technologies are essential for data-driven innovation and digital transformation in many socio-economic sectors. Scientific progress in AI and its deployment are both accelerating sharply. However, several scientific and social challenges remain. In this presentation, I will focus on the challenges we face in developing responsible AI centered on human values.

Bio: Nozha Boujemaa is a Senior Research Scientist at Inria (the French National Institute for computer science and applied mathematics) and Director of the DATAIA Institute (Data Sciences, Artificial Intelligence & Society). She is a member of the Board of Directors of the Big Data Value Association, Vice-Chair of the European Commission's High-Level Expert Group on AI, and a member of the OECD AI expert group. Nozha is International Advisor for the Japanese Science and Technology Agency program “Advanced Core Technologies for Big Data Integration”, Senior Scientific Advisor for “The AI Initiative“ (Harvard Kennedy School), and President of the Scientific Council of the Institute of Technological Research “SystemX”. She is a Knight of the French National Order of Merit. Her domains of expertise are machine learning, unsupervised and semi-supervised learning, large-scale multimedia retrieval, content search, personalization, pattern and object recognition, computer vision and image analysis, and the transparency and accountability of data and algorithms, covering several application domains. Nozha Boujemaa is the co-author of over 150 international publications and has supervised over 25 PhD students. Previously, she was Advisor to the Chairman and CEO of Inria on Data Science with a focus on socio-economic impact, Director of the Inria Saclay Research Center for five years (2010-2015), and Scientific Head of the IMEDIA research group (Large Scale Multimedia Content Search) for more than ten years.

  • October 16 - From artificial intelligence to computational ethics, Jean-Gabriel Ganascia, Sorbonne University, Lip6, Chairman of the COMETS (CNRS Ethical Committee) (room Sophie Germain)

Abstract: With the development of artificial intelligence, it is now possible to design agents that are said to be autonomous, in the sense that their behavior results from a chain of physical causalities running from signal acquisition by sensors to action, without any human contribution. Such agents have many possible applications, for instance in transportation, with autonomous cars, or in war, with autonomous weapons. Since no human is present in the loop, many fear that the robots animated by such agents could behave in predatory ways. To prevent unsafe behaviors, references to human values have to be included in the agent's programming. More technically, this means that engineers are now designing the equivalent of an “ethical controller” that restricts the robot's actions according to moral criteria. To do so, it is necessary to encode different ethical systems, which gives birth to what is called “Computational Ethics”. In light of the autonomous car accident that took place in March 2018 in Arizona, we shall detail the different ethical dimensions that such a controller has to satisfy and the technical difficulties facing the artificial intelligence researchers who deal with computational ethics.
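To make the idea of an “ethical controller” concrete, here is a toy sketch, not any system described in the talk: all function names, rules, and numbers below are illustrative assumptions. Candidate actions are first filtered through hard (deontological) constraints, and only the permitted survivors are ranked by utility.

```python
# Toy "ethical controller": filter a robot's candidate actions through
# explicit moral constraints before choosing among the survivors.

def violates_constraints(action, constraints):
    """An action is forbidden if any hard (deontological) constraint rejects it."""
    return any(rule(action) for rule in constraints)

def ethical_controller(candidate_actions, constraints, utility):
    """Return the permitted action with the highest utility, or None."""
    permitted = [a for a in candidate_actions if not violates_constraints(a, constraints)]
    if not permitted:
        return None  # no morally acceptable action: defer to a human
    return max(permitted, key=utility)

# Hypothetical autonomous-car example: never harm pedestrians,
# then prefer the action that minimizes expected material damage.
actions = [
    {"name": "brake", "harms_pedestrian": False, "expected_damage": 0.2},
    {"name": "swerve", "harms_pedestrian": True, "expected_damage": 0.1},
    {"name": "continue", "harms_pedestrian": False, "expected_damage": 0.8},
]
constraints = [lambda a: a["harms_pedestrian"]]
utility = lambda a: -a["expected_damage"]
print(ethical_controller(actions, constraints, utility)["name"])  # brake
```

Note that "swerve" has the lowest expected damage but is vetoed outright: in this design the constraints are not traded off against utility, which is one of the ethical dimensions such a controller must make explicit.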

Bio: Jean-Gabriel Ganascia is Professor of Computer Science at Sorbonne University, a senior member of the Institut Universitaire de France, a fellow of EurAI (the European Association for Artificial Intelligence), and a member of LIP6 (the computer science laboratory of Paris 6), where he heads the ACASA team. In addition, he chairs COMETS, the Ethics Committee of the CNRS, and is a member of CERNA, the Ethics Committee for the Digital Sciences of ALLISTENE, the coordinating body of the French research institutes in computing.

  • October 23 - Facial recognition: from early methods to deep learning, Stéphane Gentric, Research unit manager, IDEMIA (room Gilles Kahn)

Abstract: 2D face recognition is one of the oldest computer vision applications. As techniques improve, more complex databases arise, always leaving room for algorithmic improvements. This lecture will review the whole face recognition pipeline, how early methods addressed the main issues, and how deep learning handles them now. We will present major operational deployments and the most recent performance results. Finally, we will discuss current limitations and future research avenues.
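As a rough illustration of the final stage of the pipeline the lecture covers: a modern deep-learning system maps each detected, aligned face crop to a fixed-length embedding vector, and two faces are declared a match when their embeddings are close enough. The embeddings, dimensions, and threshold below are made-up placeholders, not IDEMIA's system.

```python
# Sketch of the matching step in a deep face-recognition pipeline:
# compare two embedding vectors by cosine similarity against a threshold.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def same_identity(emb_a, emb_b, threshold=0.6):
    """Decide 'same person' when cosine similarity exceeds an operating threshold.

    The threshold is tuned on a validation set to trade false accepts
    against false rejects; 0.6 here is an arbitrary placeholder."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Two hypothetical 4-dimensional embeddings (real systems use hundreds of dims).
probe = [0.9, 0.1, 0.3, 0.2]
gallery = [0.8, 0.2, 0.4, 0.1]
print(same_identity(probe, gallery))  # True
```

Operational deployments differ mainly in how the threshold is chosen, since border crossing and benchmark settings tolerate very different error rates.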

Bio: Stéphane Gentric is Research Unit Manager at Idemia (formerly Morpho) and a deep learning lecturer at ESIEA and Télécom ParisTech. He received his PhD in pattern recognition from UPMC in 1999. As principal researcher and then team leader, he worked on fingerprint recognition algorithms, then face, then iris, and now also video analytics. As Senior Expert, he has been involved in most of Idemia’s biometrics projects over the past 15 years, such as the Changi border-crossing system, NIST benchmarks, and the UIDAI project. His current research interests center on pattern recognition for the improvement of biometric systems.

  • November 13 - Google's AI principles, Ludovic Peran, Google Paris (room Sophie Germain)

Abstract: The spread of powerful AI-based technologies in recent years contributes to solving important problems, helps promote innovation, and furthers Google’s mission to organize the world’s information. But these same technologies also raise important challenges when it comes to fairness, safety, privacy, and accountability. Google is committed to addressing these challenges clearly, thoughtfully, and affirmatively. This class will present Google’s core AI principles in detail, working through the tensions that arise between them. We will also detail our processes, tools, and initiatives in the new and still exploratory field of responsible AI development.

Bio: Ludovic is a Public Policy Manager at Google, specifically in charge of artificial intelligence policy issues. He is a member of the OECD AI expert groups and a board member of the think tank Renaissance Numérique. Ludovic also teaches digital economics at ESCP Business School and takes part in the work of various other think tanks on the impact of technology on society. He received a Master of Science in quantitative economics from the Paris School of Economics and a Master in Management from ESCP Europe.

  • November 20 - Unsupervised Learning on Homogeneous Manifolds, Lie groups and Structured Matrices based on Information Geometry and Souriau Lie Group Thermodynamics, Frederic Barbaresco, THALES (room Sophie Germain)

Abstract: This talk presents new applications based on extensions of machine learning methods to data on homogeneous manifolds, Lie groups, and structured matrices, grounded in information geometry, statistics on metric spaces, and Souriau's Lie group thermodynamics, and illustrated with radar applications. Information geometry has its origins in Maurice Fréchet’s work and his Clairaut equation. In this geometry, the metric is given by the Fisher matrix. Fisher's metric is a special case of the Hessian metrics whose structures were studied by the mathematician Jean-Louis Koszul. Jean-Marie Souriau extended statistical mechanics to dynamical systems by generalizing the notion of covariant Gibbs state related to the Hamiltonian actions of a Lie group on a symplectic manifold. Extending Koszul's model, we discovered that Souriau's model makes it possible to generalize information geometry to homogeneous manifolds by introducing a universal Souriau-Fisher metric, invariant under the action of the group acting on the manifold.
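For reference, the Fisher metric mentioned in the abstract is the standard Riemannian metric on a parametric family of probability distributions $p(x \mid \theta)$, defined componentwise by:

```latex
g_{ij}(\theta) \;=\;
\mathbb{E}_{x \sim p(\cdot \mid \theta)}
\left[
  \frac{\partial \log p(x \mid \theta)}{\partial \theta^{i}}\,
  \frac{\partial \log p(x \mid \theta)}{\partial \theta^{j}}
\right].
```

When $\log p(x \mid \theta)$ admits a potential function, this matrix is a Hessian, which is the special-case structure studied by Koszul that the talk builds on.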

Bio: Frédéric Barbaresco is the representative on the Key Technology Domain PCC (Processing, Control & Cognition) Board for the Global Business Unit THALES LAND & AIR SYSTEMS. He is also Senior Expert at the Advanced Radar Concepts Business Unit of the THALES SURFACE RADAR Business Line. Frédéric Barbaresco is Co-Chairman of the GSI “Geometric Science of Information” conference series (https://www.see.asso.fr/GSI2019) and editor of the MDPI Entropy books “Differential Geometrical Theory of Statistics” (https://www.mdpi.com/books/pdfview/book/313) and “Information, Entropy and Their Geometric Structures” (https://www.mdpi.com/books/pdfview/book/127). He is an SEE Emeritus Member and President of the SEE ISIC Club (Ingénierie des Systèmes d’Information et de Communications). He was awarded the 2014 Aymé Poirson Prize of the French Academy of Sciences for the application of science to industry, the SEE Ampère Medal in 2009, and the NATO Best Lecture Award in 2010.

master_intelligence_artificielle_ecole_polytechnique.pdf

  • November 27 - Law and ethics of autonomous robots, Nathalie Nevejans, Lecturer in Law, University of Artois (France) (room Gilles Kahn).

Abstract:

In recent years, progress in robotics and artificial intelligence has been prodigious. Civil robotics (surgical robots, industrial robots, robots for the elderly, service robots, …) and military robotics (war robots, war drones, …) alike are renewing the debates. The development of autonomous robotics will have an important impact in economic, social, legal, and ethical terms.

Autonomous robotics has drawn the attention of the European legislator, as shown by the European Parliament Resolution on Civil Law Rules on Robotics of 16 February 2017. However, this text, which has no binding force, raises more difficulties than it solves. Indeed, it tends to misrepresent the state of the art in robotics and to adopt a vision tinged with science fiction.

Limiting our reflection to the law and ethics of civil robots, we find that they raise several very delicate difficulties, both legal and ethical, notably: Should we grant legal status to autonomous robots? How do we determine who is liable for damage caused by a robot? How will ethical issues affect civil society as a whole when autonomous robots are used?

These debates can no longer be avoided, because the European Commission is expected to adopt a European directive on autonomous robotics in 2019, which will be binding in the European Union. It is therefore essential that all these difficulties be understood now, because of their inevitable impact on society and on human beings themselves.

Bio: Nathalie Nevejans is a lecturer in private law at the University of Artois (France), holds the habilitation to direct research, and is a member of the CNRS Ethics Committee (COMETS). The author of numerous articles and a speaker at events for both academia and industry, she is one of the few specialists in France on the law and ethics of robotics, artificial intelligence, and emerging technologies. She is also a member of the Research Centre for Law, Ethics and Procedures (EA n° 2471), as well as the Institute for the Study of Human-Robot Relations (Étude des Relations Hommes-Robots, IERHR). Her book “Treatise of Law and Ethics of Civil Robotics” (LEH Éditions, 1232 pages) was published in 2017.

1-law_and_ethics_of_autonomous_robots_polytechniques_paris_27_novembre_2018.pdf

2-civil_law_rules_on_robotics_2017.pdf

3-study_n._nevejans_european_civil_law_rules_on_robotics_2016.pdf

4-draft_report_on_civil_law_rules_on_robotics_2016.pdf

  • December 4 - Machine learning objectives for individual fairness, Nicolas Usunier, Facebook (room Sophie Germain)

Abstract: Machine learning is increasingly used for decision making in scenarios where the decisions have a high impact on individuals' lives or on society. Algorithmic fairness aims at assessing whether algorithms exhibit biases towards specific social groups, typically groups defined by gender or ethnicity and protected by anti-discrimination laws, and at preventing such biases. In this talk, I will review the main frameworks of algorithmic fairness and some existing approaches to learning unbiased decision functions. In particular, I will focus on the criterion called individual fairness and its variants. I will present a number of approaches to learning unbiased classifiers in this setting, building on a relationship between learning fair decision functions and domain adaptation, a field of machine learning that studies cases where the training data is not exactly representative of the target distribution.
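The individual-fairness criterion the talk focuses on is often formalized (following Dwork et al., "Fairness Through Awareness") as a Lipschitz condition: similar individuals, under a task-specific similarity metric, must receive similar decisions. The sketch below is an illustrative check of that condition; the metric, scores, and constant are assumptions, not anything from the talk.

```python
# Toy audit of individual fairness as a Lipschitz condition:
# |score(x) - score(y)| <= L * distance(x, y) for every pair of individuals.

def individually_fair(score, distance, individuals, lipschitz=1.0):
    """Return True if the scoring function satisfies the Lipschitz bound on all pairs."""
    for i, x in enumerate(individuals):
        for y in individuals[i + 1:]:
            if abs(score(x) - score(y)) > lipschitz * distance(x, y):
                return False
    return True

# Hypothetical 1-dimensional applicants scored by a credit model.
applicants = [0.2, 0.25, 0.8]
distance = lambda x, y: abs(x - y)

smooth_score = lambda x: 0.5 * x                       # close inputs -> close scores
threshold_score = lambda x: 1.0 if x > 0.22 else 0.0   # a cliff between near-twins

print(individually_fair(smooth_score, distance, applicants))     # True
print(individually_fair(threshold_score, distance, applicants))  # False
```

The hard-threshold model fails because two nearly identical applicants (0.2 and 0.25) land on opposite sides of the cliff, which is exactly the kind of behavior the criterion is designed to rule out.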

Bio: Nicolas Usunier joined Facebook as a Research Scientist in March 2015. He received his PhD in Machine Learning from Université Pierre et Marie Curie, in Paris, in 2006. He was an Associate Professor there until 2012, when he joined the Université de Technologie de Compiègne, in France, with a chair position from the “CNRS-Higher Education Chairs” program. His research interests include learning to rank, algorithms for tensor factorization, and learning with multiple objectives.

  • January 8 - AI@Inria (Research in Artificial Intelligence at Inria), Bertrand Braunschweig, Inria, director of Inria Saclay research center (room Sophie Germain).

Bio: Bertrand Braunschweig is an ENSIIE engineer, holds a PhD from Paris-Dauphine University and a Habilitation from the University of Paris VI. After a career as a researcher in system dynamics and artificial intelligence in the petroleum industry, he joined IFP Energies Nouvelles to lead its research activities in AI and to coordinate international projects defining interoperability standards for process modeling and simulation.

President of the French Association for Artificial Intelligence for four years, he joined the National Research Agency (ANR) in 2006 as head of several research programmes and, from January 2009, as head of ANR’s ICT department. He then served as director of the Inria Rennes - Bretagne Atlantique research centre for four years and as advisor to the president of Inria in the field of artificial intelligence, before becoming director of the Inria Saclay - Île-de-France research centre in early 2016. Since December 2018, he has been the coordinator of the research component of France’s national AI plan.

In 2016 he coordinated the preparation of Inria's white book on artificial intelligence. In early 2017, he was coordinator of the “Industrialisation of Research Results” working group of #FranceIA, the French national strategy in artificial intelligence. Inria's spokesperson on artificial intelligence and chairman of several AI project selection panels, he regularly speaks about AI in the media and at scientific events.

  • January 15 - Use of personal data and ethics: Vision & challenges from the industry, Sarah Lannes, Research engineer, IDEMIA (room Sophie Germain).
  • January 22 - From data protection to data empowerment: how can humans keep the upper hand? Geoffrey Delcroix, Technology and Innovation Directorate, CNIL (room Sophie Germain) POSTPONED.
  • January 29 - Fighting blindness with bionic eyes, Vincent Bismuth, Pixium Vision. (room Sophie Germain).
  • February 12 - AI for health: challenges and opportunities - Jean-Philippe Vert, Google Paris. (room Sophie Germain).

Abstract: The collection of large quantities of health data and progress in AI algorithms should give rise to exciting new applications in health, including better diagnosis and treatment decisions. In this talk I will illustrate, through a few examples, the opportunities and challenges in this field, which attracts huge scientific, societal, and economic interest.

Bio: Jean-Philippe Vert is a senior researcher at Google Brain, and adjunct researcher at MINES ParisTech. His work is at the interface of machine learning and computational biology, focusing in particular on applications in cancer research and precision medicine.

  • February 19 - Is ethics computable? - Milad Doueihi, philosopher specializing in digital sciences, Université Paris-Sorbonne (room Sophie Germain).

Abstract: The history of ethics and computing dates back at least to Norbert Wiener’s God and Golem, Inc. From “Moral Machines” and “Machine Ethics” to the current debates concerning various forms of “embedded ethics” and the impact of AI and data massification, it is perhaps pertinent to revisit the issue by examining the epistemological relations between computability and ethics, in order to postulate a shared set of properties that support their claims to a form of “universality”.

Bio: Milad Doueihi, an accidental digitician.

  • March 5 - Learning Prosthetics Design: Function, Shape, Style - François Faure, Anatoscope (room Sophie Germain).
  • March 12 - Gender issues in AI, chatbots & robots - Laurence Devillers, LIMSI & Paris-Sorbonne University. (room Sophie Germain).