
====== Seminar 2022/2023: Ethical issues, law & novel applications of AI (MIE 630) ======

On Tuesdays, 1:30 pm-3:00pm, Amphitheater Sophie Germain, in the X building (near main entrance/Receptions) or by Videoconference if necessary

To participate in the face-to-face seminars, please register by email: veronique.steyer@polytechnique.edu. The number of participants is limited due to sanitary conditions.

Seminars are mandatory for students of the Master of Science and Technology in Artificial Intelligence and advanced Visual Computing (Master 2nd year).

September 20 - Ethical issues of digital and artificial intelligence - Difference between ethics, regulation, norms, standards and deontology - Jean-Gabriel Ganascia, Sorbonne University

Abstract: After a general introduction to ethics and some examples of moral rules being violated in today's world through the use of AI, this conference will be organized around four major concepts that take on a particular dimension in artificial intelligence applications: autonomy, justice (and equity, though we will see that they are almost the same thing, and that everything lies in the “almost”…), privacy (distinguishing this concept from those of the private sphere, intimacy and “extimacy”), and finally transparency and explicability. As we will see through illustrations of current AI applications, these four pillars of bioethics have been misleadingly reused by AI and digital ethics committees.

Bio: Jean-Gabriel Ganascia is Professor of Computer Science at Sorbonne University, honorary member of the Institut Universitaire de France, EurAI (European Association for Artificial Intelligence) fellow, and member of the LIP6 (the computer science laboratory of Paris 6), where he heads the ACASA team. He chaired the COMETS, the Ethical Committee of the CNRS, between 2016 and 2021. He chairs the Ethics Committee of Pôle Emploi (the public agency in charge of employment in France) and is a member of the CPEN-CCNE (comité pilote d’éthique du numérique), i.e. the ethics committee of the CCNE (comité consultatif national d’éthique). His latest book, “Virtual Servitudes”, published in March 2022, deals with the ethical issues of AI.

October 4 - Performance and fairness of facial recognition algorithms, Stéphane Gentric, Research unit manager, IDEMIA

Abstract: 2D Facial Recognition is one of the oldest computer vision applications. As techniques improve, performance increases and new topics such as fairness appear. This lecture will review the whole facial recognition pipeline and show how Deep Learning addresses key issues. We will present major operational deployments and the most recent performance results. Finally, we will discuss current limitations and avenues for future research.
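
To make the notion of a recognition pipeline concrete for readers outside the field, here is a minimal Python sketch of the final verification step: a deep network maps an aligned face crop to an embedding vector, and two faces are declared a match when the cosine similarity of their embeddings exceeds a threshold. This is a generic illustration added for this page (the `embed` function is a stand-in, not IDEMIA's system), and the threshold value is arbitrary.

<code python>
import numpy as np

def embed(face_crop: np.ndarray) -> np.ndarray:
    """Stand-in for a deep embedding network (e.g. a CNN trained with a
    margin-based loss). Here we simply flatten and L2-normalize the pixels
    so that the sketch runs without a trained model."""
    v = face_crop.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Embeddings are already unit-norm, so the dot product is the cosine.
    return float(np.dot(a, b))

def verify(face_a: np.ndarray, face_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when the embeddings are close enough. The threshold
    trades false matches against false non-matches; fairness questions arise
    because error rates at a fixed threshold can differ across groups."""
    return cosine_similarity(embed(face_a), embed(face_b)) >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face1 = rng.random((112, 112, 3))                 # toy "aligned face crop"
    face2 = face1 + 0.01 * rng.random((112, 112, 3))  # near-duplicate of face1
    print(verify(face1, face2))                       # True for near-identical inputs
</code>

In a real system the embedding network, the alignment step and the decision threshold are all tuned on large datasets, which is precisely where the performance and fairness questions discussed in the talk arise.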

Bio: Stéphane Gentric is Chief AI Scientist at Idemia and associate professor at Telecom ParisTech. He received his PhD in Pattern Recognition at UPMC in 1999. As principal researcher, then team leader, he worked on fingerprint recognition algorithms, then face, then iris, and now also video analytics. As Fellow Expert, he has been involved in most of Idemia’s biometrics projects over the past 20 years, such as the India identity program (UIDAI), the Changi border crossing system, the European border central system, and NIST benchmarks. His current research interests center around pattern recognition for the improvement of biometric systems.

October 18 - From PhD to Startup creation: Real-estate Market Transparency using AI - Adrien Bernhardt, Homiwoo

Abstract: This presentation will mix a professional path, which includes a PhD in computer graphics, four years in Criteo's Data Science team and the creation of a startup focused on data science. The first part is dedicated to lessons learned from doing a PhD and from working in a fast-growing company like Criteo, while the second part is dedicated to what we do at Homiwoo, how we do it, and the side projects we have.

Bio: Adrien Bernhardt is CTO and co-founder of Homiwoo, a startup focused on using data science to model the real estate market. Previously he worked for four years at Criteo in the Machine Learning team, where he had the opportunity to carry out many tasks related to managing machine learning models used in production. He received a PhD in Computer Science from Grenoble University in 2011, under the supervision of Professor Marie-Paule Cani.

Company: Homiwoo is a startup founded in 2017, focused on using data science to model the real estate market. Our goal is to provide reliable and rich information to our customers, to help them in their decisions.

October 25 – Robust, Safe and Explainable Intelligent and Autonomous Systems - Raja Chatila, Sorbonne University

Abstract: Deploying unproven systems in critical applications, and even in seemingly non-critical ones, can be dangerous and irresponsible, and therefore unethical, and should not be acceptable. As AI systems based on Machine Learning, which statistically process data to make decisions and predict outcomes, have come into widespread use in almost all sectors, from Healthcare to Warfare, the need to ensure they “do the right thing” and provide reliable results has become of primary importance. Adopting a risk-based approach, the European Commission has proposed a new regulation for AI that tailors the level of regulation to the level of risk. But how can risk be evaluated and mitigated? With millions of parameters computed from data using optimization processes, the practice of using various off-the-shelf components to build new systems without solid verification and validation processes, and the absence of causal links between inputs and outputs, what does it mean, concretely, to make AI systems robust, safe and explainable? Is this a reachable objective at all? And will this lead to trustable Intelligent and Autonomous Systems?

Bio: Raja Chatila is Professor Emeritus of Artificial Intelligence, Robotics and Ethics at Sorbonne University in Paris, France. He is a former director of the SMART Laboratory of Excellence on Human-Machine Interactions and of the Institute of Intelligent Systems and Robotics. He has contributed to several areas of Artificial Intelligence and autonomous and interactive Robotics throughout his career. His research interests currently focus on human-robot interaction, machine learning and ethics. He is an IEEE Fellow and was President of the IEEE Robotics and Automation Society in 2014-2015. He chairs the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and is a member of the CNPEN (comité national pilote d’éthique du numérique) in France.

November 8 - Can artificial intelligence be creative? – François Levin, Ecole polytechnique

Abstract: We are witnessing AI creativity flourish: new algorithms (GAN, CAN…), new models (GPT-3, DALL-E, Midjourney), new applications (text, images, audio)… Is AI becoming really creative? Or is it just a “simulation” of creativity? What does it really mean to be “creative”?

Bio: François Levin is a PhD candidate in philosophy at the Ecole Polytechnique.

November 15 - Leveraging AI for public interest and hands-on implications for projects - Quentin Panissod

Abstract: As a major innovation trend worldwide, AI is triggering tens of billions of euros in investments yearly. According to Stanford's AI Index, half of those investments are leveraged by the finance, marketing and surveillance sectors alone, with autonomous mobility and healthcare as runners-up. This presentation will focus on how AI can be leveraged for other activities, and specifically towards the public interest. Through practical examples from different organizational configurations (business, nonprofit, partnerships…), the potential and limits of leveraging “responsible AI” will be addressed: how can AI solutions be developed in “digital-wary” sectors with high environmental costs, like construction? What are the solutions and limits to building AI projects in activities without digital skills and without structured data? How do AI projects balance social stakes, environmental costs and expected results to claim the development of “responsible AI”?

Bio: Quentin Panissod graduated as a robotics engineer from Polytech Sorbonne. As a student, he also led and co-founded nonprofit organizations for charity and national students' unions. For five years, he built the AI foresight and projects activity at Leonard, VINCI group's innovation platform, delivering 35 AI projects for the construction, energy and mobility sectors. When the Covid crisis emerged, he co-founded AI For Tomorrow, a nonprofit organization that supported 20 projects on environment, healthcare or society topics. More recently, aiming at larger-scale AI dedicated to environmental purposes, he led the creation of the RenovAIte project, a European platform of AI and data services to speed up and improve renovation for housing and roads. In parallel, he co-founded The Swarm Initiative, a mission-driven company that builds and coordinates collaborative projects for the public interest using major innovation trends like AI.

November 29 – Artificial intelligence: what future European regulation? - Nathalie Nevejans, Artois University, France

Abstract: To face the challenges of AI, gain the trust of citizens, and promote its development and use, the European Union has for several years been setting up an ethical framework for AI, but without proposing any mandatory rules. In April 2021, the EU went further by unveiling its first draft of a future regulation, this time intended to set legal rules for AI that will have binding force in all EU Member States. This evolving future regulation covers the AI life cycle from its introduction to the market to its professional use. The seminar will not only make it possible to understand the issues and impacts of the AI Act, but also to examine the questions and criticisms that arise on the legal and ethical level.

Bio: Nathalie Nevejans is Assistant Professor in Law at Artois University (France) and Head of the Chair in Law and Ethics of Artificial Intelligence (Responsible AI Chair, Artois University). Her book, Treatise of Law and Ethics of Civil Robotics (Traité de droit et d'éthique de la robotique civile, 2017, 1232 pages), was published in 2017. Author of numerous papers and a participant in events from both the academic world and various professional sectors (industry, health, insurance, …), she is one of the few European specialists in the Law and Ethics of Artificial Intelligence, Robotics and Emerging Technologies. Her interdisciplinary publications include, for example, S. O'Sullivan, N. Nevejans et al., « Legal, Regulatory, and Ethical Frameworks for development of Standards in Artificial Intelligence and Autonomous Robotic Surgery », The International Journal of Medical Robotics and Computer Assisted Surgery, 2018.

January 10 - Confidence in AI, Sarah Lannes, IRT SystemX

Abstract: Uses of AI are overtaking traditional methods in several areas of industry. However, this raises questions about confidence in these solutions: confidence in their robustness, confidence in how they err, confidence in the decision-making process. Understanding the how, and above all being able to trust these new methods, has become key to actually putting them into practice.

Bio: Sarah Lannes graduated with an MS in Multidimensional Signal Processing and has 15 years of experience as a research engineer in computer vision and AI. She started her career in a start-up company called Let it Wave, working mostly on image and video quality questions, then went on to join Idemia (previously Safran) in the Face Biometrics research team, moving on to video analysis. She recently joined IRT SystemX to act as an expert in a project on track surveillance for automated trains.

January 17 - Fighting blindness with bionic eyes, Vincent Bismuth, Pixium Vision

Abstract: Restoring vision for the blind has long been considered a science-fiction topic. However, over the past two decades, accelerating efforts in the field of visual prostheses have yielded significant progress, and several hundred patients worldwide have received such devices, with various outcomes. This seminar will briefly present the field with a special focus on the image processing side, providing an overview of the main approaches, limitations and results.

Bio: Vincent Bismuth has built his career in the field of medical devices, centered on expertise in image processing. He spent 10+ years developing image and video processing algorithms for interventional X-ray procedures at General Electric Healthcare before moving to a French start-up, Pixium Vision, which designs vision restoration systems for the visually impaired. He recently moved to the mammography division of General Electric, where he leads image processing developments.

January 24 - Bring in the algorithm! The biases of predictive models under scrutiny - Fabien Tarissan, CNRS

Abstract: While the applications resulting from AI techniques continue to diversify, the law could not escape the trend of automating decision-making. This is manifested in particular by the proposal to use predictive models derived from machine learning (ML) techniques to inform future decisions in judicial contexts. While the use of these techniques in the courts is legitimately debated, they are already used in law firms and, more broadly, in the legal departments of private companies to establish or support their litigation strategies.

Described in broad terms, the ML approach consists in analyzing a corpus of legal decisions to identify the main characteristics that judges took into account in settling the cases. This knowledge is then presented, in a second step, as a useful way to inform forthcoming decisions on new cases.

This talk will be the opportunity to briefly present the concepts at the core of ML techniques before discussing how their efficiency and, more importantly, their potential biases are formally assessed by computer scientists. This will raise the question of possible discrimination in algorithmic recommendations, and we will see that different formulations of what a fair recommendation could be lead in fact to different, irreconcilable biases. This in turn raises the question of how to regulate the use of AI approaches in such a context.
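
As a deliberately simplified illustration of this irreconcilability (added for this page, not taken from the talk), the toy Python script below scores the same predictions against two common fairness criteria: demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates). When the base rates of the two groups differ, a predictor can satisfy one criterion while violating the other.

<code python>
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy data: group 1 has a higher base rate of positive outcomes than group 0.
y_true = np.array([1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

# A predictor with identical true-positive rates in both groups...
y_pred = y_true.copy()
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.0  (equal opportunity holds)
print(demographic_parity_gap(y_pred, group))         # 0.5  (demographic parity violated)
</code>

This tension between fairness criteria is well documented in the algorithmic-fairness literature and does not depend on the particular model used.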

Bio: Fabien Tarissan is a researcher in computer science at the French National Centre for Scientific Research (CNRS) and adjunct professor at École Normale Supérieure de Paris-Saclay. His work mainly concerns the analysis and modeling of large networks encountered in practice, such as the Internet, the web, social networks or legal networks. His research focuses in particular on recommendation systems, such as those based on machine learning techniques.

January 31 - Ethics in artificial intelligence, Issam Ibnouhsein, Implicity

Abstract: Artificial intelligence developments are raising a wide variety of ethical questions, ranging from very practical ones, such as decision-making for autonomous cars, to epistemic ones about the ability of machine learning to serve as a decision aid at scale. The goal of this session is to better understand how ethics and artificial intelligence overlap, and to analyze the similarities and differences between classical procedures and machine learning ones.

Bio: Issam Ibnouhsein, Head of Data Science at Implicity, has a PhD in quantum computing and has led various research projects at the intersection of AI and healthcare.

February 1st - Assessing the impact of A.I. on Business Model Innovation - Thierry Rayna, Professor of Innovation Management, Ecole Polytechnique

Abstract: While technological innovation is generally seen as the pinnacle of competitiveness, market success is seldom achieved without business model innovation, and it could indeed be argued that technological and business model innovation are two sides of the same coin. As a matter of fact, numerous examples can be found of companies that innovated technologically without adapting their business model and, as a result, met their downfall. This is particularly the case for ‘emerging’ and ‘deep-tech’ technologies, where the use cases first envisaged are rarely those that eventually prevail. A.I., as an umbrella of heterogeneous technologies, is certainly one such technology surrounded by myths and fantasy. The objective of this talk is first to shed light on what business model innovation actually is and to present tools that can be used to anticipate the impact of technologies on business models. This will then be used to discuss the expected impact of A.I. on business models.

Bio: https://www.polytechnique.edu/annuaire/fr/user/12853/thierry.rayna#

February 7 - Leveraging computer vision advances to address real-world challenges, Jean-Baptiste Rouquier and Margarita Khokhlova, FUJITSU

Abstract: In recent years, deep learning applications have gained in popularity and drawn the attention of business leaders in all market segments. The performance of deep neural networks in computer vision for object recognition, detection and segmentation, now competing with human performance, has opened a new world of applications in a large variety of domains such as Retailing, Manufacturing, Security, Automotive, Energy and Healthcare. While deep learning is a hot topic in academic research, building real-world deep learning solutions suited to customer needs remains a difficult task which has to take into account business specificities, solution scalability, ethical or legal concerns, and the potential risks related to algorithm mistakes. In this seminar, we will focus on solutions built on top of object detection and multiple object tracking (MOT) to create computer vision systems that address various use cases developed at the Fujitsu Center of Excellence.
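
For readers unfamiliar with multiple object tracking, the sketch below (a generic illustration added for this page, not Fujitsu's system) shows the core association step of a simple IoU tracker: detections in the current frame are greedily matched to existing tracks by the intersection-over-union of their bounding boxes, unmatched detections start new tracks, and unmatched tracks eventually age out.

<code python>
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match each existing track to its best unused detection.
    Returns (matches, unmatched_detection_indices)."""
    matches, used = {}, set()
    for track_id, track_box in tracks.items():
        best_j, best_score = None, iou_threshold
        for j, det in enumerate(detections):
            if j in used:
                continue
            score = iou(track_box, det)
            if score >= best_score:
                best_j, best_score = j, score
        if best_j is not None:
            matches[track_id] = best_j
            used.add(best_j)
    unmatched = [j for j in range(len(detections)) if j not in used]
    return matches, unmatched

# One existing track and two detections: only the overlapping one is matched.
tracks = {0: (10, 10, 50, 50)}
detections = [(12, 11, 52, 49), (200, 200, 240, 240)]
print(associate(tracks, detections))  # ({0: 0}, [1])
</code>

Production trackers typically add motion models (e.g. Kalman filters) and appearance embeddings on top of this association step.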

Bio: Jean-Baptiste Rouquier graduated from École Normale Supérieure de Lyon and spent 6 years in academic research, working on complex systems, complex networks and data science applications. He was then employed as an NLP researcher for a hedge fund, then as a software engineer for feature engineering at Criteo. He moved to Dataiku to work as a data scientist, trainer, expert support and consultant, then to a mutual health insurance company, before joining Fujitsu as a senior data scientist for the creation of the AI Center of Excellence. He is a Fujitsu Associate Distinguished Engineer. Margarita Khokhlova is a data scientist at Fujitsu. Her primary area of expertise is computer vision. Before joining Fujitsu, she mainly worked in public research at IGN and LIRIS. She obtained a PhD from the University of Burgundy in 2018, where her dissertation was dedicated to automatic gait analysis using 3D active sensors. She also holds two master's degrees: a joint degree in computer vision from the University of Jean-Monnet Saint-Etienne and NTNU Gjøvik (Norway), and a degree in business management and administration from the University of Burgundy, Dijon. Her research interests include computer vision, deep learning, and clinical data analysis.

February 14 - Augmenting bodies using AI: from human know-how to Computer Aided Design - François Faure, Anatoscope

Abstract: From walking sticks to bionic arms, people have always augmented their bodies with supplementary or replacement parts to improve their function, comfort or aesthetics. For optimal efficiency, these must be personalized to precisely fit the body, and their design requires significant knowledge and skills in anatomy and mechanics. The Orthotics and Prosthetics (O&P) domain has developed a large body of know-how for replicating body parts using plaster, designing and sculpting shapes, and molding the corresponding devices. This is applied to various body parts such as teeth, limbs and ears. Unfortunately, these techniques are empirical and operator-dependent. To improve precision, O&P increasingly uses digital imaging and design software. However, most of the current software essentially consists of digital sculpting toolboxes, so the design process remains virtually as empirical and operator-dependent as before. In this talk, we present Anatoscope's approach to tackling the challenge of precision in O&P. To really improve Computer Assisted Design for O&P, we need to map the skills of good practitioners onto numerical methods implemented in computers. Knowledge can be formulated using models and algorithms, while some skills are easily expressed as rules, and others are more easily described using examples. Our artificial intelligence combines these paradigms through constrained optimizations solved using various strategies. We illustrate these using various examples of dental and orthopedic design.

Bio: François Faure, 50, graduated in Mechanical Engineering at ENS Cachan in 1993, and became a full university professor in Computer Science in Grenoble, 2011. His research contributions range from the simulation of rigid and deformable solids, collision detection, to the computation of personalized models for medical simulation. He founded Anatoscope with four colleagues in 2015, and he has been fully focused on its development since then. In three years, the company has signed strategic partnerships in the dental and orthopedic domains, and grown to 40 employees.

February 21 - Social and Market Context for AI Advances, Hugo Loi, Pixminds

Abstract: As the computing tools to match and best human intelligence get better, a massive number of real-world problems become accessible to artificial intelligence. Still, each and every one of us has been educated to have humans solve these problems, such as medical diagnosis and airplane piloting. As a consequence, very few AI solutions will make it through the ethical, legal, financial and market barriers to reach users in the next five years. Which ones? That is the question every AI entrepreneur wants to answer, and the topic of this talk.

Bio: Hugo Loi holds an engineering degree from Ensimag and a PhD from Grenoble University. Hugo started his career in Computer Graphics at Inria, the French National Mapping Institute, the Walt Disney Company and Princeton University. In 2016 Hugo joined Lionel Chataignier to create Pixminds, a gaming hardware corporation in the heart of the French Alps. Since then, their work of turning great tech into great products has been recognized multiple times by organizations such as the Consumer Electronics Show, the French Ministry of the Interior, Bpifrance and the German Design Council.

March 7 - Big data approaches to brain imaging & applications in psychiatry, Bertrand Thirion, Dataia, Inria

Abstract: Population imaging consists in studying the characteristics of populations as measured by brain imaging. The transition to big data in this field has consisted in imaging and behavioral acquisition from larger cohorts and in the agglomeration of such datasets. An important question is whether the loss of homogeneity inherent to working on composite datasets is detrimental to prediction accuracy. We provide evidence that this is not the case: larger datasets ultimately provide more power for individual prediction and diagnosis. We also outline technical aspects of the work on large imaging datasets and benefits of challenges and collaborative work.

Bio: Bertrand Thirion is a researcher at the Inria research institute (Saclay, France), where he develops statistics and machine learning techniques for brain imaging. He contributes both algorithms and software, with a special focus on functional neuroimaging applications. He is involved in Neurospin, the CEA neuroimaging center, one of the leading centers for high-field MRI brain imaging. Bertrand Thirion created and managed the Parietal team (2009-2022). From 2018 to 2021, he was the head of the DATAIA Institute, which federates research on AI, data science and their societal impact at Paris-Saclay University. In 2020, he was appointed as a member of the expert committee in charge of advising the government during the Covid-19 pandemic. In 2021, he became Head of Science (délégué scientifique) of the Inria Saclay-Île-de-France research center. Bertrand Thirion is PI of the Karaib AI Chair and of the Individual Brain Charting project.

====== Seminar 2021/2022: Ethical issues, law & novel applications of AI (MIE 630) ======

On Tuesdays, 1:30 pm-3:00pm, Amphitheatre Gregory, in the X building (near main entrance/Receptions) or by Videoconference if necessary

To participate in the face-to-face seminars, please register by email: veronique.steyer@polytechnique.edu. The number of participants is limited due to sanitary conditions.

Seminars are mandatory for students of the Master of Science and Technology in Artificial Intelligence and advanced Visual Computing (Master 2nd year).

September 21 - From artificial intelligence to computational ethics - Jean-Gabriel Ganascia, Sorbonne University, Lip6, Chairman of the COMETS (CNRS Ethical Committee) introduced by Véronique Steyer/Louis Vuarin

Videoconference link: https://ecolepolytechnique.zoom.us/j/98734008049?pwd=UUwzaHhwUEp6MkFudksrOEZYMmZXdz09

Abstract: With the development of artificial intelligence, it is now possible to design agents that are said to be autonomous, in the sense that their behavior results from a chain of physical causalities, from signal acquisition by sensors to action, without any human intervention. There are many possible applications of such agents, for instance in transportation, with autonomous cars, or in war, with autonomous weapons. Since no human is present in the loop, many fear that the robots animated by such agents could be predatory. In order to prevent unsafe behaviors, references to human values have to be included in the agent's programming. More technically, it means that engineers now have to design so-called “ethical controllers” to restrict the robot's actions according to moral criteria. To do so, it is necessary to mimic what philosophers call “judgement”, which is an operation of the mind, and to encode the deliberations, in case of conflicts of norms, using different ethical systems. Both the simulation of judgement and the modeling of deliberation give birth to what is called “Computational Ethics”. In light of the autonomous car accident that happened in March 2018 in Arizona, we shall detail the different dimensions that such a controller has to satisfy and the technical difficulties faced by artificial intelligence researchers who work on computational ethics.

Bio: Jean-Gabriel Ganascia is Professor of Computer Science at Sorbonne University, senior member of the Institut Universitaire de France, EurAI – European Association for Artificial Intelligence – fellow and member of the LIP6 (Laboratory of Computer Science of the Paris 6) where he heads the ACASA team. In addition, he chairs the COMETS that is the Ethical Committee of the CNRS and he is member of the CPEN-CCNE (comité pilote d’éthique du numérique), i.e. the Ethical committee of the CCNE (comité consultatif national d’éthique).

September 28 - Robust, Safe and Explainable Intelligent and Autonomous Systems - Raja Chatila (room: Amphi Gregory) introduced by Véronique Steyer/Louis Vuarin

Videoconference link: https://ecolepolytechnique.zoom.us/j/97910981213?pwd=SUlmbUNOWFFDWjlCYSt6UUlLNVZEQT09

Abstract: Deploying unproven systems in critical applications (and even in non-critical ones) is dangerous and irresponsible, and therefore unethical, and should not be acceptable. As AI systems based on Machine Learning, which statistically process data to make decisions and predict outcomes, have come into widespread use in almost all sectors, from Healthcare to Warfare, the need to ensure they “do the right thing” and provide reliable results has become of primary importance. Hence a full research stream was started to address the limitations of the black-box paradigm that characterizes such systems. With millions of parameters computed from data using optimization processes, the practice of using various off-the-shelf components to build new systems without solid verification and validation processes, and the absence of causal links between inputs and outputs, what does it mean, concretely, to make AI systems robust, safe and explainable? Is this a reachable objective at all? And will this lead to trustable Intelligent and Autonomous Systems?

Bio: Raja Chatila is Professor of Artificial Intelligence, Robotics and Ethics at Sorbonne University in Paris, France. He is director of the SMART Laboratory of Excellence on Human-Machine Interactions and former director of the Institute of Intelligent Systems and Robotics. He has contributed to several areas of Artificial Intelligence and autonomous and interactive Robotics throughout his career. His research interests currently focus on human-robot interaction, machine learning and ethics. He is an IEEE Fellow and was President of the IEEE Robotics and Automation Society in 2014-2015. He chairs the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and is a member of the High-Level Expert Group on AI of the European Commission and of the CNPEN (comité national pilote d’éthique du numérique) in France.

October 12 or 05 (to be confirmed) - Facial recognition: from early methods to deep learning, Stéphane Gentric, Research unit manager, IDEMIA (room: Curie Amphitheatre, in the X building - compulsory registration for external persons (students of the AI-ViC master programme are automatically registered)) introduced by Véronique Steyer/Louis Vuarin

Abstract: 2D Face Recognition is one of the oldest computer vision applications. As techniques improve, more complex databases arise, always leaving room for algorithm improvements. This lecture will review the whole face recognition pipeline, how early methods addressed the main issues and how Deep Learning handles them now. We will present major operational deployments and the most recent performance results. Finally, we will discuss current limitations and future research avenues.

Bio: Stéphane Gentric is Research Unit Manager at Idemia and Associate Professor at Telecom Paris. He obtained his PhD on Pattern Recognition at UPMC in 1999. As principal researcher, then team leader, he worked on Fingerprint recognition algorithms, then Face, then Iris and now also Video Analytics. As Senior Expert, he was involved in most of Idemia’s biometrics projects over the past 15 years, such as the Changi border crossing system as well as NIST benchmarks, or the UIDAI project. His current research interests are focused around pattern recognition for the improvement of biometric systems.

October 19 (to be confirmed) - Augmenting bodies using AI: from human know-how to Computer Aided Design - François Faure, CEO Anatoscope (room: Curie Amphitheatre, in the X building - compulsory registration for external persons; students of the AI-ViC master programme are automatically registered) introduced by Véronique Steyer/Louis Vuarin

Abstract: From walking sticks to bionic arms, people have always augmented their bodies with supplementary or replacement parts to improve their function, comfort or aesthetics. For optimal efficiency, these must be personalized to precisely fit the body, and their design requires significant knowledge and skills in anatomy and mechanics. The Orthotics and Prosthetics (O&P) domain has developed a large body of know-how for replicating body parts using plaster, designing and sculpting shapes, and molding the corresponding devices. This is applied to various body parts such as teeth, limbs and ears. Unfortunately, these techniques are empirical and operator-dependent. To improve precision, O&P increasingly uses digital imaging and design software. However, most of the current software essentially consists of digital sculpting toolboxes, so the design process remains virtually as empirical and operator-dependent as before. In this talk, we present Anatoscope's approach to tackling the challenge of precision in O&P. To really improve Computer Assisted Design for O&P, we need to map the skills of good practitioners onto numerical methods implemented in computers. Knowledge can be formulated using models and algorithms, while some skills are easily expressed as rules, and others are more easily described using examples. Our artificial intelligence combines these paradigms through constrained optimizations solved using various strategies. We illustrate these using various examples of dental and orthopedic design.

Bio: François Faure, 49, graduated in Mechanical Engineering at ENS Cachan in 1993, and became a full university professor in Computer Science in Grenoble, 2011. His research contributions range from the simulation of rigid and deformable solids, collision detection, to the computation of personalized models for medical simulation. He founded Anatoscope with four colleagues in 2015, and he has been fully focused on its development since then. In three years, the company has signed strategic partnerships in the dental and orthopedic domains, and grown to 40 employees.

November 16 - From PhD to Startup creation: Real-estate Market Transparency using AI - Adrien Bernhardt, CTO Homiwoo (room: Curie Amphitheatre, in the X building - compulsory registration for external persons; students of the AI-ViC master programme are automatically registered) introduced by Véronique Steyer/Louis Vuarin

Abstract: This presentation will mix a professional path, which includes a PhD in computer graphics, four years in Criteo's Data Science team and the creation of a startup focused on data science. The first part is dedicated to lessons learned from doing a PhD and from working in a fast-growing company like Criteo, while the second part is dedicated to what we do at Homiwoo, how we do it, and the side projects we have.

Bio: Adrien Bernhardt is CTO and co-founder of Homiwoo, a startup focused on using data science to model the real estate market. Previously he worked for four years at Criteo in the Machine Learning team, where he had the opportunity to carry out many tasks related to managing machine learning models used in production. He received a PhD in Computer Science from Grenoble University in 2011, under the supervision of Professor Marie-Paule Cani.

Company: Homiwoo is a startup founded in 2017, focused on using data science to model the real estate market. Our goal is to provide reliable and rich information to our customers, to help them in their decisions.

November 23 - Law and ethics of autonomous robots, Nathalie Nevejans, Professor and Chairholder, Responsible AI (Artois University, France) (room: Curie Amphitheatre, in the X building - compulsory registration for external persons; students of the AI-ViC master programme are automatically registered) introduced by Véronique Steyer/Louis Vuarin

Abstract: In recent years, progress in robotics and artificial intelligence has been prodigious. Civil robotics (surgical robots, industrial robots, robots for the elderly, service robots, …) and military robotics (war robots, war drones, …) are renewing the debates. The development of autonomous robotics will have an important impact in economic, social, legal and ethical terms. Autonomous robotics has drawn the attention of the European legislator, as shown by the European Resolution on Civil Law Rules in Robotics of 16 February 2017. However, this text, which has no binding force, raises more difficulties than it solves. Indeed, it tends to distort the state of the art in robotics and to adopt a vision tinged with science fiction. Restricting the reflection to the law and ethics of civil robots, we notice that they pose several very delicate difficulties, both legal and ethical, in particular: Should we grant legal status to autonomous robots? How do we determine who is responsible for the damage caused by a robot? How will ethical issues affect civil society as a whole when autonomous robots are used? It is essential that all these difficulties be understood now, because of their inevitable impact on society and on human beings themselves.

Bio: Nathalie Nevejans is Professor and Chairholder in Artificial Intelligence (Artois University Chair, France), and Member of the CNRS (Centre National de la Recherche Scientifique) Ethics Committee (COMETS, France). She is also a European Parliament expert. She went on to create a new discipline of Law and Ethics in Robotics (Treatise of Law and Ethics of Civil Robotics / Traité de droit et d'éthique de la robotique civile, 2017). Author of numerous papers and a participant in events from both the academic world and various professional sectors (industry, health, insurance), she is one of the few European specialists in the Law and Ethics of Artificial Intelligence, Robotics and Emerging Technologies. Her interdisciplinary publications include, for example, S. O'Sullivan, N. Nevejans et al., « Legal, Regulatory, and Ethical Frameworks for development of Standards in Artificial Intelligence and Autonomous Robotic Surgery », The International Journal of Medical Robotics and Computer Assisted Surgery, 2018.

November 30 - Google AI principles, Ludovic Peran, Public Policy and Government Affairs Manager - AI, Google (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Exceptionally, we will use Google Meet. Please let me know if you want to attend the seminar, and I will send you the link.

Abstract: Google's principles for the responsible use of AI and their application in engineering and research. Using the What-If Tool to inspect your models and detect equity issues.

Bio: Ludovic Péran is Product Manager in Google's Artificial Intelligence Research Department. He was previously responsible for institutional relations and public policies related to artificial intelligence at Google France. He is a lecturer at ESCP in the Master of Digital Innovation, a member of the OECD Expert Group on AI, and a member of the Board of Directors of the Digital Renaissance think tank. He is a graduate of ESCP and the Ecole d'Economie de Paris (Master APE).

December 7 - Confidence in AI, Sarah Lannes, Senior Research Engineer at IRT SystemX (room: Curie Amphitheatre, in the X building - compulsory registration for external persons; students of the AI-ViC master programme are automatically registered) introduced by Véronique Steyer/Louis Vuarin

Abstract: Uses of AI are overtaking traditional methods in several areas of industry. However, this raises questions about confidence in these solutions: confidence in their robustness, confidence in how they err, confidence in the decision-making process. Understanding the how, and above all being able to trust these new methods, has become key to actually putting them into practice.

Bio: Sarah Lannes graduated with an MS in Multidimensional Signal Processing and has 15 years of experience as a research engineer in computer vision and AI. She started her career in a start-up company called Let it Wave, working mostly on image and video quality questions, then went on to join Idemia (previously Safran) in the Face Biometrics research team, moving on to video analysis. She recently joined IRT SystemX to act as an expert in a project on track surveillance for automated trains.

January 4 - Assessing the impact of A.I. on Business Model Innovation, Thierry Rayna, Professor of Innovation Management, Ecole Polytechnique (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: While technological innovation is generally seen as the pinnacle of competitiveness, market success is seldom achieved without business model innovation, and it could indeed be argued that technological and business model innovation are two sides of the same coin. As a matter of fact, numerous examples can be found of companies that innovated technologically without adapting their business model and, as a result, met their downfall. This is particularly the case for ‘emerging’ and ‘deep-tech’ technologies, where the use cases first envisaged are rarely those that eventually prevail. A.I., as an umbrella of heterogeneous technologies, is certainly one such technology surrounded by myths and fantasy. The objective of this talk is first to shed light on what business model innovation actually is and to present tools that can be used to anticipate the impact of technologies on business models. This will then be used to discuss the expected impact of A.I. on business models.

Bio: https://www.polytechnique.edu/annuaire/fr/user/12853/thierry.rayna#

January 11 (to be confirmed) - Fighting blindness with bionic eyes, Vincent Bismuth, General Electric Healthcare (room: Gregory) introduced by Véronique Steyer/Louis Vuarin

Abstract: Restoring vision for the blind has long been considered a science-fiction topic. However, over the past two decades, accelerating efforts in the field of visual prostheses have yielded significant progress, and several hundred patients worldwide have received such devices, with various outcomes. This seminar will briefly present the field with a special focus on the image processing side, providing an overview of the main approaches, limitations and results.

Bio: Vincent Bismuth has built his career in the field of medical devices, centered on expertise in image processing. He spent 10+ years developing image and video processing algorithms for interventional X-ray procedures at General Electric Healthcare before moving to a French start-up, Pixium Vision, which designs vision restoration systems for the visually impaired. He recently moved to the mammography division of General Electric, where he leads image processing developments.

January 18 (to be confirmed) - Transforming the digital customer journey with AI – Olivier Morillot & Sylvain Marsault, Carrefour (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: Food retail is one of the last sectors of commerce largely dominated by offline business models, and one for which online and digital models must still be invented. The management of the product offering, the operations and the customer journey must be completely transformed to offer the customer an enhanced digital experience. Understanding and anticipating each customer event, in order to interact better with them and offer them the expected content, is the biggest challenge. Massive collection of information and the use of ML and AI techniques such as image or voice recognition, recommender systems, time-series forecasting, and conversational agents are the main levers of this transformation. During this presentation, you will discover how Carrefour's DataLab activates these levers through the use of AI techniques such as conversational agents, recommendation systems, personal and contextual search engines, and other ML services. You will dive into the main data and AI challenges of tomorrow in the retail industry.

Bio: Olivier Morillot currently works as the Carrefour-Google AI Lab tech lead. Previously, he worked for 5 years at Photobox, where he put deep learning models into production for e-commerce. He received his PhD in machine learning from Telecom Paris in 2014.

February 1 - Ethics in artificial intelligence, Issam Ibnouhsein, Quantmetry (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: Artificial intelligence developments are raising a wide variety of ethical questions, ranging from very practical ones, such as decision-making for autonomous cars, to epistemic ones about the ability of machine learning to serve as a decision aid at scale. The goal of this session is to better understand how ethics and artificial intelligence overlap, and to analyze the similarities and differences between classical procedures and machine learning ones.

Bio: Issam Ibnouhsein, Head of Research & Development at Quantmetry, has a PhD in quantum computing, has worked as a data scientist, and now heads the research and development activities at Quantmetry.

February 8 - Leveraging computer vision advances to address real-world challenges, Jean-Baptiste Rouquier, Senior Data scientist, Associate distinguished engineer, FUJITSU (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: In recent years, deep learning applications have gained in popularity and drawn the attention of business leaders in all market segments. The performance of deep neural networks in computer vision for object recognition, detection and segmentation, now competing with human performance, has opened a new world of applications in a large variety of domains such as Retailing, Manufacturing, Security, Automotive, Energy and Healthcare. While deep learning is a hot topic in academic research, building real-world deep learning solutions suited to customer needs remains a difficult task which has to take into account business specificities, solution scalability, ethical or legal concerns, and the potential risks related to algorithm mistakes. In this seminar, we will focus on solutions built on top of object detection and multiple object tracking (MOT) to create computer vision systems that address various use cases developed at the Fujitsu Center of Excellence.

Bio: Jean-Baptiste Rouquier graduated from École Normale Supérieure de Lyon and spent 6 years in academic research, working on complex systems, complex networks and data science applications. He was then employed as an NLP researcher for a hedge fund, then as a software engineer for feature engineering at Criteo. He moved to Dataiku to work as a data scientist, trainer, expert support and consultant, then to a mutual health insurance company, before joining Fujitsu as a senior data scientist for the creation of the AI Center of Excellence. He is a Fujitsu Associate Distinguished Engineer.

February 15 (to be confirmed) - “AI Ethics: Principles and beyond?”, Véronique Magnier, Agrégée des facultés de droit, Professor of Law, Université Paris-Sud Saclay, Director of the Institut Droit Ethique Patrimoine (room: Gregory) introduced by Véronique Steyer/Louis Vuarin

Abstract: AI Ethics has become a global topic in policy and academic circles. It is now recognised that AI technologies offer opportunities for fostering human dignity and promoting human flourishing, but may also be associated with risks. Hence, the major impact AI will have on society is no longer in question. Current debates turn instead on how far this impact will be positive or negative. This seminar examines the pertinent new questions, which are no longer whether AI will have an impact, but by whom, how, where, and when this positive or negative impact will be felt. From a legal perspective, these questions address the way national jurisdictions (should) offer a framework for AI activities in a global context. So far, AI Ethics seems to converge on a set of principles. But can principles alone guarantee ethical AI? The seminar mainly questions the advantages and limits of an ethical approach to AI, examining the associated legal methodologies.

Bio: Véronique Magnier holds a PhD in Law and is a graduate of Sciences Po Paris. She is Professor of Law at the Law School of Paris-Sud/Paris-Saclay University. She is responsible for the Master's degree in Business, Tax & Financial Market Law and founded the Grande Ecole du Droit and the legal clinic of Paris-Saclay. She joined Georgetown University as an Adjunct Professor in 2010, where she teaches an annual course, "Comparative Corporate Governance", to LL.M. and JD students. Véronique Magnier is the Director of the Institute "Law, Ethics & Patrimony" at Paris-Sud/Paris-Saclay University. She is the author or co-author of seminal books and articles in the areas of corporate law and corporate governance, business and ethics, European and comparative law, and constitutional civil procedure. She is the co-author, with Prof. Michel Germain, of the treatise « Sociétés commerciales, Traité de Droit commercial par Ripert et Roblot » published by LGDJ. Her most recent publications include a monograph entitled "Comparative corporate governance. A legal perspective", published by Edward Elgar Publishing in 2017 (http://www.e-elgar.com/shop/comparative-corporate-governance), and she co-authored a book entitled "Blockchain and company law" (Dalloz 2019). She has been the Scientific Director of the Dalloz Encyclopedia for Corporate Law since 2003. Véronique Magnier is on the board of Transparency International France and an active member of various national, European and international associations and institutes (Trans Europe Experts, Société de Legislation comparée, European Corporate Governance Institute…).

March 1 - Social and Market Context for AI Advances, Hugo Loi, General Manager Pixminds (room: Curie Amphitheatre, in the X building - compulsory registration for external persons; students of the AI-ViC master programme are automatically registered) introduced by Véronique Steyer/Louis Vuarin

Abstract: As the computing tools to match and best human intelligence get better, a massive number of real-world problems become accessible to artificial intelligence. Still, each and every one of us has been educated to have humans solve these problems, such as medical diagnosis and airplane piloting. As a consequence, very few AI solutions will make it through the ethical, legal, financial and market barriers to reach users in the next five years. Which ones? That is the question every AI entrepreneur wants to answer. This talk discusses 5 simple yet unsolved AI problems that could make it through the filters.

Bio: Hugo Loi holds an engineering degree from Ensimag and a PhD from Grenoble University. Hugo started his career in Computer Graphics at Inria, the French National Mapping Institute, the Walt Disney Company and Princeton University. In 2016 Hugo joined Lionel Chataignier to create Pixminds, a gaming hardware corporation in the heart of the French Alps. Since then, their work of turning great tech into great products has been recognized multiple times by organizations such as the Consumer Electronics Show, the French Ministry of the Interior, Bpifrance and the German Design Council.

March 8 - Big data approaches to brain imaging & applications in psychiatry, Bertrand Thirion, Head of Parietal team, head of Dataia, Inria (room: Gregory) introduced by Véronique Steyer/Louis Vuarin

Abstract: Population imaging consists in studying the characteristics of populations as measured by brain imaging. The transition to big data in this field has consisted in imaging and behavioral acquisition from larger cohorts and in the agglomeration of such datasets. An important question is whether the loss of homogeneity inherent to working on composite datasets is detrimental to prediction accuracy. We provide evidence that this is not the case: larger datasets ultimately provide more power for individual prediction and diagnosis. We also outline technical aspects of the work on large imaging datasets and benefits of challenges and collaborative work.

Bio: Bertrand Thirion is the leader of the Parietal team, part of the Inria research institute (Saclay, France), which develops statistics and machine learning techniques for brain imaging. He contributes both algorithms and software, with a special focus on functional neuroimaging applications. He is involved in Neurospin, the CEA neuroimaging center, one of the leading centers for the use of high-field MRI in brain imaging. Bertrand Thirion is currently head of the DATAIA convergence institute.


====== Seminar 2020/2021: Ethical issues, law & novel applications of AI (MIE 630) ======

On Tuesdays, 1:30 pm-3:00pm, Curie Amphitheatre, in the X building, or by videoconference (according to the sanitary rules of our school and/or the government during the Covid-19 period): https://ecolepolytechnique.zoom.us/j/97910981213?pwd=SUlmbUNOWFFDWjlCYSt6UUlLNVZEQT09

To participate in the face-to-face seminars, please register by email: veronique.steyer@polytechnique.edu. The number of participants is limited due to sanitary conditions.

Seminars are mandatory for students of the Master of Science and Technology in Artificial Intelligence and advanced Visual Computing (Master 2nd year).

September 22 - From artificial intelligence to computational ethics - Jean-Gabriel Ganascia, Sorbonne University, Lip6, Chairman of the COMETS (CNRS Ethical Committee) (room Sophie Germain cancelled; held remotely only, in accordance with sanitary rules) introduced by Véronique Steyer/Louis Vuarin

Videoconference link: https://ecolepolytechnique.zoom.us/j/98734008049?pwd=UUwzaHhwUEp6MkFudksrOEZYMmZXdz09

Abstract: With the development of artificial intelligence, it is now possible to design agents that are said to be autonomous, in the sense that their behavior results from a chain of physical causalities, from signal acquisition by sensors to action, without any human intervention. There are many possible applications of such agents, for instance in transportation, with autonomous cars, or in war, with autonomous weapons. Since no human is present in the loop, many fear that the robots animated by such agents could be predatory. In order to prevent unsafe behaviors, references to human values have to be included in the agent's programming. More technically, it means that engineers now have to design so-called “ethical controllers” to restrict the robot's actions according to moral criteria. To do so, it is necessary to mimic what philosophers call “judgement”, which is an operation of the mind, and to encode the deliberations, in case of conflicts of norms, using different ethical systems. Both the simulation of judgement and the modeling of deliberation give birth to what is called “Computational Ethics”. In light of the autonomous car accident that happened in March 2018 in Arizona, we shall detail the different dimensions that such a controller has to satisfy and the technical difficulties faced by artificial intelligence researchers who work on computational ethics.

Bio: Jean-Gabriel Ganascia is Professor of Computer Science at Sorbonne University, senior member of the Institut Universitaire de France, EurAI – European Association for Artificial Intelligence – fellow and member of the LIP6 (Laboratory of Computer Science of the Paris 6) where he heads the ACASA team. In addition, he chairs the COMETS that is the Ethical Committee of the CNRS and he is member of the CPEN-CCNE (comité pilote d’éthique du numérique), i.e. the Ethical committee of the CCNE (comité consultatif national d’éthique).

September 29 - Robust, Safe and Explainable Intelligent and Autonomous Systems - Raja Chatila (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Videoconference link: https://ecolepolytechnique.zoom.us/j/97910981213?pwd=SUlmbUNOWFFDWjlCYSt6UUlLNVZEQT09

Abstract: Deploying unproven systems in critical applications (and even in non-critical ones) is dangerous and irresponsible, and therefore unethical, and should not be acceptable. As AI systems based on Machine Learning, which statistically process data to make decisions and predict outcomes, have come into widespread use in almost all sectors, from Healthcare to Warfare, the need to ensure they “do the right thing” and provide reliable results has become of primary importance. Hence a full research stream was started to address the limitations of the black-box paradigm that characterizes such systems. With millions of parameters computed from data using optimization processes, the practice of using various off-the-shelf components to build new systems without solid verification and validation processes, and the absence of causal links between inputs and outputs, what does it mean, concretely, to make AI systems robust, safe and explainable? Is this a reachable objective at all? And will this lead to trustable Intelligent and Autonomous Systems?

Bio: Raja Chatila is Professor of Artificial Intelligence, Robotics and Ethics at Sorbonne University in Paris, France. He is director of the SMART Laboratory of Excellence on Human-Machine Interactions and former director of the Institute of Intelligent Systems and Robotics. He has contributed to several areas of Artificial Intelligence and autonomous and interactive Robotics throughout his career. His research interests currently focus on human-robot interaction, machine learning and ethics. He is an IEEE Fellow and was President of the IEEE Robotics and Automation Society in 2014-2015. He chairs the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and is a member of the High-Level Expert Group on AI of the European Commission and of the CNPEN (comité national pilote d’éthique du numérique) in France.

October 13 - Facial recognition: from early methods to deep learning, Stéphane Gentric, Research unit manager, IDEMIA (room: Curie Amphitheatre, in the X building - compulsory registration for external persons (students of the AI-ViC master programme are automatically registered)) introduced by Véronique Steyer/Louis Vuarin

Abstract: 2D Face Recognition is one of the oldest computer vision applications. As techniques improve, more complex databases arise, always leaving room for algorithm improvements. This lecture will review the whole face recognition pipeline, how early methods addressed the main issues and how Deep Learning handles them now. We will present major operational deployments and the most recent performance figures. Finally, we will discuss current limitations and future research avenues.
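
For orientation, here is a minimal, hypothetical sketch of the generic detect-align-embed-compare pipeline the talk refers to; the detector, aligner and embedding network are crude placeholders (a fixed random projection stands in for the deep network), not IDEMIA's algorithms.

<code python>
# Minimal sketch of a generic face-recognition pipeline:
# detect -> align -> embed -> compare. All components are toy placeholders.
import numpy as np

def detect_face(image):
    """Return a bounding box (x, y, w, h); placeholder for a real detector."""
    h, w = image.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)

def align(image, box):
    """Crop to the box; a real aligner would also rotate/normalise landmarks."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def embed(face, dim=128, seed=0):
    """Stand-in for a deep embedding network: a fixed random projection."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((dim, face.size))
    v = proj @ face.astype(float).ravel()
    return v / np.linalg.norm(v)

def match_score(image_a, image_b):
    """Cosine similarity between embeddings; thresholded in a real system."""
    ea = embed(align(image_a, detect_face(image_a)))
    eb = embed(align(image_b, detect_face(image_b)))
    return float(ea @ eb)

img1 = np.random.default_rng(1).random((128, 128))
img2 = np.random.default_rng(2).random((128, 128))
print(match_score(img1, img2))
</code>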

Bio: Stéphane Gentric is Research Unit Manager at Idemia and Associate Professor at Telecom Paris. He obtained his PhD on Pattern Recognition at UPMC in 1999. As principal researcher, then team leader, he worked on Fingerprint recognition algorithms, then Face, then Iris and now also Video Analytics. As Senior Expert, he was involved in most of Idemia’s biometrics projects over the past 15 years, such as the Changi border crossing system as well as NIST benchmarks, or the UIDAI project. His current research interests are focused around pattern recognition for the improvement of biometric systems.

November 3 - Augmenting bodies using AI: from human know-how to Computer Aided Design - François Faure, CEO Anatoscope (room: Curie Amphitheatre, in the X building - compulsory registration for external persons; students of the AI-ViC master programme are automatically registered) introduced by Véronique Steyer/Louis Vuarin

Abstract: From walking sticks to bionic arms, people have always augmented their bodies with supplementary or replacement parts to improve their function, comfort or aesthetics. For optimal efficiency, these must be personalized to precisely fit the body, and their design requires significant knowledge and skills in anatomy and mechanics. The Orthotics and Prosthetics (O&P) domain has developed a large body of know-how to replicate body parts using plaster, design and sculpt shapes, and mold corresponding devices. This is applied to various body parts such as teeth, limbs and ears. Unfortunately, these techniques are empirical and operator-dependent. To improve precision, O&P increasingly uses digital imaging and design software. However, most of the current software essentially consists of digital sculpting toolboxes, so the design process remains virtually as empirical and operator-dependent as before. In this talk, we present Anatoscope’s approach to tackling the challenge of precision in O&P. To really improve on Computer Assisted Design for O&P, we need to map the skills of good practitioners to numerical methods implemented in computers. Knowledge can be formulated using models and algorithms, while some skills are easily expressed as rules, and others are more easily described using examples. Our artificial intelligence combines these paradigms through constrained optimizations solved using various strategies. We illustrate these using various examples of dental and orthopedic design.
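
As a hedged illustration of the “rules plus examples” idea, the toy sketch below fits a simple parametric brace profile to simulated scan points with SciPy, under a hard minimum-clearance constraint; every model, number and constraint here is invented and is not Anatoscope's method.

<code python>
# Hedged sketch: combine "examples" (scanned points) and a "rule" (minimum
# clearance from the limb surface) in one constrained optimisation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
limb_radius = 5.0 + 0.3 * np.cos(theta)                 # "anatomy" from a scan
scan = limb_radius + 0.05 * rng.standard_normal(60)     # noisy measurements
MIN_CLEARANCE = 0.2                                     # rule: keep a fixed gap

def brace_radius(params):
    a, b = params
    return a + b * np.cos(theta)

def data_fit(params):                                   # example-driven term
    return np.mean((brace_radius(params) - (scan + MIN_CLEARANCE)) ** 2)

constraints = [{"type": "ineq",                         # rule-driven term
                "fun": lambda p: brace_radius(p) - limb_radius - MIN_CLEARANCE}]

result = minimize(data_fit, x0=[6.0, 0.0], constraints=constraints, method="SLSQP")
print(result.x)   # fitted (a, b), respecting the clearance rule everywhere
</code>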

Bio: François Faure, 49, graduated in Mechanical Engineering at ENS Cachan in 1993, and became a full university professor in Computer Science in Grenoble, 2011. His research contributions range from the simulation of rigid and deformable solids, collision detection, to the computation of personalized models for medical simulation. He founded Anatoscope with four colleagues in 2015, and he has been fully focused on its development since then. In three years, the company has signed strategic partnerships in the dental and orthopedic domains, and grown to 40 employees.

November 10 - Seminar cancelled and postponed to March 5, 2021

November 17 - From Phd to Startup creation: Real-estate Market Transparency using AI. Adrien Bernhardt, CTO Homiwoo (room: Curie Amphitheatre, in the X building - compulsory registration for external persons; students of the AI-ViC master programme are automatically registered) introduced by Véronique Steyer/Louis Vuarin

Abstract: This presentation will mix a professional path, which includes a PhD in computer graphics, 4 years in Criteo's Data Science team and the creation of a startup focused on data science. The first part is dedicated to lessons from doing a PhD and from working at Criteo during its period of rapid growth, while the second part is dedicated to what we do at Homiwoo, how we do it, and the side projects we have.

Bio: Adrien Bernhardt is CTO and cofounder of Homiwoo, a startup focused on data science to model the real estate market. Previously he worked 4 years at Criteo in the Machine Learning team, where he had the opportunity to carry out many tasks related to managing machine learning models used in production. He received a PhD in Computer Science from Grenoble University in 2011, completed under the supervision of Professor Marie-Paule Cani.

Company: Homiwoo is a startup founded in 2017, focused on data science to model the real estate market. Our goal is to provide reliable and rich information to our customers to help them in their decisions.

November 24 - No seminar

December 1st - Law and ethics of autonomous robots, Nathalie NEVEJANS, Professor and Chairholder, Responsible AI (Artois University, France) (room: Curie Amphitheatre, in the X building - compulsory registration for external persons; students of the AI-ViC master programme are automatically registered) introduced by Véronique Steyer/Louis Vuarin

Abstract: In recent years, progress in robotics and artificial intelligence has been prodigious. Civil robotics (surgical robots, industrial robots, robots for the elderly, service robots, …) and military robotics (war robots, war drones, …) are renewing the debates. However, the development of autonomous robotics will have an important impact in economic, social, legal and ethical terms. Autonomous robotics has drawn the attention of the European legislator, as shown by the European Resolution on Civil Law Rules in Robotics of 16 February 2017. However, this text, which has no binding force, causes more difficulties than it solves. Indeed, it tends to distort the state‐of‐the‐art in robotics and to adopt a vision tinged with science fiction. Limiting our reflection to the law and ethics of civil robots, we notice that they pose several very delicate difficulties, both legal and ethical, especially: Should we grant legal status to autonomous robots? How do we determine who is responsible for the damage caused by a robot? How will ethical issues affect civil society as a whole when autonomous robots are used? It is essential that all these difficulties be understood right now, because of their inevitable impact on society and on human beings themselves.

Bio: Nathalie Nevejans is Professor and Chairholder in Artificial Intelligence (Artois University Chair, France), and Member of the CNRS (Centre National de la Recherche Scientifique) Ethics Committee (COMETS, France). She is also a European Parliament expert. She went on to create a new discipline of Law and Ethics in Robotics (Treatise of Law and Ethics of Civil Robotics/Traité de droit et d’éthique de la robotique civile, 2017). Author of numerous papers, participating in events from both the academic world and various professional sectors (industry, health, insurance), she is one of the few European specialists in the Law and Ethics of Artificial Intelligence, Robotics and Emerging Technologies. Her interdisciplinary publications include, for example, S. O'Sullivan, N. Nevejans et al., « Legal, Regulatory, and Ethical Frameworks for Development of Standards in Artificial Intelligence and Autonomous Robotic Surgery », The International Journal of Medical Robotics and Computer Assisted Surgery, 2018.

December 8 - No seminar

January 5 - Google AI principles, Ludovic Peran, Public Policy and Government Affairs Manager- AI, Google (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Exceptionally, we will use Google Meet. Please let me know if you want to attend the seminar, and I will send you the link.

Abstract: This talk presents Google's principles for the responsible use of AI and their application in engineering and research, and shows how the What-If Tool can be used to inspect your models and detect equity issues.
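
The What-If Tool itself is interactive, but the hedged sketch below shows, on made-up data and in plain NumPy, two of the group metrics such an inspection typically surfaces (demographic parity and equal-opportunity gaps); it is an illustration, not the tool's API.

<code python>
# Hedged sketch of two group-fairness checks on synthetic predictions.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)          # protected attribute (0 or 1)
y_true = rng.integers(0, 2, size=1000)         # ground-truth labels
y_pred = rng.integers(0, 2, size=1000)         # model decisions

def positive_rate(mask):
    """Share of positive decisions within a group."""
    return y_pred[mask].mean()

def true_positive_rate(mask):
    """Share of positive decisions among the group's actual positives."""
    positives = mask & (y_true == 1)
    return y_pred[positives].mean()

dp_gap = abs(positive_rate(group == 0) - positive_rate(group == 1))
eo_gap = abs(true_positive_rate(group == 0) - true_positive_rate(group == 1))
print(f"demographic parity gap: {dp_gap:.3f}")
print(f"equal opportunity gap:  {eo_gap:.3f}")
</code>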

Bio: Ludovic Péran is Product Manager in Google's Artificial Intelligence Research Department. He was previously responsible for institutional relations and public policies related to artificial intelligence at Google France. He is a lecturer at ESCP in the Master of Digital Innovation and a member of the OECD Expert Group on AI and the Board of Directors of the Digital Renaissance think tank. He is a graduate of ESCP and the Ecole d' Economie de Paris (master APE).

January 12 - Confidence in AI, Sarah Lannes, Senior Research Engineer at IRT SystemX (room: Curie Amphitheatre, in the X building - compulsory registration for external persons; students of the AI-ViC master programme are automatically registered) introduced by Véronique Steyer/Louis Vuarin

Abstract: Uses of AI are overtaking traditional methods in several areas of industry. However, this raises questions about confidence in these solutions: confidence in their robustness, confidence in how they err, confidence in the decision-making process. Understanding how they work and, above all, being able to trust these new methods has become key to actually putting them into practice.

Bio: Sarah Lannes graduated with an MS in Multidimensional Signal Processing and has 15 years of experience as a research engineer in computer vision and AI. She started her career in a start-up company called Let it Wave, working mostly on image and video quality questions, then went on to join Idemia (previously Safran) in the Face Biometrics research team, before moving on to video analysis. She recently joined IRT SystemX to act as an expert in a project on track surveillance for automated trains.

January 19 - Assessing the impact of A.I. on Business Model Innovation, Thierry Rayna, Professor of Innovation Management, Ecole Polytechnique (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: While technological innovation is generally seen as the pinnacle of competitiveness, market success is seldom achieved without business model innovation, and it could indeed be argued that technological and business model innovation are two sides of the same coin. As a matter of fact, numerous examples can be found of companies that innovated technologically without adapting their business model and, as a result, met their downfall. This is particularly the case for ‘emerging’ and ‘deep-tech’ technologies, where the use cases first envisaged are rarely those that eventually prevail. A.I., as an umbrella of heterogeneous technologies, is certainly one such technology surrounded by myths and fantasy. The objective of this talk is first to shed light on what business model innovation actually is and to present tools that can be used to anticipate the impact of technologies on business models. This will then be used to discuss the expected impact of A.I. on business models.

Bio: https://www.polytechnique.edu/annuaire/fr/user/12853/thierry.rayna#

January 26 - Fighting blindness with bionic eyes, Vincent Bismuth, General Electric Healthcare (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: Restoring vision for the blind has long been considered a science-fiction topic. However, over the past two decades, accelerating efforts in the field of visual prostheses have yielded significant progress, and several hundred patients worldwide have received such devices, with various outcomes. This seminar will briefly present the field with a special focus on the image processing side, providing an overview of the main approaches, limitations and results.

Bio: Vincent Bismuth has built a career in the field of medical devices, centered on expertise in image processing. He spent 10+ years developing image and video processing algorithms for interventional X-ray procedures at General Electric Healthcare before moving to a French start-up, Pixium Vision, that designs vision restoration systems for the visually impaired. He recently moved to the mammography division of General Electric, where he is leading image processing developments.

February 2 - Transforming Retail industry with AI – Olivier Morillot & Sylvain Marsault, Carrefour (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: Food retail is one of the last sectors of commerce largely dominated by offline business models and for which online and digital models must be invented. The management of the product offering, the operations and the customer journey must be completely transformed to offer the customer an enhanced digital experience. Understanding and anticipating each customer event to better interact with them and offer them the expected content is the biggest challenge. The massive collection of information and the use of ML and AI techniques such as image or voice recognition, recommender systems, time series forecasting, and conversational agents are the main levers of this transformation. During this presentation, you will discover how Carrefour's DataLab activates these levers through the use of AI techniques such as conversational agents, recommendation systems, personal and contextual search engines, and other ML services. You will dive into the main data and AI challenges of tomorrow in the retail industry.

Bio: Olivier Morillot currently works as tech lead of the Carrefour-Google AI Lab. Previously, he worked 5 years at Photobox, where he put deep learning models into production for e-commerce. He received his PhD in machine learning from Telecom Paris in 2014.

February 9 - Ethics in artificial intelligence, Issam Ibnouhsein, Quantmetry (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: Artificial intelligence developments are raising a wide variety of ethical questions, ranging from the most practical ones, such as decision-making for autonomous cars, to epistemic ones about the ability of machine learning to serve massively as a decision aid. The goal of this session is to better understand how ethics and artificial intelligence overlap, and to analyze the similarities and differences between classical procedures and machine learning ones.

Bio: Issam Ibnouhsein, Head of Research & Development at Quantmetry, has a PhD in quantum computing, worked as a data scientist and is now heading the research and development activities at Quantmetry.

February 23 - Leveraging computer vision advances to address real-world challenges, Jean-Baptiste Rouquier, Senior Data scientist, Associate distinguished engineer, FUJITSU (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: In recent years, deep learning applications have gained in popularity and drawn the attention of business leaders in all market segments. The performance of deep neural networks in computer vision for object recognition, detection and segmentation, competing with human performance, has opened a new world of applications in a large variety of domains such as Retailing, Manufacturing, Security, Automotive, Energy and Healthcare. While deep learning is a hot topic in academic research, building real-world deep learning solutions suited to customer needs remains a difficult task, which has to take into account business specificities, solution scalability, ethical or legal concerns, and the potential risks related to algorithm mistakes. In this seminar, we will focus on solutions developed on top of object detection and multiple object tracking (MOT) to build computer vision systems that can be used to address various use cases developed at the Fujitsu Center of Excellence.
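
To make the MOT building block concrete, here is a hedged, minimal sketch of greedy frame-to-frame association of detection boxes by Intersection-over-Union; real trackers (including Fujitsu's) add motion models, appearance features and proper assignment algorithms, so treat this only as an illustration.

<code python>
# Hedged sketch: greedy IoU-based multiple-object tracking on toy boxes.
import itertools

_track_ids = itertools.count()

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def update_tracks(tracks, detections, iou_threshold=0.3):
    """Greedily match current detections to existing tracks by best IoU."""
    new_tracks, unmatched = {}, list(detections)
    for tid, box in tracks.items():
        if not unmatched:
            break
        best = max(unmatched, key=lambda d: iou(box, d))
        if iou(box, best) >= iou_threshold:
            new_tracks[tid] = best
            unmatched.remove(best)
    for det in unmatched:              # unmatched detections start new tracks
        new_tracks[next(_track_ids)] = det
    return new_tracks

tracks = {}
for detections in [[(10, 10, 50, 50)], [(12, 11, 52, 51), (200, 200, 240, 240)]]:
    tracks = update_tracks(tracks, detections)
    print(tracks)
</code>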

Bio: Jean-Baptiste Rouquier graduated from École Normale Supérieure de Lyon and worked 6 years in academic research, on complex systems, complex networks and data science applications. He was then employed as an NLP researcher for a hedge fund, then as a software engineer for feature engineering at Criteo. He went to Dataiku to work as a data scientist, trainer, expert support and consultant, then to a mutual health insurance company, before joining Fujitsu as senior data scientist for the creation of the AI Center of Excellence. He is a Fujitsu associate distinguished engineer.

March 2 - “AI Ethics: Principles and beyond?”, Véronique Magnier, Agrégée des facultés de droit, Professeur de Droit, Université Paris-Sud Saclay, Directeur de l'Institut Droit Ethique Patrimoine (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: AI Ethics has become a global topic in policy and academic circles. It is now recognised that AI technologies offer opportunities for fostering human dignity and promoting human flourishing, but may also be associated with risks. Hence, the major impact AI will have on society is no longer in question. Current debates turn instead on how far this impact will be positive or negative. This seminar examines the new pertinent questions, which are no longer whether AI will have an impact, but by whom, how, where, and when this positive or negative impact will be felt. From a legal perspective, these questions address the way national jurisdictions (should) offer a frame for AI activities, in a global context. So far, AI Ethics seems to converge on a set of principles. But can principles alone guarantee ethical AI? The seminar mainly questions the advantages and limits of an ethical approach to AI, questioning the associated legal methodologies.

Bio: PhD in Law and Sciences Po Paris, Véronique MAGNIER is Professor of Law at the Law school of Paris-Sud/Paris-Saclay University. She is responsible for the Master’s degree in Business, Tax & Financial Market law and founded the Grande Ecole du Droit and the legal clinic of Paris-Saclay. She joined Georgetown University as an Adjunct Professor in 2010, where she teaches an annual course, “Comparative Corporate Governance”, to LL.M. and JD students. Véronique Magnier is the Director of the Institute “Law, Ethics & Patrimony” at Paris-Sud/Paris-Saclay University. She is the author or co-author of seminal books and articles in the areas of corporate law and corporate governance, business and ethics, European and comparative law, and constitutional civil procedure. She is the co-author, with Prof. Michel Germain, of the treatise « Sociétés commerciales, Traité de Droit commercial par Ripert et Roblot » published by LGDJ. Her most recent publications include a monograph entitled “Comparative corporate governance. A legal perspective”, published by Edward Elgar Publishing in 2017 (http://www.e-elgar.com/shop/comparative-corporate-governance), and she co-authored a book entitled “Blockchain and company law” (Dalloz 2019). She has been the Scientific Director of the Dalloz Encyclopedia for Corporate Law since 2003. Véronique Magnier is on the board of Transparency International France and an active member of various national, European or international associations or institutes (Trans Europe Experts, Société de Legislation comparée, European Corporate Governance Institute…).

March 5 - Social and Market Context for AI Advances, Hugo Loi, General Manager Pixminds (room: Curie Amphitheatre, in the X building - compulsory registration for external persons; students of the AI-ViC master programme are automatically registered) introduced by Véronique Steyer/Louis Vuarin

Abstract: As the computing tools to match and best human intelligence get better, a massive number of real-world problems becomes accessible to artificial intelligence. Still, each and every one of us has been educated to expect humans to solve these problems, such as medical diagnosis and airplane piloting. As a consequence, very few AI solutions will make it through the ethical, legal, financial and market barriers before getting to users in the next five years. Which ones? That is the question every AI entrepreneur wants to answer. This talk discusses 5 simple yet unsolved AI problems that could make it through the filters.

Bio: Hugo Loi holds an engineering degree from Ensimag and a PhD from Grenoble University. Hugo started his career in Computer Graphics at Inria, the French National Mapping Institute, the Walt Disney Company and Princeton University. In 2016 Hugo joined Lionel Chataignier to create Pixminds, a gaming hardware corporation at the heart of the French Alps. Since then, their work of transferring great tech into great products has been awarded multiple times by various organisations such as the Consumer Electronics Show, the French Ministry of the Interior, Bpifrance and the German Design Council.

March 9 - Big data approaches to brain imaging & applications in psychiatry, Bertrand Thirion, Head of Parietal team, head of Dataia, Inria (room: Sophie Germain) introduced by Véronique Steyer/Louis Vuarin

Abstract: Population imaging consists in studying the characteristics of populations as measured by brain imaging. The transition to big data in this field has consisted in imaging and behavioral acquisition from larger cohorts and in the agglomeration of such datasets. An important question is whether the loss of homogeneity inherent to working on composite datasets is detrimental to prediction accuracy. We provide evidence that this is not the case: larger datasets ultimately provide more power for individual prediction and diagnosis. We also outline technical aspects of the work on large imaging datasets and benefits of challenges and collaborative work.
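
As a hedged illustration of the claim that larger pooled datasets yield more predictive power, the sketch below computes a learning curve on synthetic data with scikit-learn; the data is a stand-in, not an imaging cohort.

<code python>
# Hedged sketch: a learning curve is the standard way to check whether more
# training data improves individual-level prediction. Toy data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Stand-in for pooled imaging-derived features and a diagnostic label.
X, y = make_classification(n_samples=3000, n_features=50, n_informative=10,
                           flip_y=0.1, random_state=0)

sizes, _, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="accuracy")

for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"n_train={n:5d}  mean CV accuracy={score:.3f}")
</code>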

Bio: Bertrand Thirion is the leader of the Parietal team, part of INRIA research institute, Saclay, France, that addresses the development of statistics and machine learning techniques for brain imaging. He contributes both algorithms and software, with a special focus on functional neuroimaging applications. He is involved in the Neurospin (CEA) neuroimaging center, one of the leading places on the use of high-field MRI for brain imaging. Bertrand Thirion is currently head of the DATAIA convergence institute.


Seminar 2019/2020: Ethical issues, law & novel applications of AI (MIE 630)

On Tuesdays, 1:30 pm-3:00 pm, Room Gilles Kahn or Sophie Germain, Alan Turing Building.

  • September 17 - From artificial intelligence to computational ethics, Jean-Gabriel Ganascia, Sorbonne University, Lip6, Chairman of the COMETS (CNRS Ethical Committee) (Amphitheatre Gay Lussac), introduced by Marie-Paule Cani

Abstract: With the development of artificial intelligence, it is now possible to design agents that are said to be autonomous, in the sense that their behavior results from a chain of physical causalities running from signal acquisition by sensors to action, without any human contribution. There are many possible applications of such agents, for instance in transportation, with autonomous cars, or in war, with autonomous weapons. Since no human is present in the loop, many fear that robots animated by such agents could behave in predatory ways. In order to prevent unsafe behaviors, references to human values have to be included in the agent's programming. More technically, it means that engineers are now designing the equivalent of an “ethical controller” to restrict the robot's actions according to moral criteria. To do so, it is necessary to encode different ethical systems, which gives birth to what is called “Computational Ethics”. In the light of the recent autonomous car accident that happened in March 2018 in Arizona, we shall detail the different ethical dimensions that such a controller has to satisfy and the technical difficulties that artificial intelligence researchers who deal with computational ethics are facing.

Bio: Jean-Gabriel Ganascia is Professor of Computer Science at Sorbonne University, senior member of the Institut Universitaire de France, EurAI – European Association for Artificial Intelligence – fellow and member of the LIP6 (Laboratory of Computer Science of the Paris 6) where he heads the ACASA team. In addition, he chairs the COMETS that is the Ethical Committee of the CNRS and he is member of the CERNA, i.e. the Ethical committee of the Digital Sciences of ALLISTENE, which is the coordination of the French research institutes in computing.

  • September 24 - Ethical Issues in AI, chatbots & robots - Laurence Devillers, Professor in Artificial Intelligence at Sorbonne University, Researcher at LIMSI-CNRS (France), Head of the team “Affective and social dimensions in Spoken interaction with (ro)bots: technological and ethical issues”, LIMSI & Paris-Sorbonne University (Room Grace Hopper - 2nd floor), introduced by Marie-Paule Cani

Abstract: In the near future, socially assistive robots and chatbots aim to address some critical gaps in care by automating the supervision, coaching, motivation, and companionship aspects of interactions with the elderly, children and disabled people. Talk during social interactions naturally involves the exchange of propositional content, but also, and perhaps more importantly, the expression of interpersonal relationships, as well as displays of emotion, affect, interest, etc. It is thus necessary that deeper ethical reflection be combined with the scientific and technological development of robots, to ensure the harmony and acceptability of their relation with human beings. New AI and robotics applications in domains such as healthcare or education must be introduced in ways that build trust and understanding, and respect human and civil rights.

Bio: Prof. Laurence Devillers, Professor in Artificial Intelligence at Sorbonne University, Researcher at LIMSI-CNRS (France), Head of the team “Affective and social dimensions in Spoken interaction with (ro)bots: technological and ethical issues”.

Prof. Laurence Devillers received her PhD degree in Computer Science from University Paris-Orsay, France, in 1992, and her HDR (habilitation dissertation) in Computer Science, “Emotion in interaction: Perception, detection and generation”, from University Paris-Orsay, France, in 2006.

Laurence Devillers is a full Professor of Computer Science and Artificial Intelligence at Sorbonne University/CNRS (LIMSI lab, Orsay), working on affective robotics, spoken dialogue, machine learning, and ethics. She is the author of more than 150 scientific publications (h-index: 35). In 2017, she wrote the book “Des Robots et des Hommes : mythes, fantasmes et réalité” (Plon, 2017), explaining the urgency of building social and affective robotic systems with ethics by design. Since 2014, she has been a member of the French Commission on the Ethics of Research in Digital Sciences and Technologies (CERNA) of Allistène and has participated in several reports on research ethics in robotics (2014) and in machine learning. Since 2016, she has been involved in “The IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems” and the 7008 working group on “Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems”. She is also involved in the DataIA institute (Orsay). She participated in the AI for Humanity forum at the Collège de France (https://www.aiforhumanity.fr), where the AI report of Cédric Villani, “For a meaningful artificial intelligence: Towards a French and European strategy”, was published and the French President Emmanuel Macron presented his vision and strategy for France and Europe in artificial intelligence. She will participate in the “Global Forum on AI for Humanity” on October 28 and 30, 2019 at the Academy of Sciences in Paris.

  • October 1st - Facial recognition: from early methods to deep learning, Stéphane Gentric, Research unit manager, IDEMIA (room Sophie Germain), introduced by Nicolas Donati and/or Magali Payan

Abstract: 2D Face Recognition is one of the oldest computer vision applications. As techniques improve, more complex databases arise, always leaving room for algorithm improvements. This lecture will review the whole face recognition pipeline, how early methods addressed the main issues and how Deep Learning handles them now. We will present major operational deployments and the most recent performance figures. Finally, we will discuss current limitations and future research avenues.

Bio: Stéphane Gentric is Research Unit Manager at Idemia (ex-Morpho) and a Deep Learning lecturer at ESIEA and Telecom ParisTech. He received his PhD on Pattern Recognition at UPMC in 1999. As principal researcher then team leader, he worked on Fingerprint recognition algorithms, then Face, then Iris and now also Video Analytics. As Senior Expert, he was involved in most of Idemia’s projects in biometrics for the past 15 years, such as the Changi border crossing System as well as NIST benchmarks, or the UIDAI project. His current research interests center around pattern recognition for improvement of biometric systems.

  • October 22 - Augmenting bodies using AI: from human know-how to Computer Aided Design - François Faure, CEO Anatoscope (room Sophie Germain, introduced by Marie-Paule Cani).

Abstract: From walking sticks to bionic arms, people have always augmented their bodies with supplementary or replacement parts to improve their function, comfort or aesthetics. For optimal efficiency, these must be personalized to precisely fit the body, and their design requires significant knowledge and skills in anatomy and mechanics. The Orthotics and Prosthetics (O&P) domain has developed a large body of know-how to replicate body parts using plaster, design and sculpt shapes, and mold corresponding devices. This is applied to various body parts such as teeth, limbs and ears. Unfortunately, these techniques are empirical and operator-dependent. To improve precision, O&P increasingly uses digital imaging and design software. However, most of the current software essentially consists of digital sculpting toolboxes, so the design process remains virtually as empirical and operator-dependent as before.

In this talk, we present Anatoscope’s approach to tackling the challenge of precision in O&P. To really improve on Computer Assisted Design for O&P, we need to map the skills of good practitioners to numerical methods implemented in computers. Knowledge can be formulated using models and algorithms, while some skills are easily expressed as rules, and others are more easily described using examples. Our artificial intelligence combines these paradigms through constrained optimizations solved using various strategies. We illustrate these using various examples of dental and orthopedic design.

Bio: François Faure, 49, graduated in Mechanical Engineering at ENS Cachan in 1993, and became a full university professor in Computer Science in Grenoble, 2011. His research contributions range from the simulation of rigid and deformable solids, collision detection, to the computation of personalized models for medical simulation. He founded Anatoscope with four colleagues in 2015, and he has been fully focused on its development since then. In three years the company has signed strategic partnerships in the dental and orthopedic domains, and grown to 40 employees.

  • November 5 - Social and Market Context for AI Advances, Hugo Loi, General Manager Pixminds (room Sophie Germain, introduced by Damien Rohmer).

Abstract: As the computing tools to match and best human intelligence get better, a massive number of real-world problems becomes accessible to artificial intelligence. Still, each and every one of us has been educated to expect humans to solve these problems, such as medical diagnosis and airplane piloting. As a consequence, very few AI solutions will make it through the ethical, legal, financial and market barriers before getting to users in the next five years. Which ones? That is the question every AI entrepreneur wants to answer. This talk discusses 5 simple yet unsolved AI problems that could make it through the filters.

Bio: Hugo Loi holds an engineering degree from Ensimag and a PhD from Grenoble University. Hugo started his career in Computer Graphics at Inria, the French National Mapping Institute, the Walt Disney Company and Princeton University. In 2016 Hugo joined Lionel Chataignier to create Pixminds, a gaming hardware corporation at the heart of the French Alps. Since then, their work of transferring great tech into great products has been awarded multiple times by various organisations such as the Consumer Electronics Show, the French Ministry of the Interior, Bpifrance and the German Design Council.

  • November 12 - From Phd to Startup creation: Real-estate Market Transparency using AI. Adrien Bernhardt, CTO Homiwoo (room Sophie Germain, introduced by Marie-Paule Cani).

Abstract: This presentation will mix a professional path, which includes a PhD in computer graphics, 4 years in Criteo's Data Science team and the creation of a startup focused on data science. The first part is dedicated to lessons from doing a PhD and from working at Criteo during its period of rapid growth, while the second part is dedicated to what we do at Homiwoo, how we do it, and the side projects we have.

Bio: Adrien Bernhardt is CTO and cofounder of Homiwoo, a startup focused on data science to model the real estate market. Previously he worked 4 years at Criteo in the Machine Learning team, where he had the opportunity to carry out many tasks related to managing machine learning models used in production. He received a PhD in Computer Science from Grenoble University in 2011, completed under the supervision of Professor Marie-Paule Cani.

Company: Homiwoo is a startup founded in 2017, focused on data science to model the real estate market. Our goal is to provide reliable and rich information to our customers to help them in their decisions.

  • November 19 - No seminar
  • November 26 - 14:30-16:00 - Law and ethics of autonomous robots, Nathalie NEVEJANS, Lecturer in Law, University of Artois (France) (room Gilles Kahn, introduced by Erwan Scornet).

Abstract: In recent years, progress in robotics and artificial intelligence has been prodigious. Civil robotics (surgical robots, industrial robots, robots for the elderly, service robots, …) and military robotics (war robots, war drones, …) are renewing the debates. However, the development of autonomous robotics will have an important impact in economic, social, legal and ethical terms.

Autonomous robotics has drawn the attention of the European legislator, as shown by the European Resolution on Civil Law Rules in Robotics of 16 February 2017. However, this text, which has no binding force, causes more difficulties than it solves. Indeed, it tends to distort the state‐of‐the‐art in robotics and to adopt a vision tinged with science fiction.

Limiting our reflection to the law and ethics of civil robots, we notice that they pose several very delicate difficulties, both legal and ethical, especially: Should we grant legal status to autonomous robots? How do we determine who is responsible for the damage caused by a robot? How will ethical issues affect civil society as a whole when autonomous robots are used?

It is impossible to skip these debates today, because the European Commission has to adopt a European Directive on autonomous robotics in 2019, which will be binding in the European Union. It is therefore essential that all these difficulties be understood right now, because of their inevitable impact on society and on human beings themselves.

Bio: Nathalie Nevejans is a lecturer in private law at the University of Artois (France), authorized to direct research projects and a member of the CNRS Ethics Committee (COMETS). Author of numerous articles, participating in events for not only the academic world but also industry, she is one of the few specialists in France on the law and ethics of robotics, artificial intelligence and emerging technologies. She is also a member of the Research Centre for Law, Ethics and Procedures (EA n° 2471), as well as the Institute for the Study of Human-Robot Relations (Etude des Relations Hommes-Robots – IERHR). Her book “Treatise of Law and Ethics of Civil Robotics”, LEH editions (1232 pages) was published in 2017.

1-law_and_ethics_of_autonomous_robots_polytechniques_paris_27_novembre_2018.pdf

2-civil_law_rules_on_robotics_2017.pdf

3-study_n._nevejans_european_civil_law_rules_on_robotics_2016.pdf

4-draft_report_on_civil_law_rules_on_robotics_2016.pdf

  • December 10 - Google AI principles, Ludovic Peran, Public Policy and Government Affairs Manager - AI, Google (room Sophie Germain, introduced by Erwan Scornet).

Abstract: This talk presents Google's principles for the responsible use of AI and their application in engineering and research, and shows how the What-If Tool can be used to inspect your models and detect equity issues.

Bio: Ludovic Péran is Product Manager in Google's Artificial Intelligence Research Department. He was previously responsible for institutional relations and public policies related to artificial intelligence at Google France. He is a lecturer at ESCP in the Master of Digital Innovation and a member of the OECD Expert Group on AI and the Board of Directors of the Digital Renaissance think tank. He is a graduate of ESCP and the Ecole d' Economie de Paris (master APE).

  • January 7 - Ethical questions in the biometrics industry, Sarah Lannes, Research engineer, IDEMIA (room Sophie Germain, introduced by Marie-Paule Cani).

Abstract: The rise of AI has brought many ethical questions to light in the public and legal spheres. However, even before this, ethics and legal matters have always been an intrinsic part of the biometrics field of research, due to its very nature. We will present both the legal and technical challenges that we encounter and the solutions we propose.

Bio: Sarah Lannes has been a Research Engineer with Idemia for five years; her main focus is face detection and video analysis. She graduated from the Ecole Centrale de Lyon and Penn State University (MS) and went on to join Let it Wave as a research engineer, focusing on image and video processing for a range of applications from geological exploration and satellite imaging to deinterlacing and video super-resolution.

  • January 14 - Thierry Rayna, Professor of Innovation Management, Ecole Polytechnique, Assessing the impact of A.I. on Business Model Innovation (room Sophie Germain, introduced by Erwan Scornet).

Abstract: While technological innovation is generally seen as the pinnacle of competitiveness, market success is seldom achieved without business model innovation, and it could indeed be argued that technological and business model innovation are two sides of the same coin. As a matter of fact, numerous examples can be found of companies that innovated technologically without adapting their business model and, as a result, met their downfall. This is particularly the case for ‘emerging’ and ‘deep-tech’ technologies, where the use cases first envisaged are rarely those that eventually prevail. A.I., as an umbrella of heterogeneous technologies, is certainly one such technology surrounded by myths and fantasy. The objective of this talk is first to shed light on what business model innovation actually is and to present tools that can be used to anticipate the impact of technologies on business models. This will then be used to discuss the expected impact of A.I. on business models.

Bio: https://www.polytechnique.edu/annuaire/fr/user/12853/thierry.rayna#

  • January 21 - Fighting blindness with bionic eyes, Vincent Bismuth, Pixium Vision (room Sophie Germain, introduced by Erwan Scornet).

Abstract: Restoring vision for the blind has long been considered a science-fiction topic. However, over the past two decades, accelerating efforts in the field of visual prostheses have yielded significant progress, and several hundred patients worldwide have received such devices, with various outcomes. This seminar will briefly present the field with a special focus on the image processing side, providing an overview of the main approaches, limitations and results.

Bio: Vincent Bismuth has built a career in the field of medical devices, centered on expertise in image processing. He spent 10+ years developing image and video processing algorithms for interventional X-ray procedures at General Electric Healthcare before moving to a French start-up, Pixium Vision, that designs vision restoration systems for the visually impaired. He recently moved to the mammography division of General Electric, where he is leading image processing developments.

  • January 28 - Transforming the digital customer journey with AI, Gonzalo Casajus Rey, AI and Platforms Manager at Carrefour, & Sylvain Marsault (room Sophie Germain, introduced by Erwan Scornet).

Abstract: Food retail is one of the last sectors of commerce largely dominated by offline business-models and for which online and digital models must be invented.

The management of the product offering, the operations and the customer journey must be completely transformed to propose to the customer an enhanced digital experience.

Understanding and anticipating each customer event to better interact with them and offer them the expected content is the biggest challenge.

Massive collection of information and the use of ML and AI techniques such as Recurrent Neural Networks, Image or Voice Recognition systems, and conversational agents are the main levers of this transformation.

During this presentation, you will discover how Carrefour's DataLab activates these levers through the use of AI techniques such as conversational agents, recommendation systems, personal and contextual search engines, and other ML services. You will dive into the main data and AI challenges of tomorrow in the retail industry.

Bio: Gonzalo Casajus Rey is an engineer specializing in AI. For several years, he has been in charge of transforming the digital journey at Carrefour. He worked on chatbots, vocal assistants, personalization of the customer experience, recommendation systems leveraging AI and ML algorithms. Parallel to his career at Carrefour, Gonzalo is an active member of the data science community. He works for the chair and gives AI courses at the IE University of Madrid and the EOI of Madrid

  • February 11 - Ethics in artificial intelligence, Issam Ibnouhsein, Quantmetry (room Sophie Germain, introduced by Marie-Paule Cani).

Abstract: Artificial intelligence developments are raising a wide variety of ethical questions, ranging from the most practical ones, such as decision-making for autonomous cars, to epistemic ones about the ability of machine learning to serve massively as a decision aid. The goal of this session is to better understand how ethics and artificial intelligence overlap, and to analyze the similarities and differences between classical procedures and machine learning ones.

Bio: Issam Ibnouhsein, Head of Research & Development at Quantmetry, has a PhD in quantum computing, worked as a data scientist and is now heading the research and development activities at Quantmetry.

  • February 25 - Leveraging computer vision advances to address real-world challenges, Jean-Baptiste Rouquier, Senior Data scientist, Associate distinguished engineer, & Sébastien Ioos, Data scientist, FUJITSU (room Sophie Germain, introduced by Marie-Paule Cani).

Abstract: In recent years, deep learning applications have gained in popularity and drawn the attention of business leaders in all market segments. The performance of deep neural networks in computer vision for object recognition, detection and segmentation, competing with human performance, has opened a new world of applications in a large variety of domains such as Retailing, Manufacturing, Security, Automotive, Energy and Healthcare. While deep learning is a hot topic in academic research, building real-world deep learning solutions suited to customer needs remains a difficult task, which has to take into account business specificities, solution scalability, ethical or legal concerns, and the potential risks related to algorithm mistakes. In this seminar, we will focus on solutions developed on top of object detection and multiple object tracking (MOT) to build computer vision systems that can be used to address various use cases developed at the Fujitsu Center of Excellence.

Bio: Jean-Baptiste Rouquier graduated from École Normale Supérieure de Lyon and worked 6 years in academic research, on complex systems, complex networks and data science applications. He was then employed as an NLP researcher for a hedge fund, then as a software engineer for feature engineering at Criteo. He went to Dataiku to work as a data scientist, trainer, expert support and consultant, then to a mutual health insurance company, before joining Fujitsu as senior data scientist for the creation of the AI Center of Excellence. He is a Fujitsu associate distinguished engineer.

Sébastien IOOSS graduated from Ecole Centrale Paris and the National University of Singapore. He joined Fujitsu in 2017, at the creation of the AI Center of Excellence, as a data scientist specializing in computer vision and deep learning applications.

  • March 3 - “AI Ethics: Principles and beyond?”, Véronique Magnier, Agrégée des facultés de droit, Professeur de Droit, Université Paris-Sud Saclay, Directeur de l'Institut Droit Ethique Patrimoine (room Sophie Germain, introduced by Marie-Paule Cani).

Abstract: AI Ethics has become a global topic in policy and academic circles. It is now recognised that AI technologies offer opportunities for fostering human dignity and promoting human flourishing, but may also be associated with risks. Hence, the major impact AI will have on society is no longer in question. Current debates turn instead on how far this impact will be positive or negative. This seminar examines the new pertinent questions, which are no longer whether AI will have an impact, but by whom, how, where, and when this positive or negative impact will be felt. From a legal perspective, these questions address the way national jurisdictions (should) offer a frame for AI activities, in a global context. So far, AI Ethics seems to converge on a set of principles. But can principles alone guarantee ethical AI? The seminar mainly questions the advantages and limits of an ethical approach to AI, questioning the associated legal methodologies.

Bio: PhD in Law and Sciences Po Paris, Véronique MAGNIER is Professor of Law at the Law school of Paris-Sud/Paris-Saclay University. She is responsible for the Master’s degree in Business, Tax & Financial Market law and founded the Grande Ecole du Droit and the legal clinic of Paris-Saclay. She joined Georgetown University as an Adjunct Professor in 2010, where she teaches an annual course, “Comparative Corporate Governance”, to LL.M. and JD students.

Veronique Magnier is the Director of the Institute “Law, Ethics & Patrimony” at Paris-Sud/Paris-Saclay University. She is the author or co-author of seminal books and articles in the areas of corporate law and corporate governance, business and ethics, European and comparative law, and constitutional civil procedure. She is the co-author, with Prof. Michel Germain, of the treatise « Sociétés commerciales, Traité de Droit commercial par Ripert et Roblot » published by LGDJ. Her most recent publications include a monograph entitled “Comparative corporate governance. A legal perspective”, published by Edward Elgar Publishing in 2017 (http://www.e-elgar.com/shop/comparative-corporate-governance), and she co-authored a book entitled “Blockchain and company law” (Dalloz 2019). She has been the Scientific Director of the Dalloz Encyclopedia for Corporate Law since 2003.

Véronique Magnier is on the board of Transparency International France and an active member of various national, European or international associations or institutes (Trans Europe Experts, Société de Legislation comparée, European Corporate Governance Institute…).

  • March 10 - Big data approaches to brain imaging & applications in psychiatry, Bertrand Thirion, Head of Parietal team, head of Dataia, Inria (room Sophie Germain, introduced by Nicolas Donati and/or Magali Payan).

Abstract: Population imaging consists in studying the characteristics of populations as measured by brain imaging. The transition to big data in this field has consisted in imaging and behavioral acquisition from larger cohorts and in the agglomeration of such datasets. An important question is whether the loss of homogeneity inherent in working on composite datasets is detrimental to prediction accuracy. We provide evidence that this is not the case: larger datasets ultimately provide more power for individual prediction and diagnosis. We also outline technical aspects of the work on large imaging datasets and benefits of challenges and collaborative work.

Bio: Bertrand Thirion is the leader of the Parietal team, part of INRIA research institute, Saclay, France, that addresses the development of statistics and machine learning techniques for brain imaging. He contributes both algorithms and software, with a special focus on functional neuroimaging applications. He is involved in the Neurospin (CEA) neuroimaging center, one of the leading places for the use of high-field MRI for brain imaging. Bertrand Thirion is currently head of the DATAIA convergence institute.

Programme 2018-2019

  • September 25 - Introduction & Presentation of the “Cases studies”, Marie-Paule Cani & Erwan Scornet
  • October 2 - Toward Responsible and Safe AI, Nozha Boujemaa, Dr Inria, director of the DataIA institute (Data Sciences, Intelligence & Society)
  • October 16 - From artificial intelligence to computational ethics, Jean-Gabriel Ganascia, Sorbonne University, Lip6, Chairman of the COMETS (CNRS Ethical Committee)
  • October 23 - Facial recognition: from early methods to deep learning, Stéphane Gentric, Research unit manager, IDEMIA
  • November 13 - Google's AI principles, Ludovic Peran, Google Paris.
  • November 20 - Unsupervised Learning on Homogeneous Manifolds, Lie groups and Structured Matrices based on Information Geometry and Souriau Lie Group Thermodynamics, Frederic Barbaresco, THALES
  • November 27 - Law and ethics of autonomous robots, Nathalie NEVEJANS, Lecturer in Law, University of Artois (France)
  • December 4 - Algorithmic fairness, Nicolas Usunier, Facebook.
  • January 8 - AI@Inria (Research in Artificial Intelligence at Inria). Bertrand Braunschweig, Inria, director of Inria Saclay research center
  • January 15 - Use of personal data and ethics: Vision & challenges from the industry, Sarah Lannes, Research engineer, IDEMIA
  • January 22 - From data protection to data empowerment : how can humans keep the upper hand? Geoffrey DELCROIX, Direction des technologies et de l’innovation, CNIL
  • January 29 - Fighting blindness with bionic eyes, Vincent Bismuth, Pixium Vision.
  • February 12 - Kernel methods for genetics - Jean-Philippe Vert, Google Paris.
  • February 19 - Is ethics computable? - Milad Doueihi, philosopher and specialist of the digital, Université Paris-Sorbonne
  • March 5 - Learning Prosthetics Design : Function, Shape, Style - François Faure, Anatoscope
  • March 12 - Gender issues in AI, chatbots & robots - Laurence Devillers, LIMSI & Paris-Sorbonne University