====== **Seminar 2022/2023: Ethical issues, law & novel applications of AI (MIE 630)** ======

On **Tuesdays, 1:30 pm-3:00 pm**, Amphitheater Sophie Germain, in the X building (near the main entrance/Receptions), or by videoconference if necessary.

//**To participate in the face-to-face seminars**, please register by e-mail with: veronique.steyer@polytechnique.edu
The number of participants is limited due to sanitary conditions.//


//Seminars are mandatory for students of the Master of Science and Technology in Artificial Intelligence and Advanced Visual Computing (Master 2nd year).//

**<color #00a2e8>September 20</color> - Ethical issues of digital and artificial intelligence - Difference between ethics, regulation, norms, standards and deontology - Jean-Gabriel Ganascia, Sorbonne University**

**Abstract:** After a general introduction to ethics and some examples of violations of moral rules in today's world due to the use of AI, this conference will be organized around four major concepts that take on a particular dimension in artificial intelligence applications: autonomy, justice (and equity; we will see that they are almost the same thing, and that everything lies in the "almost"), privacy (distinguished from the related notions of private sphere, intimacy and "extimacy") and finally transparency and explicability. As we will see with illustrations from current AI applications, these four pillars of bioethics have been misleadingly reused by AI and digital ethics committees.

**Bio:** Jean-Gabriel Ganascia is Professor of Computer Science at Sorbonne University, honorary member of the Institut Universitaire de France, EurAI (European Association for Artificial Intelligence) fellow, and member of the LIP6 (Laboratory of Computer Science of Paris 6), where he heads the ACASA team. He chaired the COMETS, the Ethics Committee of the CNRS, between 2016 and 2021. He chairs the Ethics Committee of Pôle Emploi (the public agency in charge of employment in France) and is a member of the CPEN-CCNE (comité pilote d'éthique du numérique), i.e. the ethics committee of the CCNE (comité consultatif national d'éthique). Published in March 2022 and entitled "Virtual Servitudes", his latest book deals with the ethical issues of AI.

**<color #00a2e8>October 4</color> - Performance and fairness of facial recognition algorithms, Stéphane Gentric, Research unit manager, IDEMIA**

**Abstract:** 2D facial recognition is one of the oldest computer vision applications. As techniques improve, performance increases and new topics such as fairness appear. This lecture will review the whole facial recognition pipeline and show how deep learning addresses key issues. We will present major operational deployments and the most recent performance figures. Finally, we will discuss current limitations and avenues for future research.

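The matching stage of such a pipeline typically compares fixed-size embeddings produced by a deep network. A minimal sketch of that step, assuming hypothetical precomputed embeddings and an assumed decision threshold (real systems obtain embeddings from a trained CNN and tune the threshold on evaluation data):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(emb_a, emb_b, threshold=0.6):
    """Decide 'same person' when similarity exceeds a tuned threshold.

    The threshold trades false matches against false non-matches;
    fairness audits check whether error rates stay comparable across
    demographic groups at a fixed threshold.
    """
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy vectors standing in for network outputs (hypothetical values).
probe = np.array([0.9, 0.1, 0.4])
gallery_same = np.array([0.8, 0.2, 0.5])    # near-duplicate of probe
gallery_other = np.array([-0.7, 0.6, 0.1])  # unrelated identity

print(is_same_person(probe, gallery_same))   # True (high similarity)
print(is_same_person(probe, gallery_other))  # False (low similarity)
```

The detection and embedding stages that precede this step are where deep learning does the heavy lifting; the comparison itself stays this simple.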
**Bio:** Stéphane Gentric is Chief AI Scientist at Idemia and associate professor at Telecom ParisTech. He received his PhD in Pattern Recognition from UPMC in 1999. As principal researcher, then team leader, he worked on fingerprint recognition algorithms, then face, then iris, and now also video analytics. As Fellow Expert, he has been involved in most of Idemia's biometrics projects over the past 20 years, such as the India identity program (UIDAI), the Changi border crossing system and the European border central system, as well as NIST benchmarks. His current research interests center around pattern recognition for the improvement of biometric systems.


**<color #00a2e8>October 18</color> - From PhD to startup creation: real-estate market transparency using AI. Adrien Bernhardt, Homiwoo**

**Abstract:** This presentation will mix a professional path, which includes a PhD in computer graphics, 4 years in Criteo's data science team and the creation of a startup focused on data science. The first part is dedicated to lessons from doing a PhD and working at Criteo during its fast-growth years, while the second part is dedicated to what we do at Homiwoo, how we do it, and our side projects.

**Bio:** Adrien Bernhardt is CTO and cofounder of Homiwoo, a startup focused on data science for modeling the real estate market. Previously he worked for 4 years at Criteo in the machine learning team, where he had the opportunity to carry out many tasks related to managing machine learning models used in production. He received a PhD in Computer Science from Grenoble University in 2011, under the supervision of Professor Marie-Paule Cani.

**Company:** Homiwoo is a startup founded in 2017, focused on data science for modeling the real estate market. Our goal is to provide reliable and rich information to our customers, to help them in their decisions.


**<color #00a2e8>October 25</color> - Robust, safe and explainable intelligent and autonomous systems - Raja Chatila, Sorbonne University**

**Abstract:** Deploying unproven systems in critical applications, and even in seemingly non-critical ones, can be dangerous and irresponsible, and therefore unethical and unacceptable. As AI systems based on machine learning, which statistically process data to make decisions and predict outcomes, have come into widespread use in almost all sectors, from healthcare to warfare, the need to ensure they "do the right thing" and provide reliable results has become of primary importance. Adopting a risk-based approach, the European Commission has proposed a new regulation for AI that tailors the level of regulation to the level of risk. But how should risk be evaluated and mitigated? With millions of parameters computed from data using optimization processes, the practice of building new systems from various off-the-shelf components without solid verification and validation processes, and the absence of causal links between inputs and outputs, what does it mean, concretely, to make AI systems robust, safe and explainable? Is this a reachable objective at all? And will this lead to trustworthy intelligent and autonomous systems?

**Bio:** Raja Chatila is Professor Emeritus of Artificial Intelligence, Robotics and Ethics at Sorbonne University in Paris, France. He is former director of the SMART Laboratory of Excellence on Human-Machine Interactions and of the Institute of Intelligent Systems and Robotics. He has contributed to several areas of artificial intelligence and autonomous and interactive robotics over his career. His research interests currently focus on human-robot interaction, machine learning and ethics. He is an IEEE Fellow and was President of the IEEE Robotics and Automation Society in 2014-2015. He chairs the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and is a member of the CNPEN (comité national pilote d'éthique du numérique) in France.

**<color #00a2e8>November 8</color> - Can artificial intelligence be creative? - François Levin, Ecole polytechnique**

**Abstract:** We are witnessing AI creativity flourish: new algorithms (GAN, CAN...), new models (GPT-3, DALL-E, Midjourney), new applications (text, images, audio)... Is AI really becoming creative? Or is it just a "simulation" of creativity? What does it really mean to be "creative"?

**Bio:** François Levin is a PhD candidate in philosophy at the Ecole Polytechnique.

**<color #00a2e8>November 15</color> - Quentin Panissod - Leveraging AI for public interest and hands-on implications for projects**

**Abstract:** As a major innovation trend worldwide, AI is triggering investments of tens of billions of euros yearly. According to Stanford's AI Index, half of those investments come from the finance, marketing and surveillance sectors alone, with autonomous mobility and healthcare as runners-up. This presentation will focus on how AI can be leveraged for other activities, and specifically towards the public interest. Through practical examples from different organizational configurations (business, nonprofit, partnerships...), the potential and limits of "responsible AI" will be addressed: how can AI solutions be developed in "digital-wary" sectors with high environmental costs, like construction? What are the solutions for, and limits to, building AI projects in activities without digital skills and without structured data? How do AI projects balance social stakes, environmental costs and expected results to justify the claim of developing "responsible AI"?

**Bio:** Quentin Panissod graduated as a robotics engineer from Polytech Sorbonne. As a student, he also led and co-founded nonprofit organizations for charity and national students' unions. For five years, he built the AI foresight and projects activity at Leonard, VINCI group's innovation platform, delivering 35 AI projects for the construction, energy and mobility sectors. As the Covid crisis emerged, he co-founded AI For Tomorrow, a nonprofit organization that supported 20 projects on environment, healthcare or society topics. More recently, aiming at larger-scale AI dedicated to environmental purposes, he led the creation of the RenovAIte project, a European platform of AI and data services to speed up and improve renovation of housing and roads. In parallel, he co-founded The Swarm Initiative, a mission-driven company that builds and coordinates collaborative public-interest projects using major innovation trends like AI.

**<color #00a2e8>November 29</color> - Artificial intelligence: what future European regulation? - Nathalie Nevejans, Artois University, France**

**Abstract:** To face the challenges raised by AI, gain the trust of citizens, and promote its development and use, the European Union has for several years been setting up an ethical framework for AI, but without proposing any mandatory rules. In April 2021, the EU went further by unveiling its first draft of a future regulation, this time intended to set legal rules for AI that will be binding in all EU Member States. This evolving future regulation of AI covers the AI life cycle from market introduction to professional use. The seminar will make it possible not only to understand the issues and impacts of the AI Act, but also to examine the questions and criticisms that arise on the legal and ethical level.

**Bio:** Nathalie Nevejans is Assistant Professor in Law (Artois University, France) and Head of the Chair in Law and Ethics of Artificial Intelligence (Responsible AI Chair, Artois University). Her treatise was published in 2017 (Treatise of Law and Ethics of Civil Robotics / Traité de droit et d'éthique de la robotique civile, 2017, 1232 pages). Author of numerous papers and a participant in events from both the academic world and various professional sectors (industry, health, insurance, ...), she is one of the few European specialists in the law and ethics of artificial intelligence, robotics and emerging technologies. Her interdisciplinary publications include, for example, S. O'Sullivan, N. Nevejans, et al., "Legal, Regulatory, and Ethical Frameworks for Development of Standards in Artificial Intelligence and Autonomous Robotic Surgery", The International Journal of Medical Robotics and Computer Assisted Surgery, 2018.
**<color #00a2e8>January 10</color> - Confidence in AI, Sarah Lannes, IRT SystemX**

**Abstract:** Uses of AI are overtaking traditional methods in several areas of industry. However, this raises questions about confidence in these solutions: confidence in their robustness, confidence in how they err, confidence in the decision-making process. Understanding how they work and, above all, being able to trust these new methods has become key to actually putting them into practice.

**Bio:** Sarah Lannes graduated with an MS in Multidimensional Signal Processing and has 15 years of experience as a research engineer in computer vision and AI. She started her career in a start-up company called Let it Wave, working mostly on image and video quality questions, then joined Idemia (previously Safran) in the face biometrics research team, moving on to video analysis. She recently joined IRT SystemX as an expert in a project on track surveillance for automated trains.


**<color #00a2e8>January 17</color> - Fighting blindness with bionic eyes, Vincent Bismuth, Pixium Vision**

**Abstract:** Restoring vision for the blind has long been considered a science-fiction topic. However, over the past two decades, accelerating efforts in the field of visual prostheses have yielded significant progress, and several hundred patients worldwide have received such devices, with various outcomes. This seminar will briefly present the field with a special focus on the image processing side, providing an overview of the main approaches, limitations and results.

**Bio:** Vincent Bismuth has built a career in the field of medical devices, centered on expertise in image processing. He spent over 10 years developing image and video processing algorithms for interventional X-ray procedures at General Electric Healthcare before moving to a French start-up, Pixium Vision, which designs vision restoration systems for the visually impaired. He recently moved to the mammography division of General Electric, where he leads image processing developments.

**<color #00a2e8>January 24</color> - Bring in the algorithm! The biases of predictive models under scrutiny - Fabien Tarissan, CNRS**

**Abstract:** While the applications resulting from AI techniques continue to diversify, the law could not escape the trend of automating decision-making. This is manifested in particular by the proposal to use predictive models issued from machine learning (ML) techniques to inform future decisions in judicial contexts. While the use of these techniques in the courts is legitimately debated, they are already used in law firms and, more broadly, in the legal branches of private companies to establish or support their litigation strategies.

Described in broad terms, the ML approach consists of analyzing a corpus of legal decisions to identify the main characteristics that judges took into account in settling the cases. This knowledge is then presented, in a second step, as a useful way to inform forthcoming decisions on new cases.

This talk will be the opportunity to briefly present the concepts at the core of ML techniques before discussing how their efficiency and, more importantly, their potential biases are formally assessed by computer scientists. This will address the question of possible discrimination in algorithmic recommendations, and we will see that different formulations of what a fair recommendation could be in fact lead to different, irreconcilable biases. This will raise the question of how to regulate the use of AI approaches in such a context.

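The tension between fairness formulations can be made concrete on toy data: when two groups have different base rates, a classifier can satisfy demographic parity (equal positive prediction rates) while its false positive rates still differ, a conflict formalized in the algorithmic-fairness literature. A small sketch with made-up numbers (the predictions and labels below are hypothetical, for illustration only):

```python
def positive_rate(preds):
    """Fraction predicted positive (the demographic-parity metric)."""
    return sum(preds) / len(preds)

def false_positive_rate(preds, labels):
    """Among true negatives, fraction wrongly predicted positive."""
    negatives = [(p, y) for p, y in zip(preds, labels) if y == 0]
    return sum(p for p, _ in negatives) / len(negatives)

# Made-up predictions/labels for two demographic groups whose
# base rates differ (group A: 2/4 positives; group B: 1/4).
preds_a, labels_a = [1, 1, 0, 0], [1, 1, 0, 0]
preds_b, labels_b = [1, 1, 0, 0], [1, 0, 0, 0]

# Demographic parity holds: both groups get 50% positive predictions...
assert positive_rate(preds_a) == positive_rate(preds_b) == 0.5
# ...yet the false positive rates differ because the base rates differ.
print(false_positive_rate(preds_a, labels_a))  # 0.0
print(false_positive_rate(preds_b, labels_b))  # 0.333...
```

Equalizing the false positive rates instead would break demographic parity on the same data, which is the sense in which the criteria are irreconcilable.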
**Bio:** Fabien Tarissan is a researcher in computer science at the French National Centre for Scientific Research (CNRS) and adjunct professor at École Normale Supérieure Paris-Saclay.
His work mainly concerns the analysis and modeling of large networks encountered in practice, such as the Internet, the web, social networks or legal networks. His research involves in particular the study of recommendation systems, such as those using machine learning techniques.

**<color #00a2e8>January 31</color> - Ethics in artificial intelligence, Issam Ibnouhsein, Implicity**

**Abstract:** Artificial intelligence developments are raising a wide variety of ethical questions, ranging from the most practical ones, such as decision-making for autonomous cars, to epistemic ones about the ability of machine learning to serve at scale as a decision aid. The goal of this session is to better understand how ethics and artificial intelligence overlap, and to analyze the similarities and differences between classical procedures and machine learning ones.

**Bio:** Issam Ibnouhsein, Head of Data Science at Implicity, has a PhD in quantum computing and has led various research projects at the intersection of AI and healthcare.

**<color #00a2e8>February 1st</color> - Thierry Rayna, Professor of Innovation Management, Ecole Polytechnique - Assessing the impact of A.I. on business model innovation**

**Abstract:** While technological innovation is generally seen as the pinnacle of competitiveness, market success is seldom achieved without business model innovation, and it could indeed be argued that technological and business model innovation are two sides of the same coin. As a matter of fact, numerous examples can be found of companies that innovated technologically without adapting their business model and, as a result, met their downfall. This is particularly the case for 'emerging' and 'deep-tech' technologies, where the use cases first envisaged are rarely those that eventually prevail. A.I., as an umbrella of heterogeneous technologies, is certainly one such technology surrounded by myths and fantasy. The objective of this talk is first to shed light on what business model innovation actually is and to present tools that can be used to anticipate the impact of technologies on business models. This will then be used to discuss the expected impact of A.I. on business models.

**Bio:** https://www.polytechnique.edu/annuaire/fr/user/12853/thierry.rayna#

**<color #00a2e8>February 7</color> - Leveraging computer vision advances to address real-world challenges, Jean-Baptiste Rouquier and Margarita Khokhlova, FUJITSU**

**Abstract:** In recent years, deep learning applications have gained in popularity and drawn the attention of business leaders in all market segments. The performance of deep neural networks in computer vision for object recognition, detection and segmentation, competing with human performance, has opened a new world of applications in a large variety of domains such as retail, manufacturing, security, automotive, energy and healthcare. While deep learning is a hot topic in academic research, building real-world deep learning solutions suited to customer needs remains a difficult task, which has to take into account business specificities, solution scalability, ethical or legal concerns, and the potential risks related to algorithm mistakes. In this seminar, we will focus on solutions built on top of object detection and multiple object tracking (MOT) to address various use cases developed at the Fujitsu Center of Excellence.

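A core step in such detection-plus-MOT systems is associating detections across frames; a common baseline matches bounding boxes by intersection-over-union (IoU). A minimal greedy sketch with toy boxes (hypothetical track IDs and coordinates; production trackers typically add motion models and optimal assignment, e.g. the Hungarian algorithm):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, threshold=0.3):
    """Greedily match each track to its best-overlapping new detection."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, threshold
        for i, dbox in enumerate(detections):
            score = iou(tbox, dbox)
            if i not in used and score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

# Two existing tracks and two detections in the next frame (toy boxes).
tracks = {1: (0, 0, 10, 10), 2: (20, 20, 30, 30)}
detections = [(21, 21, 31, 31), (1, 0, 11, 10)]
print(associate(tracks, detections))  # {1: 1, 2: 0}
```

Unmatched detections spawn new tracks and unmatched tracks are eventually dropped; that bookkeeping, plus robustness to occlusion and identity switches, is where the engineering effort described in the abstract goes.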
**Bio:** Jean-Baptiste Rouquier graduated from École Normale Supérieure de Lyon and worked for 6 years in academic research on complex systems, complex networks and data science applications. He was then employed as an NLP researcher for a hedge fund, then as a software engineer for feature engineering at Criteo. He went on to Dataiku to work as a data scientist, trainer, expert support and consultant, then to a mutual health insurer, before joining Fujitsu as senior data scientist for the creation of the AI Center of Excellence. He is a Fujitsu associate distinguished engineer.
Margarita Khokhlova is a data scientist at Fujitsu. Her primary area of expertise is computer vision. Before joining Fujitsu, she mainly worked in public research at IGN and LIRIS. She obtained a PhD from the University of Burgundy in 2018, where her dissertation was dedicated to automatic gait analysis using 3D active sensors. She also holds two separate master's degrees: the first a joint degree in computer vision from the University of Jean-Monnet Saint-Etienne and NTNU Gjovik, Norway; the second in business management administration from the University of Burgundy, Dijon. Her research interests include computer vision, deep learning, and clinical data analysis.

**<color #00a2e8>February 14</color> - Augmenting bodies using AI: from human know-how to computer-aided design - François Faure, Anatoscope**

**Abstract:** From walking sticks to bionic arms, people have always augmented their bodies with supplementary or replacement parts to improve their function, comfort or aesthetics. For optimal efficiency, these must be personalized to precisely fit the body, and their design requires significant knowledge and skills in anatomy and mechanics. The orthotics and prosthetics (O&P) domain has developed a large body of know-how for replicating body parts using plaster, designing and sculpting shapes, and molding the corresponding devices. This is applied to various body parts such as teeth, limbs and ears. Unfortunately, these techniques are empirical and operator-dependent. To improve precision, O&P increasingly uses digital imaging and design software. However, most current software essentially consists of digital sculpting toolboxes, so the design process remains virtually as empirical and operator-dependent as before.
In this talk, we present Anatoscope's approach to the challenge of precision in O&P. To really improve computer-aided design for O&P, we need to map the skills of good practitioners to numerical methods implemented in computers. Knowledge can be formulated using models and algorithms, while some skills are easily expressed as rules and others are more easily described using examples. Our artificial intelligence combines these paradigms through constrained optimizations solved using various strategies. We illustrate this with various examples of dental and orthopedic design.

**Bio:** François Faure, 50, graduated in Mechanical Engineering from ENS Cachan in 1993 and became a full university professor in Computer Science in Grenoble in 2011. His research contributions range from the simulation of rigid and deformable solids and collision detection to the computation of personalized models for medical simulation. He founded Anatoscope with four colleagues in 2015 and has been fully focused on its development since then. Within three years, the company signed strategic partnerships in the dental and orthopedic domains and grew to 40 employees.

**<color #00a2e8>February 21</color> - Social and market context for AI advances, Hugo Loi, Pixminds**

**Abstract:** As the computing tools to match and best human intelligence get better, a massive number of real-world problems is becoming accessible to artificial intelligence. Still, each and every one of us has been educated to have humans solve these problems, such as medical diagnosis and airplane piloting. As a consequence, very few AI solutions will make it through the ethical, legal, financial and market barriers to reach users in the next five years. Which ones? That is the question every AI entrepreneur wants to answer, and the topic of this talk.

**Bio:** Hugo Loi holds an engineering degree from Ensimag and a PhD from Grenoble University. Hugo started his career in computer graphics at Inria, the French national mapping institute, the Walt Disney Company and Princeton University. In 2016 Hugo joined Lionel Chataignier to create Pixminds, a gaming hardware company in the heart of the French Alps. Since then, their work of turning great tech into great products has been awarded multiple times by organizations such as the Consumer Electronics Show, the French Ministry of the Interior, Bpifrance and the German Design Council.

**<color #00a2e8>March 7</color> - Big data approaches to brain imaging & applications in psychiatry, Bertrand Thirion, Dataia, Inria**

**Abstract:** Population imaging consists in studying the characteristics of populations as measured by brain imaging. The transition to big data in this field has consisted in imaging and behavioral acquisition from larger cohorts and in the agglomeration of such datasets. An important question is whether the loss of homogeneity inherent in working on composite datasets is detrimental to prediction accuracy. We provide evidence that this is not the case: larger datasets ultimately provide more power for individual prediction and diagnosis. We also outline technical aspects of working with large imaging datasets and the benefits of challenges and collaborative work.

**Bio:** Bertrand Thirion is a researcher at the Inria research institute in Saclay, France, where he develops statistics and machine learning techniques for brain imaging. He contributes both algorithms and software, with a special focus on functional neuroimaging applications. He is involved in Neurospin, the CEA neuroimaging center and one of the leading high-field MRI sites for brain imaging. Bertrand Thirion created and managed the Parietal team (2009-2022). From 2018 to 2021, he was head of the DATAIA Institute, which federates research on AI, data science and their societal impact at Paris-Saclay University. In 2020, he was appointed as a member of the expert committee in charge of advising the government during the Covid-19 pandemic. In 2021, he became Head of Science (délégué scientifique) of the Inria Saclay-Île-de-France research center. Bertrand Thirion is PI of the Karaib AI Chair and of the Individual Brain Charting project.



====== Seminar 2021/2022: Ethical issues, law & novel applications of AI (MIE 630) ======
  
seminarprogramm.txt · Last modified: 2023/02/15 15:35 by payan