Ferdinando Fioretto

Assistant Professor of Computer Science, University of Virginia

Title: Privacy and Fairness in Societal Systems

Abstract

Differential Privacy has become the go-to approach for protecting sensitive information in data releases and in learning tasks that feed critical decision processes. For example, census data is used to allocate funds and distribute benefits, and many corporations use machine learning systems for criminal risk assessments, hiring decisions, and more. While this privacy notion provides strong guarantees, we will show that it may also induce biases and fairness issues in downstream decision processes. These issues may adversely affect many individuals’ health, well-being, and sense of belonging, and they are currently poorly understood.

In this talk, we delve into the intersection of privacy, fairness, and decision processes, with a focus on understanding and addressing these fairness issues. We first provide an overview of Differential Privacy and its applications in data release and learning tasks. Next, we examine the societal impacts of privacy through a fairness lens and present a framework that illustrates which aspects of the private algorithms and/or data may be responsible for exacerbating unfairness. We then show how to extend this framework to assess the disparate impacts arising in machine learning tasks. Finally, we propose a path to partially mitigate these fairness issues and discuss grand challenges that require further exploration.
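As background for the guarantee referenced in the abstract (the standard definition, stated here for context rather than taken from the talk): a randomized mechanism M satisfies epsilon-differential privacy if, for every pair of datasets D and D' differing in a single individual's record and every set S of possible outputs,

\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S].
\]

Smaller values of epsilon correspond to stronger privacy protection, typically at the cost of added noise in the released data or model.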


Short bio

Ferdinando Fioretto is an assistant professor at the University of Virginia. He works at the intersection of machine learning, optimization, privacy, and ethics. His recent work focuses on two themes: (1) analyzing the equity of AI systems that support decision-making and learning tasks, and designing algorithms that better align with societal values; and (2) developing the foundations for blending deep learning with mathematical optimization, enabling the integration of knowledge, constraints, and physical principles into learning models.

He is a recipient of the 2022 NSF CAREER award, the 2022 Amazon Research Award, the 2022 Google Research Scholar Award, the 2022 Caspar Bowden PET Award, the 2021 ISSNAF Mario Gerla Young Investigator Award, the 2021 ACP Early Career Researcher Award, the 2017 AI*IA Best AI Dissertation Award, and several best paper awards. He is also actively involved in organizing several workshops, including the Privacy-Preserving Artificial Intelligence workshop at AAAI, the Algorithmic Fairness through the Lens of Causality and Privacy workshop at NeurIPS, and the Optimization and Learning in Multi-Agent Systems workshop at AAMAS.