Differential Privacy has Bounded Impact on Fairness in Classification

Michaël Perrot
Inria

In this talk, I will present our recent theoretical study of the impact of differential privacy on fairness in classification. More precisely, we prove that, given a class of models, popular group fairness measures are continuous with respect to the parameters of the model. This result is a consequence of a more general statement on accuracy conditioned on an arbitrary event (such as membership in a sensitive group), which may be of independent interest. We use this property to prove a non-asymptotic bound showing that, as the number of samples increases, the fairness level of private models approaches that of their non-private counterparts. This bound also highlights the role of a model's confidence margin in the disparate impact of differential privacy.
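To give a concrete sense of the kind of statement involved, here is a minimal illustrative sketch using demographic parity as the group fairness measure. The notation ($h_\theta$, $S$, the Lipschitz constant $L$) and the Lipschitz form of the bound are assumptions made for illustration; the measures, assumptions, and constants in the talk may differ.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Assumed notation (illustrative, not from the talk): h_\theta is a
% classifier with parameters \theta, X the features, and S a binary
% sensitive attribute.

Demographic parity measures the gap in positive prediction rates
between the two sensitive groups:
\[
  F(\theta) = \Pr\!\big[h_\theta(X) = 1 \mid S = 1\big]
            - \Pr\!\big[h_\theta(X) = 1 \mid S = 0\big].
\]

If, under suitable assumptions on the model class and the data
distribution, $F$ is $L$-Lipschitz in the parameters, then for
privately trained parameters $\theta_{\mathrm{priv}}$ and their
non-private counterpart $\theta$,
\[
  \big| F(\theta_{\mathrm{priv}}) - F(\theta) \big|
  \le L \,\lVert \theta_{\mathrm{priv}} - \theta \rVert,
\]
% so any guarantee that \theta_priv concentrates around \theta as the
% sample size grows translates into a bound on the fairness gap
% between the private and non-private models.
so any guarantee that $\theta_{\mathrm{priv}}$ concentrates around
$\theta$ as the number of samples grows translates into a bound on
the fairness gap between the private and non-private models.
\end{document}
```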