Ljupcho Grozdanovski

Permanent Research Associate, University of Liège

Abstract

The aim of this presentation is to bridge the concept of explainability in AI with that of (legal) evidence. We will specifically explore the epistemology of, and the interrelationship between, explanations relative to the (in)accuracy of AI output and causal explanations pertaining to the link between that output and a harm suffered.

With explanatory accuracy as the common thread, our analytical framework rests on two main theoretical touchstones (and corresponding methodologies). The first strand consists of general knowledge construction theory and the epistemology of legal evidence. This strand informs us of the conditions that must be met for explanations pertaining to AI output, and those pertaining to causality in connection with that output, to be considered accurate (or at least plausible). The second theoretical touchstone is the theory of justice and procedural fairness. The choice of this strand is justified by the fact that trials are the privileged epistemic contexts in which accurate explanations of disputed facts are sought. To meet the standards of accuracy these contexts require, litigants in AI liability disputes should enjoy procedural entitlements allowing them to give evidence and explain causation under conditions of procedural parity. In light of this (procedural) equality requirement, which should frame the pursuit of factual accuracy in adjudicatory contexts, the key analytical referent for this study is the theory of procedural abilities, i.e. the entitlements that litigants should enjoy in order to effectively make their views known before a court.

Against this backdrop, and with a focus on the European Union's (EU) regulation of AI, we will seek to answer two questions: 1. in cases of harm occasioned by the use of an AI system, does the accuracy of causal explanations in AI-related disputes depend on the accuracy of the explanations given of a system's functionalities? 2. if so, should the applicable systems of evidence in the EU include the procedural ability for litigants to request and/or give evidence and explanation of how a given system caused harm? To answer these questions, we will critically examine the systems of evidence in the EU's upcoming procedural regulation of AI, namely the AI Liability Directive (AILD) and the Revised Product Liability Directive (RPLD). Both instruments grant victims of AI-related harm the right to request evidence (and explanation), but not for the purpose of uncovering how an AI system actually caused harm (post hoc explainability); rather, the purpose is to determine whether a human agent (programmer or user) complied with technical standardization legislation, such as the AI Act (ad hoc explainability).

An analysis of the available (mostly North American) caselaw on AI liability suggests that the EU's systems of evidence are open to criticism. First, that caselaw reveals a trend of litigants consistently seeking evidence on how a given system actually caused harm. To this end, they naturally require post hoc explanations. The examined caselaw also reveals that 'opening the black box' is not always feasible, pushing courts to request expert evidence that can support arguments on the causal link between an AI system and a harm suffered. By limiting the evidence (and corresponding explanations) to ad hoc explainability (i.e. compliance with the technical standards in the AI Act), the AILD and RPLD do not seem to leave much room for litigants to request additional evidence, such as reverse engineering or expert testimony, that could provide them with the explanations they need to effectively argue causation.

Second, neither the AILD nor the RPLD mentions the proof of reliance on (harmful) automated decisions. This is the missing explanatory piece in the instruments considered: as the examined national caselaw shows, the explanation that victims highlight as necessary is, again, not whether a human agent complied with applicable technical standards. They seek explanations of the reasons why that agent believed they should rely on a given decision (the noteworthy point being that those reasons may or may not be rooted in the agent's compliance with an instrument like the AI Act).

Perhaps, when the AILD and RPLD become binding, court practice will interpret their provisions in a way that enhances litigants' procedural abilities to request the evidence they need in order to better explain and debate causation. However, until national and EU courts begin applying these instruments, we remain in a wait-and-see position and can only speculate on how they ought to be applied so that the basic requirements of fairness (such as the equality of arms) can be fully observed.


Short bio

Ljupcho Grozdanovski is a Permanent Research Associate (National Research Foundation - FNRS/University of Liège) currently conducting a project on evidence and procedural justice in the field of Artificial Intelligence (AI). His project explores the procedural means and entitlements that can, or should, be offered to private parties with a view to allowing them to effectively make their views known before a court and seek justice on the grounds of the law of the European Union (EU law). Prior to receiving tenure, Ljupcho taught EU law and International law at the University of Nantes (2021-2022) and was a member of the 2020/2021 class of the Emile Noël Fellowship at NYU Law School, under the direction of Professors Joseph Weiler and Gráinne de Búrca. Within this Fellowship, he conducted a research project aimed at proposing a (fairness-based) general theory of evidence in EU law. In 2024, Ljupcho will be a Visiting Researcher at the Institute for Ethics in AI (University of Oxford), directed by Professor John Tasioulas, where he will conduct research on the interrelationship between procedural fairness, the explainability of (harmful) AI output, and the (evidentiary) causal explanations given by litigants in disputes dealing with so-called AI liability.

Ljupcho completed his BA in Law (2006) and holds Master's degrees in International Law and European Studies (2007) and in EU Law (valedictorian, class of 2008) from the University of Strasbourg. In 2015 he defended his doctoral dissertation on 'Presumption in EU law', under the supervision of Prof. Valérie Michel (summa cum laude, University of Aix-Marseille, France). The dissertation provides a comprehensive framework for analysing the principles governing the adducing, assessment, and rebuttal of presumptive evidence in both EU Institutional Law and EU Substantive Law. During his doctoral research, he worked as a research and teaching assistant (2009-2014) at the University of Geneva. After a first post-doc at the University of Neuchâtel (2016-2018), he completed a post-doctoral research project at the University of Liège (2019-2020), focusing on AI's labour-replacement effects.

In line with the research project he currently conducts within his permanent FNRS position, Ljupcho was awarded the Jean Monnet Centre of Excellence label in 2023 and will act as coordinator of the Justice and AI Jean Monnet Centre of Excellence: Judicial Redress in the Rising European and Global AI Litigation (JUST-AI JMCE).

His fields of research include the Theory of Legal Evidence, Theories of Justice, Legal Reasoning and Epistemology, New Technologies and Data Protection, EU Institutional Law, and EU Substantive Law.