The need for an interdisciplinary approach to AI research: developing a research ethics framework for AI

Anais Resseguier
Trilateral Research

The need for an interdisciplinary approach It is now well recognised that ethical AI requires an interdisciplinary approach, i.e., bringing together experts from a wide range of disciplines and sectors: in addition to technical developers and researchers, lawyers, social scientists, policy-makers, legislators/regulators, etc. This paper discusses one way to promote such an interdisciplinary approach to AI. Rather than trade-offs (which suggest that parties must give up something important to them to reach an agreement), it seeks to open a space for dialogue that can sustain the interdisciplinary approach ethical AI requires. It stresses the key role that research ethics frameworks can play in research projects developing and/or using Artificial Intelligence (AI). The paper highlights that these frameworks need both (a) requirements for compliance with emerging norms and principles to govern this technology and (b) an open process of reflection and attention to research and innovation in this area.

Research ethics frameworks for AI The field of AI ethics has seen intense development since 2015, with numerous governmental and international bodies, institutions, and companies creating guidelines, frameworks, and sets of principles for AI governance (Jobin et al., 2019). However, these initiatives have also received sharp critiques from experts in the field, including that of being a form of “ethics washing” (Wagner, 2018; Resseguier and Rodrigues, 2020) or of reproducing existing power structures and inequalities (D’Ignazio and Klein, 2020). The proposal in this paper seeks to address these critiques by focusing on review processes as handled by research ethics committees (RECs). As the AI ethics field currently works toward operationalisation, research ethics constitutes a powerful but so far underdeveloped framework for making AI ethics more effective at the level of research (Santy et al., 2021). The present paper proposes a two-pronged approach to the operationalisation of AI ethics in research ethics frameworks: (a) compliance with requirements imposed on researchers and (b) an open process of attention and reflection. In the words of the philosopher Georges Canguilhem, while the former aspect of ethics is about engaging with the norms, the latter attends to the capacity to determine the norms, i.e., the “normative capacity” (Canguilhem, 1991).

Compliance requirements and the potential role of the European AI Act On the side of norms requiring compliance (a), the paper encourages the imposition of specific requirements within research ethics frameworks embedded in institutions. These norms, principles, or requirements should be accompanied by mechanisms to ensure compliance, such as the possibility of withdrawing funding if they are not fulfilled (as is the case, for instance, with the ethics appraisal scheme for research projects funded under the European Commission’s Horizon Europe funding programme). Requiring compliance with certain criteria makes it possible to draw red lines and to better orient AI research away from potential harms caused by this technology, such as mass surveillance or discrimination. Requirements from the European Union’s AI Act, currently under development, will assuredly constitute a key reference for research ethics norms. Although, in the current form of the draft (as of June 2023), the obligations of the AI Act do not apply to scientific research, these obligations are nonetheless likely to have a strong impact on AI research, given the need to anticipate placement on the market or testing in real-world conditions (European Parliament, 2023). This paper explores key implications of the AI Act for research ethics frameworks.

An open process of reflection and attention In addition, ethics review frameworks offer a space for an open process of reflection and attention (b). The focus here is on questioning established norms and ways of doing things through open reflection and a continuously renewed form of attention to both technical advances in the field and social developments and concerns. This corresponds to the level of the “normative capacity”, to use Canguilhem’s term as defined in the previous section, i.e., the capacity to pay attention to a new situation, reflect on it, and challenge existing norms if needed to adapt to the novelty one faces. Considering the uncertainty AI brings to societies, this constantly renewed attention and reflection is essential. In-depth critical social sciences and humanities (SSH) studies, for instance, are crucial to sustaining such open reflection and renewed attention (e.g., Crawford, 2021). The submission of a societal impact statement as part of an ethics submission for AI research projects can serve to embed such reflection within the ethics review process (Bernstein et al., 2021; Ada Lovelace Institute, 2022).

By distinguishing the level of the norms from that of the open process of attention and reflection, highlighting their respective values and the way they relate to each other, this paper contributes to advancing AI ethics through its operationalisation in research ethics frameworks. By embedding legal requirements as well as insights from SSH, this proposal brings an interdisciplinary approach to the development of AI. The aim is ultimately to make AI ethics not only more effective but also more thoughtful.

REFERENCES

Ada Lovelace Institute. (2022, Dec). Looking before We Leap. Expanding Ethical Review Processes for AI and Data Science Research. Retrieved from https://www.adalovelaceinstitute.org/report/looking-before-we-leap/

Bernstein, M. S., Levi, M., Magnus, D., Rajala, B. A., Satz, D., Waeiss, Q. (2021, Dec). Ethics and Society Review: Ethics Reflection as a Precondition to Research Funding. Proceedings of the National Academy of Sciences, 118(52).

Canguilhem, G. (1991). The Normal and the Pathological. Translated by Carolyn R. Fawcett. Princeton: Princeton University Press.

Crawford, K. (2021). Atlas of AI. New Haven; London: Yale University Press.

D’Ignazio, C., Klein, L. F. (2020). Data Feminism. Cambridge, MA; London, England: MIT Press.

European Parliament. (2023, June). Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)). Retrieved from https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html

Jobin, A., Ienca M., Vayena, E. (2019). The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence 1(9), 389–99.

Mills, C. (2005). “Ideal Theory” as an Ideology. Hypatia, 20(3), 165–84.

Rességuier, A., Rodrigues, R. (2020). AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics. Big Data & Society, 7(2).

Santy, S., Rani, A., Choudhury, M. (2021). Use of Formal Ethical Reviews in NLP Literature: Historical Trends and Current Practices. CoRR, abs/2106.01105.

Wagner, B. (2018). Ethics as an Escape from Regulation: From Ethics-Washing to Ethics-Shopping. In Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen, ed. Emre Bayamlioglu et al., Amsterdam: Amsterdam University Press.