New Preprint: Updating Purpose Limitation for AI

Hannah Ruschemeier and I present “purpose limitation for models” as a regulatory approach to curbing the misuse risks posed by trained AI models.

Abstract: This paper addresses a critical regulatory gap in the EU’s digital legislation, including the proposed AI Act and the GDPR: the risk of secondary use of trained models and anonymized training datasets. Anonymized training data, such as patients’ medical data collected with consent for clinical research, as well as AI models trained on this data, pose the threat of being freely reused in potentially harmful contexts such as insurance risk scoring and automated job applicant screening. To address this, we propose a novel approach to AI regulation, introducing what we term purpose limitation for training and reusing AI models. This approach mandates that those training AI models define the intended purpose (e.g., “medical care”) and restrict the use of the model solely to this stated purpose. Additionally, it requires alignment between the intended purpose of the training data collection and the model’s purpose.

The production of predictive and generative AI models signifies a new form of power asymmetry. Without public control over the purposes for which existing AI models can be reused in other contexts, this power asymmetry poses significant individual and societal risks in the form of discrimination, unfair treatment, and exploitation of vulnerabilities (e.g., the risk of medical conditions being implicitly estimated in job applicant screening). Our proposed purpose limitation for AI models aims to establish accountability and effective oversight and to prevent collective harms arising from this regulatory gap.

Originating from an interdisciplinary collaboration between ethics and legal studies, our paper proceeds in four steps: (1) defining purpose limitation for AI models, (2) examining the ethical reasons supporting purpose limitation for AI models, (3) critiquing the inadequacies of the GDPR, and (4) evaluating the proposed AI Act’s shortcomings in addressing the regulatory gap. Through these interconnected steps, we advocate for amending current AI regulation with an updated purpose limitation principle to close one of its most severe regulatory loopholes.

Download and bibliographic information

  1. Mühlhoff, Rainer, and Hannah Ruschemeier. 2024. “Updating Purpose Limitation for AI: A Normative Approach from Law and Philosophy”. SSRN Preprint, January. https://papers.ssrn.com/abstract=4711621.
