New paper alert: Purpose Limitation for AI Models

We are excited to announce the publication of our paper, “Regulating AI with Purpose Limitation for Models” by Rainer Mühlhoff and Hannah Ruschemeier, featured in the opening issue of the new journal “AI Law and Regulation”. In this study, we introduce the concept of applying purpose limitation to AI models as a novel approach to curbing the unregulated secondary use of trained models, addressing risks such as discrimination and the infringement of fundamental rights.

Our interdisciplinary research highlights the growing informational power asymmetry between data companies and society and argues that current regulation, focused mainly on data protection, falls short of curbing the misuse of trained models. By shifting the regulatory focus from training data to trained models, we advocate a framework that places AI’s predictive and generative capabilities under democratic control, ensuring they are used in ways that benefit society without undermining individual or collective rights.

This paper is a call to action for lawmakers, technologists, and the public to rethink how we regulate AI, aiming for a future where AI serves the public good while respecting privacy and equity. Dive into our full analysis and join the conversation on how we can achieve a more equitable and controlled use of AI technologies.

An earlier version of the paper was presented at the EAI CAIP – AI for People conference on November 24, 2023, in Bologna.

Download options and bibliographic data

  1. Mühlhoff, Rainer, and Hannah Ruschemeier. 2024. “Regulating AI via Purpose Limitation for Models.” AI Law and Regulation. https://dx.doi.org/10.21552/aire/2024/1/5.
