New Preprint: Purpose Limitation for Models

Hannah Ruschemeier and I present a new regulatory idea for containing the risks of misuse of trained AI models. The paper will be presented at the EAI CAIP23 conference (AI for People).

Abstract: This paper proposes the concept of purpose limitation for models as an approach to democratising AI via effective regulation. We aim to define the purposes of machine learning models built for predictive analytics and generative AI in democratic processes. Unregulated (secondary) use of specific models creates immense individual and societal risks, including discrimination against individuals or groups, infringement of fundamental rights, or distortion of democracy through misinformation. We argue that possession of trained models, which in many cases consist of anonymous data (even if the training data contains personal data), is at the core of an increasing asymmetry of informational power between data companies and society. Combining ethical and legal aspects in our interdisciplinary approach, we identify the trained model as the object of regulatory intervention instead of the training data. This altered focus adds to existing data protection laws and the proposed Artificial Intelligence Act, which are inefficient in preventing the misuse of trained models due to their focus on the procedural aspects of personal data or training data. By enabling the concept of risk prevention law and the principle of proportionality, we argue that the potential use of trained models in ways that are damaging to society by powerful actors warrants preventive regulatory interventions. Thus, we seek to balance the asymmetry of power by enabling democratic control of where and how predictive and generative AI capabilities may be used by identifying beneficial purposes.

Download and bibliographic details

  1. Mühlhoff, Rainer, and Hannah Ruschemeier. 2023. "Democratising AI via Purpose Limitation for Models". SSRN preprint, accepted at CAIP23.