Video: Purpose Limitation for Models – presentation at CAIP23

Learn why secondary use of trained AI models is an enormous risk that is left unaddressed by the AI Act and the GDPR. Hannah Ruschemeier and I presented our current research at the AI for People conference CAIP23 in Bologna.

Video of the presentation

Our paper

  1. Mühlhoff, Rainer, and Hannah Ruschemeier. 2023. "Democratising AI via Purpose Limitation for Models". SSRN preprint, accepted at CAIP23. https://dx.doi.org/10.2139/ssrn.4599869.

Abstract: This paper proposes the concept of purpose limitation for models as an approach to democratising AI via effective regulation. We aim to define the purposes of machine learning models built for predictive analytics and generative AI in democratic processes. Unregulated (secondary) use of specific models creates immense individual and societal risks, including discrimination against individuals or groups, infringement of fundamental rights, or distortion of democracy through misinformation. We argue that possession of trained models, which in many cases consist of anonymous data (even if the training data contains personal data), is at the core of an increasing asymmetry of informational power between data companies and society. Combining ethical and legal aspects in our interdisciplinary approach, we identify the trained model as the object of regulatory intervention instead of the training data. This altered focus adds to existing data protection laws and the proposed Artificial Intelligence Act, which are inefficient in preventing the misuse of trained models due to their focus on the procedural aspects of personal data or training data. By enabling the concept of risk prevention law and the principle of proportionality, we argue that the potential use of trained models in ways that are damaging to society by powerful actors warrants preventive regulatory interventions. Thus, we seek to balance the asymmetry of power by enabling democratic control of where and how predictive and generative AI capabilities may be used by identifying beneficial purposes.
