The Predictive Privacy Project
Data Protection in the Context of Big Data and AI
Privacy in the era of Big Data and AI is not only an ethical and philosophical problem but equally a question of political regulation. I therefore explore the legal implementation of predictive privacy in an ongoing collaboration with legal scholar Hannah Ruschemeier. See also our joint work on Updating Purpose Limitation for AI.
What is Predictive Privacy?
Big Data and Artificial Intelligence pose a new challenge to the traditional understanding of privacy. These techniques can be used to make predictions – for example about human behaviour, the progression of a disease, security risks or purchasing behaviour. The basis for such predictions is a comparison of behavioural data (e.g. usage, tracking or activity data) of the individual concerned with the data of many other individuals. When machine learning and data analytics technology is used to predict future behaviour or unknown information about individuals by pattern matching in large data sets, I refer to this as “predictive analytics”.
Predictive analytics is frequently associated with useful applications that improve, for example, our health care. However, the potential for misuse is just as great: predictive analytics also makes it possible to infer sensitive attributes such as gender, sexual orientation, predisposition to disease, mental health or political attitudes without those concerned realizing it. Such estimates are used, for example, to determine insurance premiums, creditworthiness, advertising and product prices for each individual user.
The concept
I use the concept of “predictive privacy” to study data protection and informational privacy in the context of predictive analytics. The approach specifically addresses the risk of inferred information being misused. A person’s predictive privacy covers not only the information they disclose about themselves but also the information that can be guessed about them by (algorithmically) matching their data with the data of many other people. Predictive privacy is thus violated when sensitive information about a person is predicted without their knowledge and against their will. It is potentially violated by data analytics and machine learning applications in risk scoring, credit scoring, automated job selection, differential pricing, algorithmic triage, and similar domains.
Research paper
- Mühlhoff, Rainer. 2021. “Predictive Privacy: Towards an Applied Ethics of Data Analytics”. Ethics and Information Technology. doi:10.1007/s10676-021-09606-x.
Collective privacy: Data protection is not a private decision
Predictive privacy not only extends the traditional and familiar concept of (informational) privacy, it also implies a collectivist ethical approach to data protection. The term “data protection” refers to the legal norms and regulations aimed at protecting the fundamental rights of individuals and groups against possible violations caused by data processing. The idea of data protection is to mitigate the power imbalance that the use of data technology creates between data-processing organisations and citizens.
Given the violations of predictive privacy enabled by modern predictive analytics technologies, data protection as implemented by the EU’s General Data Protection Regulation (GDPR) faces a fundamental obstacle. The data on which predictive models are trained are usually collected legally, either with user consent or as anonymous data – which is still suitable for training machine learning algorithms that find correlations between, for example, behavioural data and sensitive attributes.
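The following is a minimal, hypothetical sketch of this problem (the feature names and data are invented, and scikit-learn stands in for a real analytics pipeline): the training records carry no names or identifiers whatsoever, yet the resulting model can be applied to any identified person whose behavioural data is available.

```python
# Hypothetical illustration: "anonymised" records without any identifiers
# are enough to fit a model that predicts a sensitive attribute.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Anonymised training set: behavioural features plus a sensitive attribute,
# but no names or user IDs (all values are invented).
train = pd.DataFrame({
    "hours_online":   [1.0, 5.5, 0.5, 6.0, 4.5, 0.8],
    "night_activity": [0,   1,   0,   1,   1,   0],
    "health_risk":    [0,   1,   0,   1,   1,   0],   # sensitive target
})

model = LogisticRegression().fit(
    train[["hours_online", "night_activity"]], train["health_risk"]
)

# The same behavioural features of a known, identified person yield a
# prediction of the sensitive attribute, even though that person never
# appeared in the training data.
alice = pd.DataFrame({"hours_online": [5.0], "night_activity": [1]})
print(model.predict(alice))
```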
Predictive analytics operates precisely in the blind spot of the individualistic Western notion of privacy: it exploits the masses of data (big data) voluntarily disclosed by individual users who decide for themselves that they “have nothing to hide”. While the individual decision to reveal information, such as when using a digital service, often seems marginal or irrelevant to the user in terms of loss of privacy, on a large scale the data collected through millions of such decisions reveal predictive knowledge about all of us. How this predictive knowledge may be used is poorly regulated, and many applications are detrimental to the individuals concerned or to society.
The individualistic conception of Western privacy legislation is therefore facing a dead end: to protect predictive privacy, we need a collectivist interpretation of data protection. Predictive analytics can be used to derive sensitive information about a data subject from the information disclosed by many other individuals. That is, the data you disclose potentially helps discriminate against others, and the data others disclose about themselves can be used to make predictions about you.
Example: Guessing intimate information from Facebook likes
For a data company like Facebook, it is possible to build predictive models that infer the sexual orientation or relationship status of Facebook users based on their “likes” – researchers have shown that only a few likes from a user are enough (Kosinski et al. 2013). To train such a model, Facebook may proceed as follows: a small number of users – for example, only 5% – explicitly state their sexual orientation or relationship status in their Facebook profile. With a total of 2.8 billion users worldwide, even these 5% make up a very large cohort of roughly 140 million people for whom Facebook has both the Facebook likes (proxy variable) and the information on sexual orientation or relationship status (target variable).
Through “supervised learning”, a predictive model is then trained on the data of these users; it learns to predict the target variable from the proxy variable. Once such a model has been trained, it can infer the sexual orientation or relationship status of any user based solely on their Facebook likes, even where this information has never been explicitly provided. Facebook can therefore classify almost all of its users according to these sensitive parameters – and users are unaware of having been classified in this way, even though they deliberately chose not to share this information on their profiles.
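This training step can be sketched in a few lines of Python – a toy illustration with synthetic data and an invented correlation, not Facebook’s actual pipeline:

```python
# Toy sketch of the supervised-learning step described above
# (synthetic data; page indices, rates and the "ground truth" are invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_pages = 10_000, 500

# Proxy variable: binary matrix, likes[u, p] = 1 if user u liked page p.
likes = (rng.random((n_users, n_pages)) < 0.05).astype(int)

# Toy ground truth: the target attribute correlates with liking certain pages.
attribute = (likes[:, [0, 1, 2]].sum(axis=1) >= 1).astype(int)

# Only ~5% of users state the attribute on their profile.
labelled = rng.random(n_users) < 0.05

# Train on the small labelled cohort ...
model = LogisticRegression(max_iter=1000).fit(
    likes[labelled], attribute[labelled]
)

# ... then infer the attribute for the other ~95%, who never disclosed it.
inferred = model.predict(likes[~labelled])
print(f"attribute inferred for {len(inferred)} users who never disclosed it")
```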
Further sensitive information that can be determined from Facebook likes includes the user’s ethnic background, religious and political views, psychological personality traits, intelligence, “happiness”, addictive behaviour, childhood with divorced parents, age and gender (Kosinski et al. 2013). Other studies show that numerous health issues can be inferred from Facebook data, including self-harm, depression, anxiety disorders, psychosis, diabetes and hypertension (Merchant et al. 2019).
Talk: Predictive Privacy, CAIS Kolloquium, Bochum, 14 December 2022.
Research articles on Purpose Limitation for Models
- Mühlhoff, Rainer, and Hannah Ruschemeier. 2024. “Updating Purpose Limitation for AI: A Normative Approach from Law and Philosophy”. SSRN Preprint, January. https://papers.ssrn.com/abstract=4711621.
- Mühlhoff, Rainer, and Hannah Ruschemeier. 2024. “Regulating AI via Purpose Limitation for Models”. AI Law and Regulation. doi:10.21552/aire/2024/1/5.
- Mühlhoff, Rainer. 2024. “Das Risiko der Sekundärnutzung trainierter Modelle als zentrales Problem von Datenschutz und KI-Regulierung im Medizinbereich”. In KI und Robotik in der Medizin – interdisziplinäre Fragen, edited by Hannah Ruschemeier and Björn Steinrötter. Nomos. doi:10.5771/9783748939726-27.
Research articles on Predictive Privacy
- Mühlhoff, Rainer. 2023. “Predictive Privacy: Collective Data Protection in the Context of AI and Big Data”. Big Data & Society, 1–14. doi:10.1177/20539517231166886.
- Mühlhoff, Rainer. 2021. “Predictive Privacy: Towards an Applied Ethics of Data Analytics”. Ethics and Information Technology. doi:10.1007/s10676-021-09606-x.
- Mühlhoff, Rainer, and Hannah Ruschemeier. 2022. “Predictive Analytics und DSGVO: Ethische und rechtliche Implikationen”. In Telemedicus – Recht der Informationsgesellschaft, Tagungsband zur Sommerkonferenz 2022, edited by Hans-Christian Gräfe and Telemedicus e.V., 38–67. Deutscher Fachverlag.
- Mühlhoff, Rainer, and Theresa Willem. 2023. “Social Media Advertising for Clinical Studies: Ethical and Data Protection Implications of Online Targeting”. Big Data & Society, 1–15. doi:10.1177/20539517231156127.
- Mühlhoff, Rainer. 2022. “Prädiktive Privatheit: Kollektiver Datenschutz im Kontext von Big Data und KI”. In Künstliche Intelligenz, Demokratie und Privatheit, edited by Michael Friedewald, Alexander Roßnagel, Jessica Heesen, Nicole Krämer, and Jörn Lamla, 31–58. Nomos. doi:10.5771/9783748913344-31.
- Mühlhoff, Rainer. 2020. “Prädiktive Privatheit: Warum wir alle »etwas zu verbergen haben«”. In #VerantwortungKI – Künstliche Intelligenz und gesellschaftliche Folgen, edited by Christoph Markschies and Isabella Hermann. Vol. 3/2020. Berlin-Brandenburgische Akademie der Wissenschaften.
Essays on predictive privacy
- Mühlhoff, Rainer. 2020. “We Need to Think Data Protection Beyond Privacy: Turbo-Digitalization after COVID-19 and the Biopolitical Shift of Digital Capitalism”. Medium, March. doi:10.2139/ssrn.3596506.
- Mühlhoff, Rainer. 2020. “Digitale Grundrechte nach Corona: Warum wir gerade jetzt eine Debatte über Datenschutz brauchen”. Netzpolitik.org, 31 March 2020.
- Mühlhoff, Rainer. 2020. “Die Illusion der Anonymität: Big Data im Gesundheitssystem”. Blätter für Deutsche und Internationale Politik 8: 13–16.