Instructions on video homework
Lecture: Introduction to the Ethics of AI, Summer Term 2024.
Assignment: Video/audio homework
If you would like to obtain ECTS credits for the lecture, you need to hand in a video or audio podcast once during the semester. This is one of three grading components.
Task
- Below is a list of papers related to the lecture. Your task is to present and explain one of these papers in your video/audio podcast. You can freely choose a text that fits your interests and skills.
- Point out what is interesting and useful about this paper. What did you take away? You don’t need to understand everything in the paper.
- Part of the task is to decide what to include and what to leave out in the limited time. Set priorities so that the video/audio submission is coherent and gives a good sense of the topic and the discussion space the paper opens up.
- Make sure your presentation is accessible to other cogsci students without specific expertise in the subfield of the paper. If you choose a very technical paper, explain it without relying heavily on maths, formulas, etc.
Format
- You can work alone or in a team of two.
- If you work alone: the video/audio podcast may be at most 4 minutes long.
- If you work with a partner: the video/audio podcast may be at most 6 minutes long. Both partners must be visibly/audibly part of the work.
- For detailed instructions on how to proceed in creating a video/audio podcast, including advice on suitable software, please refer to this help page.
- State clearly in your video/audio who you are (i.e., who did the homework) and which text you are working on.
Recommended Structure
- Part 1: Text understanding
- Present the text to an audience that has not read it.
- Who is the author?
- What kind of text is it? Where/how was it published?
- What is the core topic or question of the text, i.e., the problem or main concept it wants to address?
- Part 2: What do you take away? Why is the text worth reading?
- What insights and questions arise from the text? How does it relate to ethics of AI?
- What are your main takeaways from the text? What do you find particularly interesting?
FAQ
Deadline: The video/audio homework must be handed in by 15 July 2024.
How to hand in the video/audio? Please upload it to the designated folder in the Stud.IP class of your discussion group. Name the file so that it starts with your last name(s). After uploading, please email your tutor to say that you have handed in a video/audio homework, stating your Matrikel number and the text your homework refers to.
Where to find the texts: Wherever possible, we provide PDFs of the texts listed below in the Stud.IP class of the lecture.
Do we need to register somewhere for making a video/audio homework based on a certain paper? TBA
Is it possible that several people (independently) make a video about the same paper? Yes, this is possible, no problem.
Your question is missing here? Send me an email!
Available texts
New texts might get added in the course of the semester. Please choose one of the texts for your video/audio podcast homework.
Text | Tags | Related to session no. |
---|---|---|
Miceli, Milagros, Julian Posada, and Tianling Yang. 2022. Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power? Proceedings of the ACM on Human-Computer Interaction 6 (GROUP): 34:1-34:14. https://doi.org/10.1145/3492853. | #datawork | 2 (Human-Aided AI) |
Miceli, Milagros, Martin Schuessler, and Tianling Yang. 2020. Between Subjectivity and Imposition: Power Dynamics in Data Annotation for Computer Vision. arXiv. https://doi.org/10.48550/arXiv.2007.14886. | #datawork #power | 2 (Human-Aided AI) + 4 (power) |
Basu, Rima. 2019. What We Epistemically Owe to Each Other. Philosophical Studies 176 (4): 915–31. https://doi.org/10.1007/s11098-018-1219-z. | #philosophy | 3 (ethics 101) + 12 (data protection 2) |
Simon, Judith. 2017. Value-Sensitive Design and Responsible Research and Innovation. In The Ethics of Technology: Methods and Approaches, edited by Sven Ove Hansson, 219–35. Philosophy, Technology and Society. London; New York: Rowman & Littlefield International, Ltd. | #RRI | 3 (ethics 101) |
Etzioni, Amitai, and Oren Etzioni. 2017. Incorporating Ethics into Artificial Intelligence. The Journal of Ethics 21 (4): 403–18. https://doi.org/10/ggm4r9. | #trolleyproblem (and critique) | 3 (ethics 101) |
Matzner, Tobias. 2019. Autonome Trolleys und andere Probleme: Konfigurationen künstlicher Intelligenz in ethischen Debatten über selbstfahrende Kraftfahrzeuge [Autonomous trolleys and other problems: configurations of artificial intelligence in ethical debates on self-driving vehicles]. Zeitschrift für Medienwissenschaft 21 (2): 46–55. | #lang=DE #trolleyproblem (and critique) | 3 (ethics 101) |
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT '21. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922. | | 6/7 (bias & discrimination) |
Buolamwini, Joy, and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Conference on Fairness, Accountability and Transparency, 77–91. PMLR. http://proceedings.mlr.press/v81/buolamwini18a.html. | #intersectionality #criticalrace #gender | 6/7 (bias & discrimination) |
Crenshaw, Kimberle. 1989. Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum 1989: 139. | #intersectionality #criticalrace #gender | 6/7 (bias & discrimination) |
Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press. Pages 1–9 and 15–63. | #critique | 6/7 (bias & discrimination) |
Foulds, James, Rashidul Islam, Kamrun Naher Keya, and Shimei Pan. 2019. An Intersectional Definition of Fairness. arXiv:1807.08362 [cs, stat], September. http://arxiv.org/abs/1807.08362. | #maths | 6/7 (bias & discrimination) |
Sweeney, Latanya. 2013. Discrimination in Online Ad Delivery. Communications of the ACM 56 (5): 44–54. https://doi.org/10.1145/2447976.2447990. | #criticalrace | 6/7 (bias & discrimination) |
Samek, Wojciech, Thomas Wiegand, and Klaus-Robert Müller. 2017. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. arXiv:1708.08296 [cs, stat], August. http://arxiv.org/abs/1708.08296. | #maths | 8 (responsibility) |
Tufekci, Zeynep. 2015. Algorithmic Harms beyond Facebook and Google: Emergent Challenges of Computational Agency. Colorado Technology Law Journal 13: 203. | #socialmedia #disinformation | |
Bhandari, Aparajita, and Sara Bimo. 2022. Why's Everyone on TikTok Now? The Algorithmized Self and the Future of Self-Making on Social Media. Social Media + Society 8 (1). https://doi.org/10.1177/20563051221086241. | #socialmedia #disinformation | |
Sweeney, Latanya. 2002. k-Anonymity: A Model for Protecting Privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10 (5): 557–70. https://doi.org/10.1142/S0218488502001648. | #anonymity #maths | 11 (data protection 1) |
Narayanan, Arvind, and Vitaly Shmatikov. 2008. Robust De-Anonymization of Large Sparse Datasets. In 2008 IEEE Symposium on Security and Privacy (SP 2008), 111–25. Oakland, CA, USA: IEEE. https://doi.org/10.1109/SP.2008.33. | #maths #reidentification | 11 (data protection 1) |
Ohm, Paul. 2010. Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization. UCLA Law Review 57: 1701–77. | #privacy #reidentification | 11 (data protection 1) |
Mittelstadt, Brent. 2017. From Individual to Group Privacy in Big Data Analytics. Philosophy & Technology 30 (4): 475–94. https://doi.org/10.1007/s13347-017-0253-7. | #philosophy #privacy | 12 (data protection 2) |
Mühlhoff, Rainer. 2021. Predictive Privacy: Towards an Applied Ethics of Data Analytics. Ethics and Information Technology 23: 675–90. https://doi.org/10.1007/s10676-021-09606-x. | #privacy #predictiveanalytics | 12 (data protection 2) |
Mühlhoff, Rainer, and Hannah Ruschemeier. 2024. Predictive Analytics and the GDPR: Collective Dimensions of Data Protection. Law, Innovation and Technology. https://doi.org/10.1080/17579961.2024.2313794. | #privacy #predictiveanalytics | 12 (data protection 2) |
Mühlhoff, Rainer, and Hannah Ruschemeier. 2024. Regulating AI with Purpose Limitation for Models. Journal of AI Law and Regulation. https://doi.org/10.21552/aire/2024/1/5. | #AIAct #privacy | 12 (data protection 2) |
Blanke, Jordan M. 2020. Protection for "Inferences Drawn": A Comparison Between the General Data Protection Regulation and the California Consumer Privacy Act. Global Privacy Law Review 1 (2). | #privacy #GDPR #CCPA | 12 (data protection 2) |
Gorwa, Robert, and Michael Veale. 2024. Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries. https://doi.org/10.48550/arXiv.2311.12573. | #AIAct #privacy | 12 (data protection 2) |
Kröger, Jacob Leon, Otto Hans-Martin Lutz, and Stefan Ullrich. 2021. The Myth of Individual Control: Mapping the Limitations of Privacy Self-Management. In SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3881776. | #privacy #informed_consent | 12 (data protection 2) |
Lewinski, Kai von. 2009. Geschichte des Datenschutzrechts von 1600 bis 1977 [History of data protection law from 1600 to 1977]. In Freiheit – Sicherheit – Öffentlichkeit: 48. Assistententagung Öffentliches Recht, Heidelberg 2008, 196–220. Nomos. https://doi.org/10.5771/9783845215532-196. | #lang=DE #privacy #law | 12 (data protection 2) |
EU High-Level Expert Group on AI. 2019. Ethics Guidelines for Trustworthy AI. European Commission. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. | #guidelines #EUpolicy | |