Acknowledgements
Authors:
Tanja Zdolšek Draksler, PhD, Ana Fabjan, Alenka Guček, PhD, Matej Kovačič, PhD
Funding:
This work was carried out within the Horizon Europe project AI4Gov: Trusted AI for Transparent Public Governance fostering Democratic Values (Grant agreement ID: 101094905).
References:
AI fairness and privacy: fundamentals, synergies, and conflicts. Available to watch at VideoLectures.NET.
AI for Good Global Summit: Organized by the International Telecommunication Union and XPRIZE, this summit gathers AI experts, ethicists, and policymakers to address ethical challenges, including bias and discrimination. Available online.
AI Search @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
AI4Gov - Trusted AI for Transparent Public Governance fostering Democratic Values (Horizon Europe project). Available via LinkedIn, the official website, and the CORDIS project page.
AI4Gov - Trusted AI for Transparent Public Governance fostering Democratic Values (Horizon Europe project). D2.1 - AI4Gov Holistic Regulatory Framework V1. Available online.
AI4Gov - Trusted AI for Transparent Public Governance fostering Democratic Values (Horizon Europe project). D2.2 - AI4Gov Holistic Regulatory Framework V2. Available online.
AI4Gov - Trusted AI for Transparent Public Governance fostering Democratic Values (Horizon Europe project). D4.1 - Trustworthy, Explainable, and Unbiased AI V1. Available online.
AI4Gov - Trusted AI for Transparent Public Governance fostering Democratic Values (Horizon Europe project). D4.3 - Policies Visualization Services V1. Available online.
AI4Gov - Trusted AI for Transparent Public Governance fostering Democratic Values (Horizon Europe project). D5.1 - Input papers to facilitate the workshops on awareness raising V1. Available online.
AI4Gov - Trusted AI for Transparent Public Governance fostering Democratic Values (Horizon Europe project). D5.3 - Assessment tools, training activities, best practice guide V1. Available online.
Algorithmic Justice League (AJL): Raises awareness about AI bias through advocacy, art, and research, highlighting its social impact. Available online.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. Available online.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2022). Machine bias. In Ethics of data and analytics (pp. 254-264). Auerbach Publications. Available online.
Applying Explainable Artificial Intelligence Techniques on Linked Open Government Data 2021. Available to watch at VideoLectures.NET.
Auditing for Bias in Algorithms Delivering Job Ads @ WWW 2021. Available to watch at VideoLectures.NET.
Beyond the headlines: How to make the best of machine learning models in the wild @ NG School 2019. Available to watch at VideoLectures.NET.
Bias Issues and Solutions in Recommender System @ WWW 2021. Available to watch at VideoLectures.NET.
Dastin, J. (2018). Insight - Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Available online.
Dastin, J. (2022). Amazon scraps secret AI recruiting tool that showed bias against women. In Ethics of data and analytics (pp. 296-299). Auerbach Publications. Available online.
Demo video: Scrollytelling and catalogue of bias detection and mitigation tools (2024). Developed within the AI4Gov project.
Deployed Deep Generative Models @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
Discriminative Bias for Learning Probabilistic Sentential Decision Diagrams @ IDA 2020. Available to watch at VideoLectures.NET.
Does Gender Matter in the News? Detecting and Examining Gender Bias in News Articles @ WWW 2021. Available to watch at VideoLectures.NET.
Explainable Models for Healthcare AI. Available to watch at VideoLectures.NET.
Exploring Racial Bias in Classifiers for Face Recognition @ WWW 2021. Available to watch at VideoLectures.NET.
Facebook Civil Rights Audit: Facebook conducted an audit to reduce racial bias in ad targeting, leading to policy changes that support transparency and fairness in AI-driven advertising. Progress report and final report available online.
Formal Explainability in Artificial Intelligence. Available to watch at VideoLectures.NET.
Fournier, F., et al. (2023). The WHY in Business Processes: Discovery of Causal Execution Dependencies. Available online.
Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning @ KDD 2019, Anchorage. Available to watch at VideoLectures.NET.
Gender Bias in Fake News: An Analysis @ WSDM 2023. Available to watch at VideoLectures.NET.
Hao, K., & Stray, J. (2019). Can you make AI fairer than a judge? Play our courtroom algorithm game. MIT Technology Review. Available online.
Heikkilä, M. (2022). AI: Decoded: A Dutch algorithm scandal serves a warning to Europe - The AI Act won’t save us. Politico. Available online.
Heikkilä, M. (2022). Dutch scandal serves as a warning for Europe over risks of using algorithms. Politico. Available online.
Horizon Europe MAMMOth project. AI Fairness Definition Guide. Available online.
Inside AI: An Algorithmic Adventure (2022). UNESCO. Available online.
Introduction to AI @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
Introduction to Machine Learning @ Deep Learning Summer School 2016, Montreal. Available to watch at VideoLectures.NET.
Introduction to solving a real-world machine learning problem on the Zindi platform @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
Jee, C. (2019). A biased medical algorithm favored white people for health-care programs. MIT Technology Review. Available online.
Kernel methods @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
Kuźniacki, B. (2023). The Dutch Childcare Benefit Scandal Shows That We Need Explainable AI Rules. Available online.
Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). How We Analyzed the COMPAS Recidivism Algorithm. ProPublica. Available online.
Learning Evaluation @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
Logics and practices of transparency and opacity in real-world applications of public sector machine learning @ KDD 2017. Available to watch at VideoLectures.NET.
Manias, G., et al. (2023). AI4Gov: Trusted AI for Transparent Public Governance Fostering Democratic Values. In 2023 19th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT), Pafos, Cyprus, pp. 548-555.
Mitigating Demographic Biases in Social Media-Based Recommender Systems @ KDD 2019. Available to watch at VideoLectures.NET.
Mitigating Gender Bias in Captioning Systems @ WWW 2021. Available to watch at VideoLectures.NET.
Never Too Late to Learn: Regularizing Gender Bias in Coreference Resolution @ WSDM 2023. Available to watch at VideoLectures.NET.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. Available online.
On Bias, Interpretability and Robustness. Available to watch at VideoLectures.NET.
Panel discussion "AI for Society" @ 1st European Summer School on Artificial Intelligence (ESSAI) & 20th Advanced Course on Artificial Intelligence (ACAI), Ljubljana 2023. Available to watch at VideoLectures.NET.
Panel discussion "Societal impact of AI" @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
Panel discussion "Human+AI collaboration: Reskilling and upskilling for the future" @ Building Bridges Across Languages: Human-Centered AI for the Euro-Med International Conference, 2024, Ljubljana, Slovenia. Available to watch at VideoLectures.NET.
Park, A. L. (2019). Injustice ex machina: Predictive algorithms in criminal sentencing. UCLA Law Review, 19. Available online.
Reclaim Your Face: A European coalition advocating against biometric mass surveillance and promoting transparency in facial recognition. Available online.
Reinforcement Learning @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
Schneier, B. (2023). Hacking AI Resume Screening with Text in a White Font. Schneier on Security blog. Available online.
Smart-sized Benchmarking for Black-Box Optimization. Available to watch at VideoLectures.NET.
Stacey, K. (2023). UK officials use AI to decide on issues from benefits to marriage licences. The Guardian. Available online.
Statement of Support (SoS) for funding bodies. (2024). Developed within AI4Gov - Trusted AI for Transparent Public Governance fostering Democratic Values (Horizon Europe project) by project partner WLC (White Label Consultancy). Available online.
Stop-and-Think Self-Assessment tool for applicants. (2024). Developed within AI4Gov - Trusted AI for Transparent Public Governance fostering Democratic Values (Horizon Europe project) by project partner WLC (White Label Consultancy). Available online.
Supervised Learning @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
Technical University of Munich’s Institute for Ethics in AI: Offers an interdisciplinary approach to integrating ethics in AI development. Available online.
The alluring promise of objectivity: Big data in criminal justice @ Law and Ethics 2017. Available to watch at VideoLectures.NET.
Thinking critically about digital data collection: Twitter and beyond. Available to watch at VideoLectures.NET.
UCL’s Centre for Digital Ethics and Policy: Conducts research and provides education on the ethical challenges in AI. Available online.
Understanding the Impact of Geographical Bias on News Sentiment: A Case Study on London and Rio Olympics @ SIKDD 2021. Available to watch at VideoLectures.NET.
University of Cambridge’s Leverhulme Centre for the Future of Intelligence: Runs the "AI: Ethics and Society" program, focusing on ethical issues like bias. Available online.
Unsupervised Learning @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
Video tutorial "Algorithmic Bias: From Discrimination Discovery to Fairness-Aware Data Mining" @ KDD 2016, San Francisco, USA (in three parts). Available to watch at VideoLectures.NET.
Virtualized Unbiasing Framework demonstrator. Developed within the AI4Gov project. Available online.
Working With Data @ International Artificial Intelligence Olympiad 2024. Available to watch at VideoLectures.NET.
Yong, E. (2018). A popular algorithm is no better at predicting crimes than random people. The Atlantic, January 17, 2018. Available online.
