Activity 01
Case Study Analysis: Famous AI Bias Incidents
Assign each group one documented bias incident: COMPAS recidivism scoring, Amazon's recruiting tool, facial recognition misidentification rates, or healthcare resource allocation algorithms. Each group analyzes the source of the bias, who was harmed, what the deployer claimed, and what a fairer design might have looked like. Each group then presents a five-minute brief, and the class identifies common patterns across cases.
Learning Objective: Analyze how biases in training data can lead to discriminatory outcomes in AI systems.
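To ground this objective, a minimal sketch (with entirely hypothetical data) can show the mechanism behind cases like the hiring-tool scenario below: a model never sees the protected attribute, yet reproduces historical bias through a correlated proxy feature.

```python
# Toy demonstration (hypothetical data): historical hiring labels encode bias,
# and a naive model trained on them reproduces it via a proxy feature.
from collections import defaultdict

# Each record: (group, proxy_feature, hired). The proxy ("club_member")
# correlates with group A; historically, group A was hired far more often.
data = [
    ("A", "club_member", 1), ("A", "club_member", 1),
    ("A", "club_member", 1), ("A", "no_club", 1),
    ("B", "no_club", 0), ("B", "no_club", 0),
    ("B", "no_club", 0), ("B", "club_member", 1),
]

# "Train": predict the majority historical label for each proxy value.
votes = defaultdict(list)
for _, proxy, hired in data:
    votes[proxy].append(hired)
model = {p: round(sum(v) / len(v)) for p, v in votes.items()}

# Apply to new applicants: group membership never enters the model,
# yet outcomes differ by group because the proxy carries the bias.
for group, proxy in [("A", "club_member"), ("B", "no_club")]:
    print(group, "->", "hire" if model[proxy] else "reject")
# Prints: A -> hire, then B -> reject
```

Students can be asked which intervention helps here: removing the proxy feature, re-labeling the historical data, or auditing selection rates per group after training.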
Facilitation Tip: During the Case Study Analysis, assign each group a different incident so the class collectively covers multiple domains and students can compare findings across contexts.
What to look for: Present students with a scenario in which an AI hiring tool disproportionately rejects female applicants. Ask: 'What are two potential sources of bias in the tool's training data? How could the developers have approached fairness differently to mitigate this outcome?'