Activity 01
Simulation Game: Biased Data Bags
Distribute bags of colored beads with uneven color distributions to represent biased datasets. In small groups, students 'train' a partner to classify new beads based on the majority-color pattern, then test that partner on balanced bags and record failure rates. Groups debrief on how the data imbalance caused poor predictions.
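For facilitators who want to show the same dynamic digitally, the activity can be sketched in a few lines of Python. The bead counts below are hypothetical examples, not prescribed bag contents: the 'trained' partner is modeled as a rule that always guesses the training bag's majority color.

```python
from collections import Counter

# Hypothetical bead bags: the training bag is heavily skewed toward red,
# while the test bag is balanced across the three colors.
biased_bag = ["red"] * 80 + ["blue"] * 15 + ["green"] * 5
balanced_bag = ["red"] * 30 + ["blue"] * 30 + ["green"] * 30

# "Training": the partner learns only the majority color of the biased bag.
majority_color = Counter(biased_bag).most_common(1)[0][0]

# "Testing": the partner predicts that color for every bead in the balanced bag.
errors = sum(1 for bead in balanced_bag if bead != majority_color)
failure_rate = errors / len(balanced_bag)

print(f"Learned rule: always guess '{majority_color}'")
print(f"Failure rate on the balanced bag: {failure_rate:.0%}")  # 60 of 90 beads misclassified
```

With these counts the rule fails on two thirds of the balanced bag, which mirrors the failure rates groups record by hand and can anchor the debrief discussion.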
If an AI makes a biased decision, who is responsible: the programmer or the data?
Facilitation Tip: During Biased Data Bags, pause after each round to ask groups to share which bead colors failed most often and why.
What to look for: Present students with a scenario: An AI system designed to recommend job candidates was trained on data from a company that historically hired more men for technical roles. Ask: 'Who is primarily responsible for any bias in the AI's recommendations: the programmers who built the system, or the historical data it learned from? Justify your answer with specific reasons.'