Computer Science · 11th Grade

Active learning ideas

Algorithmic Bias and Fairness

Active learning works here because algorithmic bias is a human problem disguised as a technical one. Students need to trace data flows, weigh trade-offs, and experience the gap between a clean algorithm and messy reality. Case studies, audits, and debates ground abstract concepts in real systems, where students can watch bias emerge, measure its effects, and judge possible fixes.

Common Core State Standards · CSTA: 3B-IC-25 · CSTA: 3B-IC-26
25–40 min · Pairs → Whole Class · 4 activities

Activity 01

Case Study Analysis · 40 min · Small Groups

Case Study Analysis: COMPAS and Hiring Algorithms

Assign small groups one of two documented bias cases (COMPAS criminal risk scoring or Amazon's hiring algorithm). Groups read a summary, identify where bias entered the system, and present findings to the class using a structured claim-evidence-reasoning format.

Analyze how human biases can be inadvertently encoded into AI algorithms.

Facilitation Tip: In Case Study Analysis, assign roles so each student traces a different entry point for bias in the COMPAS system.

What to look for: Present students with a case study, such as a biased AI in college admissions. Ask: 'Identify at least two ways bias could have entered this system. Discuss the potential consequences for applicants from underrepresented groups. What is one specific step an engineer could take to address this bias?'

Analyze · Evaluate · Create · Decision-Making · Self-Management

Activity 02

Structured Academic Controversy: Should Biased AI Be Banned?

Pairs argue that a specific biased AI system should be banned outright, then switch and argue for regulation instead of prohibition. After both rounds, partners synthesize a position that addresses both the harms and the practical trade-offs of each response.

Explain the societal impact of biased AI systems in areas like hiring or criminal justice.

Facilitation Tip: For the Structured Academic Controversy, require students to cite specific lines from the case studies when stating their positions.

What to look for: Provide students with a short description of a hypothetical AI system (e.g., an AI for recommending job candidates). Ask them to write down: 'One potential source of bias in the training data. One proxy variable that might lead to unfair outcomes. One fairness metric that could be used to evaluate the system.'
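If students name a concrete metric such as demographic parity, a minimal sketch like the one below shows how it can be computed. This is an illustration only: Python with pandas, and the dataset, group labels, and column names are all invented rather than taken from any real recommender.

```python
import pandas as pd

# Toy outcomes from the hypothetical job-candidate recommender in the prompt.
# The groups, column names, and values are invented for illustration.
df = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "recommended": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Demographic parity compares the rate of positive outcomes across groups.
rates = df.groupby("group")["recommended"].mean()
print(rates)                                     # A: 0.75, B: 0.25
print("parity gap:", rates.max() - rates.min())  # 0.50
```

A gap near zero means both groups are recommended at similar rates; students can then debate whether that is the right notion of fairness for hiring.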

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 03

Dataset Audit · 30 min · Small Groups

Dataset Audit: Find the Bias

Provide groups with a simplified synthetic dataset (e.g., fictional loan approval records). Groups use frequency counts and comparison tables to identify which features correlate with protected characteristics, then present what mitigation steps they would take.

Design strategies to identify and mitigate bias in machine learning models.

Facilitation Tip: When running the Dataset Audit, display correlation tables on the board and ask students to explain each value in plain language.
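If you want those tables generated live, a minimal pandas sketch along these lines can produce them. Everything in it is synthetic: the loan records, column names, and group labels are invented for classroom use.

```python
import pandas as pd

# Fictional loan-approval records; every column and value is synthetic.
loans = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],    # protected characteristic
    "zip_code": ["10001", "10001", "10001", "60601", "60601", "60601"],
    "income":   [85, 90, 80, 40, 35, 45],          # in $1000s
    "approved": [1, 1, 1, 0, 0, 1],
})

# Frequency counts: approval rates broken down by group.
print(pd.crosstab(loans["group"], loans["approved"], normalize="index"))

# Comparison tables: do "neutral" features track the protected characteristic?
print(loans.groupby("group")["income"].mean())
print(pd.crosstab(loans["zip_code"], loans["group"]))
```

The crosstab makes the audit concrete: if zip code alone nearly separates the two groups, students can argue that it functions as a proxy variable.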

What to look for: Ask students to write: 'One real-world example of algorithmic bias we discussed. One reason why it is challenging to eliminate bias from AI systems. One question you still have about AI fairness.'

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 04

Gallery Walk · 25 min · Whole Class

Gallery Walk: Mitigation Strategies

Post six posters around the room, each describing a different bias mitigation technique (e.g., re-sampling, fairness constraints, post-hoc correction). Students rotate, evaluate each strategy's strengths and limitations on sticky notes, and the class debriefs on which strategies address root causes vs. symptoms.
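For the re-sampling poster, a short sketch can serve as the concrete example. This is a hedged illustration, not a production technique: the training set, labels, and group names below are all invented.

```python
import pandas as pd

# Toy training set where group "B" is underrepresented (all values invented).
train = pd.DataFrame({
    "group": ["A"] * 8 + ["B"] * 2,
    "label": [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
})

# Re-sampling: draw with replacement until every group matches the largest
# group's size, so a model trained on the result sees groups at equal rates.
target = train["group"].value_counts().max()
balanced = pd.concat(
    grp.sample(n=target, replace=True, random_state=0)
    for _, grp in train.groupby("group")
)
print(balanced["group"].value_counts())  # A: 8, B: 8
```

In the debrief, this example helps students see why re-sampling treats a symptom: it balances the table without touching the process that produced the skew.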

Analyze how human biases can be inadvertently encoded into AI algorithms.

Facilitation Tip: During the Gallery Walk, have students rotate with sticky notes to add questions or suggestions to each mitigation poster.

What to look for: Present students with a case study, such as a biased AI in college admissions. Ask: 'Identify at least two ways bias could have entered this system. Discuss the potential consequences for applicants from underrepresented groups. What is one specific step an engineer could take to address this bias?'

Understand · Apply · Analyze · Create · Relationship Skills · Social Awareness

A few notes on teaching this unit

Teachers should frame algorithmic bias as a design failure, not a data failure. Start with concrete artifacts—code notebooks, dataset columns, court rulings—so students confront the messiness early. Avoid rushing to solutions; instead, model how to hold complexity by asking: ‘Which stakeholders’ values are embedded here?’ Use structured controversies to normalize disagreement and peer review to surface blind spots in students’ own reasoning.

Successful learning shows when students can trace how bias travels from historical inequities into data, identify proxy variables in actual datasets, and articulate why technical fixes alone fail without policy and design changes. They should also propose mitigation strategies that balance fairness with system goals and defend their choices in debate.


Watch Out for These Misconceptions

  • During Case Study Analysis, some students may think that if the COMPAS algorithm doesn’t include race as an input, it cannot be biased.

    During Case Study Analysis, ask students to examine the dataset columns for proxy variables like prior offenses, neighborhood income, or school attended. Have them calculate correlation scores to show how these variables indirectly encode race (a minimal code sketch follows this list), then discuss how engineers could remap or remove these proxies.

  • During Dataset Audit, students may assume that collecting more data will automatically reduce bias.

    During Dataset Audit, direct students to examine the COMPAS dataset size versus its demographic skew. Use the table to show how adding more data from a biased system amplifies historical inequities, then have them propose data collection changes that target underrepresented groups explicitly.

  • During Gallery Walk, students might believe that bias only affects high-stakes domains like criminal justice.

    During Gallery Walk, include examples from content recommendation and targeted advertising. Ask students to examine how popularity metrics in recommendation systems can reinforce stereotypes, and have them propose domain-specific fairness metrics to evaluate these systems.
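
For the proxy-variable demonstration in the first misconception above, a minimal sketch like this can back the discussion. The dataset is synthetic and the column names are invented; it stands in for the COMPAS data rather than reproducing it.

```python
import pandas as pd

# Synthetic records: the protected attribute is held out of the model,
# but the "neutral" columns still track it. All numbers are invented.
df = pd.DataFrame({
    "in_group_b":          [1, 1, 1, 1, 0, 0, 0, 0],          # protected attribute
    "neighborhood_income": [32, 28, 35, 30, 78, 82, 75, 80],  # $1000s
    "prior_offenses":      [3, 2, 4, 3, 1, 0, 1, 0],
})

# Correlation scores: values far from zero show a feature indirectly
# encoding group membership even though the attribute itself was dropped.
print(df.corr()["in_group_b"])
```

Reading the output aloud (in this toy data, neighborhood income correlates at roughly -0.99 with group membership) makes the "race isn't an input, so it can't be biased" claim easy to test.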


Methods used in this brief