Computer Science · 12th Grade

Active learning ideas

Ethical AI and Algorithmic Bias

Active learning works for Ethical AI and Algorithmic Bias because students need to see bias not as a vague concept but as something embedded in data and code. When students analyze real incidents, debate neutrality, and draft guidelines, they move from abstract worry to concrete analysis of how bias operates and how it can be measured.

Common Core State Standards · CSTA: 3B-IC-25 · CSTA: 3B-IC-26
20–45 min · Pairs → Whole Class · 4 activities

Activity 01

Case Study Analysis · 45 min · Small Groups

Case Study Analysis: Famous AI Bias Incidents

Assign each group one documented bias incident: COMPAS recidivism scoring, Amazon's recruiting tool, facial recognition misidentification rates, or healthcare resource allocation algorithms. Groups analyze the source of bias, who was harmed, what the deployer claimed, and what a fairer design might look like. Each group presents a five-minute brief, and the class identifies common patterns across cases.

Analyze how biases in training data can lead to discriminatory outcomes in AI systems.

Facilitation Tip: During the Case Study Analysis, assign each group a different incident so the class collectively covers multiple domains and students can compare findings across contexts.

What to look for: Present students with a scenario where an AI hiring tool disproportionately rejects female applicants. Ask: 'What are two potential sources of bias in the training data for this tool? How could the developers have approached fairness differently to mitigate this outcome?'

Analyze · Evaluate · Create · Decision-Making · Self-Management

Activity 02

Formal Debate · 40 min · Whole Class

Formal Debate: Can AI Be Neutral?

Divide the class into two groups: one argues that AI systems can be made bias-free through better data and auditing; the other argues that all AI systems embed the values of their designers and can never be neutral. After 15 minutes of preparation, groups debate for 20 minutes. The debrief does not declare a winner; instead, it surfaces which empirical claims were most contested.

Critique current approaches to ensuring fairness and transparency in AI decision-making.

Facilitation Tip: For the Formal Debate, assign roles (pro, con, judge, audience) and have students use the 15-minute preparation window to build structured arguments using fairness metrics from the quick-check activity.

What to look for: Provide students with a brief description of an AI system (e.g., a content recommendation algorithm). Ask them to identify one potential ethical concern related to bias and suggest one concrete step the developers could take to address it.

Analyze · Evaluate · Create · Self-Management · Decision-Making

Activity 03

Think-Pair-Share · 20 min · Pairs

Think-Pair-Share: Whose Fairness Definition?

Present the three main mathematical fairness definitions (demographic parity, equalized odds, individual fairness) using a concrete hiring scenario. Pairs calculate which candidates would be hired under each definition and identify cases where the definitions give conflicting results. The debrief addresses why no single definition is universally correct.
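A minimal sketch of what pairs might compute, using an invented candidate list (all names and numbers below are illustrative assumptions, not part of the lesson materials). It contrasts demographic parity, which compares selection rates across groups, with the true-positive-rate component of equalized odds, which compares outcomes only among qualified candidates:

```python
# Toy hiring scenario (hypothetical data, for illustration only).
# Each row: (group, qualified, hired) with 1 = yes, 0 = no.
candidates = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0),
]

def selection_rate(group):
    """Fraction of the group that is hired (demographic parity compares these)."""
    rows = [c for c in candidates if c[0] == group]
    return sum(hired for _, _, hired in rows) / len(rows)

def true_positive_rate(group):
    """Fraction of QUALIFIED group members hired (one half of equalized odds)."""
    rows = [c for c in candidates if c[0] == group and c[1] == 1]
    return sum(hired for _, _, hired in rows) / len(rows)

for g in ("A", "B"):
    print(g, selection_rate(g), true_positive_rate(g))
# Group A is hired at rate 0.75 with TPR 1.0; group B at rate 0.25 with TPR 0.5,
# so a fix that equalizes selection rates will not equalize TPRs, and vice versa.
```

The conflicting numbers give pairs a concrete case where satisfying one definition necessarily violates the other, which sets up the debrief question of why no single definition is universally correct.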

Design a set of ethical guidelines for the development and deployment of AI technologies.

Facilitation Tip: In the Think-Pair-Share, ask students to write their own definition of fairness first, then compare with a partner, and finally share with the class to surface multiple definitions before the design challenge.

What to look for: Display a list of common fairness metrics (e.g., demographic parity, equal opportunity). Ask students to write a one-sentence explanation for each, highlighting a key difference or trade-off between them.

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Activity 04

Design Challenge · 35 min · Small Groups

Design Challenge: Write AI Ethics Guidelines

Groups are assigned the role of ethics board for a company deploying an AI system in a specific context (college admissions, medical triage, content moderation). They produce a one-page policy specifying: what data may be used, which fairness metric applies, who is accountable for errors, and how affected individuals can appeal. Groups share and critique each other's policies.

Analyze how biases in training data can lead to discriminatory outcomes in AI systems.


Analyze · Evaluate · Create · Social Awareness · Relationship Skills

A few notes on teaching this unit

Experienced teachers approach this topic by making bias visible through concrete artifacts: datasets, model cards, and fairness reports. Avoid abstract lectures on ethics; instead, use real-world audits and let students experience the tension between accuracy and fairness firsthand. Research shows that when students write their own fairness guidelines, they more deeply internalize the trade-offs than when they merely read about them.

Successful learning looks like students recognizing that fairness is not a single metric but a set of trade-offs, that proxy variables persist even after removing protected attributes, and that their own role as future designers includes making explicit value choices. They should be able to explain why equal accuracy can mask unequal error rates across groups.


Watch Out for These Misconceptions

  • During Case Study Analysis, watch for students assuming that removing race or gender from the training data eliminates bias.

Use the dataset from the COMPAS case study. Before students remove the protected attribute, ask them to calculate the correlation between the attribute and the labels, then remove it and recalculate. This reveals that proxy variables persist in other features such as ZIP code and criminal history, demonstrating that removing the attribute alone does not eliminate bias.

  • During the Formal Debate: Can AI Be Neutral?, watch for students claiming algorithms are objective because they are data-driven.

    Use the COMPAS debate prompt. Ask students to trace every human decision embedded in the algorithm: which data was labeled as high risk, which threshold optimized, and whose values determined acceptable error rates. Have them map these choices to show that neutrality is a choice, not a default.

  • During Think-Pair-Share: Whose Fairness Definition?, watch for students equating fairness with equal accuracy across groups.

    Provide students with two confusion matrices from a recidivism risk tool: one showing equal accuracy but unequal false positive rates, and another showing unequal accuracy but equal false positive rates. Ask them to explain which scenario they consider fairer and why, linking the definition back to the matrices.
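The before-and-after correlation check for the first misconception could be sketched as follows. The data is a small synthetic stand-in, not the real COMPAS dataset, with a hypothetical `zip_risk` feature playing the role of the ZIP-code proxy:

```python
import statistics

# Synthetic stand-in data (invented for illustration): the protected attribute
# is encoded 0/1, and in this toy example the "high risk" labels happen to
# track the ZIP-based feature exactly.
protected = [1, 1, 1, 1, 0, 0, 0, 0]
zip_risk  = [1, 1, 1, 0, 1, 0, 0, 0]   # proxy feature correlated with the attribute
label     = [1, 1, 1, 0, 1, 0, 0, 0]   # "high risk" labels

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(pearson(protected, label))     # attribute-label correlation: 0.5
print(pearson(zip_risk, label))      # proxy-label correlation: 1.0
print(pearson(protected, zip_risk))  # the proxy partially encodes the attribute: 0.5
```

Deleting the `protected` column leaves `zip_risk` untouched, and its correlation with the labels is unchanged: the signal the class thought it removed is still fully available to a model through the proxy.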
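For the third misconception, the first of the two confusion-matrix handouts might be built and checked like this (all counts are invented for illustration):

```python
# Hypothetical confusion matrices for two groups scored by a recidivism tool.
# Format: (TP, FP, FN, TN), where "positive" means flagged as high risk.
group_a = (40, 10, 10, 40)
group_b = (45, 15,  5, 35)

def accuracy(cm):
    """Overall fraction of correct predictions."""
    tp, fp, fn, tn = cm
    return (tp + tn) / (tp + fp + fn + tn)

def false_positive_rate(cm):
    """Fraction of truly low-risk people wrongly flagged as high risk."""
    tp, fp, fn, tn = cm
    return fp / (fp + tn)

for name, cm in (("A", group_a), ("B", group_b)):
    print(name, accuracy(cm), false_positive_rate(cm))
# Both groups score 0.80 accuracy, but group B's false positive rate is 0.30
# against group A's 0.20 — equal accuracy masking unequal harm.
```

Students who equate fairness with equal accuracy will call this system fair; computing the false positive rates makes the hidden disparity visible and connects the discussion back to which errors matter to whom.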


Methods used in this brief