Computer Science · 9th Grade

Active learning ideas

Sources of Algorithmic Bias

Algorithmic bias is abstract until students see how human choices shape technology outcomes. Active learning works here because students need to trace bias through real systems, not just hear about it. Moving from analysis to mapping to role-play builds concrete understanding and empathy, which helps students recognize bias in systems they will use and build.

Standards: CSTA 3A-IC-24 · CSTA 3A-IC-25
20–40 min · Pairs → Whole Class · 4 activities

Activity 01

Case Study Analysis · 40 min · Small Groups

Case Study Analysis: Real Algorithmic Bias

Groups each receive one documented case of algorithmic bias (COMPAS, Amazon hiring tool, facial recognition accuracy, predictive policing). Each group identifies the source of bias, the affected group, and the real-world harm. Groups present their case using a shared analysis template, then the class maps patterns across all cases.

Analyze how human prejudices can be encoded into software and the resulting social impact.

Facilitation Tip: During Case Study Analysis, assign each group a different real-world case so the class collectively sees multiple entry points for bias.

What to look for: Provide students with a brief description of a hypothetical AI system (e.g., a loan application screener). Ask them to write one sentence identifying a potential source of bias (data or design) and one sentence explaining how it could lead to unfair outcomes.

Analyze · Evaluate · Create · Decision-Making · Self-Management

Activity 02

Think-Pair-Share · 20 min · Pairs

Think-Pair-Share: Where Did the Bias Come From?

Show students a short description of a biased AI outcome (e.g., a loan approval system that denies more applications from a particular zip code). Individually, students trace back through the development process to identify at least two points where bias could have entered. Pairs compare their traces, then share with the class.

Differentiate between various sources of algorithmic bias (e.g., data bias, design bias).

Facilitation Tip: In Think-Pair-Share, ask students to list one human choice in the model’s pipeline before they discuss causes with a partner.

What to look for: Pose the question: "If an algorithm is trained on historical data, how can it ever be truly fair?" Facilitate a class discussion, encouraging students to reference specific types of bias and their real-world consequences.

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Activity 03

Bias Source Mapping · 30 min · Small Groups

Bias Source Mapping: From Data to Decision

Using a simplified flowchart of how a hiring algorithm works (data collection, feature selection, model training, threshold setting, deployment), groups annotate each stage with the types of bias that could enter there. The class builds a composite map on the board showing how bias accumulates through a pipeline.
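For classes with some coding experience, the pipeline stages can be made concrete with a small sketch. All data, numbers, and rules below are invented for illustration; this is not a real hiring system, just a toy model that labels where bias can enter:

```python
# Toy hiring pipeline, with comments marking each stage from the flowchart.
# All data and rules are invented for classroom illustration only.

# Stage 1 -- data collection: labels come from past human decisions.
history = [
    {"zip": "A", "score": 78, "hired": True},
    {"zip": "A", "score": 74, "hired": True},
    {"zip": "B", "score": 85, "hired": False},  # strong candidate, rejected in the past
    {"zip": "B", "score": 80, "hired": False},
]

# Stage 2 -- feature selection: choosing to use "zip" imports neighborhood history.
def hire_rate(zip_code):
    group = [h for h in history if h["zip"] == zip_code]
    return sum(h["hired"] for h in group) / len(group)

# Stage 3 -- model "training": a crude model that mixes score with past hire rate.
def predict(applicant):
    return 0.5 * (applicant["score"] / 100) + 0.5 * hire_rate(applicant["zip"])

# Stage 4 -- threshold setting: a single cutoff decides who advances.
THRESHOLD = 0.6

# Stage 5 -- deployment: two applicants with identical scores get different outcomes.
for a in [{"zip": "A", "score": 80}, {"zip": "B", "score": 80}]:
    decision = "advance" if predict(a) >= THRESHOLD else "reject"
    print(a["zip"], round(predict(a), 2), decision)
```

Students can annotate each stage comment with the bias type it represents, mirroring the composite map built on the board.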

Explain how algorithmic bias can perpetuate or exacerbate existing inequalities.

Facilitation Tip: For Bias Source Mapping, have students use different colors to mark each stage of the pipeline so links between choices and outcomes are visible.

What to look for: Present students with two short case studies of algorithmic bias. Ask them to categorize the primary source of bias in each case (data bias or design bias) and briefly justify their choice.

Analyze · Evaluate · Social Awareness · Self-Awareness

Activity 04

Perspective Role-Play · 30 min · Individual

Perspective Role-Play: Who Is Harmed?

Students take on roles of people affected by a biased algorithm (loan applicant, job candidate, parolee, medical patient). Each writes a one-paragraph account from their perspective describing the decision they received and why it may be unfair. Class discusses whose perspective is typically absent from algorithmic development teams.

Analyze how human prejudices can be encoded into software and the resulting social impact.

Facilitation Tip: In Perspective Role-Play, give each student a stakeholder role card with specific concerns to voice, so harm is grounded in lived experience.

What to look for: Provide students with a brief description of a hypothetical AI system (e.g., a loan application screener). Ask them to write one sentence identifying a potential source of bias (data or design) and one sentence explaining how it could lead to unfair outcomes.

Analyze · Evaluate · Social Awareness · Self-Awareness

A few notes on teaching this unit

Teach this topic by making the invisible visible. Start with cases where students can see the people behind the data and the decisions behind the code. Avoid lectures that separate bias from the systems that produce it; instead, connect each stage of the pipeline to human values and trade-offs. Research shows that students grasp bias better when they analyze systems they recognize, so use familiar tools like social media or school apps as examples. Keep the focus on intervention points—where students could change the system—not just on identifying harm.

Students will identify specific human decisions that introduce bias and explain how those decisions lead to unfair outcomes. They will trace bias from data collection through model design to real-world impact. By the end, they should critique systems with evidence, not just opinions.


Watch Out for These Misconceptions

  • During Case Study Analysis, watch for students who say algorithms are objective because they use math, not opinions.

    During Case Study Analysis, redirect students to the case materials to locate the specific human decisions—like what data was labeled or which metric was optimized—that shaped the algorithm’s behavior and produced biased outcomes.

  • During Think-Pair-Share, watch for students who assume bias only comes from biased training data.

    During Think-Pair-Share, ask students to review the full algorithm pipeline shown in the activity and identify at least two additional stages where bias could enter, using the case examples as evidence.

  • During Perspective Role-Play, watch for students who believe removing sensitive attributes like race or gender fixes bias.

    During Perspective Role-Play, have students examine proxy variables in the role-play scenarios and explain how removing one attribute might not remove bias if correlated features remain.
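The proxy-variable misconception can also be demonstrated with a short sketch. This toy example uses invented data and an invented approval rule: the sensitive attribute is dropped before scoring, yet a correlated zip-code feature still produces different outcomes for the two groups.

```python
# Invented illustration: removing a sensitive attribute does not remove bias
# when a correlated proxy feature remains in the data.

# Each record: sensitive group, zip code (correlated with group), qualification score.
people = [
    {"group": "X", "zip": "north", "score": 80},
    {"group": "X", "zip": "north", "score": 80},
    {"group": "Y", "zip": "south", "score": 80},
    {"group": "Y", "zip": "south", "score": 80},
]

# "Fairness through blindness": drop the sensitive attribute before scoring.
def blind(record):
    return {k: v for k, v in record.items() if k != "group"}

# A rule shaped by biased history: it penalizes the "south" zip,
# which stands in for group Y even though "group" was removed.
def approve(record):
    penalty = 15 if record["zip"] == "south" else 0
    return (record["score"] - penalty) >= 70

# Identical scores, different approval rates -- the proxy carries the bias.
results = {}
for p in people:
    results.setdefault(p["group"], []).append(approve(blind(p)))
for group, outcomes in sorted(results.items()):
    print(group, sum(outcomes) / len(outcomes))
```

Asking students to point to the line where bias survives the "blinding" step reinforces that fairness requires examining correlated features, not just deleting sensitive ones.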


Methods used in this brief