Ethical Implications of Algorithmic Predictions: Activities & Teaching Strategies

Active learning works for this topic because algorithmic bias is abstract until students see it in real data. Case studies and debates make the ethical stakes tangible, helping students move from vague concerns to concrete analysis of how human choices shape algorithmic decisions.

9th Grade · Computer Science · 3 activities · 25–40 min each

Learning Objectives

  1. Analyze the potential for algorithmic bias in predictive policing systems by examining case study data.
  2. Critique the ethical considerations of using algorithms for loan application screening, identifying potential discriminatory outcomes.
  3. Evaluate the societal impact of biased hiring algorithms on underrepresented groups.
  4. Predict the long-term consequences of widespread reliance on potentially flawed algorithmic decision-making in social services.

Want a complete lesson plan with these objectives? Generate a Mission

40 min·Small Groups

Case Study Analysis: The COMPAS Recidivism Tool

Small groups read a summary of the ProPublica analysis of the COMPAS risk-scoring algorithm used in US courts. Groups identify: what the algorithm was designed to do, what ProPublica found it actually did, and who was harmed. Groups present their analysis and the class votes on whether the system should continue to be used.

Prepare & details

Analyze the dangers of over-relying on algorithmic predictions for social issues.

Facilitation Tip: During the COMPAS case study, have students annotate the ProPublica article with three types of human decisions: data selection, model design, and fairness definition.

Setup: Groups at tables with case materials

Materials: Case study packet (3-5 pages), Analysis framework worksheet, Presentation template

Analyze · Evaluate · Create · Decision-Making · Self-Management
35 min·Whole Class

Formal Debate: Should Algorithms Make Hiring Decisions?

Half the class argues for algorithmic hiring screening; the other half argues against. Each side has five minutes to prepare, three minutes to present, and two minutes for rebuttal. After the debate, students individually write a one-paragraph position statement that acknowledges the strongest argument on the opposing side.

Prepare & details

Critique the use of predictive algorithms in sensitive areas like criminal justice or hiring.

Facilitation Tip: In the hiring debate, assign roles explicitly so students speak from the perspective of job seekers, hiring managers, and data scientists to surface conflicting values.

Setup: Two teams facing each other, audience seating for the rest

Materials: Debate proposition card, Research brief for each side, Judging rubric for audience, Timer

Analyze · Evaluate · Create · Self-Management · Decision-Making
25 min·Pairs

Think-Pair-Share: Fairness Definitions Clash

Present two definitions of algorithmic fairness that are mathematically incompatible when groups have different base rates: equal error rates across groups vs. equal precision of predictions (predictive parity). Students individually decide which definition they find more fair and why. Partners share, then the class discusses why there is no universally correct answer.
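For teachers who want to show the clash with concrete numbers, the following sketch uses an invented two-group dataset (all labels and predictions are made up for illustration): when the two groups have different base rates, matching their false positive rates forces their precision apart.

```python
# Toy illustration: with different base rates, equal false positive rates
# force unequal precision (predictive parity). All numbers are invented
# for demonstration only.

def rates(y_true, y_pred):
    """Return (false positive rate, precision) for binary labels."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum(not t and p for t, p in zip(y_true, y_pred))
    tn = sum(not t and not p for t, p in zip(y_true, y_pred))
    return fp / (fp + tn), tp / (tp + fp)

# Group A: 6 of 10 people have the predicted outcome (high base rate).
a_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
a_pred = [1, 1, 1, 1, 1, 0, 1, 0, 0, 0]   # 5 TP, 1 FN, 1 FP, 3 TN

# Group B: 2 of 10 people have the predicted outcome (low base rate).
b_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
b_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 2 TP, 0 FN, 2 FP, 6 TN

fpr_a, ppv_a = rates(a_true, a_pred)
fpr_b, ppv_b = rates(b_true, b_pred)
print(f"Group A: FPR={fpr_a:.2f}, precision={ppv_a:.2f}")  # FPR=0.25, precision=0.83
print(f"Group B: FPR={fpr_b:.2f}, precision={ppv_b:.2f}")  # FPR=0.25, precision=0.50
```

Both groups face the same false positive rate, yet a positive prediction is far less likely to be correct for Group B. Fixing the precision instead would break the error-rate balance, which is the trade-off students are asked to weigh.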

Prepare & details

Predict the societal consequences of biased algorithmic predictions.

Facilitation Tip: For the fairness definitions activity, provide a chart of four fairness metrics and ask pairs to plot where their intuitions align or clash before sharing with the class.

Setup: Standard classroom seating; students turn to a neighbor

Materials: Discussion prompt (projected or printed), Optional: recording sheet for pairs

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Teaching This Topic

Teach this topic by foregrounding the human choices embedded in every algorithmic system. Avoid presenting math as neutral; instead, show how error metrics, datasets, and thresholds embed values. Use jargon like ‘disparate impact’ only after students have grappled with concrete cases where fairness is contested.

What to Expect

Successful learning looks like students tracing human choices in algorithm design, identifying fairness trade-offs, and articulating why data alone does not guarantee objectivity. They should use fairness metrics to critique outcomes, not just technical accuracy.

These activities are a starting point. A full mission is the experience.

  • Complete facilitation script with teacher dialogue
  • Printable student materials, ready for class
  • Differentiation strategies for every learner
Generate a Mission

Watch Out for These Misconceptions

Common Misconception: During the COMPAS case study analysis, watch for the claim that algorithms are objective because they use math and data, not human opinions.

What to Teach Instead

During the COMPAS case study analysis, have students trace the human choices in the article: ask them to list what data ProPublica used, what outcome the algorithm predicted, and which fairness definition was implicitly assumed by the tool’s designers.

Common Misconception: During the hiring debate, watch for the assumption that a highly accurate algorithm is a fair one.

What to Teach Instead

During the hiring debate, require students to present accuracy rates by demographic group and ask them to define fairness before evaluating the model’s trade-offs.
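A small invented example can make this requirement vivid for students: a screening model can score well on overall accuracy while concentrating nearly all of its mistakes on one group (the applicant pools and predictions below are hypothetical).

```python
# Toy illustration: an algorithm can look "accurate" overall while making
# most of its mistakes against one group. All numbers are invented.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def false_negative_rate(y_true, y_pred):
    """Share of truly qualified applicants (label 1) the model rejects."""
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    return fn / sum(y_true)

# Each group: 10 applicants, 5 genuinely qualified (label 1).
x_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
x_pred = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]   # misses no qualified applicant
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # rejects 3 qualified applicants

overall = accuracy(x_true + y_true, x_pred + y_pred)
print(f"Overall accuracy: {overall:.0%}")                         # 80%
print(f"Group X FNR: {false_negative_rate(x_true, x_pred):.0%}")  # 0%
print(f"Group Y FNR: {false_negative_rate(y_true, y_pred):.0%}")  # 60%
```

The single 80% accuracy figure hides that qualified applicants in Group Y are rejected 60% of the time while Group X is never wrongly rejected, which is exactly why students should be asked for the per-group breakdown.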

Assessment Ideas

Discussion Prompt

After the COMPAS case study, pose the question: ‘Imagine an algorithm is used to decide which students receive extra academic support. What kinds of data might be used, and how could that data lead to unfair outcomes for certain students?’ Facilitate a class discussion, guiding students to identify potential biases.

Exit Ticket

After the hiring debate, provide students with a brief scenario about an algorithm used for college admissions. Ask them to write two sentences explaining one potential ethical concern and one sentence suggesting a way to mitigate that concern.

Quick Check

During the fairness definitions activity, present students with a short, anonymized case study of an algorithmic decision (e.g., loan denial). Ask them to identify the potential source of bias in 1-2 sentences and state whether the outcome seems fair. Collect responses to gauge understanding.

Extensions & Scaffolding

  • Challenge: Ask students to redesign the COMPAS tool using a different fairness definition (e.g., predictive parity) and present pros and cons in 2 minutes.
  • Scaffolding: Provide sentence starters for the debate (e.g., ‘My role believes fairness means… because…’) and a word bank of fairness concepts.
  • Deeper: Invite students to find a current news article about algorithmic bias and map it to the COMPAS case using a Venn diagram of data, model, and outcome choices.

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Predictive Policing: The use of data analysis and algorithms to identify potential criminal activity and deploy law enforcement resources proactively.
Fairness in AI: The principle that artificial intelligence systems should not produce discriminatory or prejudiced outcomes against individuals or groups.
Data Set: A collection of data, often used to train machine learning models. Biases present in the real world can be embedded within these data sets.

Ready to teach Ethical Implications of Algorithmic Predictions?

Generate a full mission with everything you need