Computer Science · 9th Grade · Data Intelligence and Visualization · Weeks 28-36

Ethical Implications of Algorithmic Predictions

Students will discuss the dangers of over-relying on algorithmic predictions for social issues.

Standards: CSTA 3A-DA-12 · CSTA 3A-IC-24

About This Topic

Algorithmic prediction systems are used to make high-stakes decisions about people: who gets a loan, who is flagged by predictive policing, whose resume gets filtered by a hiring algorithm. These systems are often presented as objective because they are based on data, but data carries the historical biases of the systems that generated it. When a model is trained on past hiring decisions in which humans discriminated against certain groups, the algorithm learns, amplifies, and automates that discrimination at scale.

This topic, grounded in CSTA 3A-DA-12 and 3A-IC-24, asks 9th graders to think critically about the gap between algorithmic claims and algorithmic reality. Students do not need to understand gradient descent to understand that a system trained on biased outcomes will produce biased predictions. The human-values question (who should make decisions that affect people's lives, and on what basis?) is one every student can meaningfully engage with.

Active learning through case study analysis and structured debate is especially appropriate here because these are contested, value-laden questions with real consequences. Students who discuss these issues with peers are more likely to carry the critical perspective into civic life as adults.

Key Questions

  1. What are the dangers of over-relying on algorithmic predictions for social issues?
  2. What concerns arise when predictive algorithms are used in sensitive areas like criminal justice or hiring?
  3. What societal consequences could biased algorithmic predictions have?

Learning Objectives

  • Analyze the potential for algorithmic bias in predictive policing systems by examining case study data.
  • Critique the ethical considerations of using algorithms for loan application screening, identifying potential discriminatory outcomes.
  • Evaluate the societal impact of biased hiring algorithms on underrepresented groups.
  • Predict the long-term consequences of widespread reliance on potentially flawed algorithmic decision-making in social services.

Before You Start

Introduction to Data and Algorithms

Why: Students need a foundational understanding of what data is and how algorithms process it before discussing their ethical implications.

Basic Concepts of Bias

Why: Prior exposure to the concept of bias in human decision-making will help students recognize and analyze bias in algorithmic systems.

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Predictive Policing: The use of data analysis and algorithms to identify potential criminal activity and deploy law enforcement resources proactively.
Fairness in AI: The principle that artificial intelligence systems should not produce discriminatory or prejudiced outcomes against individuals or groups.
Data Set: A collection of data, often used to train machine learning models. Biases present in the real world can be embedded within these data sets.

Watch Out for These Misconceptions

Common Misconception: Algorithms are objective because they use math and data, not human opinions.

What to Teach Instead

Algorithms reflect the human choices made in designing them: what data to use, what outcome to optimize for, and what fairness definition to apply. Data itself encodes historical human biases. Active case study analysis helps students trace the human decisions embedded in a specific algorithmic system.
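
For teachers who want a concrete demonstration, a minimal sketch with synthetic data (all numbers hypothetical, not a real hiring system) can show the core mechanism: a model fit to historical decisions that penalized one group learns to reproduce the same gap, even though the underlying skill distribution is identical across groups.

```python
# Minimal sketch, not a real hiring system: synthetic data in which
# historical decisions penalized group B, and a model trained on those
# labels reproduces the gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)          # identically distributed in both groups

# Historical labels: hired when skill cleared a bar, but group B faced
# a higher bar. This is the human bias baked into the data.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.3, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: predicted hire rate = {pred[group == g].mean():.2f}")
# The model's predicted hire rates mirror the historical disparity,
# even though skill is distributed identically across groups.
```

Note that dropping the group column does not automatically fix this: features correlated with group membership can act as proxies and carry the same signal.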

Common Misconception: A highly accurate algorithm is a fair one.

What to Teach Instead

An algorithm can be highly accurate on average while performing very differently across demographic groups. A model that is 95% accurate overall but 70% accurate for one group is not fair by most definitions. Examining disparate impact data, not just overall accuracy, is necessary to evaluate fairness.
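
A quick arithmetic check makes the point. The numbers below are made up but echo the example above: a small group's errors disappear into a strong overall average.

```python
# Toy numbers, hypothetical: 100 people, 80 in group A and 20 in group B.
import numpy as np

group = np.array([0] * 80 + [1] * 20)
y_true = np.ones(100)
y_pred = np.ones(100)
y_pred[-6:] = 0                          # all 6 errors fall on group B

correct = y_pred == y_true
print(f"overall: {correct.mean():.0%}")              # 94%
print(f"group A: {correct[group == 0].mean():.0%}")  # 100%
print(f"group B: {correct[group == 1].mean():.0%}")  # 70%
```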


Real-World Connections

  • In cities like Chicago, predictive policing algorithms have been used to forecast crime hotspots, raising concerns about over-policing in minority neighborhoods.
  • Companies like Amazon have faced scrutiny for using AI-powered hiring tools that showed bias against female applicants, leading to the tool's discontinuation.
  • The criminal justice system in some states utilizes risk assessment tools to inform bail and sentencing decisions, prompting debate about their fairness and accuracy.

Assessment Ideas

Discussion Prompt

Pose the question: 'Imagine an algorithm is used to decide which students receive extra academic support. What kinds of data might be used, and how could that data lead to unfair outcomes for certain students?' Facilitate a class discussion, guiding students to identify potential biases.

Exit Ticket

Provide students with a brief scenario about an algorithm used for college admissions. Ask them to write two sentences explaining one potential ethical concern and one sentence suggesting a way to mitigate that concern.

Quick Check

Present students with a short, anonymized case study of an algorithmic decision (e.g., loan denial). Ask them to identify the potential source of bias in 1-2 sentences and state whether the outcome seems fair. Collect responses to gauge understanding.

Frequently Asked Questions

What is algorithmic bias?
Algorithmic bias occurs when a predictive model produces systematically different outcomes for different groups of people, typically because the training data reflects existing human inequities. The model learns to associate certain features with outcomes in ways that disadvantage specific groups. Because the process looks technical and neutral, the bias can be harder to challenge than explicit human discrimination.
How are predictive algorithms used in criminal justice?
Risk-scoring tools like COMPAS assign numerical scores to defendants predicting their likelihood of reoffending. Judges use these scores in bail, sentencing, and parole decisions. ProPublica's 2016 investigation found that COMPAS incorrectly flagged Black defendants as high-risk at nearly twice the rate of white defendants. The case raised fundamental questions about due process and algorithmic accountability.
Can algorithmic bias be fixed?
Partially. Technical fixes like fairness-aware machine learning can reduce disparate impact, but no single technical definition of fairness can satisfy all legitimate fairness criteria simultaneously. This means policy and oversight decisions (who is allowed to use a system, in what context, with what appeals process) are as important as the technical design.
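One way to see why the fairness criteria conflict is a small arithmetic exercise based on a published impossibility result (Chouldechova, 2017); the rates below are hypothetical. When two groups have different base rates, a classifier with equal precision and equal false-negative rates must have unequal false-positive rates.

```python
# Hypothetical rates illustrating a known impossibility result
# (Chouldechova, 2017): with different base rates, equal precision (PPV)
# and equal false-negative rate (FNR) force unequal false-positive rates.
def false_positive_rate(base_rate: float, ppv: float, fnr: float) -> float:
    # Identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.7, 0.2                      # held equal for both groups
for name, base in [("group A", 0.3), ("group B", 0.6)]:
    print(f"{name}: false-positive rate = {false_positive_rate(base, ppv, fnr):.2f}")
# group A: 0.15, group B: 0.51 -- identical precision and miss rate,
# yet one group is falsely flagged more than three times as often.
```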
How does active learning help students understand algorithmic ethics?
Case studies and structured debates put students in direct contact with the real-world consequences of algorithmic systems. When students must argue a position and respond to counterarguments, they develop the nuanced thinking that these complex tradeoffs require. Passive reading about bias is far less likely to produce the kind of civic engagement these issues demand.