Ethical Implications of Algorithmic Predictions
Students will discuss the dangers of over-relying on algorithmic predictions for social issues.
About This Topic
Algorithmic prediction systems are used to make high-stakes decisions about people: who gets a loan, who is flagged by predictive policing, whose resume gets filtered by a hiring algorithm. These systems claim objectivity because they are based on data, but data carries the historical biases of the systems that generated it. When a model trained on past hiring decisions learns to discriminate against certain groups because humans did, the algorithm amplifies and automates that discrimination at scale.
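To make this concrete for students, here is a minimal sketch with entirely invented data: a "model" that simply learns each group's historical hiring rate will reproduce the bias baked into its training labels, even though the code itself contains no prejudice.

```python
# Hypothetical past hiring decisions as (group, hired) pairs.
# Group B was hired less often for reasons unrelated to qualification.
from collections import defaultdict

history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

counts = defaultdict(lambda: [0, 0])  # group -> [times hired, total applicants]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predict_hire_probability(group):
    """A naive 'model': predict the historical hire rate for the group."""
    hired, total = counts[group]
    return hired / total

print(predict_hire_probability("A"))  # 0.75 -- learned from biased history
print(predict_hire_probability("B"))  # 0.25 -- the old bias, now automated
```

The point of the sketch is that nothing in the code is "unfair"; the unfairness arrives entirely through the training data, which is exactly the mechanism the paragraph above describes.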
This topic, grounded in CSTA 3A-DA-12 and 3A-IC-24, asks 9th graders to think critically about the gap between algorithmic claims and algorithmic reality. Students do not need to understand gradient descent to understand that a system trained on biased outcomes will produce biased predictions. The underlying human-values question (who should make decisions that affect people's lives, and on what basis?) is one every student can meaningfully engage with.
Active learning through case study analysis and structured debate is especially appropriate here because these are contested, value-laden questions with real consequences. Students who discuss these issues with peers are more likely to carry the critical perspective into civic life as adults.
Key Questions
- What are the dangers of over-relying on algorithmic predictions for social issues?
- Should predictive algorithms be used in sensitive areas like criminal justice or hiring?
- What societal consequences could biased algorithmic predictions have?
Learning Objectives
- Analyze the potential for algorithmic bias in predictive policing systems by examining case study data.
- Critique the ethical considerations of using algorithms for loan application screening, identifying potential discriminatory outcomes.
- Evaluate the societal impact of biased hiring algorithms on underrepresented groups.
- Predict the long-term consequences of widespread reliance on potentially flawed algorithmic decision-making in social services.
Before You Start
- Students need a foundational understanding of what data is and how algorithms process it before discussing the ethical implications of algorithmic systems.
- Prior exposure to the concept of bias in human decision-making will help students recognize and analyze bias in algorithmic systems.
Key Vocabulary
| Term | Definition |
| --- | --- |
| Algorithmic Bias | Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. |
| Predictive Policing | The use of data analysis and algorithms to identify potential criminal activity and deploy law enforcement resources proactively. |
| Fairness in AI | The principle that artificial intelligence systems should not produce discriminatory or prejudiced outcomes against individuals or groups. |
| Data Set | A collection of data, often used to train machine learning models. Biases present in the real world can be embedded within these data sets. |
Watch Out for These Misconceptions
Common Misconception: Algorithms are objective because they use math and data, not human opinions.
What to Teach Instead
Algorithms reflect the human choices made in designing them: what data to use, what outcome to optimize for, and what fairness definition to apply. Data itself encodes historical human biases. Active case study analysis helps students trace the human decisions embedded in a specific algorithmic system.
Common Misconception: A highly accurate algorithm is a fair one.
What to Teach Instead
An algorithm can be highly accurate on average while performing very differently across demographic groups. A model that is 95% accurate overall but 70% accurate for one group is not fair by most definitions. Examining disparate impact data, not just overall accuracy, is necessary to evaluate fairness.
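A short sketch with invented numbers can make this gap visible. Here, 100 hypothetical predictions are 90% accurate overall, yet accuracy splits sharply between two groups of different sizes.

```python
# Invented results: (group, prediction_was_correct) for 100 predictions,
# 80 from group A and 20 from group B.
results = ([("A", True)] * 78 + [("A", False)] * 2
           + [("B", True)] * 12 + [("B", False)] * 8)

def accuracy(rows):
    """Fraction of predictions that were correct."""
    return sum(ok for _, ok in rows) / len(rows)

overall = accuracy(results)                               # 0.90 overall
group_a = accuracy([r for r in results if r[0] == "A"])   # 0.975 for group A
group_b = accuracy([r for r in results if r[0] == "B"])   # 0.60 for group B
print(overall, group_a, group_b)
```

Because group A dominates the data, its high accuracy masks group B's poor results in the overall average, which is exactly why disparate impact has to be measured per group.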
Active Learning Ideas
Case Study Analysis: The COMPAS Recidivism Tool
Small groups read a summary of the ProPublica analysis of the COMPAS risk-scoring algorithm used in US courts. Groups identify: what the algorithm was designed to do, what ProPublica found it actually did, and who was harmed. Groups present their analysis and the class votes on whether the system should continue to be used.
Formal Debate: Should Algorithms Make Hiring Decisions?
Half the class argues for algorithmic hiring screening; the other half argues against. Each side has five minutes to prepare, three minutes to present, and two minutes for rebuttal. After the debate, students individually write a one-paragraph position statement that acknowledges the strongest argument on the opposing side.
Think-Pair-Share: Fairness Definitions Clash
Present two definitions of algorithmic fairness that are mathematically incompatible: equal error rates across groups vs. equal prediction accuracy. Students individually decide which definition they think is more fair and why. Partners share, then the class discusses why there is no universally correct answer.
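For teachers who want a numeric version of the clash, here is a toy example with made-up confusion counts. Both groups receive identical overall accuracy, yet their false positive rates differ, so "equal accuracy" and "equal error rates" disagree about whether the system is fair.

```python
# Invented confusion counts per group: (TP, FP, FN, TN).
confusion = {"A": (40, 10, 10, 40),
             "B": (55, 20, 0, 25)}

def accuracy(tp, fp, fn, tn):
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + fp + fn + tn)

def false_positive_rate(tp, fp, fn, tn):
    """Fraction of true negatives wrongly flagged as positive."""
    return fp / (fp + tn)

for group, counts in confusion.items():
    print(group, accuracy(*counts), false_positive_rate(*counts))
# A: accuracy 0.8, false positive rate 0.2
# B: accuracy 0.8, false positive rate ~0.44
```

Group B is wrongly flagged more than twice as often as group A despite identical accuracy, mirroring the real tension ProPublica documented in the COMPAS debate.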
Real-World Connections
- In cities like Chicago, predictive policing algorithms have been used to forecast crime hotspots, raising concerns about over-policing in minority neighborhoods.
- Companies like Amazon have faced scrutiny for using AI-powered hiring tools that showed bias against female applicants, leading to the tool's discontinuation.
- The criminal justice system in some states utilizes risk assessment tools to inform bail and sentencing decisions, prompting debate about their fairness and accuracy.
Assessment Ideas
Pose the question: 'Imagine an algorithm is used to decide which students receive extra academic support. What kinds of data might be used, and how could that data lead to unfair outcomes for certain students?' Facilitate a class discussion, guiding students to identify potential biases.
Provide students with a brief scenario about an algorithm used for college admissions. Ask them to write two sentences explaining one potential ethical concern and one sentence suggesting a way to mitigate that concern.
Present students with a short, anonymized case study of an algorithmic decision (e.g., loan denial). Ask them to identify the potential source of bias in 1-2 sentences and state whether the outcome seems fair. Collect responses to gauge understanding.
Frequently Asked Questions
What is algorithmic bias?
How are predictive algorithms used in criminal justice?
Can algorithmic bias be fixed?
How does active learning help students understand algorithmic ethics?
More in Data Intelligence and Visualization
Data Collection Methods and Bias
Students will explore techniques for gathering data and analyze how bias in data collection can lead to inaccurate conclusions.
Ethical Data Scraping and Privacy
Students will discuss the ethical considerations of scraping data from public websites and privacy implications.
Data Cleaning and Preprocessing
Students will learn the necessity of cleaning data to ensure accuracy and handle missing or corrupted data.
Correlation vs. Causation
Students will analyze why correlation does not necessarily imply a causal relationship.
Identifying Trends in Data
Students will use computational tools to identify patterns and trends within datasets.
Evaluating Data-Driven Conclusions
Students will learn to critically evaluate conclusions drawn from data, considering limitations and potential biases.