
AI and Societal Inequality

Students will analyze how AI exacerbates existing inequalities in society.

Standards: CSTA 3A-IC-24 · CSTA 3A-IC-25

About This Topic

AI systems do not operate in a vacuum. They are built on data collected from existing social structures, deployed in contexts shaped by economic inequality, and evaluated against metrics that may not reflect the full range of people affected. For 9th graders, this topic builds the analytical framework to ask not just 'does this AI work?' but 'who does it work for, and at whose expense?' CSTA 3A-IC-24 and 3A-IC-25 ask students to connect computing decisions to their impacts across diverse communities, which requires examining AI through the lens of access, representation, and power.

Key mechanisms worth examining include training data bias (models trained on non-representative data perform worse for underrepresented groups), differential access (wealthier schools and districts have better hardware, faster internet, and more AI tools), and algorithmic decision-making in high-stakes contexts like hiring, loan approval, and criminal sentencing. US students have often encountered these systems directly or through news coverage. Facial recognition accuracy disparities across demographic groups are a documented, well-studied example that resonates because the findings are concrete and verifiable.

Active learning is particularly valuable for this topic because the issues are contested and emotionally relevant. Structured debate, case study analysis, and policy design activities give students tools to reason about tradeoffs rather than either dismissing concerns or accepting claims uncritically.

Key Questions

  1. How does AI exacerbate existing inequalities in society?
  2. How might AI affect different socioeconomic groups differently?
  3. What policy recommendations could mitigate AI's negative impact on inequality?

Learning Objectives

  • Analyze case studies to identify specific examples of how AI algorithms perpetuate or amplify existing societal inequalities related to race, gender, or socioeconomic status.
  • Evaluate the ethical implications of AI deployment in high-stakes decision-making processes, such as loan applications or criminal justice, by comparing outcomes for different demographic groups.
  • Design policy recommendations aimed at mitigating the negative impacts of AI on marginalized communities, considering factors like data bias and differential access.
  • Explain the technical and societal reasons behind disparities in AI performance across different demographic groups, citing examples like facial recognition accuracy.
  • Synthesize information from multiple sources to articulate the complex relationship between AI development and the exacerbation of societal inequalities.

Before You Start

Introduction to Machine Learning Concepts

Why: Students need a basic understanding of how AI models learn from data to grasp the concept of training data bias.

Data Representation and Types

Why: Understanding different types of data and how they are collected is foundational to analyzing how data can reflect or perpetuate societal biases.

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Training Data Bias: When the data used to train an AI model does not accurately represent the diversity of the real world, leading the model to perform poorly or unfairly for certain groups.
Differential Access: Unequal availability of technology, resources, or AI tools across different socioeconomic groups or geographic locations, impacting who benefits from AI advancements.
High-Stakes Decision-Making: The use of AI systems in areas with significant consequences for individuals, such as hiring, college admissions, credit scoring, or judicial sentencing.
Algorithmic Transparency: The principle that the decision-making processes of AI algorithms should be understandable and explainable to users and affected individuals.

Watch Out for These Misconceptions

Common Misconception: AI is objective because it uses math and data rather than human opinions.

What to Teach Instead

AI systems reflect the choices made in their design: what data to collect, what to optimize for, whose feedback to use. Data itself encodes historical patterns, including discriminatory ones. If a hiring model is trained on decades of male-dominated hires, it learns to favor male candidates. The math is neutral; the data is not. Case study analysis that traces this chain from historical context to model output makes the mechanism visible and hard to dismiss.
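
The mechanism can also be made concrete in a few lines of Python, which fits a CS classroom where students already read simple code. The sketch below is illustrative only: the group names, hire rates, and counts are invented, and the "model" is just a frequency count standing in for a real learner. What it demonstrates is the real point, though: a system that learns from biased history reproduces that history, even when both groups are equally qualified.

```python
# Illustrative sketch: a "model" trained on biased historical hiring data.
# All numbers and group names are invented for demonstration.
import random
from collections import defaultdict

random.seed(0)

# Synthetic history: candidates in both groups are equally qualified,
# but past hiring favored group A (think decades of male-dominated hires).
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5            # identical qualification rates
    hire_rate = 0.70 if group == "A" else 0.30   # the historical bias
    hired = qualified and random.random() < hire_rate
    history.append((group, qualified, hired))

# "Training": estimate P(hired | group, qualified) by counting outcomes.
counts = defaultdict(lambda: [0, 0])             # [times hired, total] per key
for group, qualified, hired in history:
    counts[(group, qualified)][0] += hired
    counts[(group, qualified)][1] += 1

# The learned hire rates for EQUALLY QUALIFIED candidates differ by group:
for group in ("A", "B"):
    hired_n, total = counts[(group, True)]
    print(f"learned hire rate, qualified group {group}: {hired_n / total:.2f}")
# Prints roughly 0.70 for A and 0.30 for B: the model has faithfully
# learned the discrimination baked into its training data.
```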

Common Misconception: If an AI treats everyone the same, it is fair.

What to Teach Instead

Applying the same rule to everyone is formal equality, not equity. A facial recognition system that is 99% accurate overall but 85% accurate for darker-skinned women treats everyone 'the same' while delivering worse outcomes for a specific group. Students need to distinguish equal treatment from equitable outcomes. Structured debates and case studies help them articulate this distinction with evidence rather than just asserting it.
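
A quick computation makes the overall-versus-subgroup gap visible. The counts below are made up to mirror the 99%/85% example above; nothing here comes from a real benchmark.

```python
# Illustrative accuracy audit: a strong overall number hides a subgroup gap.
# All counts are invented to mirror the 99% / 85% example in the text.
predictions = (
    [("majority", True)] * 9900 + [("majority", False)] * 100    # 99% correct
    + [("minority", True)] * 850 + [("minority", False)] * 150   # 85% correct
)

def accuracy(rows):
    """Fraction of (group, was_correct) rows that were correct."""
    return sum(correct for _, correct in rows) / len(rows)

print(f"overall accuracy: {accuracy(predictions):.3f}")   # ~0.977, looks great
for group in ("majority", "minority"):
    rows = [row for row in predictions if row[0] == group]
    print(f"{group} accuracy: {accuracy(rows):.3f}")       # 0.990 vs 0.850
# The minority group's error rate is 15x the majority's, yet the single
# headline number would pass most casual evaluations.
```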

Common Misconception: Only people who directly use an AI system are affected by it.

What to Teach Instead

AI-driven decisions in hiring, lending, sentencing, and healthcare affect people who never interact with the system directly. Someone rejected by an automated hiring screen or denied a loan by an algorithm may not even know AI was involved. The reach of algorithmic decision-making extends well beyond the interface, which is precisely why policy-level analysis matters alongside individual user experience.

Active Learning Ideas


Case Study Analysis: AI in High-Stakes Decisions

Provide three short case studies (e.g., COMPAS recidivism scoring, Amazon's hiring algorithm, facial recognition accuracy disparities). Groups analyze each using a structured template: What does the AI do? Who benefits? Who is harmed? What data was it trained on? Groups present findings and the class maps patterns across all three cases.

45 min · Small Groups

Structured Academic Controversy: Is AI Making Inequality Worse?

Pairs are assigned a position (AI is widening inequality / AI is reducing inequality) and prepare arguments with provided readings. They argue their assigned side for 5 minutes each, then switch positions, then work together to write a nuanced joint statement. The goal is to hold complexity rather than win the debate.

40 min · Pairs

Think-Pair-Share: Who Designed This?

Show a series of AI product screenshots or feature descriptions. Students independently note who they imagine built it, who the intended user is, and whose needs might not have been considered. Partner discussion and a whole-class debrief surface assumptions students had not noticed they were making about the default user.

20 min · Pairs

Policy Design Workshop: Mitigating AI Inequality

Small groups are each assigned a sector (healthcare, education, hiring, criminal justice). They identify one specific inequality AI creates or worsens in that sector, then draft a policy recommendation with a rationale and at least one counterargument. Groups share proposals and receive structured feedback from peers.

50 min · Small Groups

Real-World Connections

  • ProPublica's investigation into the COMPAS algorithm revealed that it was more likely to falsely flag Black defendants as future criminals than white defendants, affecting sentencing decisions in the US justice system.
  • Facial recognition systems have shown significantly lower accuracy rates for women and people with darker skin tones, leading to potential misidentification and wrongful accusations, as documented by NIST studies.
  • Companies like Amazon have faced scrutiny for using AI-powered hiring tools that showed bias against female applicants because the system was trained on historical hiring data that favored men.

Assessment Ideas

Discussion Prompt

Present students with a hypothetical scenario: An AI system is proposed to help allocate scholarships. Ask them: 'What potential biases could be embedded in this system? How might this AI affect students from different socioeconomic backgrounds differently? What questions should we ask before deploying it?'

Quick Check

Provide students with a short news clipping about an AI application (e.g., AI in loan approvals). Ask them to identify one way the AI might exacerbate existing inequalities and one way it might benefit society. They should write their answers in 2-3 sentences each.

Exit Ticket

Ask students to write down one specific example of AI being used in a way that could create or worsen inequality. Then, ask them to propose one concrete step a developer or policymaker could take to address this specific issue.

Frequently Asked Questions

How does AI training data lead to biased outcomes?
AI models learn patterns from historical data. If that data reflects past discrimination, like decades of hiring records that favored one demographic, the model learns to replicate those patterns. The model is not deciding to discriminate; it is doing exactly what it was optimized to do. Fixing biased outputs requires examining the data, the objective function, and the deployment context, not just adjusting the algorithm.
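
One concrete audit students can run on such a system is the disparate impact ratio: compare selection rates across groups and flag ratios below 0.8, the "four-fifths rule" from US employment-selection guidelines. The sketch below uses hypothetical selection rates.

```python
# Disparate impact ratio: a simple fairness audit.
# The 0.8 threshold is the "four-fifths rule" from US employment-selection
# guidelines; the selection rates below are hypothetical.
def disparate_impact(rate_disadvantaged: float, rate_advantaged: float) -> float:
    """Ratio of selection rates between two groups."""
    return rate_disadvantaged / rate_advantaged

# Suppose an automated screen advances 60% of group A but only 30% of group B.
ratio = disparate_impact(0.30, 0.60)
print(f"disparate impact ratio: {ratio:.2f}")   # 0.50
print("flag for review" if ratio < 0.8 else "within four-fifths guideline")
```
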
What is the digital divide and why does it matter for AI inequality?
The digital divide refers to unequal access to technology: fast internet, modern devices, and quality technical education. It matters for AI because the benefits of AI tools, like personalized tutoring software, medical diagnostics, and productivity applications, are disproportionately available to people with better access. Students in underfunded schools or low-income households often cannot access the same AI-powered resources as their peers, compounding existing educational gaps.
What are some documented examples of AI worsening inequality?
Several are well-studied. The COMPAS recidivism tool falsely flagged Black defendants who did not go on to reoffend as high risk at nearly twice the rate of comparable white defendants. Amazon's experimental hiring algorithm downgraded resumes containing the word 'women's' because it was trained on historically male-dominated data. These are not isolated cases; they reflect structural patterns that appear consistently across domains when AI is trained on historically unequal data.
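
ProPublica's core metric was an error-rate comparison rather than overall accuracy: among defendants who did not reoffend, how many were labeled high risk? The sketch below shows that false-positive-rate comparison; the counts are hypothetical, chosen only to echo the roughly two-to-one disparity described above.

```python
# False positive rate by group, in the style of an error-rate audit.
# Counts are hypothetical, chosen to echo the ~2x disparity in the text;
# they are NOT ProPublica's actual figures.
non_reoffenders = {
    # group: (labeled "high risk" despite not reoffending, total non-reoffenders)
    "group_1": (450, 1000),
    "group_2": (230, 1000),
}
for name, (false_positives, total) in non_reoffenders.items():
    print(f"{name} false positive rate: {false_positives / total:.0%}")
# 45% vs 23%: the costliest error lands on one group about twice as often,
# even when overall accuracy looks similar for both groups.
```
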
How does active learning help students engage critically with AI inequality?
These topics involve contested values, real stakes, and emotional weight. Structured debate forces students to engage with positions they might not initially hold. Case study analysis grounds abstract claims in specific evidence. Policy design asks students to take responsibility for solutions rather than only critique problems. Together, these approaches build careful, evidence-based reasoning that lecture alone rarely achieves for topics this socially complex.