Computer Science · 12th Grade · Data Science and Intelligent Systems · Weeks 19-27

Ethical AI and Algorithmic Bias

Students examine the ethical implications of AI, focusing on algorithmic bias, fairness, and accountability in intelligent systems.

Standards: CSTA 3B-IC-25 · CSTA 3B-IC-26

About This Topic

AI systems that make decisions about people (who gets a loan, who receives a job interview, who is flagged by facial recognition, whose health records trigger an alert) carry the values and assumptions of the people who designed them and of the data used to train them. Students in US 12th-grade CS examine how algorithmic bias arises, how it can be measured, and why eliminating it is technically and ethically complex.

Bias enters AI systems through multiple pathways: historical data that reflects past discrimination gets used to train models that perpetuate those patterns; underrepresentation of certain groups in training data leads to lower accuracy for those groups; and optimization objectives chosen for business reasons (maximize clicks, minimize cost) may systematically harm particular populations. Students also learn that 'fairness' has multiple mathematical definitions that are provably incompatible: a system cannot simultaneously satisfy all of them, which means deployers must make explicit value judgments about whose interests take priority.

Active learning methods, particularly structured debates and case study analyses, are essential here because the issues require students to reason from multiple ethical frameworks simultaneously rather than simply applying a rule. The goal is developing principled judgment, not arriving at predetermined conclusions.

Key Questions

  1. How can biases in training data lead to discriminatory outcomes in AI systems?
  2. How well do current approaches ensure fairness and transparency in AI decision-making?
  3. What should a set of ethical guidelines for the development and deployment of AI technologies include?

Learning Objectives

  • Analyze how specific biases in training datasets, such as historical loan approval data, can lead to discriminatory outcomes in AI-driven loan application systems.
  • Critique the effectiveness of current fairness metrics, like demographic parity and equalized odds, in addressing algorithmic bias in facial recognition technology.
  • Design a set of ethical guidelines for the development and deployment of AI in hiring processes, considering principles of accountability and transparency.
  • Evaluate the trade-offs between different definitions of fairness and their implications for AI systems used in criminal justice risk assessments.

Before You Start

Introduction to Machine Learning Concepts

Why: Students need a foundational understanding of how machine learning models are trained and make predictions to grasp the mechanisms of algorithmic bias.

Data Representation and Analysis

Why: Understanding how data is collected, cleaned, and analyzed is crucial for identifying potential sources of bias within datasets.

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Fairness Metrics: Mathematical definitions used to quantify and measure fairness in AI systems, with various definitions often being mutually exclusive.
Transparency: The degree to which the inner workings and decision-making processes of an AI system are understandable to humans.
Accountability: The obligation of an AI system's developers and deployers to take responsibility for the outcomes and impacts of the system.
Training Data: The dataset used to train an AI model, which can inadvertently encode societal biases if not carefully curated and analyzed.

Watch Out for These Misconceptions

Common Misconception: Removing protected attributes like race or gender from the training data makes an AI system fair.

What to Teach Instead

Proxy variables (zip code, name, word choice in an essay) often correlate strongly with protected attributes. Removing the attribute itself does not prevent the model from learning the proxy association. Students who analyze a dataset for proxy variables before and after removing a protected attribute frequently find that the correlation persists through other features.
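A small simulation can make this concrete for students. The sketch below uses invented synthetic data (the feature name `zip_income` and all numbers are illustrative, not from any real dataset): group membership influences a zip-code-based feature, and that correlation is present in the feature whether or not the group column itself is in the training set.

```python
import random
import statistics

random.seed(0)

# Hypothetical synthetic data: group membership (a protected attribute)
# strongly influences zip_income (a proxy feature a model could learn from).
group = [random.randint(0, 1) for _ in range(1000)]
zip_income = [g * 30 + random.gauss(50, 10) for g in group]

def pearson(x, y):
    """Population Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

# The proxy carries group information regardless of whether the `group`
# column appears in the training features.
r = pearson(group, zip_income)
print(f"correlation(group, zip_income) = {r:.2f}")
```

Dropping `group` from the feature list leaves `zip_income` untouched, so a model trained on the remaining features can still reconstruct group-correlated decisions.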

Common Misconception: Algorithmic decisions are more objective than human decisions because they are data-driven.

What to Teach Instead

Algorithms encode human choices at every stage: what data to collect, which labels to assign, which objective to optimize, which errors are acceptable. These choices are not objective; they reflect values, priorities, and assumptions. In some contexts, algorithms can be less biased than individual humans; in others, they systematize bias at scale in ways individual decisions do not.

Common Misconception: If an AI system achieves equal accuracy across demographic groups, it is fair.

What to Teach Instead

Equal accuracy can coexist with unequal rates of false positives or false negatives across groups. A criminal risk score that is equally accurate for Black and white defendants but generates higher false positive rates for Black defendants is not fair by multiple definitions even though accuracy is equal. Mathematical fairness requires choosing which type of error equality matters most.
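This contrast is easy to show numerically. The confusion-matrix counts below are invented for illustration; they are chosen so both groups get the same overall accuracy while their error profiles diverge.

```python
# Hypothetical confusion-matrix counts for two demographic groups:
# (true positives, false negatives, false positives, true negatives)
groups = {
    "A": (40, 10, 10, 40),
    "B": (30, 20, 0, 50),
}

metrics = {}
for name, (tp, fn, fp, tn) in groups.items():
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    fpr = fp / (fp + tn)  # share of actual negatives wrongly flagged
    fnr = fn / (fn + tp)  # share of actual positives missed
    metrics[name] = (accuracy, fpr, fnr)
    print(f"group {name}: accuracy={accuracy:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")

# Both groups reach 0.80 accuracy, yet group A's false positive rate is
# 0.20 while group B's is 0.00: equal accuracy, unequal harms.
```

Students can modify the counts to see how many different error profiles hide behind a single accuracy number.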

Active Learning Ideas


Case Study Analysis: Famous AI Bias Incidents

Assign each group one documented bias incident: COMPAS recidivism scoring, Amazon's recruiting tool, facial recognition misidentification rates, or healthcare resource allocation algorithms. Groups analyze the source of bias, who was harmed, what the deployer claimed, and what a fairer design might look like. Each group presents a five-minute brief, and the class identifies common patterns across cases.

45 min·Small Groups

Formal Debate: Can AI Be Neutral?

Divide the class into two groups: one argues that AI systems can be made bias-free through better data and auditing; the other argues that all AI systems embed the values of their designers and can never be neutral. After 15 minutes of preparation, groups debate for 20 minutes. The debrief does not declare a winner; it surfaces which empirical claims were most contested.

40 min·Whole Class

Think-Pair-Share: Whose Fairness Definition?

Present the three main mathematical fairness definitions (demographic parity, equalized odds, individual fairness) using a concrete hiring scenario. Pairs calculate which candidates would be hired under each definition and identify cases where the definitions give conflicting results. The debrief addresses why no single definition is universally correct.

20 min·Pairs
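For the pair calculations, a minimal sketch like the one below can serve as an answer key. The candidates, groups, and scores are invented; it contrasts a single score cutoff with a demographic-parity rule and shows they hire different people from the same pool. (Equalized odds would additionally require ground-truth qualification labels, omitted here for brevity.)

```python
# Invented hiring pool for illustration: (name, group, test score).
pool = [
    ("Ana", "X", 90), ("Ben", "X", 85), ("Cal", "X", 70),
    ("Dee", "Y", 88), ("Eli", "Y", 60), ("Fay", "Y", 55),
]

# Rule 1: a single score cutoff applied to everyone.
threshold_hires = sorted(n for n, g, s in pool if s >= 80)

# Rule 2: demographic parity -- equalize selection rates by hiring
# the top scorer within each group.
def top_per_group(pool, k=1):
    hired = []
    for grp in sorted({g for _, g, _ in pool}):
        ranked = sorted(((s, n) for n, g, s in pool if g == grp), reverse=True)
        hired += [n for _, n in ranked[:k]]
    return sorted(hired)

parity_hires = top_per_group(pool)

print("score cutoff:      ", threshold_hires)  # X hired at 2/3, Y at 1/3
print("demographic parity:", parity_hires)     # both groups hired at 1/3
```

The cutoff rule selects Ben; the parity rule does not, which gives pairs a concrete case where the definitions conflict.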

Design Challenge: Write AI Ethics Guidelines

Groups are assigned the role of ethics board for a company deploying an AI system in a specific context (college admissions, medical triage, content moderation). They produce a one-page policy specifying: what data may be used, which fairness metric applies, who is accountable for errors, and how affected individuals can appeal. Groups share and critique each other's policies.

35 min·Small Groups

Real-World Connections

  • Tech companies like Google and Microsoft are actively researching and implementing methods to detect and mitigate bias in their AI products, such as their respective AI ethics review boards and open-source fairness toolkits.
  • The use of AI in predictive policing has faced significant scrutiny due to concerns about racial bias, leading to debates and policy changes in cities like Chicago and Los Angeles regarding its deployment.
  • Financial institutions are increasingly using AI for credit scoring and loan approvals, raising questions about whether these systems perpetuate historical discrimination against minority groups.

Assessment Ideas

Discussion Prompt

Present students with a scenario where an AI hiring tool disproportionately rejects female applicants. Ask: 'What are two potential sources of bias in the training data for this tool? How could the developers have approached fairness differently to mitigate this outcome?'

Exit Ticket

Provide students with a brief description of an AI system (e.g., a content recommendation algorithm). Ask them to identify one potential ethical concern related to bias and suggest one concrete step the developers could take to address it.

Quick Check

Display a list of common fairness metrics (e.g., demographic parity, equal opportunity). Ask students to write a one-sentence explanation for each, highlighting a key difference or trade-off between them.

Frequently Asked Questions

What is algorithmic bias and how does it end up in AI systems?
Algorithmic bias is a systematic pattern of unfair outcomes in an AI system's decisions. It enters through several routes: historical training data that reflects past discrimination, underrepresentation of certain groups causing lower model accuracy for those groups, proxy variables that correlate with protected attributes, and optimization objectives that favor outcomes for some groups over others. Bias can be unintentional and still cause serious harm.
What are the main approaches to measuring fairness in AI systems?
Three common definitions are: demographic parity (the model's positive prediction rate is equal across groups), equalized odds (both true positive and false positive rates are equal across groups), and individual fairness (similar individuals receive similar predictions). These definitions can be mathematically incompatible with each other, meaning a system that satisfies one often cannot simultaneously satisfy another.
Who is accountable when an AI system causes harm?
Accountability in AI is genuinely contested. Potential accountability holders include the organization that deployed the system, the engineers who built it, the data providers whose data trained it, and the regulators who approved it. Current US legal frameworks do not always provide clear answers. Effective accountability typically requires transparency about how the system works, mechanisms for affected individuals to appeal decisions, and ongoing monitoring.
How does active learning help students engage with AI ethics?
AI ethics involves genuine value conflicts where reasonable people reach different conclusions using different ethical frameworks. Structured debates and case study analyses put students in the position of making and defending ethical judgments, not just identifying pre-labeled problems. This productive disagreement, guided by evidence from real incidents, develops the kind of critical reasoning that students need to engage with AI systems as future designers, deployers, and citizens.