
Artificial Intelligence and Bias

Investigate how machine learning models can inherit and amplify human biases from training data.

Key Questions

  1. Who is responsible when an autonomous system makes an unethical decision?
  2. How can we detect and mitigate bias in algorithmic decision-making?
  3. What does it mean for a machine to be 'fair' in a social context?

Ontario Curriculum Expectations

CS.HS.C.1, CS.HS.C.2
Grade: Grade 11
Subject: Computer Science
Unit: The Impact of Computing on Society
Period: Term 4

About This Topic

This topic explores how machine learning models trained on real-world data often reflect and intensify societal prejudices. Grade 11 students examine how historical imbalances in training data lead to skewed predictions, such as facial recognition systems performing poorly on certain ethnic groups or hiring algorithms favoring specific demographics. They analyze key questions: who bears responsibility for unethical AI decisions, how bias in algorithms can be detected, and what fairness means in machine outputs.

This topic aligns with Ontario's Computer Science curriculum in the unit on computing's societal impact, fostering skills in ethical reasoning and critical evaluation of technology. Students connect concepts to standards CS.HS.C.1 and CS.HS.C.2 by debating accountability and mitigation strategies, preparing them for real-world applications in policy and development.

Active learning shines here because bias is abstract and context-dependent. When students audit datasets hands-on or simulate biased models, they witness amplification firsthand, sparking discussions that build empathy and problem-solving across diverse perspectives.

Learning Objectives

  • Analyze how specific features within a dataset can introduce or perpetuate bias in machine learning models.
  • Evaluate the ethical implications of biased AI decision-making in scenarios such as loan applications or criminal justice.
  • Propose mitigation strategies to reduce bias in an AI model, considering trade-offs between fairness and accuracy.
  • Critique existing AI applications for potential biases and their societal impact.
  • Explain the concept of algorithmic fairness and its challenges in diverse social contexts.

Before You Start

Introduction to Machine Learning Concepts

Why: Students need a basic understanding of how machine learning models learn from data to grasp how biases are inherited.

Data Representation and Analysis

Why: Understanding how data is structured and analyzed is crucial for identifying potential biases within datasets.

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Training Data: The dataset used to teach a machine learning model to recognize patterns and make predictions. Biases present in this data can be learned by the model.
Fairness Metrics: Quantitative measures used to assess whether an algorithm's outcomes are equitable across different demographic groups, such as demographic parity or equalized odds.
Data Augmentation: Techniques used to increase the size and diversity of a training dataset, often by creating modified versions of existing data, to help reduce bias.
Algorithmic Accountability: The principle that developers and deployers of AI systems are responsible for the outcomes and impacts of their algorithms, especially in cases of harm or discrimination.
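
One of these fairness metrics, demographic parity, can be computed in a few lines. The sketch below uses an invented toy dataset purely for illustration; it compares the rate of positive predictions across two hypothetical groups.

```python
from collections import defaultdict

def demographic_parity(predictions, groups):
    """Return the positive-prediction rate for each demographic group.

    Demographic parity holds when these rates are (roughly) equal.
    `predictions` is a list of 0/1 model outputs; `groups` labels
    each prediction with a (hypothetical) group name.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# A toy model approves 3 of 4 applicants in group A but only 1 of 4 in group B.
rates = demographic_parity(
    predictions=[1, 1, 1, 0, 1, 0, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)  # {'A': 0.75, 'B': 0.25}
```

Students can vary the predictions and watch the gap between group rates shrink or grow, which makes the abstract metric concrete.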

Real-World Connections

Hiring software used by companies like Amazon has faced scrutiny for exhibiting gender bias, favoring male candidates due to historical data reflecting a male-dominated tech industry.

Facial recognition systems, such as those used by law enforcement, have demonstrated lower accuracy rates for individuals with darker skin tones, raising concerns about misidentification and civil liberties.

Credit scoring algorithms used by financial institutions can inadvertently discriminate against certain socioeconomic groups if historical lending data reflects systemic inequalities.

Watch Out for These Misconceptions

Common Misconception: AI models are neutral because they use math and statistics.

What to Teach Instead

Models inherit biases from training data that mirrors human prejudices. Active group audits of datasets reveal hidden imbalances, helping students see how numbers encode societal issues and prompting them to question assumptions.
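A dataset audit of the kind described here can be as simple as counting. The sketch below uses a fully invented "hiring" dataset to show how a group audit surfaces both underrepresentation and unequal outcome rates.

```python
from collections import Counter

# A toy "hiring" dataset: each row is (applicant_gender, was_hired).
# The rows are invented to illustrate a historical imbalance.
records = [
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
    ("male", 1), ("male", 0), ("female", 0), ("female", 1),
]

# Audit step 1: how well is each group represented?
representation = Counter(gender for gender, _ in records)

# Audit step 2: what is each group's historical hire rate?
hire_rate = {
    g: round(sum(h for gender, h in records if gender == g) / representation[g], 2)
    for g in representation
}
print(representation)  # Counter({'male': 6, 'female': 2})
print(hire_rate)       # {'male': 0.67, 'female': 0.5}
```

Even this tiny audit reveals two distinct problems a model could learn: women are underrepresented in the data, and their historical hire rate is lower.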

Common Misconception: Bias only appears in obviously discriminatory cases.

What to Teach Instead

Subtle correlations in data amplify over time in complex models. Hands-on simulations let students trace bias propagation step-by-step, building skills to detect nuanced issues through collaborative analysis.

Common Misconception: Bias in AI cannot be fixed once trained.

What to Teach Instead

Mitigation techniques like data reweighting or adversarial training work at various stages. Role-play activities clarify intervention points, encouraging students to experiment and iterate solutions in teams.
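Data reweighting, one of the mitigation techniques mentioned above, can be demonstrated with a simple inverse-frequency scheme. This is a minimal sketch of one reweighting approach among many; the group labels are invented for illustration.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each example a weight inversely proportional to its
    group's frequency, so underrepresented groups count more during
    training. One simple reweighting scheme among many.
    """
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # weight = n / (k * count[g]) makes each group's total weight equal.
    return [n / (k * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]
weights = inverse_frequency_weights(groups)
# Group B's single example gets weight 2.0, balancing A's three examples.
print(weights)
```

With these weights, each group contributes equally to the training loss, even though group B has far fewer examples.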

Assessment Ideas

Discussion Prompt

Present students with a hypothetical scenario: An AI system designed to recommend job candidates shows a strong preference for applicants from specific universities. Ask: 'Who is responsible if this system perpetuates inequality? What steps could the developers take to identify and address this bias before deployment?'

Quick Check

Provide students with a short, simplified dataset (e.g., fictional student grades with demographic information). Ask them to identify potential sources of bias within the data and explain how these might affect a predictive model. For example, 'If the dataset shows fewer female students in advanced math courses, how might an AI predict future math success for a new female student?'
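The quick-check scenario above can be demonstrated live with a toy "predictor" that simply learns base rates from history, which is exactly how a naive model turns past imbalance into future bias. The dataset is invented for this exercise.

```python
# Fictional history: each row is (student_gender, took_advanced_math).
history = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

def predict_advanced_math(gender):
    """Predict enrollment probability from the historical rate for that
    gender, illustrating how a naive model encodes past imbalance."""
    outcomes = [took for g, took in history if g == gender]
    return sum(outcomes) / len(outcomes)

print(predict_advanced_math("male"))    # 0.75
print(predict_advanced_math("female"))  # 0.25
```

Students can then discuss why a capable new female student would still receive a low prediction, and what changing the data (versus changing the model) would do.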

Exit Ticket

Ask students to write down one specific example of how bias can enter an AI model and one strategy that could be used to mitigate it. They should also define 'algorithmic fairness' in their own words.

Frequently Asked Questions

What are real examples of AI bias in machine learning?
Common cases include facial recognition failing on darker skin tones due to underrepresented training images, or credit scoring algorithms disadvantaging women because of historical data patterns. Students benefit from dissecting these: they map data sources to outcomes, quantify disparities with metrics like demographic parity, and brainstorm fixes like diverse data collection. This builds analytical skills for ethical tech use.
How can teachers detect bias in algorithmic decision-making?
Use fairness metrics such as equalized odds or disparate impact ratios on model outputs across subgroups. Tools like AIF360 or Fairlearn simplify audits. In class, guide students to probe training data for imbalances first, then evaluate predictions. Regular checkpoints in projects ensure bias awareness from design stages.
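The disparate impact ratio mentioned above has a common dependency-free form: the positive-outcome rate of the protected group divided by that of the reference group, with ratios below 0.8 flagged under the "four-fifths rule". Libraries like Fairlearn and AIF360 package such metrics, but this hypothetical classroom sketch needs only plain Python.

```python
def disparate_impact(predictions, groups, protected, reference):
    """Ratio of positive-prediction rates: protected group vs reference.

    The "four-fifths rule" of thumb flags ratios below 0.8 as
    potentially discriminatory.
    """
    def rate(g):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(preds) / len(preds)
    return rate(protected) / rate(reference)

# Toy model outputs: group B is approved far less often than group A.
preds  = [1, 0, 0, 0, 1, 1, 1, 0]
groups = ["B", "B", "B", "B", "A", "A", "A", "A"]
ratio = disparate_impact(preds, groups, protected="B", reference="A")
print(round(ratio, 2))  # 0.33, well below the 0.8 threshold
```

Running this on a project's actual model outputs gives students a concrete checkpoint for the "regular bias audits" the answer recommends.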
How can active learning help students understand AI bias?
Active approaches like dataset audits and bias simulations make invisible data prejudices visible. Pairs graphing imbalances or groups debating case studies foster ownership and debate, turning passive reading into discovery. Students retain more by connecting abstract concepts to tangible outcomes, developing critical thinking for societal tech impacts.
What does fairness mean for AI in social contexts?
Fairness involves metrics like equal opportunity across groups, but context matters: medical AI might prioritize accuracy over parity. Ontario curriculum emphasizes this nuance. Teach through key questions, having students define fairness for scenarios like hiring, then test models against their criteria. This cultivates nuanced ethical judgment.