Computing · Year 8

Active learning ideas

Machine Learning and Bias

Active learning works for this topic because students need to see bias in action, not just hear about it. When they manipulate biased datasets and analyze real cases, they move from abstract worry to concrete understanding. These hands-on experiences build lasting empathy for people affected by unfair AI systems.

National Curriculum Attainment Targets
KS3: Computing - Artificial Intelligence
KS3: Computing - Societal and Ethical Impacts

30–45 min · Pairs → Whole Class · 4 activities

Activity 01

Simulation Game · 30 min · Small Groups

Simulation Game: Biased Data Bags

Distribute bags of colored beads with uneven distributions to represent biased datasets. In small groups, students 'train' a partner to classify new beads by majority color patterns, then test on balanced bags and record failure rates. Groups debrief on how data imbalance caused poor predictions.
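For teachers who want to show the same idea on screen after the hands-on round, the bead activity can be sketched in a few lines of Python. This is an illustrative demo, not part of the activity itself; the bead counts and colors are assumptions chosen to mirror a typical biased bag.

```python
# Demo of the Biased Data Bags idea: a "classifier" that simply
# predicts the majority color it saw during training.

def make_bag(n_red, n_blue):
    # A bag is just a list of bead colors.
    return ["red"] * n_red + ["blue"] * n_blue

def train_majority(bag):
    # "Training" = remembering the most common color in the bag.
    return max(set(bag), key=bag.count)

def failure_rate(prediction, test_bag):
    # Fraction of beads in the test bag the prediction gets wrong.
    misses = sum(1 for bead in test_bag if bead != prediction)
    return misses / len(test_bag)

biased_bag = make_bag(18, 2)      # 90% red: an imbalanced training set
balanced_bag = make_bag(10, 10)   # the fair test bag

prediction = train_majority(biased_bag)
print(prediction)                              # red
print(failure_rate(prediction, balanced_bag))  # 0.5: wrong on every blue bead
```

Projecting this after the debrief lets students see that the classifier's 50% failure rate on the balanced bag comes entirely from the skewed training bag, not from the "training" rule itself.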

If an AI makes a biased decision, who is responsible: the programmer or the data?

Facilitation Tip: During Biased Data Bags, pause after each round to ask groups to share which bead colors failed most often and why.

What to look for: Present students with a scenario: An AI system designed to recommend job candidates was trained on data from a company that historically hired more men for technical roles. Ask: 'Who is primarily responsible for any bias in the AI's recommendations: the programmers who built the system, or the historical data it learned from? Justify your answer with specific reasons.'

Apply · Analyze · Evaluate · Create · Social Awareness · Decision-Making

Activity 02

Socratic Seminar · 45 min · Small Groups

Case Study Carousel: Real AI Examples

Prepare stations with cases like biased hiring tools or facial recognition errors. Groups rotate, noting bias sources (data or code), impacts, and fixes. Each group presents one insight to the class for collective discussion.

Explain how we can ensure that machine learning models are fair and transparent.

Facilitation Tip: For the Case Study Carousel, assign each pair a single case to analyze deeply before rotating to the next one.

What to look for: Provide students with a simplified, hypothetical dataset (e.g., student test scores with demographic information). Ask them to identify one potential source of bias within the data and explain how it might lead to an unfair outcome if used to train an AI for predicting future academic success.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 03

Socratic Seminar · 35 min · Pairs

Debate Pairs: Who Bears Responsibility?

Assign pairs to argue that either the programmer or the data is the primary source of bias, using evidence from prior activities. Pairs share arguments in a whole-class debate, voting on the strongest points and reflecting on shared accountability.

Critique the limitations of a machine's ability to learn compared to a human.

Facilitation Tip: In Debate Pairs, give students a visible timer so they practice concise arguments within a set limit.

What to look for: Students write down one way a programmer could try to make an AI model fairer. They should also list one limitation of AI compared to human decision-making in complex ethical situations.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 04

Timeline Challenge · 40 min · Individual

Timeline Challenge: Build a Fair Dataset

Individually, students select a scenario such as image labeling, brainstorm diverse data sources, and sketch a balanced dataset plan. Pairs then review and refine the plans before sharing them with the class for feedback.
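A simple balance check can make the review step concrete: students (or the teacher, as a demo) count how many examples each category in a plan would contribute and compute the skew between the largest and smallest. The categories and counts below are illustrative, matching the image-labeling scenario.

```python
from collections import Counter

def balance_report(labels):
    # Count examples per category and report how skewed the plan is.
    counts = Counter(labels)
    largest = max(counts.values())
    smallest = min(counts.values())
    # A ratio of 1.0 means perfectly balanced; bigger means more skew.
    return counts, largest / smallest

# A hypothetical dataset plan for an image-labeling scenario.
plan = ["cat"] * 40 + ["dog"] * 40 + ["rabbit"] * 5

counts, skew = balance_report(plan)
print(counts)   # rabbits are clearly underrepresented
print(skew)     # 8.0: an 8-to-1 gap between largest and smallest category
```

Asking pairs to get each other's skew ratio close to 1.0 gives the peer-review step a clear, checkable target.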

If an AI makes a biased decision, who is responsible: the programmer or the data?

What to look for: Present students with a scenario: An AI system designed to recommend job candidates was trained on data from a company that historically hired more men for technical roles. Ask: 'Who is primarily responsible for any bias in the AI's recommendations: the programmers who built the system, or the historical data it learned from? Justify your answer with specific reasons.'

Remember · Understand · Analyze · Self-Management · Relationship Skills

A few notes on teaching this unit

Teachers should frame bias as a data problem first, not a coding problem, to avoid the misconception that fixing the algorithm solves everything. Use analogies like ‘garbage in, garbage out’ to make the concept stick. Avoid long lectures on ethics; instead, let students discover the issues through structured tasks where bias emerges naturally from their own choices.
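The 'garbage in, garbage out' analogy can be made tangible with a minimal sketch: a naive recommender whose code is spotless but whose historical training data is skewed, so its recommendations simply reproduce the skew. The groups and proportions below are assumptions, echoing the hiring scenario used in the activities.

```python
from collections import Counter

# Skewed historical hiring data: the "garbage in".
historical_hires = ["man"] * 80 + ["woman"] * 20

def recommend_share(history, group):
    # The "model": recommend each group at the rate it appears in history.
    # The code has no bug; the bias comes entirely from the data.
    counts = Counter(history)
    return counts[group] / len(history)

print(recommend_share(historical_hires, "woman"))  # 0.2: the bias is reproduced
```

Walking through this with students reinforces the framing above: there is no line of code to 'fix' here, because the unfairness enters through the dataset.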

Successful learning looks like students explaining how biased data leads to unfair outcomes, not just identifying bias in a case. They should cite evidence from simulations and debates when discussing responsibility and fairness. By the end, they can suggest practical steps to reduce bias in datasets.


Watch Out for These Misconceptions

  • During Biased Data Bags, watch for students who assume the beads represent people directly.

    After the simulation, ask students to reflect in writing: ‘What real-world group might be like the underrepresented beads?’ This helps them connect the model to societal bias without conflating beads and people.

  • During Case Study Carousel, watch for students who generalize that all AI bias comes from bad programmers.

    After analyzing cases, use a quick card sort where students match specific bias sources (e.g., ‘unrepresentative training data,’ ‘labeling errors,’ ‘narrow problem framing’) to the cases they studied. This makes the variety of bias sources visible.

  • During Debate Pairs, watch for students who claim machines can ‘learn like humans’ when comparing AI and human judgment.

    After the debate, ask each pair to write one sentence explaining how their analogy between human and machine learning breaks down, using an example from their debate cards.


Methods used in this brief