
Bias and Fairness in AI: Activities & Teaching Strategies

Active learning works for Bias and Fairness in AI because students need to experience bias firsthand to understand it. When children sort datasets or simulate AI rules, they see how human choices shape technology, making abstract ideas concrete and memorable.

Year 6 · Computing · 4 activities · 25–45 min per activity

Learning Objectives

  1. Analyze examples of AI systems that exhibit bias due to skewed training data.
  2. Evaluate the impact of biased AI on different user groups, such as facial recognition software failing on certain demographics.
  3. Design a simple strategy to mitigate bias in a hypothetical AI training dataset.
  4. Justify the ethical importance of fairness and representation in AI development.

Want a complete lesson plan with these objectives? Generate a Mission

35 min·Small Groups

Group Sort: Spotting Dataset Bias

Give small groups printed cards with images or profiles representing a hiring dataset. Students sort into 'hire' or 'not hire' piles, then discuss imbalances like gender or ethnicity skews. Groups redesign the dataset for fairness and predict improved AI outcomes.
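For teachers who want to preview the imbalance students should spot, here is a minimal Python sketch using hypothetical profile data (the group names and decisions are invented for illustration). It counts how often each group appears on the cards and what share of each group was marked 'hire':

```python
# Toy hiring dataset mirroring the printed cards (hypothetical data).
from collections import Counter

profiles = [
    {"group": "men", "decision": "hire"},
    {"group": "men", "decision": "hire"},
    {"group": "men", "decision": "hire"},
    {"group": "men", "decision": "not hire"},
    {"group": "women", "decision": "hire"},
    {"group": "women", "decision": "not hire"},
    {"group": "women", "decision": "not hire"},
]

# How many cards represent each group?
representation = Counter(p["group"] for p in profiles)

# What share of each group was marked 'hire'?
def hire_rate(group):
    members = [p for p in profiles if p["group"] == group]
    hired = sum(p["decision"] == "hire" for p in members)
    return hired / len(members)

for group in representation:
    print(group, representation[group], round(hire_rate(group), 2))
```

The skew students should notice shows up in both numbers: one group has more cards and a much higher hire rate, which is exactly the pattern a 'fair' redesign of the dataset should remove.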

Prepare & details

Analyze how a computer program can inherit biases from its creators or training data.

Facilitation Tip: During Group Sort, circulate with a checklist to note which students quickly spot underrepresented groups or labels, and step in early to support those who hesitate.

Setup: Small groups at tables with room to sort cards into two piles

Materials: Printed dataset cards (images or profiles), 'Hire' and 'Not hire' pile labels, Discussion prompts

Analyze · Evaluate · Create · Self-Management · Decision-Making
40 min·Pairs

Pairs Simulation: Rule-Based AI

Pairs create a simple paper-based 'AI' using rules from a biased dataset, like favoring sports hobbies for job candidates. They test it on diverse profiles, note unfair results, and revise rules with balanced criteria. Share findings in a class gallery walk.
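The paper rules students write can be previewed in code. This is a hypothetical sketch (candidate names and hobbies are invented) showing a biased rule alongside a revised, more balanced one, mirroring the before/after step of the activity:

```python
# Paper 'AI' as Python rules (hypothetical example mirroring the activity).
def biased_ai(candidate):
    # Biased rule: favours a hobby that is more common in one group of profiles.
    return "accept" if "sports" in candidate["hobbies"] else "reject"

def revised_ai(candidate):
    # Revised rule: any hobby that shows teamwork counts, not just sports.
    teamwork_hobbies = {"sports", "choir", "drama", "chess club"}
    return "accept" if teamwork_hobbies & set(candidate["hobbies"]) else "reject"

candidates = [
    {"name": "A", "hobbies": ["sports"]},
    {"name": "B", "hobbies": ["choir"]},
    {"name": "C", "hobbies": ["drama", "coding"]},
]

for c in candidates:
    print(c["name"], biased_ai(c), revised_ai(c))
```

Testing both rule sets on the same diverse profiles makes the unfairness visible: the biased rule rejects everyone without sports, while the revised rule accepts candidates who show the same underlying quality through different hobbies.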

Prepare & details

Evaluate the importance of using 'fair' datasets when training AI models.

Facilitation Tip: In Pairs Simulation, assign one student to record the rules and the other to test them, ensuring both roles engage actively with the process.

Setup: Pairs at desks, with wall or table space reserved for the gallery walk

Materials: Rule-writing sheets, Diverse candidate profile cards, Gallery walk display materials

Analyze · Evaluate · Create · Self-Management · Decision-Making
45 min·Whole Class

Whole Class Debate: Fair Data Matters

Divide the class into teams to debate whether an AI hiring tool should be built with a biased or a fair dataset. Provide evidence cards; teams prepare 2-minute arguments. Vote and reflect on why fairness affects society.

Prepare & details

Justify why it is crucial to consider fairness when designing AI systems.

Facilitation Tip: For the Whole Class Debate, assign roles like ‘AI developer’ or ‘advocate for fairness’ to keep all students accountable for reasoned arguments.

Setup: Two teams facing each other, audience seating for the rest

Materials: Debate proposition card, Research brief for each side, Judging rubric for audience, Timer

Analyze · Evaluate · Create · Self-Management · Decision-Making
25 min·Individual

Individual Test: Online AI Audit

Students access safe, free AI tools like image labelers. They input diverse photos from school resources and log accuracy differences. Compile results on a shared chart to identify patterns.
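The shared class chart is essentially a per-group accuracy table. A minimal sketch of that calculation, with hypothetical results standing in for the numbers students actually log:

```python
# Log whether the image labeler got each photo right, grouped by demographic.
# These results are hypothetical; real numbers come from the students' audit sheets.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def accuracy(group):
    outcomes = [correct for g, correct in results if g == group]
    return sum(outcomes) / len(outcomes)

print("group_a accuracy:", accuracy("group_a"))
print("group_b accuracy:", accuracy("group_b"))
```

A large accuracy gap between groups, like the one in this invented data, is the pattern students are looking for: it suggests the tool's training data represented one group far better than the other.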

Prepare & details

Analyze how a computer program can inherit biases from its creators or training data.

Facilitation Tip: In the Individual Test, provide headphones so students can focus on the AI audit without distractions from peers.

Setup: Individual devices with access to the approved AI tools

Materials: List of safe, free AI image-labelling tools, Diverse photos from school resources, Accuracy log sheet, Shared class chart

Analyze · Evaluate · Create · Self-Management · Decision-Making

Teaching This Topic

Teachers should approach this topic by grounding discussions in students’ lived experiences of technology, using relatable examples like school apps or games. Avoid abstract definitions; instead, let students discover bias through hands-on tasks. Research shows that when children create and test their own ‘rules’ for AI, they better grasp how bias spreads and how to address it.

What to Expect

Successful learning looks like students confidently identifying bias sources in datasets and justifying inclusive design choices in discussions. They should articulate why fairness matters and propose specific fixes, using clear examples from their activities.

These activities are a starting point. A full mission is the experience.

  • Complete facilitation script with teacher dialogue
  • Printable student materials, ready for class
  • Differentiation strategies for every learner
Generate a Mission

Watch Out for These Misconceptions

Common Misconception: During Group Sort, watch for students who say, 'AI is always fair because computers don’t have opinions.' Redirect them by asking, 'Look at the labels on these dataset cards. Who chose these? Could their choices reflect human opinions? How?'

What to Teach Instead

During Group Sort, students will notice that datasets often lack labels for certain groups. Ask them to explain why those groups might be missing and who decided what to include or exclude in the data they are sorting.

Common Misconception: During Pairs Simulation, watch for students who claim, 'Bias only affects advanced AI used by big companies.' Pause the activity and ask, 'What rules did you just write for your simple AI? Could those rules favor one group without you realizing it?'

What to Teach Instead

During Pairs Simulation, remind students that the rules they create mimic real-world biases. Have them swap rules with another pair and test if the new rules produce different outcomes, highlighting how bias can start small.

Common Misconception: During Whole Class Debate, watch for students who insist, 'Once trained, AI bias cannot be changed.' Counter with, 'If your AI keeps favoring one name, what could you do to fix it?'

What to Teach Instead

During Whole Class Debate, use the example from Group Sort to show how retraining with balanced data reduces bias. Ask students to describe the steps they would take to ‘retrain’ their simulated AI to be fairer.
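Retraining can be demonstrated with a toy 'trainable' AI. In this hypothetical sketch (the hobbies and profiles are invented), the AI simply accepts hobbies it has seen among previously hired candidates, so adding balanced examples to the training data directly changes its behaviour:

```python
# A toy 'trainable' AI: it accepts any hobby it saw among hired candidates.
# This is a deliberate simplification for illustration, not a real training method.
def train(hired_profiles):
    seen = {h for p in hired_profiles for h in p["hobbies"]}
    return lambda candidate: "accept" if seen & set(candidate["hobbies"]) else "reject"

skewed_training = [{"hobbies": ["sports"]}, {"hobbies": ["sports"]}]
balanced_training = skewed_training + [{"hobbies": ["choir"]}, {"hobbies": ["drama"]}]

old_ai = train(skewed_training)
new_ai = train(balanced_training)

singer = {"hobbies": ["choir"]}
print("before retraining:", old_ai(singer))
print("after retraining:", new_ai(singer))
```

The same candidate is rejected by the AI trained on skewed data and accepted after retraining on balanced data, which is the 'retraining' step students should be able to describe in their own words.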

Assessment Ideas

Discussion Prompt

After Group Sort, present students with a scenario: 'An AI is designed to recommend books. It was trained only on books written by male authors. What kind of bias might this AI show? How could we make it fairer?' Facilitate a class discussion, guiding them to identify the source of bias and brainstorm solutions.

Exit Ticket

After Pairs Simulation, ask students to write down one example of AI bias they encountered in their activity and explain in one sentence why fairness is important when creating AI. Collect these to gauge understanding of key concepts.

Quick Check

After Whole Class Debate, show students two sets of images: one set with balanced representation of different people and one set with clear underrepresentation of a group. Ask: 'Which set of images would be better for training an AI to recognize people? Why?' Collect responses to assess their ability to apply fairness criteria.

Extensions & Scaffolding

  • Challenge students to design a fair dataset for a new AI tool, such as one that recommends school lunches, ensuring balanced representation of dietary preferences and allergies.
  • For students who struggle, provide partially completed dataset cards with missing labels or categories to help them identify gaps.
  • Deeper exploration: Invite students to research real-world AI biases reported in news articles and compare their findings to class examples, creating a class ‘AI Bias Newsboard’.

Key Vocabulary

Artificial Intelligence (AI): Computer systems designed to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making.
Bias (in AI): When an AI system produces results that are systematically prejudiced due to flawed assumptions in the machine learning process, often stemming from biased training data.
Training Data: The information, such as images, text, or numbers, used to teach an AI model how to perform a specific task or make predictions.
Fairness (in AI): Ensuring that AI systems do not discriminate against individuals or groups, providing equitable outcomes and opportunities for all users.
Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.

Ready to teach Bias and Fairness in AI?

Generate a full mission with everything you need

Generate a Mission