Computer Science · Grade 9

Active learning ideas

AI Ethics and Bias

Active learning works for AI ethics and bias because abstract concepts become concrete when students see unfair outcomes in real systems. Hands-on audits and debates help students recognize that neutrality in AI is not automatic, but a choice shaped by data and design. These activities build critical awareness of how technology impacts people’s lives.

Ontario Curriculum Expectations: CS.HS.IC.2, CS.HS.S.15
35–50 min · Pairs → Whole Class · 4 activities

Activity 01

Socratic Seminar · 50 min · Small Groups

Case Study Rotation: Real-World AI Bias

Prepare four cases: facial recognition errors, biased hiring tools, predictive policing, and credit scoring. Small groups rotate through stations every 10 minutes, noting bias sources, impacts, and fixes on worksheets. End with whole-class share-out.

Explain how bias can be introduced into AI systems and its potential consequences.

Facilitation Tip: During Case Study Rotation, assign each group a different real-world case to ensure timely rotation and equal participation.

What to look for: Present students with a scenario in which an AI system recommends job candidates: one group argues it's efficient, another claims it's biased against women. Ask students to hold a debate, identifying potential sources of bias and proposing solutions for fairness.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 02

Socratic Seminar · 40 min · Pairs

Debate Pairs: Developer vs. User Responsibility

Pair students to debate whether responsibility for fixing AI bias lies more with developers or with users. Provide evidence cards on data sourcing and deployment. Pairs present their arguments, then the class votes on the strongest points.

Evaluate the ethical responsibilities of AI developers and users.

What to look for: Provide students with a short description of an AI application (e.g., a content recommendation algorithm). Ask them to write down two potential ethical concerns related to bias and one question they would ask the developers about accountability.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 03

Socratic Seminar · 45 min · Small Groups

Framework Design: Fairness Checklist

In small groups, students review AI scenarios and co-create a fairness checklist covering data diversity, testing, and transparency. Test the checklist on a sample AI tool description, then refine based on peer feedback.

Design a framework for assessing the fairness of an AI-powered decision-making system.

What to look for: Students will write one sentence explaining how bias can enter an AI system and one sentence describing a real-world consequence of biased AI. They will also list one ethical responsibility of an AI user.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 04

Socratic Seminar · 35 min · Individual

Dataset Audit: Individual Bias Hunt

Give students sample datasets from public AI projects. Working individually, they identify bias indicators such as underrepresentation, score each for severity, and suggest balanced alternatives. Share findings in a gallery walk.
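Teachers who want a concrete artifact to model this audit could project a minimal sketch like the one below, which flags groups whose share of a dataset falls below a cutoff. The column name (`gender`), the 20% threshold, and the toy records are illustrative assumptions, not drawn from any specific public project.

```python
from collections import Counter

def audit_representation(records, field, threshold=0.2):
    """Report each group's share of the dataset and flag any group whose
    share falls below `threshold` as potentially underrepresented.
    The threshold is an illustrative assumption, not a standard."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": round(n / total, 2),
                "underrepresented": n / total < threshold}
        for group, n in counts.items()
    }

# Toy sample: 9 of 10 records come from one group.
sample = [{"gender": "male"}] * 9 + [{"gender": "female"}] * 1
print(audit_representation(sample, "gender"))
# → {'male': {'share': 0.9, 'underrepresented': False},
#    'female': {'share': 0.1, 'underrepresented': True}}
```

Students can then debate the design choice the sketch makes visible: the "fair" threshold is itself a human judgment, not something the code discovers.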

Explain how bias can be introduced into AI systems and its potential consequences.

What to look for: Present students with a scenario in which an AI system recommends job candidates: one group argues it's efficient, another claims it's biased against women. Ask students to hold a debate, identifying potential sources of bias and proposing solutions for fairness.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

A few notes on teaching this unit

Approach this topic by balancing technical details with human impact. Begin with visible bias examples to ground discussions, then connect them to algorithmic causes. Avoid abstract lectures by using role-play and simulations that reveal unintentional biases. Research shows that when students experience bias firsthand, their ethical reasoning deepens.

Successful learning looks like students identifying bias sources in datasets, articulating ethical responsibilities, and proposing fairness checks in AI systems. They should move from recognizing problems to designing solutions using structured frameworks. Discussions should reflect nuanced understanding, not oversimplified views of fairness.


Watch Out for These Misconceptions

  • During Case Study Rotation, watch for students assuming large datasets guarantee neutrality.

    After Case Study Rotation, have groups present evidence from their cases showing how large datasets amplified existing social biases. Use their findings to emphasize that size does not replace diversity or intentional fairness checks.

  • During Debate Pairs, listen for claims that bias only comes from developers’ intentions.

    During Debate Pairs, provide role cards that include scenarios where bias emerges from unexamined assumptions or historical data patterns. Debrief by asking students to share hidden influences they noticed during their simulations.

  • During Framework Design, expect students to separate ethical discussions from technical work.

    After Framework Design, ask students to map their fairness checklist to a specific algorithmic step, such as data selection or model training. This integration makes ethics a visible part of technical practice.
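The first misconception above, that sheer dataset size guarantees neutrality, can be made concrete with a short simulation: a naive model trained on skewed historical records simply learns the skew, and adding more records only makes the skewed estimate more precise. The hiring rates and group labels below are invented for illustration, not taken from any real system.

```python
import random

random.seed(0)

def make_history(n):
    """Synthetic 'historical hiring' records in which group A was hired
    60% of the time and group B only 30% — bias baked into the data."""
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        hired = random.random() < (0.6 if group == "A" else 0.3)
        data.append((group, hired))
    return data

def learned_rate(data, group):
    """A naive 'model' that predicts by replaying the historical rate."""
    outcomes = [hired for g, hired in data if g == group]
    return sum(outcomes) / len(outcomes)

for n in (1_000, 200_000):
    data = make_history(n)
    gap = learned_rate(data, "A") - learned_rate(data, "B")
    print(f"n={n}: hiring-rate gap learned by the model ≈ {gap:.2f}")
```

At both sizes the learned gap stays near 0.3: more data narrows the error bars around a biased estimate rather than removing the bias.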


Methods used in this brief