Computer Science · Grade 11

Active learning ideas

Artificial Intelligence and Bias

Active learning works well for this topic because AI bias is abstract until students see it in action. When students manipulate datasets or debate real cases, they move from hearing about bias to feeling its impact. This hands-on approach helps them internalize how numbers and decisions interact in ways that create unfair outcomes.

Ontario Curriculum Expectations: CS.HS.C.1, CS.HS.C.2
30–60 min · Pairs → Whole Class · 4 activities

Activity 01

Case Study Analysis · 45 min · Small Groups

Case Study Analysis: Real-World AI Bias

Provide articles on cases such as the COMPAS recidivism tool or Amazon's hiring AI. In small groups, students identify biased data sources, predict outcomes, and propose fixes. Groups present their findings to the class for peer feedback.

Who is responsible when an autonomous system makes an unethical decision?

Facilitation Tip: During Case Study Analysis, assign roles like 'data scientist' or 'ethicist' to push students beyond surface observations.

What to look for: Present students with a hypothetical scenario: An AI system designed to recommend job candidates shows a strong preference for applicants from specific universities. Ask: 'Who is responsible if this system perpetuates inequality? What steps could the developers take to identify and address this bias before deployment?'

Analyze · Evaluate · Create · Decision-Making · Self-Management

Activity 02

Dataset Audit · 30 min · Pairs

Dataset Audit: Spot the Bias

Give pairs anonymized datasets from hiring or lending scenarios. They categorize features, calculate representation imbalances, and graph disparities using spreadsheets. Discuss how imbalances affect model training.

How can we detect and mitigate bias in algorithmic decision-making?

Facilitation Tip: For the Dataset Audit, require students to document each bias they find with a specific example from the dataset.

What to look for: Provide students with a short, simplified dataset (e.g., fictional student grades with demographic information). Ask them to identify potential sources of bias within the data and explain how these might affect a predictive model. For example, 'If the dataset shows fewer female students in advanced math courses, how might an AI predict future math success for a new female student?'
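For teachers who want to preview the kind of result the audit surfaces, here is a minimal Python sketch of the representation and selection-rate calculations students perform in spreadsheets. The dataset, column names, and values are hypothetical, not drawn from any real hiring data:

```python
from collections import Counter

# Hypothetical anonymized hiring dataset: each row is one applicant.
applicants = [
    {"gender": "F", "hired": True},
    {"gender": "F", "hired": False},
    {"gender": "F", "hired": False},
    {"gender": "M", "hired": True},
    {"gender": "M", "hired": True},
    {"gender": "M", "hired": False},
    {"gender": "M", "hired": True},
    {"gender": "M", "hired": False},
]

def representation(rows, feature):
    """Share of each group in the dataset (representation imbalance)."""
    counts = Counter(row[feature] for row in rows)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def selection_rate(rows, feature):
    """Fraction hired within each group (outcome disparity)."""
    rates = {}
    for group in {row[feature] for row in rows}:
        group_rows = [r for r in rows if r[feature] == group]
        rates[group] = sum(r["hired"] for r in group_rows) / len(group_rows)
    return rates

print(representation(applicants, "gender"))  # {'F': 0.375, 'M': 0.625}
print(selection_rate(applicants, "gender"))  # F ≈ 0.33, M = 0.60
```

Students doing the same arithmetic in a spreadsheet should land on the same two numbers per group: how big the group is, and how often it receives the positive outcome. The gap between the selection rates is the disparity they graph.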

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 03

Simulation Debate · 50 min · Whole Class

Simulation Debate: Ethical Decisions

The whole class divides into roles: developers, users, and regulators. Simulate an AI car-accident scenario involving biased training data. Debate responsibility and mitigation, then vote on solutions.

What does it mean for a machine to be 'fair' in a social context?

Facilitation Tip: In the Simulation Debate, provide a clear rubric for ethical frameworks so students can ground their arguments in evidence.

What to look for: Ask students to write down one specific example of how bias can enter an AI model and one strategy that could be used to mitigate it. They should also define 'algorithmic fairness' in their own words.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 04

Bias Mitigation Workshop · 60 min · Individual

Bias Mitigation Workshop: Model Tweaks

Individually, students tweak a simple pre-built ML model (using Google Colab) by resampling data or adding fairness constraints, then test it on holdout sets and compare accuracy against fairness metrics.
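One resampling strategy students might try can be sketched in plain Python (the group labels and counts below are hypothetical, and a real workshop would refit the model on the balanced set afterwards). It oversamples the underrepresented group so every group is equally represented before retraining:

```python
import random

random.seed(0)

# Hypothetical training rows: (group, label). Group B is heavily underrepresented.
train = [("A", 1)] * 60 + [("A", 0)] * 20 + [("B", 1)] * 5 + [("B", 0)] * 15

def oversample(rows):
    """Duplicate rows from smaller groups (sampling with replacement)
    until all groups reach the size of the largest group."""
    by_group = {}
    for row in rows:
        by_group.setdefault(row[0], []).append(row)
    target = max(len(group_rows) for group_rows in by_group.values())
    balanced = []
    for group_rows in by_group.values():
        balanced.extend(group_rows)
        balanced.extend(random.choices(group_rows, k=target - len(group_rows)))
    return balanced

def group_sizes(rows):
    sizes = {}
    for row in rows:
        sizes[row[0]] = sizes.get(row[0], 0) + 1
    return sizes

print(group_sizes(train))              # {'A': 80, 'B': 20}
print(group_sizes(oversample(train)))  # {'A': 80, 'B': 80}
```

The before/after group counts make the intervention concrete: students can see exactly what changed in the training data, then measure how the retrained model's accuracy and fairness metrics shift in response.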

Who is responsible when an autonomous system makes an unethical decision?

Facilitation Tip: During the Bias Mitigation Workshop, circulate with a checklist to ensure each student tests at least two different strategies.

What to look for: Check that students can explain the trade-off they observe between accuracy and fairness metrics on the holdout set, and can articulate why resampling the data or adding a fairness constraint changed the model's behaviour.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

A few notes on teaching this unit

Experienced teachers approach this topic by balancing technical details with ethical questions. Avoid diving too deep into machine learning math, which can overshadow the human impact of bias. Instead, use analogies like 'training data as a mirror' to help students grasp how society’s flaws become AI’s flaws. Research shows that students retain concepts better when they connect them to lived experiences, so encourage personal reflections on fairness.

Successful learning looks like students confidently identifying bias in datasets and explaining its origins. They should also evaluate responsibility for AI decisions and propose concrete mitigation strategies. Most importantly, they should connect these concepts to fairness in technology and society.


Watch Out for These Misconceptions

  • During Simulation Debate, watch for students who assume AI models are neutral due to their mathematical foundations.

    Use the debate format to redirect them: ask teams to present evidence from their case studies showing how training data shapes model behavior, then have peers challenge each claim with dataset examples.

  • During Dataset Audit, watch for students who dismiss subtle biases as unimportant.

    Guide them to trace correlations step-by-step, using the activity’s audit sheet to mark how small imbalances grow into larger disparities over time in predictive models.

  • During Bias Mitigation Workshop, watch for students who believe bias cannot be fixed once a model is trained.

    Have them experiment with techniques like data reweighting during the workshop, then compare results to show how intervention changes model behavior, even post-training.
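The reweighting technique mentioned above can be demonstrated in a few lines of plain Python (the groups and counts are hypothetical). Each row is weighted inversely to its group's frequency, so every group contributes equally to the loss when the model is refit, without adding or removing any data:

```python
# Hypothetical labeled rows: (group, label). Group B is underrepresented.
rows = [("A", 1)] * 60 + [("A", 0)] * 20 + [("B", 1)] * 5 + [("B", 0)] * 15

def group_weights(rows):
    """Weight each row inversely to its group's frequency, so every
    group's total weight is the same share of the training loss."""
    counts = {}
    for group, _ in rows:
        counts[group] = counts.get(group, 0) + 1
    n_groups = len(counts)
    total = len(rows)
    return [total / (n_groups * counts[group]) for group, _ in rows]

weights = group_weights(rows)
# Group A rows (80 of them) each get weight 100 / (2 * 80) = 0.625;
# group B rows (20 of them) each get weight 100 / (2 * 20) = 2.5.
# Weighted group totals are now equal: 80 * 0.625 == 20 * 2.5 == 50.
```

Because the weights are applied at (re)training time, this directly counters the misconception that nothing can be done after the original data was collected.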

