
Ethics in Artificial Intelligence: Activities & Teaching Strategies

Ethics in AI benefits from active learning because students grapple with real-world consequences of abstract concepts. When they examine biased datasets or debate facial recognition policies, they see how technical choices translate into human impacts. This makes the abstract tangible and the moral stakes clear.

JC 2 · Computing · 4 activities · 30–50 min

Learning Objectives

  1. Critique real-world AI applications for potential sources of algorithmic bias.
  2. Evaluate the ethical frameworks applicable to autonomous decision-making in AI systems.
  3. Propose mitigation strategies for reducing bias in AI training data.
  4. Analyze the societal impact of facial recognition technology and justify proposed limitations.
  5. Synthesize arguments regarding accountability for AI-driven errors.


45 min·Pairs

Debate Pairs: Facial Recognition Limits

Assign pairs to affirm or oppose public use of facial recognition. Provide case studies on privacy vs. security. Pairs prepare 3-minute arguments, then switch sides for rebuttals. Conclude with whole-class vote and reflection.

Prepare & details

Should there be limits on the use of facial recognition in public spaces?

Facilitation Tip: During Debate Pairs: Facial Recognition Limits, assign one student to argue for strict regulation and one to argue for minimal restriction to force nuanced positions.

Setup: Pairs facing each other, with room to reconvene for the whole-class vote

Materials: Debate proposition card, Research brief for each side, Judging rubric for audience, Timer

Analyze · Evaluate · Create · Self-Management · Decision-Making
50 min·Small Groups

Small Groups: Bias Audit Simulation

Give groups sample datasets with hidden biases, like loan approval records skewed by gender. Groups identify biases, propose fixes, and test revised data on mock algorithms using spreadsheets. Share findings in a gallery walk.

Prepare & details

How can bias in training data lead to discriminatory outcomes in software?

Facilitation Tip: During Small Groups: Bias Audit Simulation, provide a small, labeled dataset so students can manually spot underrepresentation or skewed labels.

Setup: Small groups at tables with laptops or printed spreadsheets

Materials: Sample dataset with planted biases, Spreadsheet software or printouts, Bias audit checklist, Gallery walk poster paper

Analyze · Evaluate · Create · Self-Management · Decision-Making
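For teachers who want to preview the audit before class, the group task can be sketched in a few lines of Python. The records, field names, and numbers below are invented for illustration; any small labeled table with a protected attribute works the same way.

```python
# Hypothetical mini-dataset for the bias audit: loan decisions skewed by gender.
# All values are made up for demonstration, not drawn from a real dataset.
records = [
    {"gender": "F", "approved": False},
    {"gender": "F", "approved": False},
    {"gender": "F", "approved": True},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": False},
    {"gender": "M", "approved": True},
]

def approval_rate_by_group(rows, key):
    """Return {group: approval rate} so skew between groups is visible."""
    totals, approvals = {}, {}
    for row in rows:
        group = row[key]
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(row["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

# A large gap between group rates is the kind of signal students should flag.
print(approval_rate_by_group(records, "gender"))
# → {'F': 0.3333333333333333, 'M': 0.75}
```

Students can reproduce the same comparison in a spreadsheet with a pivot table or COUNTIFS, which keeps the activity accessible without any programming.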
40 min·Whole Class

Role-Play: Whole Class Trolley Problem

Present AI car crash scenarios where the vehicle must choose between harms. Students draw roles: AI designer, victim families, regulator. Role-play discussions, then vote on programming choices and justify positions.

Prepare & details

Who should be held accountable for the decisions made by an AI agent?

Facilitation Tip: During Role-Play: Whole Class Trolley Problem, assign roles with conflicting values (e.g., safety engineer, community advocate) to surface ethical trade-offs.

Setup: Open space or a circle of chairs so role groups can face each other

Materials: Crash scenario cards, Role cards (AI designer, victim family, regulator), Voting slips, Timer

Analyze · Evaluate · Create · Self-Management · Decision-Making
30 min·Individual

Individual: Ethical Dilemma Journal

Students read a case on AI in hiring, note biases and stakeholders. Write personal stances with pros/cons. Pair-share journals, then discuss class patterns in ethical trade-offs.

Prepare & details

Who should be held accountable for the decisions made by an AI agent?

Facilitation Tip: During Individual: Ethical Dilemma Journal, ask students to revisit entries weekly to track how their reasoning evolves.

Setup: Individual desks for journaling, then pairs for sharing

Materials: Case study handout on AI in hiring, Journal prompt sheet, Pair-share discussion guide

Analyze · Evaluate · Create · Self-Management · Decision-Making

Teaching This Topic

Teachers should avoid presenting ethics as a purely philosophical exercise disconnected from technical work. Instead, integrate ethical analysis into data science skills, like asking students to audit datasets for bias before training models. Research suggests students retain ethical reasoning better when they apply it to concrete cases rather than abstract principles. Use structured debates and simulations to make invisible biases visible.

What to Expect

Successful learning looks like students articulating specific sources of bias, defending ethical positions with evidence, and proposing actionable solutions. They should move beyond 'AI is bad' to 'Here is how bias occurs and here is how to address it.'


Watch Out for These Misconceptions

Common Misconception: During Debate Pairs: Facial Recognition Limits, some students may claim AI is unbiased if trained on 'enough' data.

What to Teach Instead

During Debate Pairs, have students examine a sample dataset with known underrepresentation. Ask them to identify which groups are missing and how that might affect the AI's accuracy.

Common Misconception: During Role-Play: Whole Class Trolley Problem, students often assume AI decisions are solely the developer's responsibility.

What to Teach Instead

During Role-Play, assign roles that include users, regulators, and affected communities. Require each role to justify their share of accountability in the final decision.

Common Misconception: During Small Groups: Bias Audit Simulation, students may think ethics is a post-deployment fix rather than a design constraint.

What to Teach Instead

During Small Groups, provide case studies where bias mitigation techniques were integrated into model development. Ask students to compare outcomes with and without these constraints.

Assessment Ideas

Discussion Prompt

After Debate Pairs: Facial Recognition Limits, present students with a scenario where facial recognition misidentifies a person of color. Ask them to apply the debate frameworks to assign accountability and propose a technical fix.

Quick Check

During Small Groups: Bias Audit Simulation, circulate and ask each group to identify one bias in their dataset and propose one debiasing technique. Collect responses to assess understanding of bias sources.

Peer Assessment

After Individual: Ethical Dilemma Journal, have students pair up to compare their journal entries. Assessors must identify the ethical principle guiding their partner's reasoning and suggest one additional consideration.

Extensions & Scaffolding

  • Challenge: Ask students to design a fairness constraint for a biased dataset and test its impact on model performance.
  • Scaffolding: Provide a partially completed bias audit checklist for students to fill in during the Small Groups activity.
  • Deeper exploration: Have students interview a local professional in AI development about how ethics is integrated into their workflow.
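The challenge extension can be prototyped with a deliberately simple fairness constraint: per-group score thresholds chosen to approximate demographic parity (equal approval rates across groups). The group labels, scores, and target rate below are hypothetical; this is a discussion starter, not a production debiasing method.

```python
from collections import defaultdict

def parity_thresholds(scored, target_rate):
    """For each group, pick the score threshold that approves roughly
    target_rate of that group's applicants (approximate demographic parity).
    scored: list of (group, score) pairs with higher scores meaning 'approve'."""
    by_group = defaultdict(list)
    for group, score in scored:
        by_group[group].append(score)
    thresholds = {}
    for group, scores in by_group.items():
        scores.sort(reverse=True)
        k = round(target_rate * len(scores))  # approve the top-k scores
        thresholds[group] = scores[k - 1] if k > 0 else float("inf")
    return thresholds

# Hypothetical applicants: group B's scores run lower, so it gets a
# lower threshold when we equalize approval rates.
applicants = [("A", 0.9), ("A", 0.6), ("A", 0.4),
              ("B", 0.7), ("B", 0.3), ("B", 0.2)]
print(parity_thresholds(applicants, target_rate=1/3))
# → {'A': 0.9, 'B': 0.7}
```

A good follow-up discussion: lowering one group's threshold raises its approval rate but may change error rates, which is exactly the fairness trade-off students should surface in the challenge.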

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Training Data: The dataset used to train an AI model, from which the model learns patterns and makes predictions or decisions.
Autonomous Decision-Making: The ability of an AI system to make choices and take actions without direct human intervention or oversight.
Facial Recognition: A technology capable of identifying or verifying a person from a digital image or video frame.
Accountability: The obligation of an individual or organization to account for its activities and accept responsibility for its actions and decisions.
