Computing · Year 9

Active learning ideas

Ethical Dilemmas of AI

Active learning works for ethical dilemmas of AI because abstract concepts like bias and accountability become concrete when students see real-world consequences. Role-plays and debates let students test their own assumptions, turning quiet reflection into shared reasoning. This topic demands more than reading; it needs immediate, collaborative analysis to shift perspectives from passive acceptance to critical assessment.

National Curriculum Attainment Targets: KS3 Computing – Impact of Technology; KS3 Computing – Ethics and Law
30–50 min · Pairs → Whole Class · 4 activities

Activity 01

Formal Debate · 45 min · Pairs

Debate Pairs: AI Accountability

Pair students to prepare arguments for one side: 'AI developers are always responsible' versus 'End-users share blame'. Each pair presents for 3 minutes, then switches sides. Class votes and discusses shifts in perspective.

Who should be held responsible when an AI-driven system causes harm?

Facilitation Tip: During Debate Pairs, set a strict three-minute speaking limit per turn to keep discussions focused and give both voices equal weight.

What to look for: Present students with a scenario: An AI chatbot used by a mental health service provides harmful advice. Ask: 'Who is most responsible for the harm caused: the AI developers, the company deploying the chatbot, or the user who followed the advice? Justify your answer with specific reasoning.'

Analyze · Evaluate · Create · Self-Management · Decision-Making

Activity 02

Formal Debate · 50 min · Small Groups

Case Study Carousel: Bias Examples

Divide the class into small groups, each assigned a case such as a biased loan algorithm or recruitment tool. Groups analyse causes, impacts, and solutions on posters, then rotate to add feedback to other groups' work. Conclude with a whole-class synthesis.

Analyze how algorithmic bias can perpetuate and amplify societal inequalities.

Facilitation Tip: For the Case Study Carousel, provide sticky notes for students to mark patterns they see across examples, then cluster these notes to reveal systemic bias.

What to look for: Ask students to write down one AI technology they use or are aware of. Then, have them identify one potential ethical issue associated with it and briefly explain why it is a concern.

Analyze · Evaluate · Create · Self-Management · Decision-Making

Activity 03

Formal Debate · 40 min · Small Groups

Role-Play Scenarios: Job Displacement

Assign roles such as factory worker, CEO, and policymaker in an automation scenario. Groups act out a town hall meeting, negotiating solutions. Debrief on the trade-offs and ethical priorities that emerged.

Predict the long-term impact of widespread AI automation on the global workforce.

Facilitation Tip: In Role-Play Scenarios, assign students roles they wouldn't normally choose to stretch their empathy and expose blind spots in workforce impacts.

What to look for: Display images or short descriptions of different AI applications (e.g., recommendation algorithms, medical diagnostic tools, AI art generators). Ask students to quickly categorise each as having a high or low risk of algorithmic bias and provide a one-sentence justification.

Analyze · Evaluate · Create · Self-Management · Decision-Making

Activity 04

Formal Debate · 30 min · Whole Class

Ethical Dilemma Cards: Whole Class Vote

Distribute cards with dilemmas such as self-driving car choices. Students vote anonymously via polls, then discuss as a whole class why choices vary and what principles guide them.

Who should be held responsible when an AI-driven system causes harm?

Analyze · Evaluate · Create · Self-Management · Decision-Making

A few notes on teaching this unit

Frame AI ethics as a chain of human decisions rather than a purely technical failure, so that bias is not dismissed as an algorithmic glitch. Use structured debates to slow down snap judgments, giving students time to gather evidence before reacting. Research suggests that when students role-play stakeholders, they are more likely to consider long-term societal impacts rather than short-term convenience.

Successful learning looks like students using evidence to justify their positions, not just stating opinions. They should trace ethical chains from data to decision-making and identify where responsibility lies. By the end, students will articulate nuanced trade-offs, not simple right or wrong answers.


Watch Out for These Misconceptions

  • During Debate Pairs on AI Accountability, watch for students assuming algorithms are neutral because their creators intend them to be fair.

Use the debate's structure to push students to examine training data sources and developer assumptions from the Case Study Carousel materials, so they connect abstract intent to concrete bias patterns.

  • During Role-Play Scenarios on Job Displacement, watch for students predicting total job loss based on fear rather than sector-specific evidence.

    Have students refer to real labor market data provided in the scenario cards to ground their predictions, then revise their role-play scripts to match evidence rather than alarmist claims.

  • During Ethical Dilemma Cards, watch for students assigning blame only to programmers without considering broader chains of responsibility.

    Use the mapping activity from Ethical Dilemma Cards to trace each card’s scenario from data collection to deployment, highlighting where non-technical stakeholders share accountability.


Methods used in this brief