Computer Science · 9th Grade

Active learning ideas

Ethical Decision-Making in AI

Active learning works because ethical decision-making in AI requires students to confront ambiguity and trade-offs directly. Abstract discussions about fairness or accountability become concrete when students role-play stakeholders or design policies themselves, making the invisible choices behind AI systems visible.

Standards: CSTA 3A-IC-24 · CSTA 3A-IC-25
20–40 min · Pairs → Whole Class · 4 activities

Activity 01

Case Study Analysis · 35 min · Whole Class

Ethical Dilemma Fishbowl: Autonomous Vehicles

Present the classic trolley problem adapted for self-driving cars (algorithm must choose between hitting one pedestrian or swerving into a group). Four students discuss in a fishbowl while the class observes and takes notes. After 8 minutes, rotate in four new students who respond directly to what was said. Debrief on which values were invoked and who gets to decide.

Analyze ethical dilemmas that AI systems might encounter (e.g., self-driving cars).

Facilitation Tip: In Stakeholder Mapping (Activity 04), ask students to include not only who is affected but also who has the power to change the system, to highlight accountability gaps.

What to look for: Present students with the following scenario: 'An AI system designed to allocate limited medical resources during a pandemic must decide which patients receive ventilators. The AI has been trained on historical data that shows disparities in healthcare access. What ethical issues arise? How should human oversight be implemented to ensure fairness?' Students should discuss in small groups and report key concerns.

Analyze · Evaluate · Create · Decision-Making · Self-Management

Activity 02

Case Study Analysis · 40 min · Small Groups

Policy Design Sprint: AI Guidelines

Groups receive a specific AI deployment context (healthcare diagnosis, bail decision support, school discipline flagging). Each group drafts three ethical guidelines for that context, explaining who they protect and what they constrain. Groups present their guidelines, and the class votes on which are most important and hardest to implement.

Propose ethical guidelines for the development and deployment of AI.

What to look for: Ask students to write down one specific example of algorithmic bias they have encountered or can imagine. Then, have them propose one concrete step a developer could take to mitigate that bias in an AI system.

Analyze · Evaluate · Create · Decision-Making · Self-Management

Activity 03

Think-Pair-Share · 20 min · Pairs

Think-Pair-Share: Why Human Oversight?

Present three scenarios where AI made a consequential error that a human oversight process would have caught. Students individually write one reason why human oversight matters in each case. Pairs compare, then the class builds a shared list of the distinct reasons human judgment cannot be fully delegated to algorithms.

Justify the importance of human oversight in AI decision-making.

What to look for: Provide students with a short case study of an AI application (e.g., a facial recognition system used by law enforcement). Ask them to identify: 1) one potential ethical dilemma, 2) the role of human oversight, and 3) one potential consequence of unchecked AI decision-making. Collect responses for review.

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Activity 04

Case Study Analysis · 35 min · Small Groups

Stakeholder Mapping: Who Decides?

For a specific AI application (content moderation, predictive policing, college admissions screening), groups map all stakeholders: who builds it, who deploys it, who is affected, who audits it, and who has recourse when it fails. Groups identify gaps in current accountability structures and propose one change to address the most serious gap.

Analyze ethical dilemmas that AI systems might encounter (e.g., self-driving cars).

What to look for: Present students with the following scenario: 'An AI system designed to allocate limited medical resources during a pandemic must decide which patients receive ventilators. The AI has been trained on historical data that shows disparities in healthcare access. What ethical issues arise? How should human oversight be implemented to ensure fairness?' Students should discuss in small groups and report key concerns.

Analyze · Evaluate · Create · Decision-Making · Self-Management

A few notes on teaching this unit

Teachers should balance technical exposure with ethical practice by grounding abstract concepts in real cases students can analyze step-by-step. Avoid letting discussions become purely philosophical; anchor them in specific design choices or policy levers students can critique. Research shows students retain ethical reasoning better when they apply it to artifacts they can modify, like policy drafts or decision trees.

Successful learning shows when students move beyond labeling decisions as simply right or wrong. They should articulate competing values, identify who holds responsibility, and propose specific oversight structures that address real-world constraints.


Watch Out for These Misconceptions

  • During Ethical Dilemma Fishbowl, watch for students who frame the autonomous vehicle scenario as a purely technical problem to solve with code. Redirect by asking: ‘Which real-world stakeholders would disagree with your solution, and why?’

  • During Policy Design Sprint, watch for students who treat ethical guidelines as generic principles without identifying who will enforce them or how. Redirect by asking: ‘Which part of your policy will the engineering team actually change, and how will you measure its impact?’

  • During Think-Pair-Share on human oversight, watch for students who assume any human involvement makes AI systems safer. Redirect by asking: ‘Can you name a time when human oversight introduced bias or delay? How did it happen?’

  • During Stakeholder Mapping, watch for students who map only obvious stakeholders like users or developers. Redirect by asking: ‘Who is missing from this map that would be harmed by a biased decision? Who can hold the developers accountable?’


Methods used in this brief