Ethical Decision-Making in AI: Activities & Teaching Strategies
Active learning works because ethical decision-making in AI requires students to confront ambiguity and trade-offs directly. Abstract discussions about fairness or accountability become concrete when students role-play stakeholders or design policies themselves, making the invisible choices behind AI systems visible.
Learning Objectives
1. Analyze ethical dilemmas that AI systems, such as autonomous vehicles, might encounter by identifying conflicting values.
2. Evaluate the necessity of human oversight in AI decision-making by comparing AI-generated outcomes with human judgment in specific scenarios.
3. Propose a set of ethical guidelines for AI development and deployment, justifying each guideline with principles of fairness and accountability.
4. Classify the types of biases that can be embedded in AI systems and explain their potential real-world impacts.
5. Critique existing AI applications for ethical considerations, identifying areas where human intervention is crucial.
Ethical Dilemma Fishbowl: Autonomous Vehicles
Present the classic trolley problem adapted for self-driving cars: the algorithm must choose between hitting one pedestrian and swerving into a group. Four students discuss in a fishbowl while the class observes and takes notes. After 8 minutes, rotate in four new students who respond directly to what was said. Debrief on which values were invoked and who gets to decide.
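To make the "invisible choices" behind such a system concrete for students, it can help to show the dilemma as code. The sketch below is deliberately naive and purely illustrative (the function name and harm model are assumptions, not any real vehicle's logic); its point is that encoding the choice forces hidden value judgments into explicit, debatable lines.

```python
# A deliberately naive sketch (hypothetical, not a real AV system):
# writing the trolley choice as code surfaces the value judgments
# that prose discussion can leave implicit.

def choose_action(pedestrians_ahead: int, pedestrians_if_swerve: int) -> str:
    """Pick the action that 'minimizes harm' under a crude utility model."""
    harm_stay = pedestrians_ahead        # assumes all lives weigh equally
    harm_swerve = pedestrians_if_swerve  # ignores passenger risk, fault, age...
    return "stay" if harm_stay <= harm_swerve else "swerve"

# The interesting questions live in the assumptions flagged in the
# comments, not in the two lines of arithmetic.
print(choose_action(1, 4))  # -> stay
```

A useful debrief prompt: which stakeholders would reject the assumptions in the comments, and what would they change?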
Prepare & details
Analyze ethical dilemmas that AI systems might encounter (e.g., self-driving cars).
Facilitation Tip: In Stakeholder Mapping, ask students to include not just who is affected but also who has the power to change the system, to highlight accountability gaps.
Setup: Groups at tables with case materials
Materials: Case study packet (3-5 pages), Analysis framework worksheet, Presentation template
Policy Design Sprint: AI Guidelines
Groups receive a specific AI deployment context (healthcare diagnosis, bail decision support, school discipline flagging). Each group drafts three ethical guidelines for that context, explaining who they protect and what they constrain. Groups present their guidelines, and the class votes on which are most important and hardest to implement.
Prepare & details
Propose ethical guidelines for the development and deployment of AI.
Setup: Groups at tables with case materials
Materials: Case study packet (3-5 pages), Analysis framework worksheet, Presentation template
Think-Pair-Share: Why Human Oversight?
Present three scenarios where AI made a consequential error that a human oversight process would have caught. Students individually write one reason why human oversight matters in each case. Pairs compare, then the class builds a shared list of the distinct reasons human judgment cannot be fully delegated to algorithms.
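One common oversight pattern students can critique is a confidence gate: confident AI decisions apply automatically, while the rest are escalated to a person. The sketch below is a hedged illustration (the function name, labels, and 0.9 threshold are assumptions, not a specific product's API).

```python
# Illustrative human-in-the-loop gate: route low-confidence AI
# decisions to a human reviewer instead of applying them automatically.

def route_decision(prediction: str, confidence: float,
                   threshold: float = 0.9) -> str:
    """Auto-apply confident predictions; escalate the rest to a human."""
    if confidence >= threshold:
        return f"auto:{prediction}"
    return "escalate:human_review"

print(route_decision("approve", 0.97))  # -> auto:approve
print(route_decision("deny", 0.55))     # -> escalate:human_review
```

Note that the threshold is itself a value judgment: raising it shifts work onto humans, and human review is not automatically unbiased or fast.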
Prepare & details
Justify the importance of human oversight in AI decision-making.
Setup: Standard classroom seating; students turn to a neighbor
Materials: Discussion prompt (projected or printed), Optional: recording sheet for pairs
Stakeholder Mapping: Who Decides?
For a specific AI application (content moderation, predictive policing, college admissions screening), groups map all stakeholders: who builds it, who deploys it, who is affected, who audits it, and who has recourse when it fails. Groups identify gaps in current accountability structures and propose one change to address the most serious gap.
Prepare & details
Analyze ethical dilemmas that AI systems might encounter (e.g., self-driving cars).
Setup: Groups at tables with case materials
Materials: Case study packet (3-5 pages), Analysis framework worksheet, Presentation template
Teaching This Topic
Teachers should balance technical exposure with ethical practice by grounding abstract concepts in real cases students can analyze step by step. Avoid letting discussions become purely philosophical; anchor them in specific design choices or policy levers students can critique. Students tend to retain ethical reasoning better when they apply it to artifacts they can modify, such as policy drafts or decision trees.
What to Expect
Successful learning shows when students move beyond labeling decisions as simply right or wrong. They should articulate competing values, identify who holds responsibility, and propose specific oversight structures that address real-world constraints.
Watch Out for These Misconceptions
Common Misconception: During Ethical Dilemma Fishbowl, watch for students who frame the autonomous vehicle scenario as a purely technical problem to solve with code. Redirect by asking: "Which real-world stakeholders would disagree with your solution, and why?"
Common Misconception: During Policy Design Sprint, watch for students who treat ethical guidelines as generic principles without identifying who will enforce them or how. Redirect by asking: "Which part of your policy will the engineering team actually change, and how will you measure its impact?"
Common Misconception: During Think-Pair-Share on human oversight, watch for students who assume any human involvement makes AI systems safer. Redirect by asking: "Can you name a time when human oversight introduced bias or delay? How did it happen?"
Common Misconception: During Stakeholder Mapping, watch for students who map only obvious stakeholders like users or developers. Redirect by asking: "Who is missing from this map that would be harmed by a biased decision? Who can hold the developers accountable?"
Assessment Ideas
After Ethical Dilemma Fishbowl, present the same autonomous vehicle scenario to small groups and ask them to generate a new solution that explicitly addresses the concerns raised during the debate. Assess based on how well their solution balances competing values and includes accountability measures.
After Policy Design Sprint, have students submit their AI guidelines with one bullet point explaining how they will verify that the policy is followed in practice. Assess for specificity in accountability mechanisms rather than vague commitments.
During Think-Pair-Share on human oversight, have students swap their written responses and highlight one example of oversight that addresses a real gap and one that might create performative bureaucracy. Collect for review to assess depth of critique.
Extensions & Scaffolding
- Challenge: Have students research a real AI incident, then rewrite the company’s apology statement to include specific changes to their design process rather than generic promises.
- Scaffolding: Provide sentence stems for students struggling to articulate trade-offs, such as "The AI’s decision favors ___ over ___ because ___."
- Deeper exploration: Invite a local tech ethicist or AI developer to review student policy proposals and provide feedback on feasibility and blind spots.
Key Vocabulary
| Term | Definition |
| --- | --- |
| Algorithmic Bias | Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. |
| Human Oversight | The involvement of people in supervising and guiding the operation of AI systems, ensuring accountability and ethical adherence. |
| Value Alignment | The challenge of ensuring that an AI system's goals and actions are consistent with human values and ethical principles. |
| Trolley Problem | A thought experiment in ethics that presents a scenario where a person must choose between actively causing one death to save multiple lives, often applied to autonomous vehicle ethics. |
| Accountability | The obligation to accept responsibility for one's actions and decisions, particularly important when AI systems cause harm or make errors. |
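The Algorithmic Bias entry above can be made concrete with one simple audit metric: comparing positive-outcome rates across groups (a demographic parity gap). The sketch below uses made-up data and illustrative names; real audits use richer metrics and real outcomes.

```python
# Minimal sketch of one bias metric on hypothetical data:
# the difference in positive-outcome rates between two groups.

def positive_rate(outcomes: list[int]) -> float:
    """Fraction of cases with a positive outcome (1 = approved)."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 1, 0]  # made-up approval outcomes for group A
group_b = [1, 0, 0, 0]  # made-up approval outcomes for group B

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A useful class discussion: a gap of zero does not prove fairness, and a nonzero gap does not prove discrimination; the metric only makes the disparity visible for the accountability conversation.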