Computer Science · 9th Grade · The Impact of Artificial Intelligence · Weeks 28-36

Ethical Decision-Making in AI

Students will discuss ethical dilemmas faced by AI systems and the importance of human oversight.

Standards: CSTA 3A-IC-24, CSTA 3A-IC-25

About This Topic

AI systems increasingly make or inform decisions that have significant consequences for people's lives: who gets a loan, whether a medical image flags a tumor, how a self-driving car responds in an unavoidable collision. These decisions involve value trade-offs that no purely technical optimization can resolve. For 9th graders, ethical decision-making in AI is about understanding that these systems embed choices made by their designers, and that human oversight exists precisely because those choices need accountability.

In the US K-12 context, this topic addresses CSTA 3A-IC-24 and 3A-IC-25, and connects naturally to philosophy, civics, and the legal frameworks students encounter in social studies. Concrete dilemmas (the trolley problem applied to autonomous vehicles, triage algorithms in emergency rooms, content moderation at scale) give students real cases to analyze rather than abstract principles.

Active learning is essential here because ethical reasoning requires practice forming and defending positions, not just memorizing frameworks. Structured dilemma discussions where students must commit to a position and respond to counterarguments build the kind of ethical reasoning capacity that passive exposure to case studies cannot develop.

Key Questions

  1. What ethical dilemmas might AI systems, such as self-driving cars, encounter?
  2. Why is human oversight important in AI decision-making?
  3. What ethical guidelines should govern the development and deployment of AI?

Learning Objectives

  • Analyze ethical dilemmas that AI systems, such as autonomous vehicles, might encounter by identifying conflicting values.
  • Evaluate the necessity of human oversight in AI decision-making by comparing AI-generated outcomes with human judgment in specific scenarios.
  • Propose a set of ethical guidelines for AI development and deployment, justifying each guideline with principles of fairness and accountability.
  • Classify the types of biases that can be embedded in AI systems and explain their potential real-world impacts.
  • Critique existing AI applications for ethical considerations, identifying areas where human intervention is crucial.

Before You Start

Introduction to Artificial Intelligence

Why: Students need a foundational understanding of what AI is and how it functions before exploring its ethical implications.

Introduction to Programming Concepts

Why: Understanding that AI systems are built using code helps students grasp that human decisions are embedded in their design and operation.

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Human Oversight: The involvement of people in supervising and guiding the operation of AI systems, ensuring accountability and ethical adherence.
Value Alignment: The challenge of ensuring that an AI system's goals and actions are consistent with human values and ethical principles.
Trolley Problem: A thought experiment in ethics that presents a scenario where a person must choose between actively causing one death to save multiple lives, often applied to autonomous vehicle ethics.
Accountability: The obligation to accept responsibility for one's actions and decisions, particularly important when AI systems cause harm or make errors.
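The "Algorithmic Bias" definition above can be made concrete with a minimal sketch. This toy example is entirely hypothetical: the group names, scores, and cutoff are invented for illustration. It shows how a single "neutral" approval threshold, applied to two groups whose scores reflect unequal historical access, yields unequal approval rates.

```python
# Hypothetical toy example: one loan-approval cutoff applied to two groups
# whose scores reflect unequal historical access. All numbers are invented.

group_a_scores = [620, 640, 660, 680, 700, 720]  # group with better historical access
group_b_scores = [580, 600, 620, 640, 660, 680]  # group with worse historical access

THRESHOLD = 650  # a single "neutral" cutoff applied to everyone

def approval_rate(scores, threshold):
    """Fraction of applicants at or above the cutoff."""
    approved = sum(1 for s in scores if s >= threshold)
    return approved / len(scores)

rate_a = approval_rate(group_a_scores, THRESHOLD)
rate_b = approval_rate(group_b_scores, THRESHOLD)

print(f"Group A approval rate: {rate_a:.0%}")  # 4/6, about 67%
print(f"Group B approval rate: {rate_b:.0%}")  # 2/6, about 33%
```

The point for students: no line of this code mentions group membership, yet the outcome differs by group because the input data encodes past disparities. The bias lives in the data and the choice of threshold, both of which are human decisions.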

Watch Out for These Misconceptions

Common Misconception: Ethical AI just means building AI that does not break the law.

What to Teach Instead

Many harmful applications of AI are technically legal. Ethical AI involves value choices about fairness, transparency, and accountability that exceed legal minimums. These choices are made by designers and deployers, which is why human oversight structures are necessary even when no law requires them.

Common Misconception: If an AI is wrong, it is the algorithm's fault, not anyone's responsibility.

What to Teach Instead

AI systems are built and deployed by people and organizations who made decisions at every stage. When harm results, responsibility traces back to those decisions. Diffusion of responsibility is not inevitable; it can be countered through clear accountability structures and audit requirements.

Common Misconception: More human oversight always makes AI systems safer.

What to Teach Instead

Human oversight can also introduce bias, inconsistency, and delay. The goal is appropriate oversight: identifying the specific decisions that require human judgment, at what frequency, and with what accountability. Blanket oversight without clear criteria can become performative rather than effective.

Active Learning Ideas


Ethical Dilemma Fishbowl: Autonomous Vehicles

Present the classic trolley problem adapted for self-driving cars (algorithm must choose between hitting one pedestrian or swerving into a group). Four students discuss in a fishbowl while the class observes and takes notes. After 8 minutes, rotate in four new students who respond directly to what was said. Debrief on which values were invoked and who gets to decide.

35 min·Whole Class

Policy Design Sprint: AI Guidelines

Groups receive a specific AI deployment context (healthcare diagnosis, bail decision support, school discipline flagging). Each group drafts three ethical guidelines for that context, explaining who they protect and what they constrain. Groups present their guidelines, and the class votes on which are most important and hardest to implement.

40 min·Small Groups

Think-Pair-Share: Why Human Oversight?

Present three scenarios where AI made a consequential error that a human oversight process would have caught. Students individually write one reason why human oversight matters in each case. Pairs compare, then the class builds a shared list of the distinct reasons human judgment cannot be fully delegated to algorithms.

20 min·Pairs

Stakeholder Mapping: Who Decides?

For a specific AI application (content moderation, predictive policing, college admissions screening), groups map all stakeholders: who builds it, who deploys it, who is affected, who audits it, and who has recourse when it fails. Groups identify gaps in current accountability structures and propose one change to address the most serious gap.

35 min·Small Groups

Real-World Connections

  • Self-driving car manufacturers like Waymo and Tesla face complex ethical choices in accident scenarios, balancing passenger safety against pedestrian impact; these trade-offs are actively debated by ethicists and engineers.
  • Hospitals utilize AI for diagnostic imaging analysis, requiring radiologists to maintain oversight to confirm AI-generated findings and prevent misdiagnoses that could affect patient treatment plans.
  • Social media platforms employ AI for content moderation, grappling with the ethical implications of censorship versus free speech, and the potential for biased enforcement against certain user groups.

Assessment Ideas

Discussion Prompt

Present students with the following scenario: 'An AI system designed to allocate limited medical resources during a pandemic must decide which patients receive ventilators. The AI has been trained on historical data that shows disparities in healthcare access. What ethical issues arise? How should human oversight be implemented to ensure fairness?' Students should discuss in small groups and report key concerns.

Exit Ticket

Ask students to write down one specific example of algorithmic bias they have encountered or can imagine. Then, have them propose one concrete step a developer could take to mitigate that bias in an AI system.

Quick Check

Provide students with a short case study of an AI application (e.g., a facial recognition system used by law enforcement). Ask them to identify: 1) One potential ethical dilemma, 2) The role of human oversight, and 3) One potential consequence of unchecked AI decision-making. Collect responses for review.

Frequently Asked Questions

What ethical dilemmas do AI systems face in real-world applications?
Autonomous vehicles must encode collision trade-offs with no value-neutral answer. Medical AI must balance sensitivity against specificity when false positives and false negatives have different costs. Content moderation systems must decide what speech to suppress, reflecting judgments about harm and expression that are genuinely contested across cultures and legal systems.
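The sensitivity/specificity trade-off mentioned above can be sketched in a few lines. All scores and thresholds here are invented for illustration: a screening model assigns risk scores to patients, and the choice of alert threshold, a human decision, trades missed cases against false alarms.

```python
# Hypothetical numbers for illustration: model scores for 5 patients who
# have the condition and 5 who do not, evaluated at two alert thresholds.

positives = [0.9, 0.8, 0.7, 0.55, 0.4]  # scores for patients WITH the condition
negatives = [0.6, 0.5, 0.35, 0.2, 0.1]  # scores for patients WITHOUT it

def sensitivity(pos_scores, threshold):
    """True-positive rate: fraction of sick patients the model flags."""
    return sum(1 for s in pos_scores if s >= threshold) / len(pos_scores)

def specificity(neg_scores, threshold):
    """True-negative rate: fraction of healthy patients the model clears."""
    return sum(1 for s in neg_scores if s < threshold) / len(neg_scores)

for t in (0.3, 0.6):
    print(f"threshold={t}: sensitivity={sensitivity(positives, t):.0%}, "
          f"specificity={specificity(negatives, t):.0%}")
```

Lowering the threshold catches every sick patient but flags more healthy ones; raising it does the reverse. Which error matters more is a value judgment about the costs of false positives versus false negatives, which is exactly why the answer above says no purely technical setting resolves it.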
Why do AI systems need human oversight if they can make faster, more consistent decisions?
Speed and consistency do not guarantee correctness or fairness. AI systems can be systematically wrong in ways that no individual error rate reveals. Human oversight provides accountability for value-laden decisions, a mechanism for catching systematic errors before they scale, and a point of recourse for people harmed by automated decisions.
What makes a good ethical guideline for AI development?
Effective guidelines are specific enough to constrain real decisions, identify who is responsible for compliance, include an audit or enforcement mechanism, and protect the people most likely to be harmed, who are often not the people building or deploying the system. Vague commitments to 'fairness' without operationalization have limited effect.
How does active learning work for teaching AI ethics?
Ethical reasoning requires practice forming and defending positions under challenge. Structured dilemma discussions where students must commit to a stance and respond to counterarguments build that capacity in ways that reading about ethical frameworks cannot. Students who have argued through a trolley problem applied to autonomous vehicles remember the trade-offs and are equipped to recognize similar structures in new contexts.