Computer Science · 11th Grade · Artificial Intelligence and Ethics · Weeks 19-27

Autonomous Systems and Ethical Dilemmas

Discussing the moral challenges posed by self-driving cars, drones, and other autonomous agents.

Standards: CSTA 3B-IC-26 · CSTA 3B-IC-27

About This Topic

Self-driving cars, delivery drones, autonomous weapons systems, and medical diagnostic robots share a common challenge: they must make consequential decisions without moment-to-moment human oversight. This topic examines the ethical frameworks engineers and policymakers use to navigate those decisions, drawing on real incidents including the 2018 Uber self-driving fatality in Arizona and the ongoing debate over lethal autonomous weapons in international law. CSTA standards 3B-IC-26 and 3B-IC-27 ask students to analyze the legal and societal implications of increasing machine autonomy.

The trolley problem gets its modern form in autonomous vehicle ethics: if a crash is unavoidable, should the vehicle prioritize the passenger, the pedestrian, or the outcome that minimizes total harm? These questions are not hypothetical for engineers at major US automotive companies; they are design choices embedded in control systems. Students analyze how different ethical frameworks, from utilitarian calculation to deontological constraints on treating people as means, yield different programming choices.
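To make that contrast concrete for students, the two frameworks can be sketched as selection rules over the same set of crash-avoidance maneuvers. This is a classroom sketch only: the maneuver names and harm scores are hypothetical, and real autonomous-vehicle planners do not encode ethics this way.

```python
# Classroom sketch: how two ethical frameworks could rank the same
# unavoidable-crash maneuvers. All names and harm scores are hypothetical.

# Each candidate maneuver maps affected parties to an expected-harm score.
maneuvers = {
    "brake_straight": {"passenger": 4, "pedestrian": 4},
    "swerve_left":    {"passenger": 6, "pedestrian": 0},
    "swerve_right":   {"passenger": 0, "pedestrian": 9},
}

def utilitarian_choice(options):
    """Utilitarian rule: minimize total expected harm across everyone."""
    return min(options, key=lambda m: sum(options[m].values()))

def deontological_choice(options, default="brake_straight"):
    """Duty-based rule: never actively redirect more harm onto any party
    than the default trajectory would cause; minimize harm among the rest."""
    baseline = options[default]
    permitted = {m: h for m, h in options.items()
                 if all(h[p] <= baseline.get(p, 0) for p in h)}
    return min(permitted, key=lambda m: sum(permitted[m].values()))

print(utilitarian_choice(maneuvers))    # -> swerve_left (lowest total harm)
print(deontological_choice(maneuvers))  # -> brake_straight (no harm redirected)
```

The point of the exercise is that the two functions disagree on the same inputs: the utilitarian rule swerves to cut total harm, while the duty-based constraint forbids actively shifting harm onto anyone, so it brakes straight.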

Active learning methods work particularly well here because these dilemmas are genuinely contested. There are no clean correct answers, which makes structured debate and collaborative analysis far more productive than lecture. Students who argue competing frameworks and then design their own decision criteria develop more sophisticated reasoning than students who passively observe the dilemmas from the outside.

Key Questions

  1. What ethical dilemmas are inherent in the design and deployment of autonomous systems?
  2. How can decision-making frameworks for AI be justified when values conflict?
  3. What legal and societal implications follow from increasing autonomy in machines?

Learning Objectives

  • Analyze the ethical trade-offs inherent in programming autonomous vehicles to respond to unavoidable accident scenarios.
  • Compare and contrast utilitarian and deontological ethical frameworks as applied to AI decision-making in autonomous systems.
  • Design a set of decision-making criteria for a hypothetical autonomous system, justifying choices based on ethical principles.
  • Evaluate the potential legal consequences of deploying autonomous systems that make life-or-death decisions.
  • Synthesize arguments for and against the development of lethal autonomous weapons systems.

Before You Start

Introduction to Artificial Intelligence Concepts

Why: Students need a foundational understanding of what AI is and how it learns before exploring the ethical implications of its decision-making.

Basic Principles of Computer Programming

Why: Understanding that code dictates machine behavior is essential for grasping how ethical frameworks are translated into autonomous system actions.

Key Vocabulary

Algorithmic bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as prioritizing certain groups over others.
Trolley problem: A thought experiment in ethics in which a person must choose between allowing a trolley to kill several people or diverting it to kill one person.
Lethal Autonomous Weapons Systems (LAWS): Weapons systems that can independently search for, identify, decide to engage, and engage targets without direct human intervention.
Deontology: An ethical theory that judges the morality of an action based on rules or duties, emphasizing that some actions are inherently right or wrong regardless of consequences.
Utilitarianism: An ethical theory that holds that the best action is the one that maximizes utility, often defined as maximizing happiness and minimizing suffering for the greatest number of people.

Watch Out for These Misconceptions

Common Misconception: Autonomous systems will eventually be able to make perfectly ethical decisions because they won't have human emotions.

What to Teach Instead

Ethical decisions require value judgments that cannot be derived from data alone. Removing emotion does not remove values from the design process; it transfers those value choices to the engineers and executives who define the system's objectives and constraints.

Common Misconception: If an autonomous system causes harm, the manufacturer is always legally responsible.

What to Teach Instead

US liability law for autonomous systems is still unsettled. Depending on the context, liability may rest on the developer, the owner/operator, or be shared. International law on autonomous weapons has no settled framework at all. Case analysis exercises help students see how genuinely open these legal questions remain.

Common Misconception: Trolley-problem style dilemmas are the main ethical issue with autonomous vehicles.

What to Teach Instead

Edge-case dilemmas are vivid but rare. The more consequential ethical questions involve systematic biases in training data (e.g., worse pedestrian detection for darker-skinned individuals), cybersecurity vulnerabilities, and equitable access to the technology.
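The training-data bias point can be grounded with a tiny disparity check that compares a detector's miss rate across groups. All counts below are fabricated for classroom discussion, not measurements of any real system.

```python
# Synthetic bias audit: compare a pedestrian detector's miss (false
# negative) rate across two demographic groups. Counts are fabricated.

detections = {
    # group: (pedestrians actually present, pedestrians detected)
    "group_a": (1000, 950),
    "group_b": (1000, 880),
}

def miss_rate(present, detected):
    """Fraction of real pedestrians the detector failed to see."""
    return (present - detected) / present

rates = {g: miss_rate(*counts) for g, counts in detections.items()}
gap = rates["group_b"] - rates["group_a"]
print(f"miss rates: {rates}, gap: {gap:.2f}")
# A gap like this would flag the system for auditing and for retraining
# on more representative data.
```

Students can extend the audit by asking what gap size should trigger action, which is itself a value judgment engineers must make.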

Real-World Connections

  • Engineers at Waymo and Cruise actively design the decision-making algorithms in self-driving cars, facing real choices about how vehicles should react in emergencies. The 2018 Uber test-vehicle fatality in Tempe, Arizona, showed how consequential those choices are.
  • The United Nations Convention on Certain Conventional Weapons (CCW) has hosted ongoing discussions and debates among member states regarding the ethical and legal implications of developing and deploying lethal autonomous weapons systems (LAWS).

Assessment Ideas

Discussion Prompt

Present students with a scenario: an autonomous delivery drone carrying urgent medicine must choose between landing in a crowded park, risking injury to bystanders but delivering the medicine in time, or diverting away from the crowd and failing its mission. Ask students to debate: Which action is more ethically justifiable? What ethical framework supports their choice? What are the potential negative consequences of each decision?

Quick Check

Provide students with a short case study about an autonomous medical diagnostic tool that shows a slight bias against a specific demographic. Ask them to identify the type of ethical issue presented and suggest two concrete steps engineers could take to mitigate this bias.

Exit Ticket

Ask students to write down one key difference between a deontological and a utilitarian approach to programming an autonomous vehicle in an unavoidable crash. Then, have them briefly explain which approach they find more compelling and why.

Frequently Asked Questions

How do engineers program autonomous vehicles to handle unavoidable crashes?
Engineers use a combination of risk minimization objectives, regulatory requirements, and explicit ethical policies. Most publicly stated frameworks prioritize minimizing total harm but avoid programming explicit rules that assign different values to different people's lives, partly for ethical reasons and partly for legal liability reasons.
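A toy illustration of "minimize total harm without assigning different values to different people's lives" is an identity-blind expected-harm cost: collision probability times severity, summed over everyone affected, with no per-person weighting. All probabilities and severity scores here are invented for the example.

```python
# Toy cost function: identity-blind expected-harm minimization.
# Probabilities and severity scores are invented for illustration.

def expected_harm(trajectory):
    """Sum p(collision) * severity over affected parties, using a single
    harm scale and no per-person value weighting."""
    return sum(p * severity for p, severity in trajectory["risks"])

candidates = [
    {"name": "hard_brake",    "risks": [(0.30, 5.0), (0.10, 2.0)]},
    {"name": "gentle_swerve", "risks": [(0.05, 8.0), (0.20, 1.0)]},
]

best = min(candidates, key=expected_harm)
print(best["name"])  # -> gentle_swerve (expected harm 0.6 vs 1.7)
```

Even this neutral-looking formula embeds value choices: someone still decided the severity scale and that probabilities and severities should simply multiply.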
What are autonomous weapons and why are they controversial?
Autonomous weapons use AI to select and engage targets without human direction at the point of attack. They are controversial because international humanitarian law requires human accountability for targeting decisions, and critics argue that fully autonomous lethal systems cannot satisfy that requirement or make the required distinction between combatants and civilians.
What legal frameworks govern autonomous systems in the US?
Federal autonomous vehicle guidance is currently non-binding; NHTSA has published voluntary guidelines but Congress has not passed comprehensive AV legislation. Drone operations fall under FAA Part 107 rules. Product liability, negligence law, and state-level regulations fill gaps unevenly across jurisdictions.
Why is active learning particularly useful for studying autonomous systems ethics?
These dilemmas are genuinely unresolved, which means there are no authoritative answers to simply learn. Structured debate and collaborative design exercises force students to articulate their reasoning and test it against competing frameworks, building the argumentative skills needed to participate in real policy and engineering decisions.