Autonomous Systems and Ethical Dilemmas
Discussing the moral challenges posed by self-driving cars, drones, and other autonomous agents.
About This Topic
Self-driving cars, delivery drones, autonomous weapons systems, and medical diagnostic robots share a common challenge: they must make consequential decisions without moment-to-moment human oversight. This topic examines the ethical frameworks engineers and policymakers use to navigate those decisions, drawing on real incidents including the 2018 Uber self-driving fatality in Arizona and the ongoing debate over lethal autonomous weapons in international law. CSTA standards 3B-IC-26 and 3B-IC-27 ask students to analyze the legal and societal implications of increasing machine autonomy.
The trolley problem gets its modern form in autonomous vehicle ethics: if a crash is unavoidable, should the vehicle prioritize the passenger, the pedestrian, or the outcome that minimizes total harm? These questions are not hypothetical for engineers at major US automotive companies; they are design choices embedded in control systems. Students analyze how different ethical frameworks, from utilitarian calculation to deontological constraints on treating people as means, yield different programming choices.
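To make the contrast concrete for students, the two frameworks can be sketched as competing decision policies. This is a classroom illustration, not a real control system: the option names, casualty estimates, and the `actively_redirects_harm` flag are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_casualties: float
    actively_redirects_harm: bool  # does this maneuver steer harm toward a bystander?

def utilitarian_choice(options):
    # Utilitarian rule: pick whichever maneuver minimizes expected total casualties.
    return min(options, key=lambda o: o.expected_casualties)

def deontological_choice(options):
    # Deontological rule: first exclude maneuvers that treat a bystander as a
    # means (actively redirecting harm toward them); only then minimize
    # casualties among what remains. Fall back to all options if nothing passes.
    permitted = [o for o in options if not o.actively_redirects_harm] or options
    return min(permitted, key=lambda o: o.expected_casualties)

# A trolley-style scenario: staying on course risks three people,
# swerving risks one bystander who was otherwise safe.
scenario = [
    Option("stay_course", expected_casualties=3.0, actively_redirects_harm=False),
    Option("swerve_into_bystander", expected_casualties=1.0, actively_redirects_harm=True),
]

print(utilitarian_choice(scenario).name)    # swerve_into_bystander
print(deontological_choice(scenario).name)  # stay_course
```

The point of the exercise is that both policies are short, deterministic, and defensible, yet they disagree on the same scenario; the disagreement lives in the values the engineers encoded, not in the code itself.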
Active learning methods work particularly well here because these dilemmas are genuinely contested. There are no clean correct answers, which makes structured debate and collaborative analysis far more productive than lecture. Students who argue competing frameworks and then design their own decision criteria develop more sophisticated reasoning than students who passively observe the dilemmas from the outside.
Key Questions
- What ethical dilemmas are inherent in the design and deployment of autonomous systems?
- How can decision-making frameworks for AI be justified in situations with conflicting values?
- What legal and societal implications follow from increasing autonomy in machines?
Learning Objectives
- Analyze the ethical trade-offs inherent in programming autonomous vehicles to respond to unavoidable accident scenarios.
- Compare and contrast utilitarian and deontological ethical frameworks as applied to AI decision-making in autonomous systems.
- Design a set of decision-making criteria for a hypothetical autonomous system, justifying choices based on ethical principles.
- Evaluate the potential legal consequences of deploying autonomous systems that make life-or-death decisions.
- Synthesize arguments for and against the development of lethal autonomous weapons systems.
Before You Start
- Why: Students need a foundational understanding of what AI is and how it learns before exploring the ethical implications of its decision-making.
- Why: Understanding that code dictates machine behavior is essential for grasping how ethical frameworks are translated into autonomous system actions.
Key Vocabulary
| Term | Definition |
| --- | --- |
| Algorithmic bias | Systematic and repeatable errors in a computer system that create unfair outcomes, such as prioritizing certain groups over others. |
| Trolley problem | A thought experiment in ethics where a person must choose between allowing a trolley to kill several people or diverting it to kill one person. |
| Lethal Autonomous Weapons Systems (LAWS) | Weapons systems that can independently search for, identify, decide to engage, and engage targets without direct human intervention. |
| Deontology | An ethical theory that judges the morality of an action based on rules or duties, emphasizing that some actions are inherently right or wrong regardless of consequences. |
| Utilitarianism | An ethical theory that holds that the best action is the one that maximizes utility, often defined as maximizing happiness and minimizing suffering for the greatest number of people. |
Watch Out for These Misconceptions
Common Misconception: Autonomous systems will eventually be able to make perfectly ethical decisions because they won't have human emotions.
What to Teach Instead
Ethical decisions require value judgments that cannot be derived from data alone. Removing emotion does not remove values from the design process; it transfers those value choices to the engineers and executives who define the system's objectives and constraints.
Common Misconception: If an autonomous system causes harm, the manufacturer is always legally responsible.
What to Teach Instead
US liability law for autonomous systems is still unsettled. Depending on the context, liability may rest on the developer, the owner/operator, or be shared. International law on autonomous weapons has no settled framework at all. Case analysis exercises help students see how genuinely open these legal questions remain.
Common Misconception: Trolley-problem style dilemmas are the main ethical issue with autonomous vehicles.
What to Teach Instead
Edge-case dilemmas are vivid but rare. The more consequential ethical questions involve systematic biases in training data (e.g., worse pedestrian detection for darker-skinned individuals), cybersecurity vulnerabilities, and equitable access to the technology.
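One concrete way to surface the training-data bias described above is to compare a detector's hit rate across demographic groups. A minimal auditing sketch, using entirely made-up detection logs and placeholder group labels:

```python
from collections import defaultdict

# Hypothetical logs: (group_label, was_pedestrian_detected) pairs.
# In a real audit these would come from labeled evaluation data.
logs = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def detection_rates(logs):
    # Tally detections and totals per group, then compute each group's rate.
    hits, totals = defaultdict(int), defaultdict(int)
    for group, detected in logs:
        totals[group] += 1
        hits[group] += detected  # bool counts as 0/1
    return {g: hits[g] / totals[g] for g in totals}

rates = detection_rates(logs)
# A large gap between the best- and worst-served groups flags a fairness problem
# long before any trolley-style edge case arises.
gap = max(rates.values()) - min(rates.values())
print(rates, gap)
```

Students can extend the exercise by asking what gap threshold should block deployment, which is itself a value judgment of the kind discussed above.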
Active Learning Ideas
Structured Academic Controversy: Self-Driving Car Decision Rules
Pairs argue that autonomous vehicles should be programmed to minimize total casualties (utilitarian), then switch and argue that the vehicle should always protect its occupant regardless of third-party risk. After both rounds, pairs draft their own proposed decision rule and present the tradeoffs they accepted.
Case Analysis: Real Autonomous System Incidents
Assign groups one of three documented autonomous system incidents (Uber 2018 fatality, Tesla Autopilot misuse cases, drone strike collateral damage cases). Groups identify the decision point, the ethical question it raised, and what accountability structure was applied, then present to the class.
Design Challenge: Ethical Guidelines for Autonomous Drones
Groups are given a scenario where a delivery company wants to deploy autonomous drones in a mixed residential and commercial area. Teams draft three ethical guidelines for the system's behavior, anticipate failure scenarios, and present their guidelines for class critique.
Real-World Connections
- Engineers at Waymo and Cruise actively program decision-making algorithms for self-driving cars, facing real choices about how vehicles should react in emergencies; the 2018 Uber fatality in Tempe, Arizona, showed what is at stake when those systems fail.
- The United Nations Convention on Certain Conventional Weapons (CCW) has hosted ongoing discussions and debates among member states regarding the ethical and legal implications of developing and deploying lethal autonomous weapons systems (LAWS).
Assessment Ideas
Present students with a scenario: An autonomous delivery drone carrying medicine must choose between landing in a crowded park to save a life or avoiding the crowd and failing its mission. Ask students to debate: Which action is more ethically justifiable? What ethical framework supports their choice? What are the potential negative consequences of each decision?
Provide students with a short case study about an autonomous medical diagnostic tool that shows a slight bias against a specific demographic. Ask them to identify the type of ethical issue presented and suggest two concrete steps engineers could take to mitigate this bias.
Ask students to write down one key difference between a deontological and a utilitarian approach to programming an autonomous vehicle in an unavoidable crash. Then, have them briefly explain which approach they find more compelling and why.
Frequently Asked Questions
How do engineers program autonomous vehicles to handle unavoidable crashes?
What are autonomous weapons and why are they controversial?
What legal frameworks govern autonomous systems in the US?
Why is active learning particularly useful for studying autonomous systems ethics?
More in Artificial Intelligence and Ethics
- Introduction to Artificial Intelligence: Students will define AI, explore its history, and differentiate between strong and weak AI.
- Machine Learning Fundamentals: Introduction to how computers learn from data through supervised and unsupervised learning.
- Supervised Learning: Classification and Regression: Exploring algorithms that learn from labeled data to make predictions.
- Unsupervised Learning: Clustering: Discovering patterns and structures in unlabeled data using algorithms like K-Means.
- AI Applications: Image and Speech Recognition: Exploring how AI is used in practical applications like recognizing images and understanding speech.
- Training Data and Model Evaluation: Understanding the importance of data quality, feature engineering, and metrics for model performance.