The Impact of Computing on Society · Semester 2

Ethics in Artificial Intelligence

Investigating algorithmic bias and the moral implications of autonomous decision making.


Key Questions

  1. Who should be held accountable for the decisions made by an AI agent?
  2. How can bias in training data lead to discriminatory outcomes in software?
  3. Should there be limits on the use of facial recognition in public spaces?

MOE Syllabus Outcomes

MOE: Social Computing - JC2
Level: JC 2
Subject: Computing
Unit: The Impact of Computing on Society
Period: Semester 2

About This Topic

Ethics in Artificial Intelligence requires students to examine how biases in training data lead to discriminatory outcomes in areas like hiring, criminal justice, and healthcare. They investigate moral challenges of autonomous decision-making, such as who bears responsibility when AI errs in self-driving cars or predictive policing. Key questions guide inquiry: accountability for AI agents, bias propagation from data, and limits on facial recognition in public spaces.

This topic aligns with MOE Social Computing standards in JC2, emphasizing computing's societal impact. Students analyze real-world cases, like biased facial recognition systems that misidentify certain ethnic groups, and debate regulatory frameworks. These discussions build skills in ethical reasoning, evidence evaluation, and persuasive argumentation essential for future leaders in technology.

Active learning benefits this topic greatly. Role-plays of ethical dilemmas and collaborative bias audits make abstract concepts concrete, foster empathy through peer perspectives, and encourage students to apply ethical frameworks to complex scenarios they encounter beyond the classroom.

Learning Objectives

  • Critique real-world AI applications for potential sources of algorithmic bias.
  • Evaluate the ethical frameworks applicable to autonomous decision-making in AI systems.
  • Propose mitigation strategies for reducing bias in AI training data.
  • Analyze the societal impact of facial recognition technology and justify proposed limitations.
  • Synthesize arguments regarding accountability for AI-driven errors.

Before You Start

Introduction to Machine Learning Concepts

Why: Students need a foundational understanding of how AI models learn from data to grasp the concept of bias in training datasets.

Data Representation and Structures

Why: Understanding how data is organized and represented is crucial for analyzing potential biases within datasets.

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Training Data: The dataset used to train an AI model, from which the model learns patterns and makes predictions or decisions.
Autonomous Decision Making: The ability of an AI system to make choices and take actions without direct human intervention or oversight.
Facial Recognition: A technology capable of identifying or verifying a person from a digital image or a video frame from a video source.
Accountability: The obligation of an individual or organization to account for its activities and accept responsibility for its actions and decisions.


Real-World Connections

Hiring platforms like Pymetrics have faced scrutiny for algorithmic bias, with studies suggesting their AI may disadvantage certain demographic groups by favoring specific personality traits correlated with dominant cultural norms.

Law enforcement agencies globally, including those in major cities like London and New York, utilize facial recognition technology for surveillance, raising concerns about privacy and potential misidentification, particularly for minority populations.

Autonomous vehicle developers like Waymo and Tesla grapple with the ethical dilemma of programming cars to make split-second decisions in unavoidable accident scenarios, determining which outcome is 'least bad'.

Watch Out for These Misconceptions

Common Misconception: AI systems are inherently unbiased if trained on large datasets.

What to Teach Instead

Large datasets often amplify societal biases present in historical data. Group audits of sample data help students spot patterns, like underrepresentation, and brainstorm debiasing strategies through peer critique.
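A classroom bias audit can start with simple group counts. The sketch below is a minimal illustration in plain Python; the dataset, group labels, and the 0.8 threshold are invented for the example, not drawn from any real system. It flags groups whose share of a sample falls well below their share of a reference population:

```python
from collections import Counter

def representation_gaps(samples, population_shares, threshold=0.8):
    """Flag groups whose share of the sample is below `threshold`
    times their share of the reference population."""
    counts = Counter(samples)
    total = len(samples)
    flagged = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if observed < threshold * expected:
            flagged[group] = (observed, expected)
    return flagged

# Toy "resume dataset": group labels only, invented for illustration.
sample = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
population = {"A": 0.50, "B": 0.30, "C": 0.20}

print(representation_gaps(sample, population))
# Groups B and C appear far less often than their population share.
```

Students can vary the threshold and discuss what counts as "underrepresented", which turns a value judgment into an explicit, inspectable parameter.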

Common Misconception: Humans should never override AI decisions since AI is more objective.

What to Teach Instead

AI lacks human context and values, leading to ethically flawed choices. Role-plays of override scenarios reveal nuances, as students defend positions and refine understanding via structured debates.

Common Misconception: Ethics is a separate concern from technical AI development.

What to Teach Instead

Ethical considerations must integrate into design from the start. Collaborative case analyses show students how bias mitigation techniques, like fairness constraints, blend tech and morals effectively.

Assessment Ideas

Discussion Prompt

Present students with a scenario: An AI system used for loan applications denies a loan to a qualified applicant from a historically marginalized community. Ask: 'Who is primarily responsible for this discriminatory outcome: the data scientists, the company deploying the AI, the users of the AI, or the creators of the original biased data? Justify your answer with reference to at least two ethical principles.'

Quick Check

Provide students with a short description of an AI system (e.g., a content moderation AI, a medical diagnostic AI). Ask them to identify one potential source of bias in its training data and one potential negative societal consequence if that bias is not addressed. Have them write their answers on a shared digital whiteboard.

Peer Assessment

Students work in pairs to identify a news article about an AI ethical issue. They then present the article's core problem to another pair. The assessing pair must identify the type of bias involved (e.g., selection bias, measurement bias) and suggest one concrete step the AI developers could take to mitigate it. Assessors provide feedback on the clarity and feasibility of the suggested mitigation.


Frequently Asked Questions

How can teachers address algorithmic bias in JC2 computing classes?
Use real datasets for group analysis to reveal biases in hiring or lending algorithms. Students quantify disparities, propose fairness metrics, and simulate retraining. This builds data literacy and critical evaluation skills aligned with MOE standards.
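One way students can quantify a disparity is the disparate impact ratio: each group's selection rate divided by the most favoured group's rate. The "four-fifths rule" (ratios below 0.8 warrant scrutiny) is a real convention from US employment guidelines, but the loan-decision data and function names below are invented for illustration:

```python
def selection_rates(decisions):
    """Per-group approval rate from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, privileged):
    """Ratio of each group's approval rate to the privileged group's.
    The 'four-fifths rule' flags ratios below 0.8."""
    rates = selection_rates(decisions)
    base = rates[privileged]
    return {g: rate / base for g, rate in rates.items()}

# Invented loan decisions: (group, approved?)
data = ([("X", True)] * 60 + [("X", False)] * 40
        + [("Y", True)] * 30 + [("Y", False)] * 70)

print(disparate_impact(data, privileged="X"))
# Group Y's ratio is 0.3/0.6 = 0.5, well below the 0.8 threshold.
```

After computing the ratio, groups can debate whether the disparity reflects biased data, a biased model, or legitimate differences, which connects the metric back to the accountability questions above.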
What real-world examples illustrate AI ethics issues?
Cases like the COMPAS recidivism tool, which showed racial bias, or facial recognition systems with higher error rates on darker skin tones, highlight flaws in training data. Students dissect these in debates, connecting them to accountability questions and Singapore's PDPA regulations for balanced perspectives.
How can active learning help students grasp AI ethics?
Role-plays and debates immerse students in dilemmas, making abstract biases tangible. Small-group bias hunts on datasets encourage evidence-based arguments, while peer feedback sharpens ethical reasoning. These methods boost engagement and retention over lectures.
Who is accountable for harmful AI decisions?
Accountability spans developers, data providers, and deployers; regulations like the EU AI Act clarify these roles. Classroom simulations let students argue chains of responsibility, fostering nuanced views on shared liability in autonomous systems.