Impacts of Computing and Emerging Tech · Semester 2

Ethics in Artificial Intelligence

Discussing algorithmic bias, automation, and the moral responsibilities of AI developers.


Key Questions

  1. Who should be held responsible when an autonomous system causes harm?
  2. How do we ensure that machine learning models do not inherit human prejudices?
  3. What does it mean for an algorithm to be transparent or explainable?

MOE Syllabus Outcomes

MOE: Impacts of Computing and Emerging Tech - JC1
Level: JC 1
Subject: Computing
Unit: Impacts of Computing and Emerging Tech
Period: Semester 2

About This Topic

Ethics in Artificial Intelligence equips JC1 students with tools to navigate the moral challenges raised by AI systems. They explore algorithmic bias, where training data reflects societal prejudices and leads to unfair outcomes in areas like hiring or policing; automation's risks to employment and safety; and developers' duties to prioritize fairness and accountability. Core questions guide the inquiry: who is responsible when autonomous vehicles cause accidents, how machine learning models can avoid inheriting human biases, and what makes algorithms transparent and explainable.

This topic fits the MOE Impacts of Computing and Emerging Tech unit by linking technical knowledge to societal effects. Students review cases like COMPAS recidivism tools or facial recognition disparities, learning strategies such as diverse datasets, fairness metrics, and audit processes. These discussions build critical evaluation skills essential for future innovators.

Active learning excels here because debates and role-plays let students embody stakeholders, grapple with trade-offs in real scenarios, and refine arguments through peer feedback, turning ethical theory into personal conviction.

Learning Objectives

  • Analyze case studies to identify instances of algorithmic bias in AI systems.
  • Evaluate the ethical implications of AI automation on employment and societal structures.
  • Critique proposed solutions for ensuring fairness and transparency in machine learning models.
  • Synthesize arguments regarding the moral responsibilities of AI developers and deployers.

Before You Start

Introduction to Artificial Intelligence and Machine Learning

Why: Students need a basic understanding of what AI and ML are before discussing their ethical implications.

Data Representation and Processing

Why: Understanding how data is collected, stored, and processed is fundamental to grasping how bias can enter AI systems.

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Automation: The use of technology to perform tasks previously done by humans, often leading to increased efficiency but also potential job displacement.
Explainable AI (XAI): A set of tools and techniques that allow human users to understand and trust the results and output created by machine learning algorithms.
Fairness Metrics: Quantitative measures used to assess whether an AI model's predictions or decisions are equitable across different demographic groups.
Accountability: The obligation of an individual or organization to be answerable for its actions and decisions, particularly in the context of AI development and deployment.
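One widely used fairness metric, demographic parity, simply compares approval rates across groups. The sketch below shows the idea in a few lines of Python; the application records, group labels, and approval decisions are all invented for demonstration, not real data.

```python
# Illustrative fairness check: demographic parity compares approval
# rates across groups. All data below is invented for demonstration.
applications = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    """Fraction of applicants in `group` whose application was approved."""
    members = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

# Demographic parity difference: 0 means both groups are approved
# at the same rate; larger gaps suggest the system may be unfair.
parity_gap = abs(approval_rate(applications, "A") - approval_rate(applications, "B"))
print(round(parity_gap, 2))  # 0.33
```

Students can tweak the records and watch the gap shrink or grow, making the abstract definition concrete. Demographic parity is only one of several metrics; others, such as equalized odds, compare error rates rather than approval rates.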


Real-World Connections

Financial institutions like DBS Bank use AI for loan applications, where algorithmic bias could unfairly deny credit to certain demographics, necessitating rigorous fairness audits.

Ride-sharing companies such as Grab employ AI for dynamic pricing and driver allocation; ethical considerations arise regarding transparency in how these systems operate and their impact on drivers' livelihoods.

Law enforcement agencies are exploring AI for predictive policing, raising critical questions about accountability and bias when algorithms suggest areas for increased surveillance.

Watch Out for These Misconceptions

Common Misconception: AI systems are inherently unbiased because they use data and math.

What to Teach Instead

Algorithms reflect flaws in their training data and the design choices of their developers, and can amplify human prejudices. Role-plays help students see biases from multiple viewpoints, while dataset audits reveal hidden assumptions through collaborative analysis.

Common Misconception: Ethics concerns only end-users, not developers.

What to Teach Instead

Developers hold primary responsibility for bias mitigation and transparency. Case study discussions expose this chain, with peer debates clarifying duties and fostering accountability awareness.

Common Misconception: Fixing bias requires scrapping AI entirely.

What to Teach Instead

Targeted fixes, such as fairness constraints or rebalanced training data, can mitigate bias effectively. Simulations let students experiment with solutions, building confidence in ethical tech design through iterative group testing.
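One simple, classroom-friendly mitigation students can simulate is adjusting decision thresholds per group so that selection rates equalize. The scores and thresholds below are invented examples, and threshold adjustment is just one of several mitigation techniques (alongside rebalancing data or adding fairness constraints during training).

```python
# Illustrative mitigation: adjust decision thresholds per group so
# selection rates equalize. Scores and thresholds are invented examples.
scores = {
    "A": [0.9, 0.7, 0.4],
    "B": [0.8, 0.5, 0.3],
}

def selection_rate(group_scores, threshold):
    """Fraction of candidates scoring at or above the threshold."""
    return sum(s >= threshold for s in group_scores) / len(group_scores)

# A single global threshold selects the groups at unequal rates...
global_a = selection_rate(scores["A"], 0.6)  # 2 of 3 selected
global_b = selection_rate(scores["B"], 0.6)  # 1 of 3 selected

# ...whereas a slightly lower threshold for group B equalizes the rates.
adjusted_b = selection_rate(scores["B"], 0.5)  # 2 of 3 selected
print(global_a == adjusted_b)  # True
```

Experimenting with this kind of toy model lets students see that mitigation involves trade-offs: lowering one group's threshold raises its selection rate but may also change which errors the system makes.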

Assessment Ideas

Discussion Prompt

Present students with a scenario: An AI hiring tool consistently ranks male candidates higher than equally qualified female candidates. Ask: 'Who is responsible for this bias: the developers, the company using the tool, or the data providers? Justify your answer with reference to fairness and accountability.'

Quick Check

Provide students with short descriptions of two different AI systems (e.g., a facial recognition system, a medical diagnosis AI). Ask them to identify one potential ethical concern for each system and suggest one method to mitigate that concern.

Exit Ticket

Students write down one key difference between an algorithm that is 'transparent' and one that is 'explainable'. They should also state why this difference matters in the context of AI ethics.


Frequently Asked Questions

How to teach algorithmic bias in JC1 Computing?
Start with relatable cases like biased hiring tools, showing data imbalances visually. Use simple models for students to tweak and observe fairness changes. Group audits encourage spotting prejudices collaboratively, linking to MOE standards on societal impacts.
What are real examples of AI ethics issues for JC1?
Facial recognition misidentifying minorities, COMPAS overpredicting recidivism for Black defendants, or autonomous car trolley problems highlight bias and responsibility. Discuss transparency in models like GPT, using Singapore contexts like TraceTogether data ethics to ground lessons.
How can active learning help teach AI ethics?
Debates on responsibility questions build empathy and nuance, as students defend opposing views. Role-plays simulate developer dilemmas, making abstract morals tangible. Case rotations promote peer teaching, deepening understanding beyond lectures per MOE active pedagogies.
Why focus on AI transparency in JC1 curriculum?
Explainable AI prevents blind trust in black-box decisions, vital for trust and accountability. Students learn tools like SHAP via activities, addressing key questions on harm responsibility. This prepares them for ethical tech roles in Singapore's Smart Nation push.