CCE · Secondary 3 · Justice and the Legal System · Semester 2

Law and Artificial Intelligence

Considering how laws adapt to new technologies like AI and online harms.

MOE Syllabus Outcomes

  • MOE: Justice and the Legal System - S3
  • MOE: Moral Reasoning - S3

About This Topic

Law and Artificial Intelligence explores how legal systems adapt to technologies like AI and tackle online harms such as deepfakes or algorithmic bias. Secondary 3 students under MOE's Justice and the Legal System unit analyze responsibility when algorithms cause harm, for example in self-driving car accidents or biased facial recognition. They connect this to Singapore's Smart Nation initiative, which promotes AI while stressing ethical governance.

Students address key questions through moral reasoning: Who is accountable for harm caused by an algorithm: the programmer, the company, or the user? What challenges do laws face in keeping pace with technological change? How does AI affect privacy in surveillance or fairness in court decisions? These inquiries sharpen critical analysis of justice in digital contexts and prepare students for real-world civic roles.

Active learning suits this topic well. Debates and role-plays bring abstract liability issues to life, prompt students to consider multiple viewpoints, and build confidence in articulating ethical positions during class discussions.

Key Questions

  1. Who should be held responsible when an algorithm causes harm?
  2. What challenges will legal frameworks face in keeping pace with rapid technological change?
  3. What are the ethical implications of using AI in areas like surveillance and judicial decision-making?

Learning Objectives

  • Analyze the distribution of legal responsibility when an AI system causes harm, considering the roles of developers, users, and manufacturers.
  • Evaluate the ethical implications of AI deployment in sensitive areas such as predictive policing and automated hiring processes.
  • Predict the key challenges legal frameworks will face in adapting to the rapid evolution of AI technologies.
  • Compare Singapore's approach to AI governance with that of other nations, identifying similarities and differences in regulatory strategies.
  • Synthesize arguments for and against the use of AI in judicial decision-making, considering fairness and due process.

Before You Start

Introduction to the Singapore Legal System

Why: Students need a basic understanding of how laws are made and enforced in Singapore to analyze how they might adapt to new technologies.

Ethics and Moral Reasoning

Why: A foundational understanding of ethical principles is necessary to evaluate the moral implications of AI in various applications.

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Artificial Intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems, including learning, problem-solving, and decision-making.
Deepfake: A type of synthetic media in which a person in an existing image or video is replaced with someone else's likeness, often created using AI techniques.
Liability: Legal responsibility for one's acts or omissions; in the context of AI, this refers to who is accountable when an AI system causes damage or injury.
Smart Nation Initiative: Singapore's national project to harness technology, including AI, to improve the lives of citizens and create economic opportunities.

Watch Out for These Misconceptions

Common Misconception: AI can be sued directly like a person.

What to Teach Instead

AI lacks legal personhood, so humans or companies face liability. Role-plays clarify chains of responsibility, helping students map accountability from design to deployment through stakeholder discussions.

Common Misconception: Laws stay the same despite tech changes.

What to Teach Instead

Legal frameworks evolve via updates like Singapore's Model AI Governance Framework. Case study rotations reveal adaptation needs, as groups compare past and proposed laws to see dynamic processes.

Common Misconception: AI is neutral and unbiased.

What to Teach Instead

Biases stem from flawed data or training. Debates expose this by having students argue from affected viewpoints, fostering empathy and critical evaluation of tech ethics.
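For teachers comfortable with a little code, the point that bias comes from the data rather than from the rule itself can be made concrete with a toy sketch. The applicant data and the "model" below are invented purely for illustration: a perfectly "neutral" rule that learns from skewed historical hiring decisions simply reproduces the skew.

```python
# Hypothetical historical hiring decisions: (applicant_group, was_hired).
# Group A was hired 3 times out of 4; group B only 1 time out of 4.
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learn_acceptance_rates(data):
    """Learn the fraction of past applicants hired, per group."""
    totals, hires = {}, {}
    for group, hired in data:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def predict_hire(group, rates, threshold=0.5):
    """A 'neutral' rule: recommend hiring if the group's
    historical acceptance rate clears the threshold."""
    return rates[group] >= threshold

rates = learn_acceptance_rates(training_data)
print(rates)                     # {'A': 0.75, 'B': 0.25}
print(predict_hire("A", rates))  # True  -> group A recommended
print(predict_hire("B", rates))  # False -> group B rejected
```

The rule itself never mentions any group by name, yet its recommendations systematically disadvantage group B because the training data did. This is the mechanism students should identify when debating the biased hiring-tool scenario.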


Real-World Connections

  • In Singapore, the Infocomm Media Development Authority (IMDA) is developing guidelines for AI use, aiming to foster innovation while ensuring ethical deployment in sectors like healthcare and transportation.
  • Tech companies like Google and Microsoft are actively researching AI ethics and developing internal frameworks to address issues of bias and accountability in their AI products, such as Microsoft's Azure AI platform.
  • The legal debate around autonomous vehicle accidents, like those involving Tesla's Autopilot, highlights the complexities of assigning blame among the vehicle owner, the manufacturer, and the software developers.

Assessment Ideas

Discussion Prompt

Pose the following scenario: 'An AI-powered hiring tool consistently rejects applications from a specific demographic group. Who should be held responsible: the AI developers, the company that implemented the tool, or the HR manager who used it? Justify your answer with reference to legal and ethical principles.'

Exit Ticket

Ask students to write down two specific challenges that current laws face when trying to regulate AI. Then, have them suggest one potential solution or adaptation for one of the challenges they identified.

Quick Check

Present students with brief descriptions of different AI applications (e.g., facial recognition for security, AI in medical diagnosis, AI for content generation). Ask them to identify one potential ethical concern for each application and briefly explain why it is a concern.

Frequently Asked Questions

How do you teach AI liability in Secondary 3 CCE?
Use real cases like autonomous vehicle crashes to spark discussions on responsibility chains. Guide students to weigh developer intent against user actions, linking to MOE moral reasoning standards. Structured debates ensure balanced views and connect to Singapore's AI ethics guidelines for relevance.
What are ethical issues of AI in surveillance?
AI tools like facial recognition raise risks of privacy erosion and false positives, which disproportionately affect minorities. Students evaluate trade-offs between security and rights, considering updates to Singapore's Personal Data Protection Act (PDPA). Activities like role-plays help them weigh societal impacts and propose safeguards.
How does active learning benefit Law and AI lessons?
Debates and simulations make liability tangible, letting students embody roles like judges or victims to grasp nuances. This builds persuasion skills and empathy in ways lectures alone cannot. In CCE, it aligns with moral reasoning by encouraging evidence-based ethical arguments in collaborative settings.
What challenges do laws face with rapid AI change?
Technology outpaces legislation, creating gaps in areas like deepfakes or algorithmic decision-making. Singapore addresses this through agile instruments such as the Model AI Governance Framework, but regulation still lags behind innovation. Class mapping exercises help students forecast issues, evaluate reforms, and appreciate the need for proactive governance.