CCE · Secondary 3

Active learning ideas

Law and Artificial Intelligence

Active learning works well for this topic because students need to wrestle with complex questions about responsibility, ethics, and accountability in AI systems. By debating, role-playing, and analyzing case studies, they move from abstract concepts to concrete reasoning about real-world implications of AI in law.

MOE Syllabus Outcomes: Justice and the Legal System (S3) · Moral Reasoning (S3)
30–50 min · Pairs → Whole Class · 4 activities

Activity 01

Inside-Outside Circle · 45 min · Small Groups

Debate Format: Algorithm Accountability Debate

Assign small groups to roles like developer, user, or regulator in a case of AI-caused harm, such as biased hiring. Groups research arguments for 10 minutes, then debate for 20 minutes with rebuttals. Conclude with a class vote and reflection on shared responsibility.

Analyze who should be held responsible when an algorithm causes harm.

Facilitation Tip: During the Algorithm Accountability Debate, assign clear roles (e.g., developers, affected individuals, policymakers) and provide structured argument frameworks to keep discussions focused.

What to look for: Pose the following scenario: 'An AI-powered hiring tool consistently rejects applications from a specific demographic group. Who should be held responsible: the AI developers, the company that implemented the tool, or the HR manager who used it? Justify your answer with reference to legal and ethical principles.'

Remember · Understand · Apply · Relationship Skills · Self-Management

Activity 02

Inside-Outside Circle · 50 min · Small Groups

Role-Play: AI Courtroom Trial

Form groups to simulate a trial over an AI surveillance error leading to wrongful arrest. Assign roles: prosecutor, defense, judge, AI expert witness. Groups prepare opening statements and evidence, then present to the class acting as jury for verdict.

Predict the challenges for legal frameworks in keeping pace with rapid technological change.

Facilitation Tip: In the AI Courtroom Trial, assign roles with specific legal responsibilities (e.g., defendant, plaintiff, judge) and provide a simplified legal template for arguments to model professional courtroom language.

What to look for: Ask students to write down two specific challenges that current laws face when trying to regulate AI. Then, have them suggest one potential solution or adaptation for one of the challenges they identified.

Remember · Understand · Apply · Relationship Skills · Self-Management

Activity 03

Inside-Outside Circle · 40 min · Small Groups

Case Study Carousel: Tech Harm Scenarios

Set up stations with Singapore-relevant cases like deepfake scams or AI judicial aids. Groups rotate every 10 minutes, noting legal gaps and proposed laws. Regroup to share findings and prioritize reforms.

Evaluate the ethical implications of AI in areas like surveillance and judicial decision-making.

Facilitation Tip: For the Case Study Carousel, rotate groups quickly and require each to summarize one key legal or ethical issue from their scenario before moving on to the next station.

What to look for: Present students with brief descriptions of different AI applications (e.g., facial recognition for security, AI in medical diagnosis, AI for content generation). Ask them to identify one potential ethical concern for each application and briefly explain why it is a concern.

Remember · Understand · Apply · Relationship Skills · Self-Management

Activity 04

Inside-Outside Circle · 30 min · Pairs

Prediction Pairs: Future AI Laws

Pairs brainstorm emerging AI uses such as predictive policing, predict the legal challenges each could raise, and draft simple law amendments in response. They then share their work via a gallery walk, discussing feasibility in Singapore's context.

Analyze who should be held responsible when an algorithm causes harm.

Facilitation Tip: In Prediction Pairs, provide a simple template for students to organize their predictions about future laws, including columns for potential harm, affected groups, and proposed legal solutions.

What to look for: Check that each pair's gallery-walk output names a specific emerging AI use, identifies at least one legal challenge it raises, and proposes a simple amendment, with a brief justification of its feasibility in Singapore's context.

Remember · Understand · Apply · Relationship Skills · Self-Management

A few notes on teaching this unit

Teachers should approach this topic by grounding abstract legal concepts in relatable scenarios and real-world consequences. Avoid overwhelming students with legal jargon; instead, focus on the ethical and societal impacts of AI decisions. Research suggests that using role-play and debates helps students retain complex ideas by making them personally relevant and emotionally engaging.

Successful learning looks like students confidently identifying key stakeholders in AI accountability, explaining how laws must adapt to technology, and critically evaluating ethical concerns in AI applications. They should also demonstrate empathy for affected groups and propose reasoned solutions to ethical dilemmas.


Watch Out for These Misconceptions

  • During the AI Courtroom Trial, students may assume AI can be sued directly like a person.

    Use the role-play structure to clarify AI’s lack of legal personhood by having students map responsibility from the AI developer to the company deploying the system, using the courtroom roles to trace accountability chains.

  • During the Case Study Carousel, students may believe laws remain unchanged despite technological advancements.

    Have groups compare Singapore’s current laws with proposed updates in the Model AI Governance Framework during the case study rotations, highlighting how legal frameworks must evolve with technology.

  • During the Algorithm Accountability Debate, students may argue AI systems are neutral and unbiased.

    Structure the debate to require students to argue from the perspective of affected groups, using specific examples of bias from the debate scenarios to challenge claims of AI neutrality.


Methods used in this brief