CCE · Secondary 4

Active learning ideas

Artificial Intelligence and Society

Active learning works for this topic because students need to confront real-world tensions between innovation and ethics. When teenagers grapple with dilemmas like biased hiring algorithms or privacy in smart surveillance, they move beyond abstract concepts to see how decisions affect people. Collaborative tasks build critical thinking and communication skills that are essential for informed civic participation.

MOE Syllabus Outcomes: MOE Cyber Wellness - S4 · MOE Ethics and Values - S4
35–50 min · Pairs → Whole Class · 4 activities

Activity 01

World Café · 35 min · Pairs

Debate Pairs: AI in Hiring

Pair students to debate pros and cons of AI-driven recruitment, switching sides midway. Provide case cards with Singapore examples. Groups share key insights in a whole-class wrap-up.

Analyze the potential benefits and risks of artificial intelligence for society.

Facilitation tip: For the Case Study Carousel (Activity 04), post a 'Myth vs Fact' board where students add sticky notes to challenge or affirm claims they encounter.

What to look for: Pose the following question to small groups: 'Imagine an AI system is used to screen job applications. What are two potential ethical problems that could arise, and how could they be mitigated?' Students should record their ideas and be prepared to share one key concern and its proposed solution.

Understand · Apply · Analyze · Social Awareness · Relationship Skills

Activity 02

World Café · 45 min · Small Groups

Ethical Dilemma Role-Play: Small Groups

Assign groups AI scenarios like biased loan approvals. Students role-play stakeholders, negotiate solutions, and present guidelines. Debrief on common tensions.

Explain the ethical challenges posed by AI in areas like employment and decision-making.

What to look for: Students will write on an index card: 'One benefit of AI in Singapore is _____. One ethical challenge of AI is _____. A guideline for responsible AI development is _____.'

Understand · Apply · Analyze · Social Awareness · Relationship Skills

Activity 03

World Café · 50 min · Small Groups

Guideline Design Workshop: Jigsaw

Individuals research one ethical principle, then form expert groups to compile a class AI code. Groups present posters with justifications.

Design a set of ethical guidelines for the responsible development of AI.

What to look for: Present students with a short case study describing an AI application (e.g., AI in healthcare diagnostics). Ask them to identify one potential benefit and one potential ethical risk discussed in the case study, writing their answers on a mini-whiteboard.

Understand · Apply · Analyze · Social Awareness · Relationship Skills

Activity 04

World Café · 40 min · Small Groups

Case Study Carousel: Risks and Benefits

Set stations with AI cases in employment and healthcare. Groups rotate, note ethical issues, and vote on priorities. Synthesize findings.

Analyze the potential benefits and risks of artificial intelligence for society.

What to look for: Pose the following question to small groups: 'Imagine an AI system is used to screen job applications. What are two potential ethical problems that could arise, and how could they be mitigated?' Students should record their ideas and be prepared to share one key concern and its proposed solution.

Understand · Apply · Analyze · Social Awareness · Relationship Skills

A few notes on teaching this unit

Teachers approach this topic by first grounding abstract ethics in concrete scenarios students can visualize and measure. Avoid lectures that separate 'the good' from 'the bad'—instead, frame AI as a tool whose impact depends on choices made by people at every stage. Research shows that when students analyze real cases (like Singapore’s facial recognition trials), they develop nuanced judgment rather than binary views. Emphasize iterative improvement: ethical AI is not a destination but a process of testing, feedback, and revision.

Successful learning looks like students moving from simple opinions to reasoned arguments supported by evidence and multiple perspectives. They should be able to articulate trade-offs, propose concrete fixes, and recognize when ethical responsibilities are shared across developers, users, and policymakers.


Watch out for these misconceptions

  • During Debate Pairs (AI in Hiring), watch for students assuming AI hiring tools are objective because they are automated.

    Use the debate’s evidence board to trace how training data choices (e.g., past hiring patterns) shape outcomes, and task pairs to propose dataset audits as a correction.

  • During Ethical Dilemma Role-Play, watch for students blaming only developers for biased AI outcomes.

    Redirect to the role-play’s debrief to list all roles (developers, HR managers, job seekers) that share responsibility, using the small group’s notes to identify gaps in accountability.

  • During Guideline Design Workshop, watch for students writing guidelines that focus only on technical fixes (e.g., 'improve the algorithm').

    Guide groups to add human-centered principles (e.g., 'ensure human review for high-stakes decisions') by comparing drafts against the jigsaw’s shared criteria list.
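For teachers who want a concrete artefact to show during the dataset-audit discussion, the sketch below is a minimal, hypothetical example of the kind of check students might propose: comparing selection rates across two groups in past hiring records. The data is entirely invented for illustration, and the 80% threshold is the common "four-fifths rule" heuristic, not a legal standard for any particular jurisdiction.

```python
# Hypothetical mini "dataset audit" for the Debate Pairs discussion.
# All records below are invented for illustration only.
past_hires = (
    [{"group": "A", "hired": True}] * 6 + [{"group": "A", "hired": False}] * 2 +
    [{"group": "B", "hired": True}] * 3 + [{"group": "B", "hired": False}] * 5
)

def selection_rate(records, group):
    """Fraction of applicants from `group` who were hired."""
    group_records = [r for r in records if r["group"] == group]
    return sum(r["hired"] for r in group_records) / len(group_records)

rate_a = selection_rate(past_hires, "A")   # 6/8 = 0.75
rate_b = selection_rate(past_hires, "B")   # 3/8 = 0.375
# Four-fifths rule heuristic: flag concern if the lower rate is
# less than 80% of the higher rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Group A: {rate_a:.2f}, Group B: {rate_b:.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Audit flag: an AI trained on these records may reproduce this gap.")
```

The point for students is not the code itself but the reasoning it makes visible: an algorithm trained on these records would "learn" the historical gap, so auditing the training data is a human choice that shapes the AI's behaviour.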


Methods used in this brief