English Language · JC 1

Active learning ideas

Technology in Our Daily Lives

Active learning helps students move beyond passive observation of technology's role in daily life. By engaging in debates, case studies, and role-plays, they connect abstract concepts like algorithms and regulation to their lived experiences, making those ideas tangible and debatable.

MOE Syllabus Outcomes: Media Literacy - Middle School
25–45 min · Pairs → Whole Class · 4 activities

Activity 01

Think-Pair-Share · 45 min · Small Groups

Debate Carousel: Regulatory Frameworks

Divide class into groups to research and prepare arguments for or against the adequacy of the EU AI Act and Singapore's Model AI Governance Framework. Groups rotate stations to argue the opposing side, then vote on strongest points. Conclude with a whole-class reflection on key insights.

Evaluate whether existing regulatory frameworks (the EU AI Act, Singapore's Model AI Governance Framework) are structurally adequate to govern emergent AI capabilities or whether they address symptoms rather than the systemic causes of algorithmic harm.

Facilitation Tip: During Debate Carousel: Regulatory Frameworks, assign each station a unique scenario (e.g., data privacy, content moderation) and rotate groups to ensure diverse perspectives.

What to look for: Pose the question: 'When an AI-driven loan application system unfairly denies a loan to a qualified individual, who is primarily responsible: the bank that deployed the system, the developers who coded the algorithm, or the regulators who approved its use? Justify your answer with reference to specific governance principles.'

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Activity 02

Think-Pair-Share · 35 min · Small Groups

Case Study Walk: Algorithmic Harms

Provide real-world cases of AI biases, such as facial recognition errors or biased loan algorithms. Groups analyze one case, create posters with causes and solutions, then conduct a gallery walk to discuss others. Summarize findings in a shared class document.

Analyze how the concentration of AI development within a small number of corporations creates accountability deficits that democratic institutions, designed for an earlier technological order, are ill-equipped to resolve.

Facilitation Tip: For Case Study Walk: Algorithmic Harms, post printed case studies around the room and provide sticky notes for students to annotate evidence of harm before discussion.

What to look for: Students write down one specific emergent AI capability (e.g., generative art, advanced predictive text) and then list one potential governance challenge it presents. They should also suggest which existing framework (EU AI Act or SG Model AI Governance Framework) might be better suited to address it and why.

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Activity 03

Think-Pair-Share · 40 min · Small Groups

Role-Play Chain: Assigning Responsibility

Present a scenario of harm from an AI decision, like wrongful arrest via predictive policing. Assign roles to developers, deployers, users, and regulators. Groups simulate a hearing, present defenses, and deliberate on accountability. Debrief on distributed vs. individual responsibility.

Construct a position on who bears moral and legal responsibility when an algorithmic decision causes demonstrable harm to an individual, and assess whether distributed responsibility across developers, deployers, and regulators is coherent or evasive.

Facilitation Tip: In Role-Play Chain: Assigning Responsibility, give each student a role card with a specific stakeholder perspective to ensure balanced and focused dialogue.

What to look for: Present students with a hypothetical scenario of algorithmic harm (e.g., a biased hiring algorithm). Ask them to identify at least two distinct parties (developer, deployer, regulator) and briefly explain their potential role in either causing or mitigating the harm.

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Activity 04

Think-Pair-Share · 25 min · Pairs

Tech Log Pairs: Personal Impacts

Pairs track one day's technology use across communication, learning, and leisure, noting algorithmic influences. They discuss positives, negatives, and governance needs, then share anonymized examples in a whole-class mind map.

Evaluate whether existing regulatory frameworks (the EU AI Act, Singapore's Model AI Governance Framework) are structurally adequate to govern emergent AI capabilities or whether they address symptoms rather than the systemic causes of algorithmic harm.

Facilitation Tip: With Tech Log Pairs: Personal Impacts, model how to track app usage for the day before analyzing the data for patterns and surprises.

What to look for: Students should identify at least one algorithmic influence in their logged technology use (e.g., a recommended feed or playlist) and explain one governance need it raises.

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

A few notes on teaching this unit

Teachers should prioritize real-world examples that resonate with students' experiences, such as TikTok feeds or Spotify playlists. Avoid lecturing on algorithms' inner workings; instead, use activities that let students uncover biases themselves. Research shows students grasp ethical trade-offs better when they trace consequences through concrete, relatable scenarios rather than abstract theory.

Success is measured when students move from acknowledging tech's influence to critiquing its mechanisms and consequences. They should articulate trade-offs between convenience and ethics and take ownership of their roles in shaping responsible tech use.


Watch Out for These Misconceptions

  • During Debate Carousel: Regulatory Frameworks, watch for students assuming algorithms are neutral because they 'just process data.'

    During the carousel, direct students to examine the training data and design choices mentioned in their station materials, asking them to identify which human biases might have shaped the algorithm's outputs.

  • During Debate Carousel: Regulatory Frameworks, watch for oversimplified claims that 'regulations always slow innovation.'

    During the carousel, have groups use Singapore's Model AI Governance Framework as a counterexample to test their assumptions about regulation's impact on progress.

  • During Role-Play Chain: Assigning Responsibility, watch for students blaming users exclusively for tech harms.

    During the role-play, provide role cards that outline each stakeholder's limited control, forcing students to grapple with the shared responsibility for outcomes in complex systems.


Methods used in this brief