English Language · JC 1 · AI Governance and Algorithmic Accountability · Semester 1

Technology in Our Daily Lives

Exploring how everyday technology impacts our communication, learning, and leisure activities.

MOE Syllabus Outcomes: Media Literacy - Middle School

About This Topic

The topic Technology in Our Daily Lives guides students to examine how smartphones, social media, and apps shape communication, learning, and leisure. They explore algorithms that personalize news feeds on platforms like Instagram, adaptive tools in apps like Duolingo, and recommendation engines in Spotify or YouTube. Drawing on students' own habits, this analysis reveals both conveniences, such as instant connectivity, and challenges, including echo chambers and privacy erosion.

Aligned with MOE English Language standards in media literacy, the unit addresses AI governance. Students evaluate frameworks like Singapore's Model AI Governance Framework and the EU AI Act for addressing algorithmic harms. They analyze corporate dominance in AI development and debate responsibility among developers, deployers, and regulators, building skills in critical evaluation, argumentation, and ethical reasoning essential for JC 1.

Active learning benefits this topic by making abstract governance issues concrete and relevant. Debates and role-plays encourage students to defend positions with evidence, respond to peers, and refine ideas collaboratively. These methods strengthen persuasive writing and speaking while fostering empathy for diverse viewpoints on technology's societal role.

Key Questions

  1. Evaluate whether existing regulatory frameworks, such as the EU AI Act and Singapore's Model AI Governance Framework, are structurally adequate to govern emergent AI capabilities, or whether they address symptoms rather than the systemic causes of algorithmic harm.
  2. Analyze how the concentration of AI development within a small number of corporations creates accountability deficits that democratic institutions, designed for an earlier technological order, are ill-equipped to resolve.
  3. Construct a position on who bears moral and legal responsibility when an algorithmic decision causes demonstrable harm to an individual, and assess whether distributed responsibility across developers, deployers, and regulators is coherent or evasive.

Learning Objectives

  • Critique the structural adequacy of existing AI governance frameworks like the EU AI Act and Singapore's Model AI Governance Framework in addressing emergent AI capabilities.
  • Analyze how the concentration of AI development within a few corporations creates accountability deficits for democratic institutions.
  • Construct a reasoned position on who bears moral and legal responsibility for algorithmic harm, evaluating the coherence of distributed responsibility models.
  • Synthesize arguments from diverse stakeholders regarding the ethical implications of AI development and deployment.
  • Evaluate the effectiveness of current regulatory approaches in mitigating systemic causes of algorithmic harm.

Before You Start

Introduction to Artificial Intelligence Concepts

Why: Students need a basic understanding of what AI is and common applications before analyzing its governance.

Media Literacy and Digital Citizenship

Why: This topic builds on foundational skills of critically evaluating digital information and understanding online interactions.

Key Vocabulary

Algorithmic Accountability: The principle that AI systems and their developers/deployers should be answerable for the outcomes and impacts of algorithmic decisions.
AI Governance Framework: A set of principles, policies, and practices designed to guide the responsible development and deployment of artificial intelligence.
Emergent AI Capabilities: New and unforeseen abilities or behaviors of AI systems that arise from complex interactions, often beyond initial design intentions.
Accountability Deficit: A gap where responsibility for harm caused by an AI system cannot be clearly assigned to a specific individual or entity.
Distributed Responsibility: The concept that responsibility for AI-related harms may be shared across multiple parties, including developers, deployers, users, and regulators.

Watch Out for These Misconceptions

Common Misconception: AI algorithms are neutral and unbiased.

What to Teach Instead

Algorithms inherit biases from training data and design choices made by humans. Role-plays tracing decision paths help students identify bias sources and propose fixes, turning passive acceptance into active critique.

Common Misconception: Strong regulations prevent AI innovation.

What to Teach Instead

Regulations guide ethical development without halting progress, as seen in Singapore's framework. Debates with evidence from both sides allow students to weigh trade-offs, building nuanced positions through peer challenge.

Common Misconception: Users alone bear responsibility for tech harms.

What to Teach Instead

Responsibility spans developers, deployers, and regulators in complex chains. Simulations clarify roles and foster discussions on coherent accountability models, helping students move beyond simplistic blame.


Real-World Connections

  • Tech policy analysts at organizations like the AI Now Institute in New York are actively researching and publishing reports on accountability deficits in AI, informing legislative efforts.
  • Citizens in Singapore can refer to the Infocomm Media Development Authority's (IMDA) guidance on AI ethics and governance when interacting with AI-powered services, such as chatbots for government services.
  • The European Parliament's debates and eventual adoption of the EU AI Act reflect the challenge of creating regulations that can keep pace with rapid AI advancements, impacting companies developing AI tools globally.

Assessment Ideas

Discussion Prompt

Pose the question: 'When an AI-driven loan application system unfairly denies a loan to a qualified individual, who is primarily responsible: the bank that deployed the system, the developers who coded the algorithm, or the regulators who approved its use? Justify your answer with reference to specific governance principles.'

Exit Ticket

Students write down one specific emergent AI capability (e.g., generative art, advanced predictive text) and then list one potential governance challenge it presents. They should also suggest which existing framework (EU AI Act or SG Model AI Governance Framework) might be better suited to address it and why.

Quick Check

Present students with a hypothetical scenario of algorithmic harm (e.g., a biased hiring algorithm). Ask them to identify at least two distinct parties (developer, deployer, regulator) and briefly explain their potential role in either causing or mitigating the harm.

Frequently Asked Questions

How can AI governance be integrated into daily technology lessons for JC English?
Start with students' tech habits to hook interest, then layer in frameworks like Singapore's Model AI Governance Framework via case studies. Use debates to practice evaluation skills from MOE media literacy standards. This builds from personal relevance to societal analysis, strengthening argumentation in 50-minute lessons.
What are the main challenges in algorithmic accountability?
Corporate concentration creates power imbalances, outpacing democratic oversight. Distributed responsibility across stakeholders often dilutes action. Students can explore this through role-plays, constructing positions on moral and legal duties while assessing frameworks like the EU AI Act for systemic fixes.
How does active learning enhance technology in daily lives discussions?
Active methods like debates and role-plays make governance tangible, as students embody roles and counterarguments. This deepens media literacy by linking personal tech use to regulations, improves speaking and critical thinking, and boosts engagement over lectures. Peer feedback refines ethical reasoning for real-world application.
What are common student views on technology's daily impacts?
Many see only benefits like convenience, overlooking harms such as privacy loss or bias. Address via tech logs and case analyses to reveal nuances. This aligns with MOE standards, equipping students to evaluate sources and argue responsibly on AI's role in communication and leisure.