Technology in Our Daily Lives
Exploring how everyday technology impacts our communication, learning, and leisure activities.
About This Topic
The topic Technology in Our Daily Lives guides students to examine how smartphones, social media, and apps shape communication, learning, and leisure. They explore algorithms that personalize news feeds on platforms like Instagram, adaptive tools in apps like Duolingo, and recommendation engines in Spotify or YouTube. This analysis reveals both conveniences, such as instant connectivity, and challenges, including echo chambers and privacy erosion, drawing from students' own habits.
Aligned with MOE English Language standards in media literacy, the unit addresses AI governance. Students evaluate frameworks like Singapore's Model AI Governance Framework and the EU AI Act for addressing algorithmic harms. They analyze corporate dominance in AI development and debate responsibility among developers, deployers, and regulators, building skills in critical evaluation, argumentation, and ethical reasoning essential for JC 1.
Active learning benefits this topic by making abstract governance issues concrete and relevant. Debates and role-plays encourage students to defend positions with evidence, respond to peers, and refine ideas collaboratively. These methods strengthen persuasive writing and speaking while fostering empathy for diverse viewpoints on technology's societal role.
Key Questions
- Evaluate whether existing regulatory frameworks (the EU AI Act, Singapore's Model AI Governance Framework) are structurally adequate to govern emergent AI capabilities or whether they address symptoms rather than the systemic causes of algorithmic harm.
- Analyze how the concentration of AI development within a small number of corporations creates accountability deficits that democratic institutions, designed for an earlier technological order, are ill-equipped to resolve.
- Construct a position on who bears moral and legal responsibility when an algorithmic decision causes demonstrable harm to an individual, and assess whether distributed responsibility across developers, deployers, and regulators is coherent or evasive.
Learning Objectives
- Critique the structural adequacy of existing AI governance frameworks like the EU AI Act and Singapore's Model AI Governance Framework in addressing emergent AI capabilities.
- Analyze how the concentration of AI development within a few corporations creates accountability deficits for democratic institutions.
- Construct a reasoned position on who bears moral and legal responsibility for algorithmic harm, evaluating the coherence of distributed responsibility models.
- Synthesize arguments from diverse stakeholders regarding the ethical implications of AI development and deployment.
- Evaluate the effectiveness of current regulatory approaches in mitigating systemic causes of algorithmic harm.
Before You Start
- Why: Students need a basic understanding of what AI is and its common applications before analyzing its governance.
- Why: This topic builds on foundational skills of critically evaluating digital information and understanding online interactions.
Key Vocabulary
| Term | Definition |
| --- | --- |
| Algorithmic Accountability | The principle that AI systems and their developers/deployers should be answerable for the outcomes and impacts of algorithmic decisions. |
| AI Governance Framework | A set of principles, policies, and practices designed to guide the responsible development and deployment of artificial intelligence. |
| Emergent AI Capabilities | New and unforeseen abilities or behaviors of AI systems that arise from complex interactions, often beyond initial design intentions. |
| Accountability Deficit | A gap where responsibility for harm caused by an AI system cannot be clearly assigned to a specific individual or entity. |
| Distributed Responsibility | The concept that responsibility for AI-related harms may be shared across multiple parties, including developers, deployers, users, and regulators. |
Watch Out for These Misconceptions
Common Misconception: AI algorithms are neutral and unbiased.
What to Teach Instead
Algorithms inherit biases from training data and design choices made by humans. Role-plays tracing decision paths help students identify bias sources and propose fixes, turning passive acceptance into active critique.
Common Misconception: Strong regulations prevent AI innovation.
What to Teach Instead
Regulations guide ethical development without halting progress, as seen in Singapore's framework. Debates with evidence from both sides allow students to weigh trade-offs, building nuanced positions through peer challenge.
Common Misconception: Users alone bear responsibility for tech harms.
What to Teach Instead
Responsibility spans developers, deployers, and regulators in complex chains. Simulations clarify roles and foster discussions on coherent accountability models, helping students move beyond simplistic blame.
Active Learning Ideas
Debate Carousel: Regulatory Frameworks
Divide class into groups to research and prepare arguments for or against the adequacy of the EU AI Act and Singapore's Model AI Governance Framework. Groups rotate stations to argue the opposing side, then vote on strongest points. Conclude with a whole-class reflection on key insights.
Case Study Walk: Algorithmic Harms
Provide real-world cases of AI biases, such as facial recognition errors or biased loan algorithms. Groups analyze one case, create posters with causes and solutions, then conduct a gallery walk to discuss others. Summarize findings in a shared class document.
Role-Play Chain: Assigning Responsibility
Present a scenario of harm from an AI decision, like wrongful arrest via predictive policing. Assign roles to developers, deployers, users, and regulators. Groups simulate a hearing, present defenses, and deliberate on accountability. Debrief on distributed vs. individual responsibility.
Tech Log Pairs: Personal Impacts
Pairs track one day's technology use across communication, learning, and leisure, noting algorithmic influences. They discuss positives, negatives, and governance needs, then share anonymized examples in a whole-class mind map.
Real-World Connections
- Tech policy analysts at organizations like the AI Now Institute in New York are actively researching and publishing reports on accountability deficits in AI, informing legislative efforts.
- Citizens in Singapore can refer to the Infocomm Media Development Authority's (IMDA) guidance on AI ethics and governance when interacting with AI-powered services, such as chatbots for government services.
- The European Parliament's debates and eventual adoption of the EU AI Act reflect the challenge of creating regulations that can keep pace with rapid AI advancements, impacting companies developing AI tools globally.
Assessment Ideas
Pose the question: 'When an AI-driven loan application system unfairly denies a loan to a qualified individual, who is primarily responsible: the bank that deployed the system, the developers who coded the algorithm, or the regulators who approved its use? Justify your answer with reference to specific governance principles.'
Students write down one specific emergent AI capability (e.g., generative art, advanced predictive text) and then list one potential governance challenge it presents. They should also suggest which existing framework (EU AI Act or SG Model AI Governance Framework) might be better suited to address it and why.
Present students with a hypothetical scenario of algorithmic harm (e.g., a biased hiring algorithm). Ask them to identify at least two distinct parties (developer, deployer, regulator) and briefly explain their potential role in either causing or mitigating the harm.
Frequently Asked Questions
How can AI governance be integrated into daily technology lessons for JC English?
What are the main challenges in algorithmic accountability?
How does active learning enhance discussions of technology in our daily lives?
What are common student views on technology's daily impacts?
More in AI Governance and Algorithmic Accountability
- Biotechnology, Human Enhancement, and the Precautionary Principle
- Surveillance Capitalism and the Ethics of Data Commodification
- Technological Solutionism versus Structural Reform
- Scientific Consensus, Expertise, and the Limits of Public Deference
- Digital Inequality and the Politics of Technological Access