Technology in Our Daily Lives: Activities & Teaching Strategies
Active learning helps students move beyond passive observation of technology's role in daily life. By engaging in debates, case studies, and role-plays, they connect abstract concepts like algorithms and regulation to their lived experiences, making those ideas tangible and debatable.
Learning Objectives
1. Critique the structural adequacy of existing AI governance frameworks like the EU AI Act and Singapore's Model AI Governance Framework in addressing emergent AI capabilities.
2. Analyze how the concentration of AI development within a few corporations creates accountability deficits for democratic institutions.
3. Construct a reasoned position on who bears moral and legal responsibility for algorithmic harm, evaluating the coherence of distributed responsibility models.
4. Synthesize arguments from diverse stakeholders regarding the ethical implications of AI development and deployment.
5. Evaluate the effectiveness of current regulatory approaches in mitigating systemic causes of algorithmic harm.
Debate Carousel: Regulatory Frameworks
Divide the class into groups to research and prepare arguments for or against the adequacy of the EU AI Act and Singapore's Model AI Governance Framework. Groups rotate stations to argue the opposing side, then vote on the strongest points. Conclude with a whole-class reflection on key insights.
Evaluate whether existing regulatory frameworks — the EU AI Act, Singapore's Model AI Governance Framework — are structurally adequate to govern emergent AI capabilities or whether they address symptoms rather than the systemic causes of algorithmic harm.
Facilitation Tip: During Debate Carousel: Regulatory Frameworks, assign each station a unique scenario (e.g., data privacy, content moderation) and rotate groups to ensure diverse perspectives.
Setup: Room divided into two sides with clear center line
Materials: Provocative statement card, Evidence cards (optional), Movement tracking sheet
Case Study Walk: Algorithmic Harms
Provide real-world cases of AI biases, such as facial recognition errors or biased loan algorithms. Groups analyze one case, create posters with causes and solutions, then conduct a gallery walk to discuss others. Summarize findings in a shared class document.
Analyze how the concentration of AI development within a small number of corporations creates accountability deficits that democratic institutions, designed for an earlier technological order, are ill-equipped to resolve.
Facilitation Tip: For Case Study Walk: Algorithmic Harms, post printed case studies around the room and provide sticky notes for students to annotate evidence of harm before discussion.
Setup: Printed case studies posted around the room for a gallery walk
Materials: Printed case study handouts, poster paper and markers, sticky notes
Role-Play Chain: Assigning Responsibility
Present a scenario of harm from an AI decision, like wrongful arrest via predictive policing. Assign roles to developers, deployers, users, and regulators. Groups simulate a hearing, present defenses, and deliberate on accountability. Debrief on distributed vs. individual responsibility.
Construct a position on who bears moral and legal responsibility when an algorithmic decision causes demonstrable harm to an individual, and assess whether distributed responsibility across developers, deployers, and regulators is coherent or evasive.
Facilitation Tip: In Role-Play Chain: Assigning Responsibility, give each student a role card with a specific stakeholder perspective to ensure balanced and focused dialogue.
Setup: Room arranged as a mock hearing, with seating for each stakeholder group
Materials: Role cards for developers, deployers, users, and regulators; scenario handout
Tech Log Pairs: Personal Impacts
Pairs track one day's technology use in communication, learning, and leisure, noting algorithmic influences. They discuss positives, negatives, and governance needs, then share anonymized examples in a whole-class mind map.
Evaluate whether existing regulatory frameworks — the EU AI Act, Singapore's Model AI Governance Framework — are structurally adequate to govern emergent AI capabilities or whether they address symptoms rather than the systemic causes of algorithmic harm.
Facilitation Tip: With Tech Log Pairs: Personal Impacts, model how to log one day's app usage before analyzing the data for patterns and surprises.
Setup: Pairs seated together with access to their tech-use logs
Materials: Tech-use log sheets, board or shared document for the class mind map
Teaching This Topic
Teachers should prioritize real-world examples that resonate with students' experiences, such as TikTok feeds or Spotify playlists. Avoid lecturing on algorithms' inner workings; instead, use activities that let students uncover biases themselves. Students grasp ethical trade-offs better when they trace consequences through concrete, relatable scenarios rather than abstract theory.
What to Expect
Success is measured when students move from acknowledging tech's influence to critiquing its mechanisms and consequences. They should articulate trade-offs between convenience and ethics and take ownership of their roles in shaping responsible tech use.
Watch Out for These Misconceptions
Common Misconception: During Debate Carousel: Regulatory Frameworks, watch for students assuming algorithms are neutral because they 'just process data.'
What to Teach Instead
During the carousel, direct students to examine the training data and design choices mentioned in their station materials, asking them to identify which human biases might have shaped the algorithm's outputs.
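For teachers who want a concrete demonstration, the point can be made with a short sketch (hypothetical data, not from any real lender): an algorithm that "just processes data" still reproduces whatever bias is baked into its training records. Here a toy loan model learns historical approval rates per neighborhood, and two applicants with identical incomes get different outcomes.

```python
from collections import defaultdict

# Hypothetical historical loan decisions. "Neighborhood" acts as a proxy
# variable: past human decisions approved area A far more often than area B,
# regardless of income.
history = [
    {"neighborhood": "A", "income": 50, "approved": True},
    {"neighborhood": "A", "income": 30, "approved": True},
    {"neighborhood": "A", "income": 40, "approved": True},
    {"neighborhood": "B", "income": 50, "approved": False},
    {"neighborhood": "B", "income": 60, "approved": False},
    {"neighborhood": "B", "income": 40, "approved": True},
]

def train(records):
    """'Learn' the historical approval rate for each neighborhood."""
    counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approvals, total]
    for r in records:
        counts[r["neighborhood"]][0] += r["approved"]
        counts[r["neighborhood"]][1] += 1
    return {n: approvals / total for n, (approvals, total) in counts.items()}

def predict(rates, applicant):
    """Approve if the learned rate for the applicant's neighborhood is >= 0.5."""
    return rates[applicant["neighborhood"]] >= 0.5

rates = train(history)
# Two applicants with identical incomes receive different decisions:
applicant_a = {"neighborhood": "A", "income": 50}
applicant_b = {"neighborhood": "B", "income": 50}
print(predict(rates, applicant_a), predict(rates, applicant_b))  # True False
```

The model never sees a "bias" variable; it simply mirrors the skewed human decisions it was trained on, which is exactly the design-choice discussion the station materials should prompt.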
Common Misconception: During Debate Carousel: Regulatory Frameworks, watch for oversimplified claims that 'regulations always slow innovation.'
What to Teach Instead
During the carousel, have groups use Singapore's Model AI Governance Framework as a counterexample to test their assumptions about regulation's impact on progress.
Common Misconception: During Role-Play Chain: Assigning Responsibility, watch for students blaming users exclusively for tech harms.
What to Teach Instead
During the role-play, provide role cards that outline each stakeholder's limited control, forcing students to grapple with the shared responsibility for outcomes in complex systems.
Assessment Ideas
After Role-Play Chain: Assigning Responsibility, pose the loan application scenario and ask students to refer to their role-play notes to justify which stakeholder bears the most responsibility, citing governance principles from their discussion.
After Tech Log Pairs: Personal Impacts, ask students to write down one AI capability they observed in their daily tech use and one governance challenge it presents, then reference the EU AI Act or SG Model AI Governance Framework to explain which framework better addresses the challenge.
During Case Study Walk: Algorithmic Harms, present the biased hiring algorithm scenario and ask students to identify two distinct parties (developer, deployer, regulator) in their case study notes, explaining their role in either causing or mitigating the harm based on evidence from their walk.
Extensions & Scaffolding
- Challenge students who finish early to create a mock policy brief summarizing findings from the Debate Carousel for a local school board.
- For students who struggle, provide sentence stems during the Case Study Walk (e.g., 'This algorithm harms because...') to scaffold their analysis.
- Deeper exploration: Invite a guest speaker (e.g., a software developer or ethicist) to discuss their work and answer student-generated questions after the Role-Play Chain.
Key Vocabulary
| Term | Definition |
| --- | --- |
| Algorithmic Accountability | The principle that AI systems and their developers/deployers should be answerable for the outcomes and impacts of algorithmic decisions. |
| AI Governance Framework | A set of principles, policies, and practices designed to guide the responsible development and deployment of artificial intelligence. |
| Emergent AI Capabilities | New and unforeseen abilities or behaviors of AI systems that arise from complex interactions, often beyond initial design intentions. |
| Accountability Deficit | A gap where responsibility for harm caused by an AI system cannot be clearly assigned to a specific individual or entity. |
| Distributed Responsibility | The concept that responsibility for AI-related harms may be shared across multiple parties, including developers, deployers, users, and regulators. |