Computing · Year 6

Active learning ideas

Ethical Considerations of AI

Active learning helps Year 6 students grasp abstract ethical concepts by making them concrete and personal. When students debate, role-play, and design guidelines, they move beyond passive listening to active reasoning and empathy. These approaches align with how children learn best—by doing, discussing, and collaborating—especially on topics that blend logic with moral reflection.

National Curriculum Attainment Targets: KS2: Computing - Digital Literacy · KS2: Computing - Online Safety
30–45 min · Pairs → Whole Class · 4 activities

Activity 01

Socratic Seminar · 45 min · Small Groups

Debate Carousel: AI in Health Decisions

Prepare four scenario cards on AI uses in health, such as automated diagnosis. Divide the class into small groups; each group has 10 minutes to prepare pro and con arguments. Groups then rotate, debating each scenario with another group and noting new points on their worksheets.

Critique the idea of AI making decisions about human health or safety.

Facilitation Tip: During the Debate Carousel, assign clear roles (e.g., moderator, data scientist, patient advocate) to ensure every student contributes and stays engaged in the conversation.

What to look for: Present students with a scenario: An AI recommends denying a student access to a specialized school program based on predicted future academic performance. Ask: Who is responsible if the AI is wrong? What information should the AI have access to? What information should it NOT have access to?

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 02

Socratic Seminar · 35 min · Pairs

Dilemma Role-Play: Safety Scenarios

Assign each pair roles such as doctor, patient, or AI developer in safety dilemmas, for example an AI controlling emergency vehicles. Pairs act out three-minute skits, then switch roles. The whole class votes on the best resolutions and discusses the outcomes.

Predict potential ethical dilemmas that could arise from advanced AI.

Facilitation Tip: In the Dilemma Role-Play, provide a brief ‘script starter’ for each scenario to help students begin their responses, while leaving their emotions and values unscripted.

What to look for: Provide students with a short paragraph describing an AI application (e.g., AI assisting doctors with diagnoses). Ask them to identify one potential ethical concern and one potential benefit, writing their answers on a sticky note.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 03

Socratic Seminar · 40 min · Small Groups

Guideline Workshop: School AI Rules

In small groups, students brainstorm and draft five ethical guidelines for school AI tools, such as chatbots. Groups present their drafts to the class for feedback, then the class votes and refines the guidelines into a shared poster.

Design a set of ethical guidelines for the use of AI in schools.

Facilitation Tip: During the Guideline Workshop, give groups a simple three-column table (guideline, reason, example) to record their draft rules, keeping the task structured and focused.

What to look for: Students work in small groups to draft one ethical guideline for AI use in schools. After drafting, groups swap their guideline with another group. Each group provides feedback on clarity and feasibility, suggesting one improvement.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

Activity 04

Socratic Seminar · 30 min · Individual

Bias Hunt: Dataset Analysis

Provide printed datasets containing built-in biases. Working individually or in pairs, students identify unfair patterns, then share their findings in a whole-class discussion and propose fixes.
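For teachers comfortable with a little Python, a minimal sketch like the one below could generate a printable toy dataset for this activity. The file name, column names, and score thresholds are all invented for illustration; the point is simply that the "AI decision" column secretly applies a different bar to each group, giving students a concrete pattern to hunt for.

```python
import csv

# Toy dataset for the Bias Hunt. The "AI decision" column secretly favours
# one neighbourhood: North pupils are approved at a much lower score
# threshold than South pupils. All names and thresholds are invented.
SCORES = [55, 62, 71, 78, 83, 88, 91, 95, 67, 74]

rows = [["pupil", "neighbourhood", "quiz_score", "ai_recommends_program"]]
for group, threshold in [("North", 60), ("South", 90)]:
    for i, score in enumerate(SCORES):
        approved = "yes" if score >= threshold else "no"
        rows.append([f"{group[0]}{i + 1:02d}", group, score, approved])

# Write the table out so it can be printed and handed to students.
with open("bias_hunt_dataset.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

With these numbers, nine of ten North pupils are approved but only two of ten South pupils, even though both groups have identical scores, which is exactly the kind of unfair pattern students should spot.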

Identify bias in datasets and explain how biased data can lead to unfair AI decisions.

What to look for: Students who can point to a specific pattern in the dataset (e.g., a missing or under-represented group) and explain how it could skew an AI's decisions, then propose a concrete fix.

Analyze · Evaluate · Create · Social Awareness · Relationship Skills

A few notes on teaching this unit

Experienced teachers approach this topic by grounding abstract ethical questions in relatable, real-world contexts that students can explore through structured interaction. They avoid letting debates turn into opinion-based arguments by requiring students to cite evidence from datasets or scenarios. Teachers also model ethical humility, showing students that responsible AI use includes recognizing limitations and seeking human oversight. Research suggests that guided role-plays and collaborative guideline writing help students internalize ethical standards more deeply than lectures alone.

By the end of these activities, students will confidently identify ethical dilemmas in AI, justify their positions with evidence, and propose thoughtful guidelines. Success looks like students asking critical questions, recognizing bias, and supporting their ideas with examples from the activities. They should also show respect for diverse viewpoints during discussions and debates.


Watch Out for These Misconceptions

  • During the Debate Carousel, watch for students assuming AI decisions are always neutral and fair.

    During the Debate Carousel, pause the discussion and ask groups to examine sample diagnostic or traffic datasets. Have them highlight any patterns or missing groups that might lead to unfair outcomes, then revisit their debate points with this evidence in mind.

  • During the Dilemma Role-Play, watch for students believing AI can fully replace human judgment in ethics.

    During the Dilemma Role-Play, design scenarios where the AI’s recommendation leads to a negative outcome. After the role-play, facilitate a reflection where students identify what human qualities (empathy, context) were missing, and propose how humans should oversee AI in such cases.

  • During the Guideline Workshop, watch for students assuming privacy concerns disappear with advanced AI.

    During the Guideline Workshop, ask students to draft one rule about data collection and one about consent. Provide a short case study (e.g., a student’s medical data shared without permission) and have students revise their guidelines to directly address the risks in the scenario.


Methods used in this brief