Introduction to Artificial Intelligence: Activities & Teaching Strategies
Active learning works well for AI and ethics because students often see the topic as abstract or theoretical until they interact with real-world examples. When students debate, investigate data, and explore local applications, they connect abstract concepts to concrete dilemmas and decisions.
Learning Objectives
1. Explain the fundamental principles of Artificial Intelligence and Machine Learning using appropriate terminology.
2. Analyze the current impact of AI technologies on at least three different industries, citing specific examples.
3. Differentiate between strong AI and weak AI by comparing their capabilities and providing concrete examples.
4. Critique potential ethical challenges arising from AI applications in areas such as autonomous systems or data privacy.
Formal Debate: The Trolley Problem for Self-Driving Cars
Students are presented with a scenario where an AI car must choose between two harmful outcomes. They must debate which 'logic' the car should follow and who is responsible for the final decision: the programmer, the owner, or the car itself.
Objective: Explain the basic concepts of Artificial Intelligence and Machine Learning.
Facilitation Tip: During the Trolley Problem debate, assign roles clearly so students must defend perspectives they may personally oppose.
Setup: Two teams facing each other, audience seating for the rest
Materials: Debate proposition card, Research brief for each side, Judging rubric for audience, Timer
Inquiry Circle: Bias in the Data
Groups are given a 'dataset' (e.g., photos for a facial recognition system) that is heavily skewed toward one demographic. They must identify how this bias would affect the AI's performance and suggest ways to make the dataset more inclusive.
Objective: Analyze how AI is currently impacting various industries and daily routines.
Facilitation Tip: For the Bias in Data activity, provide a small subset of structured data so students can see bias emerge in real time.
Setup: Groups at tables with access to source materials
Materials: Source material collection, Inquiry cycle worksheet, Question generation protocol, Findings presentation template
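For teachers who want to let students see the skew quantitatively, the effect in this activity can be sketched with a short, self-contained Python demo. The dataset, group names, and the deliberately naive "model" below are all hypothetical stand-ins for the activity's photo set, not a real facial recognition system:

```python
from collections import Counter

# Hypothetical toy dataset mirroring the activity's skewed photo set:
# each record is (demographic_group, correct_label). Group "A" makes
# up 90% of the training data; group "B" only 10%.
training_data = [("A", "match")] * 90 + [("B", "no_match")] * 10

# A deliberately naive "model": always predict the single most common
# label seen in training, ignoring the group entirely.
most_common_label = Counter(label for _, label in training_data).most_common(1)[0][0]

# Evaluate on a balanced test set where each group's true label
# matches what that group looked like in training.
test_data = [("A", "match")] * 50 + [("B", "no_match")] * 50

def accuracy_for(group):
    rows = [(g, y) for g, y in test_data if g == group]
    correct = sum(1 for _, y in rows if y == most_common_label)
    return correct / len(rows)

print(accuracy_for("A"))  # 1.0 — the over-represented group is served well
print(accuracy_for("B"))  # 0.0 — the under-represented group is always wrong
```

Students can then rebalance `training_data` toward a 50/50 split and observe how the learned behavior, and each group's accuracy, changes, which connects directly to the activity's question of how to make the dataset more inclusive.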
Gallery Walk: AI in Singapore
Students research different ways AI is used in Singapore (e.g., Changi Airport, Smart HDB towns). They create posters highlighting one benefit and one ethical concern for each application, then walk around to critique each other's findings.
Objective: Differentiate between strong AI and weak AI with relevant examples.
Facilitation Tip: During the Gallery Walk, ask guiding questions like ‘Who benefits from this AI system?’ and ‘What data might be missing?’ to focus observations.
Setup: Wall space or tables arranged around room perimeter
Materials: Large paper/poster boards, Markers, Sticky notes for feedback
Teaching This Topic
Start with concrete examples students recognize, like targeted ads or chatbots, before introducing technical terms. Avoid overwhelming students with coding or algorithms; focus on decision-making and data choices. Research shows students grasp bias better when they manipulate biased datasets themselves rather than just reading about it.
What to Expect
Successful learning looks like students questioning assumptions, identifying bias in datasets, and articulating ethical trade-offs in AI applications. Students should move from passive acceptance of AI’s ‘neutrality’ to critical analysis of its societal impact.
Watch Out for These Misconceptions
Common Misconception: During the Bias in Data activity, watch for students who assume datasets are objective because they are presented in a spreadsheet.
What to Teach Instead
Use the activity to show how even neutral-looking data can encode historical biases, such as hiring data reflecting past discrimination. Ask students to identify which features might lead to biased outcomes.
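This point about proxy features can be made concrete with a tiny, hypothetical example: the records, field names, and `screen` function below are invented for illustration. No protected attribute appears anywhere in the data, yet a model that imitates past decisions still reproduces them through the `zip_code` proxy:

```python
# Hypothetical past hiring records: no protected attribute is present,
# but "zip_code" can act as a proxy for one if neighborhoods are
# segregated along demographic lines.
past_hires = [
    {"zip_code": "10001", "hired": True},
    {"zip_code": "10001", "hired": True},
    {"zip_code": "10001", "hired": True},
    {"zip_code": "20002", "hired": False},
    {"zip_code": "20002", "hired": False},
]

# "Training" here is just memorizing the historical hire rate per zip code.
hire_rate = {}
for row in past_hires:
    zc = row["zip_code"]
    outcomes = [r["hired"] for r in past_hires if r["zip_code"] == zc]
    hire_rate[zc] = sum(outcomes) / len(outcomes)

def screen(applicant):
    # Replays the historical pattern — and with it, any past discrimination.
    return hire_rate.get(applicant["zip_code"], 0) >= 0.5

print(screen({"zip_code": "10001"}))  # True
print(screen({"zip_code": "20002"}))  # False
```

Asking students which feature drove the decision, and what that feature might stand in for, is exactly the "which features might lead to biased outcomes" discussion the activity calls for.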
Common Misconception: During the Future Careers brainstorming session, watch for students who assume AI will eliminate jobs entirely.
What to Teach Instead
Use the brainstorming session to categorize tasks as automatable or augmentable. Have students map how AI tools might create new roles requiring human oversight, like AI trainers or ethics auditors.
Assessment Ideas
After the Bias in Data activity, pose: ‘Imagine an AI system screens job applications. What are two potential benefits and two potential ethical concerns? Specify types of bias, such as selection bias or confirmation bias.’
During the Gallery Walk, give students a short exit slip with AI scenarios (e.g., a chatbot, a recommendation engine). Ask them to classify each as weak or strong AI and justify their choice in one sentence.
After the Trolley Problem debate, ask students to write one industry impacted by AI and one ethical dilemma it raises. Have them list one question they still have about AI governance.
Extensions & Scaffolding
- Challenge: Ask students to design a new AI application for a local industry, including potential biases and mitigations.
- Scaffolding: Provide sentence starters for the debate or a partially completed bias analysis table.
- Deeper: Explore Singapore’s AI governance frameworks and compare them to other countries’ approaches.
Key Vocabulary
| Term | Definition |
| --- | --- |
| Artificial Intelligence (AI) | The simulation of human intelligence processes by machines, especially computer systems, enabling them to perform tasks that typically require human intellect. |
| Machine Learning (ML) | A subset of AI where computer systems learn from data, identify patterns, and make decisions with minimal human intervention, improving performance over time. |
| Algorithm | A set of rules or instructions followed by a computer to solve a problem or perform a computation, forming the basis of AI and ML systems. |
| Bias (in AI) | Systematic prejudice in an AI system's output, often stemming from biased training data or flawed algorithm design, leading to unfair or discriminatory results. |
| Weak AI (Narrow AI) | AI designed and trained for a specific task, such as virtual assistants or image recognition software, lacking general cognitive abilities. |
| Strong AI (General AI) | A hypothetical type of AI that possesses the intellectual capability of a human being, able to understand, learn, and apply knowledge across a wide range of tasks. |
More in Impacts of Computing on Society
- Bias in AI and Algorithmic Fairness: Students will investigate how biases can be embedded in AI systems and discuss strategies for promoting fairness and equity.
- AI and Automation: Job Displacement and New Opportunities: Students will discuss the economic impact of AI and automation, considering job losses and the creation of new roles.
- Ethical Considerations in AI Use: Students will discuss the ethical implications of AI in various contexts, focusing on fairness, privacy, and accountability in its application.
- Access to Technology and Infrastructure: Students will examine the factors contributing to the digital divide, including access to hardware, software, and internet connectivity.
- Digital Literacy and Skills Gap: Students will discuss the importance of digital literacy and the impact of varying skill levels on participation in the digital economy.