Introduction to Artificial Intelligence
Students will gain a foundational understanding of AI, machine learning, and their applications in daily life.
About This Topic
Artificial Intelligence (AI) and ethics explore the societal impact of machine learning and automated decision-making. In the Secondary 3 curriculum, students look beyond the 'magic' of AI to understand how algorithms are trained on data and how that data can contain biases. This topic covers the ethical dilemmas of AI, such as accountability for autonomous vehicle accidents and the fairness of AI in hiring or policing.
Students also discuss the impact of AI on the future of work, particularly in a highly automated economy like Singapore. Because there are often no 'right' answers, only different ethical frameworks, the topic is well suited to structured debates and role-plays. It comes alive when students debate real-world case studies and collaborate to design 'Ethical AI' guidelines.
Key Questions
- What are the basic concepts of Artificial Intelligence and Machine Learning?
- How is AI currently impacting various industries and daily routines?
- What distinguishes strong AI from weak AI, and what are relevant examples of each?
Learning Objectives
- Explain the fundamental principles of Artificial Intelligence and Machine Learning using appropriate terminology.
- Analyze the current impact of AI technologies on at least three different industries, citing specific examples.
- Differentiate between strong AI and weak AI by comparing their capabilities and providing concrete examples.
- Critique potential ethical challenges arising from AI applications in areas such as autonomous systems or data privacy.
Before You Start
Basic programming concepts. Why: Students need a basic understanding of how instructions are given to computers to grasp the concept of algorithms that underpin AI.
Data handling and analysis. Why: Understanding how data is structured and analyzed is crucial for comprehending how machine learning models are trained.
Key Vocabulary
| Term | Definition |
| --- | --- |
| Artificial Intelligence (AI) | The simulation of human intelligence processes by machines, especially computer systems, enabling them to perform tasks that typically require human intellect. |
| Machine Learning (ML) | A subset of AI where computer systems learn from data, identify patterns, and make decisions with minimal human intervention, improving performance over time. |
| Algorithm | A set of rules or instructions followed by a computer to solve a problem or perform a computation, forming the basis of AI and ML systems. |
| Bias (in AI) | Systematic prejudice in an AI system's output, often stemming from biased training data or flawed algorithm design, leading to unfair or discriminatory results. |
| Weak AI (Narrow AI) | AI designed and trained for a specific task, such as virtual assistants or image recognition software, lacking general cognitive abilities. |
| Strong AI (General AI) | A hypothetical type of AI that possesses the intellectual capability of a human being, able to understand, learn, and apply knowledge across a wide range of tasks. |
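The distinction between a hand-written algorithm and machine learning can be made concrete with a short classroom sketch. The following Python example is a hypothetical illustration (the study-hours data and labels are invented): a 1-nearest-neighbour classifier that 'learns' pass/fail predictions from examples rather than from explicit rules.

```python
# A minimal illustration of "learning from data": a 1-nearest-neighbour
# classifier predicts pass/fail from hours studied and hours slept.
# The examples and labels below are invented for illustration.

def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(train, key=lambda example: dist(example[0], query))
    return closest[1]

# Each example: ((hours_studied, hours_slept), label)
training_data = [
    ((8, 7), "pass"),
    ((2, 4), "fail"),
    ((7, 8), "pass"),
    ((1, 6), "fail"),
]

print(nearest_neighbour(training_data, (6, 7)))  # -> "pass"
```

No rule such as 'pass if hours studied > 5' is ever written; the prediction comes entirely from the patterns in the training data, which is the core idea behind machine learning.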
Watch Out for These Misconceptions
Common Misconception: AI is 'neutral' and cannot be biased because it is a machine.
What to Teach Instead
AI is only as good as the data it is trained on. If the data is biased, the AI will be too. A 'Sorting' activity with biased data helps students see how 'Garbage In' leads to 'Biased Out'.
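A 'Garbage In, Biased Out' demonstration can be scripted in a few lines. This is a hypothetical sketch with invented hiring records: a trivially simple model that learns the most common historical outcome per group will faithfully reproduce whatever prejudice is in its training data.

```python
# "Garbage in, biased out": a majority-vote model trained on historical
# hiring decisions that were skewed against group B applicants.
# All group names and labels are invented for illustration.
from collections import Counter

def train_majority(labels):
    """'Learn' by memorising the most common outcome in the data."""
    return Counter(labels).most_common(1)[0][0]

# Historical decisions the model is trained on (biased input).
history = {
    "group_a": ["hire", "hire", "hire", "reject"],
    "group_b": ["reject", "reject", "reject", "hire"],
}

model = {group: train_majority(labels) for group, labels in history.items()}
print(model)  # -> {'group_a': 'hire', 'group_b': 'reject'}
```

Nothing in the code mentions any group's qualities; the discrimination comes entirely from the biased training records, which is the point of the sorting activity.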
Common Misconception: AI will eventually replace all human jobs.
What to Teach Instead
AI is more likely to change jobs than eliminate them, automating routine tasks while creating new roles. A 'Future Careers' brainstorming session helps students see AI as a tool that requires human oversight.
Active Learning Ideas
Formal Debate: The Trolley Problem for Self-Driving Cars
Students are presented with a scenario where an AI car must choose between two harmful outcomes. They must debate which 'logic' the car should follow and who is responsible for the final decision: the programmer, the owner, or the car itself.
Inquiry Circle: Bias in the Data
Groups are given a 'dataset' (e.g., photos for a facial recognition system) that is heavily skewed toward one demographic. They must identify how this bias would affect the AI's performance and suggest ways to make the dataset more inclusive.
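Students could run a quick balance check on the activity dataset before discussing its effects. This is a hypothetical sketch with invented photo counts; it simply tallies how each demographic group is represented and flags under-representation.

```python
# How balanced is the training data? A simple composition check for
# the Inquiry Circle dataset. The counts below are invented: 90 photos
# of one group versus 10 of another, out of 100 total.
from collections import Counter

photos = ["group_a"] * 90 + ["group_b"] * 10

counts = Counter(photos)
total = sum(counts.values())
for group, n in sorted(counts.items()):
    share = n / total
    flag = "  <- under-represented" if share < 0.3 else ""
    print(f"{group}: {n} photos ({share:.0%}){flag}")
```

A facial recognition model trained on this dataset would see far fewer examples of group B, so its error rate for that group would typically be higher, which is exactly the disparity students are asked to identify.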
Gallery Walk: AI in Singapore
Students research different ways AI is used in Singapore (e.g., Changi Airport, Smart HDB towns). They create posters highlighting one benefit and one ethical concern for each application, then walk around to critique each other's findings.
Real-World Connections
- In healthcare, AI algorithms analyze medical images like X-rays and MRIs to assist radiologists in detecting diseases such as cancer earlier and more accurately. Companies like Google Health are developing AI tools for diagnostic support.
- The financial sector uses AI for fraud detection, analyzing transaction patterns in real-time to identify suspicious activity for banks like DBS. AI also powers algorithmic trading on stock exchanges.
- Autonomous vehicle developers, such as Waymo and Tesla, are using AI to enable cars to perceive their environment, make driving decisions, and navigate roads without human input, though ethical considerations around accidents remain.
Assessment Ideas
Pose the following question to the class: 'Imagine an AI system is used to screen job applications. What are two potential benefits and two potential ethical concerns related to using AI for this purpose? Be specific about the types of bias that could arise.'
Provide students with short scenarios describing AI applications (e.g., a chatbot for customer service, a recommendation engine for streaming services). Ask them to identify whether each scenario represents weak AI or strong AI and briefly justify their answer.
Ask students to write down one industry significantly impacted by AI and one specific way AI is used within that industry. Then, have them list one question they still have about AI or its societal implications.
Frequently Asked Questions
What is algorithmic bias?
Who is responsible when an AI makes a mistake?
How can active learning help students understand AI ethics?
What is 'Explainable AI' (XAI)?
More in Impacts of Computing on Society
Bias in AI and Algorithmic Fairness
Students will investigate how biases can be embedded in AI systems and discuss strategies for promoting fairness and equity.
AI and Automation: Job Displacement and New Opportunities
Students will discuss the economic impact of AI and automation, considering job losses and the creation of new roles.
Ethical Considerations in AI Use
Students will discuss the ethical implications of AI in various contexts, focusing on fairness, privacy, and accountability in its application.
Access to Technology and Infrastructure
Students will examine the factors contributing to the digital divide, including access to hardware, software, and internet connectivity.
Digital Literacy and Skills Gap
Students will discuss the importance of digital literacy and the impact of varying skill levels on participation in the digital economy.
Inclusive Technology Design
Students will explore principles of inclusive design, ensuring technology is accessible to people with diverse needs and abilities.