Computing · Secondary 3 · Impacts of Computing on Society · Semester 2

Introduction to Artificial Intelligence

Students will gain a foundational understanding of AI, machine learning, and their applications in daily life.

MOE Syllabus Outcomes: Ethics and Social Issues - S3

About This Topic

Artificial Intelligence (AI) and ethics explore the societal impact of machine learning and automated decision-making. In the Secondary 3 curriculum, students look beyond the 'magic' of AI to understand how algorithms are trained on data and how that data can contain biases. This topic covers the ethical dilemmas of AI, such as accountability for autonomous vehicle accidents and the fairness of AI in hiring or policing.

Students also discuss the impact of AI on the future of work, particularly in a highly automated economy like Singapore. Because there are often no 'right' answers, only different ethical frameworks, the topic is well suited to structured debates and role-plays; it comes alive when students debate real-world case studies and collaborate to design 'Ethical AI' guidelines.

Key Questions

  1. What are the basic concepts of Artificial Intelligence and Machine Learning?
  2. How is AI currently impacting various industries and daily routines?
  3. What distinguishes strong AI from weak AI, and what are relevant examples of each?

Learning Objectives

  • Explain the fundamental principles of Artificial Intelligence and Machine Learning using appropriate terminology.
  • Analyze the current impact of AI technologies on at least three different industries, citing specific examples.
  • Differentiate between strong AI and weak AI by comparing their capabilities and providing concrete examples.
  • Critique potential ethical challenges arising from AI applications in areas such as autonomous systems or data privacy.

Before You Start

Introduction to Programming Concepts

Why: Students need a basic understanding of how instructions are given to computers to grasp the concept of algorithms that underpin AI.

Data Representation and Analysis

Why: Understanding how data is structured and analyzed is crucial for comprehending how machine learning models are trained.

Key Vocabulary

Artificial Intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems, enabling them to perform tasks that typically require human intellect.
Machine Learning (ML): A subset of AI in which computer systems learn from data, identify patterns, and make decisions with minimal human intervention, improving performance over time.
Algorithm: A set of rules or instructions followed by a computer to solve a problem or perform a computation, forming the basis of AI and ML systems.
Bias (in AI): Systematic prejudice in an AI system's output, often stemming from biased training data or flawed algorithm design, leading to unfair or discriminatory results.
Weak AI (Narrow AI): AI designed and trained for a specific task, such as virtual assistants or image recognition software, lacking general cognitive abilities.
Strong AI (General AI): A hypothetical type of AI that possesses the intellectual capability of a human being, able to understand, learn, and apply knowledge across a wide range of tasks.
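To make "learning from data" concrete for students, a minimal sketch in Python can help. The example below uses an invented toy dataset (the messages and labels are made up for illustration): the program is never told a rule like "the word 'free' means spam"; it simply counts words in labelled examples and classifies new messages by which label's words match best.

```python
# Toy illustration of machine learning as pattern-finding.
# The "training data" below is invented for classroom use.
from collections import Counter

training_data = [
    ("win a free prize now", "spam"),
    ("free money claim now", "spam"),
    ("meeting agenda for monday", "not spam"),
    ("lunch with the team tomorrow", "not spam"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text):
    """Predict the label whose training words best match the message."""
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("claim your free prize"))    # spam
print(classify("agenda for the meeting"))   # not spam
```

The point for discussion: the 'knowledge' in this system lives entirely in the counted data, not in hand-written rules, which is why the quality of the training data matters so much.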

Watch Out for These Misconceptions

Common Misconception: AI is 'neutral' and cannot be biased because it is a machine.

What to Teach Instead

AI is only as good as the data it is trained on: if the data is biased, the AI's outputs will be too. A sorting activity with deliberately biased data helps students see how 'Garbage In' leads to 'Biased Out'.
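The 'Garbage In, Biased Out' idea can also be demonstrated with a short hedged sketch. The dataset below is entirely invented: past hiring decisions favoured applicants from group A, so a 'model' that simply learns the historical hire rate per group reproduces that unfairness for equally qualified candidates.

```python
# Invented historical data: (applicant group, was hired?).
# Group A was hired far more often than group B in the past.
biased_history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def hire_rate(group):
    """Fraction of past applicants from this group who were hired."""
    outcomes = [hired for g, hired in biased_history if g == group]
    return sum(outcomes) / len(outcomes)

def model_recommends(group):
    # The "model" recommends hiring when the historical rate exceeds 50%,
    # so equally qualified candidates get different answers by group alone.
    return hire_rate(group) > 0.5

print(model_recommends("A"))  # True  - learned from skewed data
print(model_recommends("B"))  # False - same skew, opposite outcome
```

Students can then be asked: which part of this system is 'biased', the code or the data? The logic treats both groups identically; the unfairness enters entirely through the history it was trained on.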

Common Misconception: AI will eventually replace all human jobs.

What to Teach Instead

AI is more likely to change jobs than eliminate them, automating routine tasks while creating new roles. A 'Future Careers' brainstorming session helps students see AI as a tool that requires human oversight.


Real-World Connections

  • In healthcare, AI algorithms analyze medical images like X-rays and MRIs to assist radiologists in detecting diseases such as cancer earlier and more accurately. Companies like Google Health are developing AI tools for diagnostic support.
  • The financial sector uses AI for fraud detection, analyzing transaction patterns in real-time to identify suspicious activity for banks like DBS. AI also powers algorithmic trading on stock exchanges.
  • Autonomous vehicle developers, such as Waymo and Tesla, are using AI to enable cars to perceive their environment, make driving decisions, and navigate roads without human input, though ethical considerations around accidents remain.

Assessment Ideas

Discussion Prompt

Pose the following question to the class: 'Imagine an AI system is used to screen job applications. What are two potential benefits and two potential ethical concerns related to using AI for this purpose? Be specific about the types of bias that could arise.'

Quick Check

Provide students with short scenarios describing AI applications (e.g., a chatbot for customer service, a recommendation engine for streaming services). Ask them to identify whether each scenario represents weak AI or strong AI and briefly justify their answer.

Exit Ticket

Ask students to write down one industry significantly impacted by AI and one specific way AI is used within that industry. Then, have them list one question they still have about AI or its societal implications.

Frequently Asked Questions

What is algorithmic bias?
Algorithmic bias occurs when an AI system produces results that are systematically unfair to certain groups of people. This usually happens because the data used to train the AI was not representative or contained existing human prejudices.
Who is responsible when an AI makes a mistake?
This is a major legal and ethical question. Depending on the situation, responsibility could lie with the developers who wrote the code, the company that deployed the AI, or even the users. Laws are still being written to address this 'accountability gap'.
How can active learning help students understand AI ethics?
Active learning, like the 'Trolley Problem' debate, forces students to move beyond technical definitions and grapple with the human impact of technology. By defending a specific ethical position, they learn to consider multiple perspectives and understand that technology is never separate from the values of the people who create it.
What is 'Explainable AI' (XAI)?
Explainable AI refers to AI systems designed so that humans can understand and trust the results. Instead of being a 'black box' where we don't know why a decision was made, XAI provides a clear rationale, which is essential for ethical and transparent use.