
Ethical Considerations in AI Use

Students will discuss the ethical implications of AI in various contexts, focusing on fairness, privacy, and accountability in its application.

MOE Syllabus Outcomes: Ethics and Social Issues (S3)

About This Topic

Ethical considerations in AI use guide students to examine fairness, privacy, and accountability in everyday applications. At Secondary 3, they analyze scenarios such as biased facial recognition systems that disadvantage certain ethnic groups, social media algorithms that amplify misinformation, and hiring tools that perpetuate gender imbalances. These discussions highlight how AI decisions impact society and connect to students' experiences with recommendation systems and smart devices.

This topic aligns with the MOE Computing curriculum's focus on Ethics and Social Issues within Impacts of Computing on Society. Students identify ethical questions in daily life, stress the need for transparency in AI decision-making, and propose practical solutions like diverse training data or human oversight. Such work fosters critical thinking and civic responsibility, preparing them for informed participation in a tech-driven world.

Active learning suits this topic well because ethical dilemmas are nuanced and context-dependent. Role-plays, debates, and collaborative solution design help students internalize principles through empathy-building and peer persuasion, making abstract ideas personal and actionable.

Key Questions

  1. What ethical questions arise from the use of AI in daily life?
  2. Why are transparency and accountability important when AI makes decisions?
  3. How can ethical concerns in simple AI applications be mitigated?

Learning Objectives

  • Analyze AI decision-making processes in provided scenarios to identify potential biases.
  • Evaluate the ethical implications of AI use in terms of fairness, privacy, and accountability.
  • Compare different approaches to ensuring transparency and accountability in AI systems.
  • Propose specific design modifications or policy changes to mitigate ethical concerns in a given AI application.

Before You Start

Introduction to Artificial Intelligence Concepts

Why: Students need a basic understanding of what AI is and how it functions to discuss its ethical implications.

Data Representation and Processing

Why: Understanding how data is collected and processed is fundamental to grasping issues of bias and privacy in AI.

Key Vocabulary

Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Data Privacy: The protection of personal information from unauthorized access, use, disclosure, disruption, modification, or destruction.
Accountability: The obligation to accept responsibility for one's actions and decisions, especially when AI systems make choices that affect individuals.
Transparency: The principle that the workings and decisions of AI systems should be understandable and explainable to users and stakeholders.

Watch Out for These Misconceptions

Common Misconception: AI systems are always neutral and unbiased.

What to Teach Instead

AI systems reflect the biases present in their human-generated training data. Group discussions of real cases such as the COMPAS recidivism tool reveal these patterns, while active debunking through data audits helps students grasp that neutrality requires deliberate design choices.
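A classroom data audit can make this concrete. The sketch below uses invented loan decisions (hypothetical data, not from COMPAS or any real system) to compute per-group approval rates and a disparate-impact ratio:

```python
# Minimal sketch of a classroom data audit; all data is invented.
from collections import defaultdict

# Hypothetical (group, approved?) loan decisions.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Return the fraction of approved decisions for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Disparate-impact ratio: lowest approval rate divided by highest.
# A common rule of thumb flags ratios below 0.8 for review.
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))
```

Students can swap in their own scenario data and discuss whether a low ratio proves unfairness or merely flags a pattern worth investigating.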

Common Misconception: Privacy concerns are minor compared to AI convenience.

What to Teach Instead

Data breaches cause lasting harms such as identity theft. Role-plays simulating leaks build urgency: students experience the 'victim' perspective and collaborate on consent models, shifting their views toward a more balanced weighing of convenience and privacy.

Common Misconception: AI developers bear no responsibility for misuse.

What to Teach Instead

Accountability runs along the chain from design to deployment. Debates on regulation clarify these shared duties, with peer challenges exposing gaps and active proposals of oversight mechanisms reinforcing collective responsibility.


Real-World Connections

  • Hiring platforms like HireVue use AI to screen job applicants. Ethical concerns arise if the AI is trained on historical data that reflects past discriminatory hiring practices, potentially disadvantaging candidates from underrepresented groups.
  • Social media companies like Meta (Facebook) use AI algorithms to curate news feeds. Issues of fairness and accountability emerge when these algorithms amplify misinformation or create echo chambers, impacting public discourse and individual well-being.
  • Autonomous vehicle developers like Waymo face ethical dilemmas regarding AI decision-making in unavoidable accident scenarios. Determining who or what the AI prioritizes in such critical moments raises questions of accountability and societal values.

Assessment Ideas

Discussion Prompt

Present students with a scenario: 'An AI system is used to approve or deny loan applications. What are three potential ethical issues that could arise from its use?' Facilitate a class discussion, prompting students to consider fairness, bias in data, and the need for human oversight.

Quick Check

Provide students with a short case study of an AI application (e.g., a facial recognition system). Ask them to write down one specific way the AI's decision-making process might lack transparency and one suggestion for how to improve it.

Exit Ticket

On an index card, ask students to define 'algorithmic bias' in their own words and provide one real-world example where it has had a negative impact.

Frequently Asked Questions

What are key ethical issues in AI for Secondary 3 students?
Fairness addresses biases in algorithms affecting hiring or policing; privacy covers data collection in apps and devices; accountability ensures humans oversee AI decisions. Singapore contexts such as the National AI Strategy emphasize these principles, helping students link global issues to local policies and propose mitigations such as bias audits.
How can active learning help teach AI ethics?
Active methods like debates and role-plays make ethics tangible by letting students argue positions, empathize with stakeholders, and co-create solutions. In pairs or groups, they dissect cases such as biased loan approvals, revealing nuances discussion alone misses. This builds ownership, critical evaluation, and persuasion skills essential for ethical reasoning.
What are examples of AI fairness problems in daily life?
Social media feeds prioritize engaging but divisive content, skewing public opinion. Facial recognition systems show higher error rates on darker skin tones, risking unfair surveillance. Students explore these cases through group analysis, connecting them to Singapore's diverse society and learning how diverse training data improves fairness.
How can teachers promote AI accountability in class?
Use scenarios where AI errs, such as medical diagnostics, and have students assign responsibility across developers, users, and regulators. Collaborative mapping of accountability chains, followed by solution pitches, clarifies transparency needs such as explainable AI. This ties to MOE standards by giving students practice in real-world ethical advocacy.