CCE · Secondary 1 · Ethical Reasoning and Decision Making · Semester 2

Bioethics and Technology: AI and Society

Exploring the ethical challenges posed by new technologies like AI and genetic engineering.

MOE Syllabus Outcomes

  • MOE: Ethical Reasoning - S1
  • MOE: Science and Society - S1

About This Topic

Bioethics and Technology: AI and Society introduces Secondary 1 students to ethical dilemmas from artificial intelligence and genetic engineering. They tackle key questions, such as who bears responsibility when autonomous systems cause harm, how governments should regulate technologies that alter human nature, and what principles should guide surveillance for public safety. These explorations highlight technology's dual potential for progress and risk.

Within the MOE CCE curriculum's Ethical Reasoning and Decision Making unit, this topic connects science advancements to societal values and personal responsibility. Students practice applying ethical frameworks like utilitarianism and rights-based thinking to real scenarios, such as self-driving car accidents or CRISPR gene editing. This builds skills in perspective-taking, argumentation, and discerning facts from opinions, essential for informed citizenship in Singapore's tech-driven society.

Active learning suits this topic well. Role-plays and structured debates let students inhabit stakeholder roles, making abstract ethics concrete. Collaborative case analyses reveal nuances in trade-offs, deepen empathy, and encourage ownership of moral reasoning over rote memorization.

Key Questions

  1. Who should be held responsible when an autonomous system causes harm?
  2. How should the government regulate technology that could change human nature?
  3. What ethical principles should guide the use of surveillance for public safety?

Learning Objectives

  • Critique the ethical implications of using AI in decision-making processes, such as loan applications or criminal justice.
  • Evaluate the potential societal impacts of genetic engineering technologies on human nature and identity.
  • Compare different ethical frameworks (e.g., utilitarianism, deontology) when analyzing scenarios involving autonomous systems.
  • Formulate arguments for or against government regulation of advanced technologies based on ethical principles.
  • Synthesize information from case studies to propose responsible guidelines for the use of surveillance technology.

Before You Start

Introduction to Ethics and Values

Why: Students need a basic understanding of what ethics is and why values matter before analyzing complex technological dilemmas.

Understanding Society and Citizenship

Why: This topic requires students to consider societal impact and their role as citizens, concepts introduced in earlier CCE units.

Key Vocabulary

Artificial Intelligence (AI): Computer systems capable of performing tasks that typically require human intelligence, such as learning, problem-solving, and decision-making.
Autonomous System: A system that can operate and make decisions independently without direct human control, such as self-driving cars or automated drones.
Genetic Engineering: The direct manipulation of an organism's genes using biotechnology, which can involve altering, adding, or deleting DNA sequences.
Surveillance Technology: Tools and methods used to monitor, collect, and analyze information about people or places, often for security or public safety purposes.
Ethical Framework: A set of principles or guidelines used to determine what is morally right or wrong, helping to structure ethical reasoning and decision-making.

Watch Out for These Misconceptions

Common Misconception: AI is neutral and unbiased because it lacks emotions.

What to Teach Instead

AI reflects the biases present in its human-sourced training data. Role-playing biased outcomes, such as unfair loan approvals, helps students trace those outcomes back to their data origins and propose fairness checks through group analysis.

Common Misconception: Regulating technology always hinders innovation and progress.

What to Teach Instead

Regulations can prevent harm while still enabling safe innovation, as seen in Singapore's AI governance frameworks. Debates over balanced rules show students the trade-offs involved, fostering nuanced views through peer challenge.

Common Misconception: Surveillance for safety justifies any privacy invasion.

What to Teach Instead

Ethical use requires proportionality between safety gains and rights erosion. Case study discussions reveal overreach risks, helping students weigh principles collaboratively.


Real-World Connections

  • Singapore's Smart Nation initiative utilizes AI and surveillance technologies for urban planning and public safety, raising questions about data privacy and algorithmic bias for citizens and policymakers.
  • The development of autonomous vehicles by companies like Waymo and Tesla presents ethical dilemmas regarding accident responsibility, requiring legal frameworks to address harm caused by machines.
  • Debates surrounding CRISPR gene editing technology, used in research by institutions like the Genome Institute of Singapore, prompt discussions on its potential to treat diseases versus altering the human germline.

Assessment Ideas

Discussion Prompt

Pose this question to small groups: 'Imagine an AI system designed to manage traffic flow in Singapore. If it prioritizes emergency vehicles by causing minor accidents for other cars, who is responsible: the programmers, the city council that approved it, or the AI itself?' Facilitate a class discussion on their reasoning.

Exit Ticket

Provide students with a brief scenario about a new genetic therapy. Ask them to write: 1) One potential benefit of the technology. 2) One ethical concern they have. 3) Which ethical principle (e.g., fairness, safety) is most important in this case and why.

Quick Check

Present students with two short statements about AI surveillance: one arguing for its necessity in preventing crime, and another highlighting privacy risks. Ask students to identify the main ethical argument in each statement and whether they lean towards supporting or opposing the technology, briefly explaining their choice.

Frequently Asked Questions

How can active learning help students understand bioethics in AI?
Active strategies like role-plays and debates immerse students in ethical dilemmas, building empathy for diverse views. For instance, embodying a surveillance victim or AI developer reveals trade-offs that lectures miss. Group deliberations refine arguments, making abstract principles like accountability tangible and memorable, while boosting confidence in ethical decision-making.
What are key ethical challenges of AI in society?
Challenges include accountability for harms caused by autonomous systems, the perpetuation of bias from flawed data, and the erosion of privacy through surveillance. Students explore principles like transparency and fairness, applying them to cases such as self-driving car accidents. This equips them to navigate Singapore's Smart Nation initiatives responsibly.
How should schools teach responsibility for AI harms?
Use scenarios where students assign blame among developers, users, or systems, debating via structured formats. Link to real examples like algorithm errors in hiring. This develops reasoning skills aligned with MOE standards, emphasizing shared human oversight.
What role does government play in regulating genetic technologies?
Governments balance innovation with ethics, as in Singapore's Bioethics Advisory Committee guidelines on gene editing. Students discuss regulation scopes, weighing public safety against research freedom. Activities like policy simulations clarify how laws evolve with tech.