Ethics and Professional Conduct in IT: Activities & Teaching Strategies
Ethics in AI benefits from active learning because students grapple with real-world consequences of abstract concepts. When they examine biased datasets or debate facial recognition policies, they see how technical choices translate into human impacts. This makes the abstract tangible and the moral stakes clear.
Learning Objectives
1. Critique real-world AI applications for potential sources of algorithmic bias.
2. Evaluate the ethical frameworks applicable to autonomous decision-making in AI systems.
3. Propose mitigation strategies for reducing bias in AI training data.
4. Analyze the societal impact of facial recognition technology and justify proposed limitations.
5. Synthesize arguments regarding accountability for AI-driven errors.
Ready-to-Use Activities
Debate Pairs: Facial Recognition Limits
Assign pairs to affirm or oppose public use of facial recognition. Provide case studies on privacy vs. security. Pairs prepare 3-minute arguments, then switch sides for rebuttals. Conclude with a whole-class vote and reflection.
Discussion prompt: How do we determine if a technological innovation is ethical?
Facilitation Tip: During Debate Pairs: Facial Recognition Limits, assign one student to argue for strict regulation and one to argue for minimal restriction to force nuanced positions.
Setup: Two sides with a center line
Materials: Provocative statement card, Evidence cards (optional), Movement tracking sheet
Small Groups: Bias Audit Simulation
Give groups sample datasets with hidden biases, like loan approval records skewed by gender. Groups identify biases, propose fixes, and test revised data on mock algorithms using spreadsheets. Share findings in a gallery walk.
Discussion prompt: What are the implications of open-source versus proprietary software?
Facilitation Tip: During Small Groups: Bias Audit Simulation, provide a small, labeled dataset so students can manually spot underrepresentation or skewed labels.
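The audit step in this activity can be previewed with a short script before students move to spreadsheets. This is a minimal sketch, assuming a toy list of (gender, approved) loan records; the data and the 80% disparity threshold (the "four-fifths" rule of thumb) are invented for classroom illustration, not a definitive audit method.

```python
# Minimal bias-audit sketch: compare approval rates across groups
# in a hypothetical loan-approval dataset (records are invented).
from collections import defaultdict

# Each record: (applicant group, approved?)
records = [
    ("F", False), ("F", False), ("F", True), ("F", False),
    ("M", True), ("M", True), ("M", False), ("M", True),
]

# Tally totals and approvals per group.
totals = defaultdict(int)
approved = defaultdict(int)
for group, ok in records:
    totals[group] += 1
    if ok:
        approved[group] += 1

# Approval rate per group.
rates = {g: approved[g] / totals[g] for g in totals}
print(rates)

# Four-fifths rule of thumb: flag any group whose approval rate
# falls below 80% of the best-off group's rate.
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("Potentially disadvantaged groups:", flagged)
```

Groups can run this on their own sample data, then repeat the check after proposing fixes to see whether the disparity narrows.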
Role-Play: Whole Class Trolley Problem
Present AI car crash scenarios in which the vehicle must choose between harms. Students draw roles (AI designer, victims' families, regulator), role-play the deliberations, then vote on programming choices and justify their positions.
Discussion prompt: How should IT professionals handle conflicts of interest?
Facilitation Tip: During Role-Play: Whole Class Trolley Problem, assign roles with conflicting values (e.g., safety engineer, community advocate) to surface ethical trade-offs.
Individual: Ethical Dilemma Journal
Students read a case study on AI in hiring and note the biases and stakeholders involved. They write a personal stance with pros and cons, pair-share their journals, then discuss class-wide patterns in ethical trade-offs.
Discussion prompt: How do we determine if a technological innovation is ethical?
Facilitation Tip: During Individual: Ethical Dilemma Journal, ask students to revisit entries weekly to track how their reasoning evolves.
Teaching This Topic
Teachers should avoid presenting ethics as a purely philosophical exercise disconnected from technical work. Instead, integrate ethical analysis into data science skills, like asking students to audit datasets for bias before training models. Research suggests students retain ethical reasoning better when they apply it to concrete cases rather than abstract principles. Use structured debates and simulations to make invisible biases visible.
What to Expect
Successful learning looks like students articulating specific sources of bias, defending ethical positions with evidence, and proposing actionable solutions. They should move beyond 'AI is bad' to 'Here is how bias occurs and here is how to address it.'
Watch Out for These Misconceptions
Common Misconception: During Debate Pairs: Facial Recognition Limits, some students may claim AI is unbiased if trained on 'enough' data.
What to Teach Instead
During Debate Pairs, have students examine a sample dataset with known underrepresentation. Ask them to identify which groups are missing and how that might affect the AI's accuracy.
Common Misconception: During Role-Play: Whole Class Trolley Problem, students often assume AI decisions are solely the developer's responsibility.
What to Teach Instead
During Role-Play, assign roles that include users, regulators, and affected communities. Require each role to justify their share of accountability in the final decision.
Common Misconception: During Small Groups: Bias Audit Simulation, students may think ethics is a post-deployment fix rather than a design constraint.
What to Teach Instead
During Small Groups, provide case studies where bias mitigation techniques were integrated into model development. Ask students to compare outcomes with and without these constraints.
Assessment Ideas
After Debate Pairs: Facial Recognition Limits, present students with a scenario where facial recognition misidentifies a person of color. Ask them to apply the debate frameworks to assign accountability and propose a technical fix.
During Small Groups: Bias Audit Simulation, circulate and ask each group to identify one bias in their dataset and propose one debiasing technique. Collect responses to assess understanding of bias sources.
After Individual: Ethical Dilemma Journal, have students pair up to compare journal entries. Each student identifies the ethical principle guiding their partner's reasoning and suggests one additional consideration.
Extensions & Scaffolding
- Challenge: Ask students to design a fairness constraint for a biased dataset and test its impact on model performance.
- Scaffolding: Provide a partially completed bias audit checklist for students to fill in during the Small Groups activity.
- Deeper exploration: Have students interview a local professional in AI development about how ethics is integrated into their workflow.
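For the fairness-constraint challenge above, one simple technique students could try is reweighting training examples so that group and label are statistically independent. The sketch below is illustrative only: the six (group, label) examples are invented, and the formula is a simplified version of the classic "reweighing" preprocessing idea, not a complete debiasing pipeline.

```python
# Simplified reweighing sketch: assign each training example a weight
# of P(group) * P(label) / P(group, label), so over-represented
# (group, label) cells are down-weighted and under-represented ones
# are up-weighted before model training. Data is hypothetical.
from collections import Counter

# Each example: (applicant group, hiring outcome label)
data = [("F", 0), ("F", 0), ("F", 1), ("M", 1), ("M", 1), ("M", 0)]

n = len(data)
group_counts = Counter(g for g, _ in data)   # marginal counts per group
label_counts = Counter(y for _, y in data)   # marginal counts per label
pair_counts = Counter(data)                  # joint counts per (group, label)

weights = [
    (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for g, y in data
]
print([round(w, 2) for w in weights])
```

Students can then train a mock classifier with and without these weights and compare the group-level outcomes, which connects the extension directly back to the bias audit activity.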
Key Vocabulary
| Term | Definition |
| --- | --- |
| Algorithmic Bias | Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. |
| Training Data | The dataset used to train an AI model, from which the model learns patterns and makes predictions or decisions. |
| Autonomous Decision Making | The ability of an AI system to make choices and take actions without direct human intervention or oversight. |
| Facial Recognition | A technology capable of identifying or verifying a person from a digital image or a video frame from a video source. |
| Accountability | The obligation of an individual or organization to account for its activities and accept responsibility for its actions and decisions. |
More in Impact of Computing and Emerging Technologies
Data Privacy and the PDPA
Understanding data privacy laws, with a specific focus on Singapore's Personal Data Protection Act (PDPA). Students will analyse how companies collect and use personal data.
Artificial Intelligence and Society
Assessing the socio-economic impacts of Artificial Intelligence and automation. Students will debate the future of work and algorithmic bias.