
AI and Decision Making: Transparency and Accountability

Activities & Teaching Strategies

Transparency and accountability in AI stay abstract until students confront real-world stakes, which is why active learning works here. When students role-play hearings, trace model logic, or compare explanations, they move from vague worry to concrete understanding. These hands-on activities force students to articulate why opaque decisions matter and what can be done about them.

11th Grade · Computer Science · 3 activities · 20–45 min

Learning Objectives

  1. Evaluate the tradeoffs between model accuracy and interpretability in AI systems for loan application processing.
  2. Analyze a case study of an AI-driven hiring tool to identify potential sources of bias and their impact.
  3. Justify the need for specific accountability mechanisms for autonomous vehicle software failures.
  4. Compare different approaches to AI transparency, such as LIME and SHAP, in explaining predictive model outcomes.
  5. Design a framework for auditing an AI content recommendation algorithm for fairness and unintended consequences.
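The comparison of explanation tools in objective 4 can be made concrete before introducing the real libraries. The sketch below uses an invented loan-scoring function and made-up feature names, not anything from LIME or SHAP themselves; it only illustrates the perturbation idea both tools build on: nudge one input at a time and measure how much the model's score moves.

```python
# Toy illustration of the perturbation idea behind local-explanation
# tools such as LIME and SHAP. The scoring function and feature names
# are invented for classroom use, not a real lending model.

def loan_score(features):
    """Opaque 'black box' scorer: returns an approval score in [0, 1]."""
    income, debt_ratio, late_payments = features
    raw = 0.000005 * income - 0.5 * debt_ratio - 0.05 * late_payments
    return max(0.0, min(1.0, 0.5 + raw))

def local_importance(model, features, eps=0.01):
    """Estimate each feature's local influence with finite differences."""
    base = model(features)
    importances = []
    for i, value in enumerate(features):
        perturbed = list(features)
        perturbed[i] = value * (1 + eps) if value else eps
        importances.append((model(perturbed) - base) / (perturbed[i] - value))
    return importances

applicant = [52000, 0.35, 2]  # income, debt ratio, late payments
names = ["income", "debt_ratio", "late_payments"]
for name, imp in zip(names, local_importance(loan_score, applicant)):
    print(f"{name:>14}: {imp:+.6f}")
```

Students can see that a positive importance pushes the score up and a negative one pulls it down, without ever reading the scorer's source, which is exactly the situation the activities below put them in.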


45 min·Small Groups

Mock Congressional Hearing: AI Accountability

Assign roles: AI company representative, affected community member, regulator, and independent auditor. Groups prepare three-minute testimony and question rounds based on a provided AI decision-making scenario, then debrief on what accountability structures they would support.

Prepare & details

Explain why it's important to understand how AI systems arrive at their decisions.

Facilitation Tip: For the Mock Congressional Hearing, assign roles in advance and provide each group with a one-page brief that includes both technical details and human impact so students must balance both perspectives in their testimony.

Setup: Groups at tables with access to source materials

Materials: Source material collection, Inquiry cycle worksheet, Question generation protocol, Findings presentation template

Analyze · Evaluate · Create · Self-Management · Self-Awareness
20 min·Pairs

Think-Pair-Share: Black Box Scenarios

Present three brief scenarios where an AI decision affected someone's life (credit denial, medical triage, college admissions). Students individually write who should be accountable and why, compare reasoning with a partner, then share the most contested case with the whole class.

Prepare & details

Analyze scenarios where AI decisions have significant real-world consequences.

Facilitation Tip: During the Think-Pair-Share on Black Box Scenarios, give students two minutes to individually jot down their thoughts before pairing, and then three minutes to share before whole-class discussion to ensure everyone contributes.

Setup: Standard classroom seating; students turn to a neighbor

Materials: Discussion prompt (projected or printed), Optional: recording sheet for pairs

Understand · Apply · Analyze · Self-Awareness · Relationship Skills
30 min·Small Groups

Explainability Comparison: Simple vs. Complex Models

Provide two classifiers for the same task: a decision tree students can trace by hand, and a neural network described as a black box. Groups compare the output of both on three test cases, then argue which model they would trust more for each scenario and why.
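For teachers who want a ready-made artifact for this activity, a decision tree small enough to trace by hand can be sketched in a few lines. The tree structure, thresholds, and feature names below are invented for classroom use, not taken from any real model; the point is that every decision comes with a complete, checkable chain of comparisons.

```python
# A hand-traceable decision tree for a toy loan task. Thresholds and
# features are invented for classroom use.
# Internal node: (feature_index, threshold, left_subtree, right_subtree)
# Leaf: a string label.
TREE = (1, 0.40,                        # split on debt_ratio
        (2, 3, "approve", "deny"),      # low debt: split on late_payments
        (0, 80000, "deny", "approve"))  # high debt: split on income

FEATURES = ["income", "debt_ratio", "late_payments"]

def trace(tree, x, path=None):
    """Classify x and record every comparison along the way."""
    path = path or []
    if isinstance(tree, str):  # reached a leaf
        return tree, path
    idx, threshold, left, right = tree
    went_left = x[idx] < threshold
    path.append(f"{FEATURES[idx]} = {x[idx]} "
                f"{'<' if went_left else '>='} {threshold}")
    return trace(left if went_left else right, x, path)

label, steps = trace(TREE, [52000, 0.35, 2])
print(f"decision: {label}")
for step in steps:
    print("  because", step)
```

Groups can run the same test cases through this tree and through the black-box model, then compare: the tree's "because" lines are the full logic, while the black box offers no such trail.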

Prepare & details

Justify the need for accountability when AI systems make errors or biased decisions.

Facilitation Tip: In the Explainability Comparison activity, have students work in pairs to trace a decision tree output and a neural network explanation side by side, then ask them to highlight where each explanation falls short in revealing the model’s full logic.

Setup: Groups at tables with access to source materials

Materials: Source material collection, Inquiry cycle worksheet, Question generation protocol, Findings presentation template

Analyze · Evaluate · Create · Self-Management · Self-Awareness

Teaching This Topic

Teachers work best when they treat transparency as a process, not a product. Avoid presenting explainable AI as a binary—either fully transparent or utterly opaque. Instead, use activities that let students experience the spectrum, from simple decision trees to complex models. Research suggests students grasp accountability better when they trace responsibility across roles rather than pin it on a single person. Keep the focus on how systems function, not just who to blame.

What to Expect

Successful learning looks like students moving from stating problems to proposing and justifying solutions. They should be able to explain why some AI systems are harder to understand than others, identify who is responsible when things go wrong, and suggest ways to make decisions more transparent. Evidence of learning includes clear reasoning in discussions, accurate model comparisons, and thoughtful questions during role-plays.

These activities are a starting point. A full mission is the experience.

  • Complete facilitation script with teacher dialogue
  • Printable student materials, ready for class
  • Differentiation strategies for every learner
Generate a Mission

Watch Out for These Misconceptions

Common Misconception

During Mock Congressional Hearing: AI Accountability, students may assume that detailed technical explanations are always sufficient for public understanding.

What to Teach Instead

Use the hearing to show that technical explanations often confuse non-experts. Ask students in the audience to raise their hands when they stop following the testimony and have witnesses rephrase their points in plain language.

Common Misconception

During Think-Pair-Share: Black Box Scenarios, students may believe that any explanation provided by an AI system is complete and trustworthy.

What to Teach Instead

Use the scenario cards to reveal gaps between stated reasons and actual causes. After pairs share, ask them to mark on their scenario sheet where the AI’s explanation dodges key details or oversimplifies the decision process.

Common Misconception

During Explainability Comparison: Simple vs. Complex Models, students may think that post-hoc explanations for complex models are fully accurate representations of how decisions were made.

What to Teach Instead

Have students annotate the neural network explanation with questions like 'Does this highlight the most influential input?' and 'Is this the only possible reason for the output?' to surface the approximation gap in explanations.

Assessment Ideas

Discussion Prompt

After the Mock Congressional Hearing: AI Accountability, present the truancy scenario and ask students to write one paragraph identifying what information they would need to understand why a specific student was flagged and who should be held responsible if the system is wrong. Use their responses to assess their understanding of distributed responsibility and the limits of post-hoc explanations.

Quick Check

During Explainability Comparison: Simple vs. Complex Models, provide two short explanations for the same AI decision. Ask students to identify which is more interpretable and which is more explainable, then have them write one sentence justifying each choice. Collect responses to check their grasp of the difference between user-friendly and technically faithful explanations.

Exit Ticket

After the Think-Pair-Share: Black Box Scenarios, ask students to name one profession where AI transparency is critical and explain in 1-2 sentences why. Then ask them to identify one potential consequence of a lack of accountability in AI systems. Use their answers to evaluate their ability to connect abstract concepts to real-world stakes.

Extensions & Scaffolding

  • Challenge: Ask students to draft a two-paragraph policy recommendation for a school board considering a new plagiarism detection AI, citing specific transparency and accountability gaps they identified in the activities.
  • Scaffolding: Provide sentence starters for students who struggle during the hearing, such as: 'The AI flagged this student because...' followed by 'This is problematic because...' and 'To improve accountability, the district should...'
  • Deeper exploration: Invite students to research a real-world AI system used in their community, create a one-page infographic showing how decisions are made, and identify one transparency or accountability gap they would address.

Key Vocabulary

Interpretability: The degree to which a human can understand the cause of a decision made by an AI model by examining its internal logic.
Explainability: The ability to provide a human-understandable justification for a specific AI decision after it has been made, even if the internal logic is complex.
Algorithmic Bias: Systematic and repeatable errors in an AI system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Accountability: The obligation to accept responsibility for the outcomes of an AI system's decisions, including errors or harmful impacts.
Black Box Model: An AI system, often a deep neural network, whose internal workings are so complex that it is difficult or impossible to understand how it arrives at a specific output from a given input.
