Computer Science · 11th Grade

Active learning ideas

AI and Decision Making: Transparency and Accountability

Active learning matters here because transparency and accountability in AI remain abstract until students confront real-world stakes. When students role-play hearings, trace model logic, or compare explanations, they move from vague worry to concrete understanding. These hands-on activities force students to articulate why opaque decisions matter and what can be done about them.

Standards: CSTA 3B-IC-26
20–45 min · Pairs → Whole Class · 3 activities

Activity 01

Inquiry Circle · 45 min · Small Groups

Mock Congressional Hearing: AI Accountability

Assign roles: AI company representative, affected community member, regulator, and independent auditor. Groups prepare three-minute testimony and question rounds based on a provided AI decision-making scenario, then debrief on what accountability structures they would support.

Explain why it's important to understand how AI systems arrive at their decisions.

Facilitation Tip: For the Mock Congressional Hearing, assign roles in advance and provide each group with a one-page brief that includes both technical details and human impact so students must balance both perspectives in their testimony.

What to look for: Present students with a scenario: An AI system used by a school district flags students for potential truancy based on their online activity. Ask: 'What information would you need to understand why a specific student was flagged? Who should be responsible if the system incorrectly flags a student who has done nothing wrong?'

Analyze · Evaluate · Create · Self-Management · Self-Awareness

Activity 02

Think-Pair-Share · 20 min · Pairs

Think-Pair-Share: Black Box Scenarios

Present three brief scenarios where an AI decision affected someone's life (credit denial, medical triage, college admissions). Students individually write who should be accountable and why, compare reasoning with a partner, then share the most contested case with the whole class.

Analyze scenarios where AI decisions have significant real-world consequences.

Facilitation Tip: During the Think-Pair-Share on Black Box Scenarios, give students two minutes to individually jot down their thoughts before pairing, and then three minutes to share before whole-class discussion to ensure everyone contributes.

What to look for: Provide students with two brief descriptions of AI decision-making explanations: one focusing on technical model details and another on user-friendly justifications. Ask them to identify which is more interpretable and which is more explainable, and to write one sentence justifying their choice for each.

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Activity 03

Inquiry Circle · 30 min · Small Groups

Explainability Comparison: Simple vs. Complex Models

Provide two classifiers for the same task: a decision tree students can trace by hand, and a neural network presented as a black box. Groups compare the outputs of both models on three test cases, then argue which model they would trust more for each scenario and why. (A minimal setup sketch in code follows this activity.)

Justify the need for accountability when AI systems make errors or biased decisions.

Facilitation Tip: In the Explainability Comparison activity, have students work in pairs to trace a decision tree output and a neural network explanation side by side, then ask them to highlight where each explanation falls short in revealing the model’s full logic.

What to look for: Ask students to name one profession where AI transparency is critical and explain in 1–2 sentences why. Then ask them to identify one potential consequence of a lack of accountability in AI systems.

Analyze · Evaluate · Create · Self-Management · Self-Awareness
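A minimal sketch of how a teacher might prepare the two classifiers, assuming a Python classroom with scikit-learn; the loan-approval framing, feature names, and toy data are invented for illustration.

    # Two classifiers for the same (hypothetical) loan-approval task:
    # a traceable decision tree and a black-box neural network.
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.neural_network import MLPClassifier

    # Invented features: [income_k, debt_k, years_employed]
    X = [[40, 10, 1], [85, 5, 7], [30, 20, 0], [95, 2, 10],
         [55, 15, 3], [70, 30, 4], [25, 5, 2], [60, 8, 6]]
    y = [0, 1, 0, 1, 0, 0, 0, 1]  # 1 = approve, 0 = deny

    # The glass box: students can trace these printed rules by hand.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["income_k", "debt_k", "years_employed"]))

    # The black box: same task, but no human-readable rule set.
    net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                        random_state=0).fit(X, y)

    # Three test cases for groups to compare.
    tests = [[50, 12, 2], [90, 25, 8], [35, 3, 5]]
    print('tree:', tree.predict(tests), 'net:', net.predict(tests))

Printing the tree gives students a rule set they can follow line by line; the network offers no comparable readout, which is exactly the contrast the activity trades on.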

A few notes on teaching this unit

This unit works best when teachers treat transparency as a process, not a product. Avoid presenting explainable AI as a binary, either fully transparent or utterly opaque. Instead, use activities that let students experience the spectrum, from simple decision trees to complex models. Research suggests students grasp accountability better when they trace responsibility across roles rather than pin it on a single person. Keep the focus on how systems function, not just on who to blame.

Successful learning looks like students moving from stating problems to proposing and justifying solutions. They should be able to explain why some AI systems are harder to understand than others, identify who is responsible when things go wrong, and suggest ways to make decisions more transparent. Evidence of learning includes clear reasoning in discussions, accurate comparisons between models, and thoughtful questions during role-plays.


Watch Out for These Misconceptions

  • During Mock Congressional Hearing: AI Accountability, students may assume that detailed technical explanations are always sufficient for public understanding.

    Use the hearing to show that technical explanations often confuse non-experts. Ask students in the audience to raise their hands when they stop following the testimony and have witnesses rephrase their points in plain language.

  • During Think-Pair-Share: Black Box Scenarios, students may believe that any explanation provided by an AI system is complete and trustworthy.

    Use the scenario cards to reveal gaps between stated reasons and actual causes. After pairs share, ask them to mark on their scenario sheet where the AI’s explanation dodges key details or oversimplifies the decision process.

  • During Explainability Comparison: Simple vs. Complex Models, students may think that post-hoc explanations for complex models are fully accurate representations of how decisions were made.

    Have students annotate the neural network explanation with questions like 'Does this highlight the most influential input?' and 'Is this the only possible reason for the output?' to surface the approximation gap in explanations; a short code sketch of that gap follows this list.
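
One way to make the approximation gap concrete, again assuming Python with scikit-learn (the synthetic data, threshold, and depth limit are illustrative): train a shallow surrogate tree to mimic the network's predictions, then measure how often the two actually agree.

    # Post-hoc explanation as a surrogate model: fit a simple tree to the
    # network's predictions, then check the surrogate's fidelity on new inputs.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(500, 3))  # synthetic inputs, three features
    y = (X[:, 0] - X[:, 1] + 5 * X[:, 2] > 2.5).astype(int)

    net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=0).fit(X, y)

    # The "explanation": a depth-2 tree trained to imitate the network.
    surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
    surrogate.fit(X, net.predict(X))

    # Fidelity: how often the tidy rules and the real model agree on fresh data.
    X_new = rng.uniform(0, 1, size=(500, 3))
    agreement = np.mean(surrogate.predict(X_new) == net.predict(X_new))
    print(f'surrogate matches network on {agreement:.0%} of new cases')

Any agreement short of 100% is the gap students are annotating: the surrogate's tidy rules account for most, but not all, of the network's behavior.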


Methods used in this brief