Computer Science · 12th Grade

Active learning ideas

Data Privacy and Anonymization Techniques

Active learning works for this topic because students need to experience the tension between privacy and utility firsthand. Passive lectures cannot convey why removing names is insufficient or how quasi-identifiers function. Hands-on activities let students grapple with real datasets and see the consequences of their choices in anonymization.

Standards: CSTA 3B-NI-04 · CSTA 3B-IC-28
18–30 min · Pairs → Whole Class · 4 activities

Activity 01

Collaborative Problem-Solving: Re-Identification Attack

Provide students with a simple 'anonymized' dataset of 30 records containing age, zip code, gender, and a sensitive attribute (e.g., a medical condition). Students attempt to re-identify specific individuals using only public information like a phone directory or census data. Most will succeed for at least one individual, making the inadequacy of naive anonymization concrete before any formal technique is introduced.
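A minimal sketch of the linkage attack students perform, not part of the original lab materials: the records and names below are hypothetical, but the mechanics match the activity. Joining the "anonymized" table to a public directory on quasi-identifiers (age, zip code, gender) re-identifies anyone whose combination is unique.

```python
# Hypothetical linkage attack: join an "anonymized" table with a public
# directory on the shared quasi-identifiers (age, zip, gender).
anonymized = [
    {"age": 34, "zip": "02139", "gender": "F", "condition": "diabetes"},
    {"age": 34, "zip": "02139", "gender": "M", "condition": "asthma"},
    {"age": 51, "zip": "02140", "gender": "F", "condition": "hypertension"},
]

public_directory = [
    {"name": "Alice Smith", "age": 34, "zip": "02139", "gender": "F"},
    {"name": "Bob Jones", "age": 51, "zip": "02140", "gender": "M"},
]

def reidentify(anon_rows, public_rows):
    """Return (name, sensitive attribute) pairs for every person whose
    quasi-identifiers match exactly one anonymized record."""
    hits = []
    for person in public_rows:
        matches = [r for r in anon_rows
                   if (r["age"], r["zip"], r["gender"])
                   == (person["age"], person["zip"], person["gender"])]
        if len(matches) == 1:  # a unique match is a re-identification
            hits.append((person["name"], matches[0]["condition"]))
    return hits

print(reidentify(anonymized, public_directory))
# [('Alice Smith', 'diabetes')]
```

Note that no names or SSNs appear anywhere in the "anonymized" table; the leak comes entirely from the quasi-identifiers, which is the point of the activity.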

Is it possible to truly anonymize data in a world of interconnected databases?

Facilitation Tip: During the Re-Identification Attack lab, have students record their steps and findings in a shared document so they can compare results and discuss discrepancies as a class.

What to look for: Provide students with a small, simplified dataset containing quasi-identifying attributes. Ask them to identify which attributes are quasi-identifying and explain how they might be used to re-identify an individual. Then, ask them to suggest one anonymization technique that could be applied and why.

Apply · Analyze · Evaluate · Create · Relationship Skills · Decision-Making · Self-Management

Activity 02

Think-Pair-Share · 18 min · Pairs

Think-Pair-Share: How Much Privacy Is Enough?

Present a scenario: a hospital wants to share patient data with researchers to study disease patterns, but patients expect privacy. Pairs must negotiate a specific k-anonymity threshold and explain what attacks it protects against and what utility it sacrifices. Different pairs will choose different thresholds, surfacing the fact that k is a policy decision, not a technical optimum.
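To ground the negotiation, pairs can verify a proposed threshold against a sample dataset. This is a minimal sketch (not part of the original materials): a dataset is k-anonymous when every combination of quasi-identifier values appears in at least k records, so the achieved k is simply the smallest group size.

```python
from collections import Counter

def k_anonymity(rows, quasi_ids):
    """Smallest equivalence-class size over the quasi-identifier columns.
    The dataset satisfies k-anonymity for any k up to this value."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return min(groups.values())

# Hypothetical generalized hospital records.
records = [
    {"age": "30-39", "zip": "021**", "condition": "flu"},
    {"age": "30-39", "zip": "021**", "condition": "diabetes"},
    {"age": "50-59", "zip": "021**", "condition": "asthma"},
]
print(k_anonymity(records, ["age", "zip"]))  # 1: the 50-59 row is unique
```

A pair arguing for k = 5 could use a check like this to see how much further the data must be generalized before every group reaches size 5, which makes the utility cost of their chosen threshold concrete.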

Analyze the trade-offs between data utility and privacy protection.

Facilitation Tip: For the Think-Pair-Share, assign specific roles (e.g., data holder, privacy advocate, data analyst) to ensure balanced perspectives during the discussion.

What to look for: Present students with a scenario describing a dataset and a potential privacy risk. Ask them to choose the most appropriate anonymization technique (k-anonymity, l-diversity, or differential privacy) and justify their choice, explaining the trade-offs involved.

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Activity 03

Gallery Walk · 22 min · Small Groups

Gallery Walk: Anonymization Technique Comparison

Post four stations around the room: data suppression, data generalization, k-anonymity, and differential privacy. Each station includes a description, a concrete example, and the same three-column template: 'what attacks it protects against,' 'what it sacrifices,' and 'real-world uses.' Groups rotate and annotate each template, then the class synthesizes a comparison chart during debrief.
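The suppression and generalization stations can share a worked example. This is a minimal sketch (the record and the bucketing rules are hypothetical choices, not prescribed by the activity): generalization coarsens a value so more people share it, while suppression removes the attribute outright.

```python
def generalize(record):
    """Generalize quasi-identifiers: bucket age into a decade range and
    truncate the zip code to a 3-digit prefix; suppress gender entirely."""
    out = dict(record)
    decade = (out["age"] // 10) * 10
    out["age"] = f"{decade}-{decade + 9}"        # generalization
    out["zip"] = out["zip"][:3] + "**"           # generalization
    out.pop("gender", None)                      # suppression
    return out

print(generalize({"age": 34, "zip": "02139", "gender": "F", "condition": "flu"}))
# {'age': '30-39', 'zip': '021**', 'condition': 'flu'}
```

Filling in the three-column template for this example, students should notice that both edits widen the pool of matching individuals (protecting against linkage) at the cost of analytic precision, which previews the k-anonymity station.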

Evaluate different data anonymization techniques for their effectiveness and limitations.

Facilitation Tip: During the Gallery Walk, provide a simple rubric for students to evaluate each anonymization technique’s strengths and weaknesses as they move between stations.

What to look for: Facilitate a class discussion using the prompt: 'Is it possible to truly anonymize data in a world of interconnected databases?' Encourage students to debate the effectiveness of different techniques and consider the evolving landscape of data linkage and re-identification.

Understand · Apply · Analyze · Create · Relationship Skills · Social Awareness

Activity 04

Formal Debate · 25 min · Whole Class

Formal Debate: Is Full Data Anonymization Possible?

One side argues that with sufficient technical effort, data can be released in a form that protects privacy while preserving utility. The other argues that the two goals are fundamentally incompatible and that true anonymization requires degrading the data to the point of uselessness. Students draw on the re-identification lab and their technique research to support their positions.

Is it possible to truly anonymize data in a world of interconnected databases?

Facilitation Tip: For the Formal Debate, assign roles in advance and provide a list of key points to ensure the debate remains focused on the tension between privacy and utility.

What to look for: Listen for arguments grounded in evidence from the re-identification lab and the technique research. Strong debaters should name specific attacks, identify which techniques mitigate them, and acknowledge the utility cost each technique imposes.

Analyze · Evaluate · Create · Self-Management · Decision-Making

A few notes on teaching this unit

Teachers should approach this topic by framing privacy and utility as a design challenge, not just a technical problem. Start with concrete examples students can manipulate, then gradually introduce the mathematical and algorithmic foundations. Avoid overwhelming students with jargon; instead, use activities to build intuition. Research suggests that students retain concepts better when they experience failure first—the Re-Identification Attack lab is designed to reveal the limits of simple anonymization, which makes subsequent techniques more meaningful.

Successful learning looks like students recognizing the limits of simple anonymization, selecting appropriate techniques for given datasets, and justifying their choices with evidence from the activities. They should also articulate the trade-offs between privacy and data utility in their discussions and written work.


Watch Out for These Misconceptions

  • During the Re-Identification Attack lab, watch for students assuming that removing direct identifiers like names and SSNs is enough to anonymize a dataset.

    Use the lab’s simplified dataset to have students identify quasi-identifiers such as birth date, gender, and zip code. Ask them to calculate how many unique combinations exist in the dataset and discuss what this means for anonymity.

  • During the Think-Pair-Share activity, watch for students believing that differential privacy always destroys a dataset’s usefulness.

    Ask students to compare query results (e.g., average income) at different epsilon values in the Think-Pair-Share materials. Have them calculate the relative error introduced by noise and discuss when the trade-off is acceptable.

  • During the Gallery Walk, watch for students assuming that once data is anonymized, it can be shared indefinitely without risk.

    Use the Netflix Prize and AOL case studies from the Gallery Walk materials to ask students to identify how new datasets published after anonymization enabled re-identification, and what this implies for ‘once and done’ anonymization.
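The epsilon-comparison exercise in the second misconception above can be run live. This is a minimal sketch, not the activity's official materials: the incomes, bounds, and epsilon values are hypothetical, and Laplace noise is sampled with an inverse-CDF transform since the standard library has no Laplace sampler. Larger epsilon means less noise and a smaller relative error.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, epsilon, lower, upper, rng):
    """Differentially private mean using the Laplace mechanism.
    Clipping to [lower, upper] bounds each record's influence, so the
    sensitivity of the mean is (upper - lower) / n."""
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon, rng)

incomes = [42, 55, 61, 48, 93, 51]  # hypothetical incomes in $1000s
true = sum(incomes) / len(incomes)
rng = random.Random(7)
for eps in (0.1, 1.0, 10.0):
    noisy = dp_mean(incomes, eps, lower=0, upper=100, rng=rng)
    print(f"epsilon={eps:>4}: noisy mean={noisy:8.2f}, "
          f"relative error={abs(noisy - true) / true:.1%}")
```

Students can rerun the loop several times per epsilon and average the relative errors, which makes the privacy-utility dial tangible: small epsilon buys strong privacy at the cost of noisy answers, and the dataset is never rendered useless wholesale.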


Methods used in this brief