The Rise of Deepfakes and AI-Generated Content: Activities & Teaching Strategies
Active learning works for this topic because students need to experience the subtle gaps between reality and AI-generated media. Hands-on detection and creation activities build the critical eye and skepticism required to navigate a world where seeing is no longer believing.
Learning Objectives
1. Explain the core technological principles behind deepfake generation and AI text creation, such as GANs and large language models.
2. Analyze the potential societal impacts of AI-generated misinformation on democratic processes and public trust.
3. Design a set of practical strategies for media consumers to verify the authenticity of digital content.
4. Critique examples of AI-generated content to identify linguistic or visual inconsistencies.
5. Synthesize information from various sources to articulate the ethical considerations surrounding AI-driven media.
Detection Challenge: Spot the Fake
Provide pairs with six short videos and texts: three real, three AI-generated. Students note visual glitches, voice inconsistencies, and textual oddities on a shared checklist. Pairs then present their top suspect and reasoning to the class for a vote.
Prepare & details
Objective: Explain the technological processes behind deepfakes and AI-generated text.
Facilitation Tip: During Detection Challenge: Spot the Fake, gradually increase the difficulty of samples to build pattern recognition without overwhelming students.
Setup: Pairs at desks with screen access for viewing samples
Materials: Six short samples (three real, three AI-generated), Shared detection checklist, Projector for the class vote
Debate Carousel: Societal Impacts
Divide the class into small groups at stations with prompts on election interference, celebrity scandals, or personal blackmail. Groups prepare arguments for 10 minutes, rotate to respond to others, and refine positions. Conclude with a whole-class synthesis of key risks.
Prepare & details
Objective: Predict the societal consequences of widespread AI-generated misinformation.
Facilitation Tip: In Debate Carousel: Societal Impacts, assign roles explicitly so students engage with arguments they might personally oppose.
Setup: Flexible space for group stations
Materials: Prompt cards for each station, Timer for rotations, Note sheets for refining arguments
Checklist Workshop: Verification Strategies
In small groups, students review real-world deepfake cases and brainstorm evaluation criteria like source credibility and reverse image search. Groups test checklists on new samples, revise based on results, and share polished versions via a class padlet.
Prepare & details
Objective: Design strategies for media consumers to critically evaluate the authenticity of digital content.
Facilitation Tip: In Checklist Workshop: Verification Strategies, model the checklist live on a projector while students follow along with their own copies.
Setup: Flexible space for group stations
Materials: Real-world deepfake case summaries, Blank checklist templates, Devices for testing samples and posting to the class padlet
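For teachers who want to make the checklist concrete, the scoring idea behind it can be sketched in a few lines of Python. The criteria, weights, and threshold below are hypothetical illustrations for classroom discussion, not a validated rubric:

```python
# Illustrative only: a toy verification checklist scored in code.
# Criteria and weights are made-up examples, not a validated rubric.

CHECKLIST = {
    "named_source": 2,         # content cites an identifiable source
    "reverse_image_match": 2,  # reverse image search finds a consistent original
    "consistent_metadata": 1,  # timestamps/locations agree across versions
    "corroborated": 2,         # at least one independent outlet reports it
    "audio_visual_sync": 1,    # lip movement matches the audio
}

def authenticity_score(observations):
    """Sum the weights of the criteria a sample satisfies."""
    return sum(w for item, w in CHECKLIST.items() if observations.get(item))

def verdict(observations, threshold=5):
    score = authenticity_score(observations)
    label = "likely authentic" if score >= threshold else "needs more verification"
    return score, label

sample = {"named_source": True, "corroborated": True, "audio_visual_sync": True}
print(verdict(sample))  # (5, 'likely authentic')
```

Letting groups argue over the weights and the threshold is itself a useful exercise: it surfaces which verification signals students actually trust.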
Ethical Creation Lab: Simple AI Text
Individuals use free AI tools to generate opinion pieces on a current event. They annotate outputs for unnatural phrasing, then swap with a partner for peer critique. Discuss as a class how edits improve authenticity.
Prepare & details
Objective: Explain the technological processes behind deepfakes and AI-generated text.
Facilitation Tip: In Ethical Creation Lab: Simple AI Text, circulate to troubleshoot technical hiccups before they derail creative exploration.
Setup: Individual devices with internet access
Materials: Links to a free AI text tool, Annotation guide for flagging unnatural phrasing, Peer-critique handouts
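One textual red flag students often spot during annotation, formulaic repetition, can even be counted mechanically. This sketch is illustrative only (repetition never proves AI authorship); it flags word trigrams that recur within a passage as a prompt for peer critique:

```python
# Illustrative sketch: surface formulaic phrasing by counting repeated
# word trigrams. A discussion starter, not a detector of AI authorship.
import re
from collections import Counter

def repeated_trigrams(text, min_count=2):
    """Return the three-word phrases that appear at least min_count times."""
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]
    counts = Counter(trigrams)
    return {t: c for t, c in counts.items() if c >= min_count}

essay = ("It is important to note that prices rose. "
         "It is important to note that wages fell.")
print(repeated_trigrams(essay))
```

Running it on a student's own writing alongside an AI draft makes for a quick, concrete comparison during the swap-and-critique step.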
Teaching This Topic
Teachers should balance demonstration with trial and error. Show students how to spot mismatches in lighting or audio cues, but then let them practice with fresh examples. Avoid long lectures on AI theory; instead, use short, targeted explanations during activities. Research shows that active practice with immediate feedback helps students internalize detection strategies faster than abstract lessons.
What to Expect
By the end of these activities, students will confidently identify red flags in AI-generated content and articulate its broader societal impacts. They will leave with both practical skills and a nuanced understanding of trust in digital media.
Watch Out for These Misconceptions
Common Misconception: During Detection Challenge: Spot the Fake, students assume obvious flaws like unnatural blinking always appear in deepfakes.
What to Teach Instead
Use the progressive samples in this activity to highlight subtler cues, such as lighting mismatches or inconsistent shadows, to build students' pattern recognition.
Common Misconception: During Ethical Creation Lab: Simple AI Text, students believe AI-generated text lacks creativity and is easy to identify.
What to Teach Instead
Have students compare AI and human writing samples side by side, then discuss overlaps in style and predictability to prompt deeper linguistic analysis.
Common Misconception: During Debate Carousel: Societal Impacts, students think deepfakes only affect videos, not text-based media like social posts.
What to Teach Instead
Use the cross-media examples in this activity to connect visual and verbal deception strategies, showing how AI tools generate seamless text for scams or propaganda.
Assessment Ideas
After Detection Challenge: Spot the Fake and Ethical Creation Lab: Simple AI Text, present students with two news articles on the same topic, one potentially AI-generated and one human-written. Ask: 'What specific linguistic or factual clues did you use to determine which article might be AI-generated? How did your evaluation process differ from reading a standard news report?' Collect responses to assess their detection strategies.
During Checklist Workshop: Verification Strategies, provide students with a short AI-generated text. Ask them to list three specific features of the text that suggest it was created by AI and explain why each feature is indicative. Review these lists to check their understanding of textual red flags.
After Debate Carousel: Societal Impacts, students write down one strategy they will personally use to verify the authenticity of digital content they encounter online. They should also briefly explain why this strategy is important in the context of deepfakes and AI-generated media. Collect these to assess their commitment to applying skills outside the classroom.
Extensions & Scaffolding
- Challenge students to create their own AI-generated text with a specific bias, then swap with peers to identify the manipulation.
- Scaffolding: Provide a partially completed checklist for struggling students to fill in key detection points.
- Deeper exploration: Invite a local journalist or fact-checker to share real cases of AI-generated misinformation and their verification process.
Key Vocabulary
| Term | Definition |
| --- | --- |
| Deepfake | Synthetic media in which a person in an existing image or video is replaced with someone else's likeness, often created using AI and machine learning. |
| Generative Adversarial Network (GAN) | A machine learning framework in which two neural networks compete against each other to generate new, realistic data, such as images or audio. |
| Large Language Model (LLM) | An AI model trained on vast amounts of text data, capable of understanding and generating human-like text for applications such as writing articles or answering questions. |
| Synthetic Media | Digital media created or manipulated using artificial intelligence, including deepfakes and AI-generated text or audio. |
| Disinformation | False information deliberately created and spread to influence public opinion or obscure the truth. |
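The LLM entry above is easier to demystify with a toy. Real language models use neural networks trained on vast corpora, but the underlying idea, predicting what comes next from observed statistics, can be shown with a character bigram model. Everything here is a classroom-scale stand-in, not how production LLMs work:

```python
# Toy stand-in for the idea behind language models: learn next-character
# probabilities from text, then sample from them to generate new text.
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which character follows which in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample characters one at a time, weighted by observed frequency."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break  # dead end: this character never had a successor
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "seeing is no longer believing. seeing is not proof."
model = train_bigrams(corpus)
print(generate(model, "s", 20))
```

The output is mostly gibberish, which is itself the teaching point: scale, not a different kind of magic, is what separates this toy from fluent AI text.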
More in The Digital Frontier
- Social Media and Identity: Critiquing how digital platforms shape self-representation and public perception.
- News in the Age of Algorithms: Evaluating how news is constructed and disseminated through automated systems and echo chambers.
- Understanding Media Bias: Students learn to identify and analyze various forms of bias in news reporting and digital content.
- The Ethics of Digital Communication: Students explore ethical considerations in online interactions, including privacy, cyberbullying, and digital citizenship.
- Analyzing Online Arguments and Trolls: Students deconstruct the rhetoric of online arguments, identifying logical fallacies and the tactics of internet trolls.