English · Year 9 · The Digital Citizen · Term 4

The Impact of AI on Information and Media

A discussion on the emerging role of Artificial Intelligence in generating and disseminating information, and its implications for media literacy.

ACARA Content Descriptions: AC9E9LY01, AC9E9LY02

About This Topic

Students explore how artificial intelligence shapes the creation and spread of information in media, focusing on tools that generate news articles, images, and videos. They analyse benefits such as faster reporting during crises and personalised content, alongside risks like deepfakes and biased algorithms that erode trust. This aligns with AC9E9LY01 and AC9E9LY02, where students critically evaluate persuasive texts and multimodal sources in digital contexts.

The topic builds media literacy by prompting students to question source credibility, detect AI hallmarks like unnatural phrasing, and consider ethical issues such as undisclosed AI use. It fosters skills in predicting future media shifts, like AI-dominated newsrooms, and evaluating impacts on critical thinking. Discussions reveal how AI amplifies echo chambers, preparing students for informed citizenship.

Active learning suits this topic because it is timely and abstract. Role-plays of AI scenarios, collaborative fact-checking, and creating sample content make concepts concrete. Students practice real-world skills through debate and peer review, boosting engagement and retention in a fast-changing digital landscape.

Key Questions

  1. What are the potential benefits and risks of AI in news generation and consumption?
  2. How might AI change the landscape of media literacy and critical thinking?
  3. What are the ethical considerations of using AI-generated content without disclosure?

Learning Objectives

  • Analyse the potential benefits and risks of AI in news generation and consumption, citing specific examples.
  • Evaluate the ethical considerations of using AI-generated content without disclosure, referencing journalistic standards.
  • Predict how AI might change the landscape of media literacy and critical thinking skills in the next five years.
  • Classify different types of AI-generated media content based on their potential for bias or misinformation.
  • Critique the credibility of a news article, identifying potential AI influence or manipulation.

Before You Start

Identifying Bias in Texts

Why: Students need to understand how authors' perspectives can influence information before they can analyse algorithmic bias in AI-generated content.

Evaluating Source Credibility

Why: This foundational skill is essential for students to question the reliability of information, whether human or AI-generated.

Understanding Digital Footprints and Online Safety

Why: Students should have a basic understanding of how digital information is created and shared to grasp the implications of AI in this space.

Key Vocabulary

Generative AI: Artificial intelligence systems capable of creating new content, such as text, images, audio, or video, based on patterns learned from existing data.
Deepfake: Synthetic media in which a person in an existing image or video is replaced with someone else's likeness, typically created with AI and often used to deceive.
Algorithmic Bias: Systematic and repeatable errors in an AI system that create unfair outcomes, such as favouring one group over others, often stemming from biased training data.
Media Literacy: The ability to access, analyse, evaluate, create, and act using all forms of communication, particularly in the context of digital media and information.
Information Ecosystem: The complex network of information sources, creators, disseminators, and consumers, including how information flows and is perceived.

Watch Out for These Misconceptions

Common Misconception: AI-generated content is always accurate and unbiased.

What to Teach Instead

AI draws from flawed training data, inheriting human biases and errors. Hands-on analysis of sample texts helps students spot inconsistencies through peer comparison, building discernment skills.

Common Misconception: It's easy to distinguish AI from human media without tools.

What to Teach Instead

Sophisticated AI mimics human styles closely. Collaborative detection activities train students to note subtle cues like lack of nuance, fostering critical habits over reliance on gut feelings.

Common Misconception: Using AI in media has no ethical issues if results seem correct.

What to Teach Instead

Non-disclosure undermines trust and accountability. Role-plays simulate dilemmas, helping students weigh transparency through group debate and ethical frameworks.


Real-World Connections

  • Journalists at Reuters and the Associated Press are experimenting with AI tools to summarize reports, draft basic news articles, and identify trends, aiming to increase efficiency while maintaining editorial oversight.
  • Fact-checking organizations like PolitiFact and Snopes are developing new methods to detect AI-generated misinformation, including deepfakes and AI-written articles, to combat the spread of false narratives online.
  • Social media platforms like X (formerly Twitter) and Meta are grappling with policies for labeling AI-generated content, seeking to balance user expression with the need to prevent deceptive practices.

Assessment Ideas

Discussion Prompt

Pose the question: 'Imagine you read a news report about a local event, but it sounds too perfect, with no human errors or opinions. How would you verify its authenticity, and what steps would you take if you suspected it was AI-generated?' Facilitate a class discussion, guiding students to consider source checking, cross-referencing, and identifying AI hallmarks.

Quick Check

Provide students with two short news summaries, one clearly human-written and one subtly AI-generated. Ask them to identify which is which and list 2-3 specific textual clues that led them to their conclusion, such as unusual phrasing, lack of emotional depth, or repetitive sentence structures.

Exit Ticket

On an index card, have students write one potential benefit and one potential risk of AI in news reporting. Then, ask them to suggest one rule or guideline that news organizations should follow regarding the use of AI-generated content.

Frequently Asked Questions

How does this topic connect to Australian Curriculum English standards?
AC9E9LY01 requires analysing how language persuades in texts, which applies directly to the persuasive patterns of AI-generated content. AC9E9LY02 extends this to multimodal sources, like deepfakes, building students' abilities to evaluate credibility and intent in digital media.
What active learning strategies work best for teaching AI's media impact?
Debates and role-plays engage students in ethical dilemmas, while jigsaw activities distribute research for collaborative synthesis. Hands-on detection challenges with real samples sharpen observation skills. These methods make abstract risks tangible, encourage peer teaching, and mirror real-world media consumption for deeper retention.
How can teachers address student fears about AI replacing journalists?
Frame AI as a tool augmenting human strengths like empathy and verification. Use timelines to explore hybrid futures, backed by examples of journalists using AI ethically. Class discussions normalise concerns while highlighting critical thinking's enduring value.
What resources support teaching AI media literacy in Year 9?
Free tools such as Grok or ChatGPT can generate sample texts, while ABC News AI ethics guides and News Literacy Project modules support classroom discussion. For Australian context, ACARA exemplars and Screen Australia's digital resources address local media challenges, ensuring relevance.
