Computer Science · 11th Grade · Artificial Intelligence and Ethics · Weeks 19-27

AI and Privacy Concerns

Examining how AI systems collect, process, and potentially compromise personal data.

Standards: CSTA 3B-IC-24 · CSTA 3B-IC-25

About This Topic

AI systems are built on data, and data about people raises urgent questions about privacy. This topic examines how AI technologies collect, aggregate, and analyze personal information in ways that individuals rarely understand and often cannot opt out of. In the US context, students can examine specific laws like COPPA (protecting children under 13), FERPA (protecting student records), and the emerging patchwork of state privacy laws like California's CCPA. The gaps between legal protection and actual data practice are themselves a subject for analysis.

Surveillance capitalism is a key concept here: the business model of collecting user data at scale, building behavioral profiles, and selling predictive products to advertisers or other buyers. Students learn that when a product or service appears free, their data is frequently the product. AI amplifies this model by enabling far more sophisticated inference from behavioral signals than was previously possible.

Active learning is especially productive for this topic because students have firsthand experience with the systems under analysis. Asking students to audit their own app permissions, trace data flows from a single app through multiple third-party services, or draft a privacy policy for a hypothetical app makes abstract concepts immediate and personally relevant.

Key Questions

  1. How can AI technologies impact individual privacy and data security?
  2. What is 'surveillance capitalism', and what role does AI play in it?
  3. How effective are current privacy regulations, and what new safeguards would better protect privacy in an AI-driven world?

Learning Objectives

  • Analyze the methods AI systems use to collect, aggregate, and process personal data.
  • Explain the business model of surveillance capitalism and its reliance on AI for behavioral profiling.
  • Critique the effectiveness of current US privacy regulations (e.g., COPPA, CCPA) in the context of AI data collection.
  • Design a set of privacy safeguards for a hypothetical AI-driven application.

Before You Start

Introduction to Artificial Intelligence

Why: Students need a foundational understanding of what AI is and how it functions before exploring its ethical implications.

Data Collection and Representation

Why: Understanding how data is gathered and organized is essential for grasping how AI systems utilize personal information.

Key Vocabulary

Personal Data: Information that can be used to identify, locate, or contact an individual, including online identifiers, location data, and biometric information.
Surveillance Capitalism: An economic system centered on the commodification of personal data, in which companies collect vast amounts of user information to predict and influence behavior for profit.
Behavioral Profiling: The process of creating detailed profiles of individuals based on their online activities, preferences, and behaviors, often used for targeted advertising and other purposes.
Data Aggregation: The process of collecting and combining data from various sources into a single, unified view, often used by AI systems to build comprehensive user profiles.
Algorithmic Bias: Systematic and repeatable errors in an AI system that create unfair outcomes, such as privileging one arbitrary group of users over others.
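A short classroom sketch can make data aggregation concrete. The snippet below combines records from three hypothetical data sources, keyed by the same email address, into one unified profile; all names, fields, and values are invented for discussion.

```python
from collections import defaultdict

# Invented records from three hypothetical "sources", keyed by one email.
shopping = {"pat@example.com": {"purchases": ["prenatal vitamins", "unscented lotion"]}}
fitness = {"pat@example.com": {"avg_sleep_hours": 5.2}}
location = {"pat@example.com": {"frequent_places": ["clinic", "pharmacy"]}}

def aggregate(*sources):
    """Merge per-user records from multiple sources into one profile."""
    profiles = defaultdict(dict)
    for source in sources:
        for user, fields in source.items():
            profiles[user].update(fields)
    return dict(profiles)

profile = aggregate(shopping, fitness, location)
print(profile["pat@example.com"])
```

Each field is fairly innocuous on its own; the combined profile is what supports sensitive inferences, which is the point students should take away.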

Watch Out for These Misconceptions

Common Misconception: If I have nothing to hide, I have nothing to fear from data collection.

What to Teach Instead

Privacy is not about hiding wrongdoing; it is about autonomy, the ability to control your own information and narrative. Data that seems harmless in isolation can be combined with other data to reveal sensitive information about health, finances, or political views.

Common Misconception: Laws like COPPA fully protect students from privacy violations by tech companies.

What to Teach Instead

COPPA applies only to children under 13 and to apps that knowingly target them. Enforcement against social media platforms is inconsistent, and 11th-graders fall entirely outside COPPA's scope. App permission audits help students see the practical limits of legal protection.

Common Misconception: Using private browsing or a VPN makes you anonymous online.

What to Teach Instead

Private browsing prevents local storage of history but does not prevent websites or advertisers from recording activity. A VPN hides your IP address from the sites you visit and your traffic from your ISP, but the VPN provider can see that traffic, and logged-in accounts still identify you directly. Device fingerprinting, login-based tracking, and cross-site cookies remain effective even with basic privacy tools enabled.
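Device fingerprinting can be demonstrated with a toy sketch: hashing attributes a browser reveals in ordinary requests yields a stable identifier with no cookies involved. The attribute names and values below are invented for illustration.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash a canonical, sorted attribute string into a stable identifier."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical attributes a browser exposes in ordinary requests.
device = {
    "user_agent": "Mozilla/5.0 ...",
    "screen": "2560x1440",
    "timezone": "America/Chicago",
    "fonts": "Arial,Helvetica,Times",
}
print(fingerprint(device))  # Same device, same ID; no cookies required.
```

Students can verify that the identifier is deterministic for the same device and changes when any single attribute changes, which is why clearing cookies alone does not defeat this kind of tracking.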


Real-World Connections

  • Social media platforms like Meta (Facebook, Instagram) and TikTok employ AI to analyze user interactions, likes, and shares to build detailed profiles for targeted advertising, raising significant privacy concerns for their billions of users.
  • Smart home devices, such as Amazon Echo and Google Nest, continuously collect audio data and user habits, which are then processed by AI to provide services and personalize experiences, creating potential privacy risks if data is misused or breached.
  • The development of facial recognition technology by companies like Clearview AI, which scrapes public photos to build a massive database, highlights the tension between security applications and individual privacy rights.

Assessment Ideas

Discussion Prompt

Facilitate a class debate using the prompt: 'Should companies be allowed to collect and sell user data for AI training if they provide a free service?' Ask students to support their arguments with specific examples of AI applications and privacy implications.

Quick Check

Present students with a scenario describing a new AI-powered app (e.g., a personalized news aggregator). Ask them to identify 2-3 types of personal data the app might collect, 1 potential privacy risk associated with that data, and 1 specific privacy safeguard they would recommend.

Exit Ticket

On an index card, have students define 'surveillance capitalism' in their own words and provide one example of how AI amplifies this business model. They should also list one current US privacy law and briefly explain its limitation regarding AI data collection.

Frequently Asked Questions

What is surveillance capitalism?
Surveillance capitalism, a term popularized by Shoshana Zuboff, describes the business model of collecting human behavioral data at scale, using AI to build predictive models of future behavior, and selling those predictions to advertisers and other buyers. The goal is influencing behavior, not just observing it.
What US laws protect privacy in the context of AI?
Key US privacy laws include COPPA (children under 13), FERPA (student education records), HIPAA (health information), and the California Consumer Privacy Act. No single comprehensive federal AI privacy law currently exists, leaving protection fragmented by sector and state.
How does AI make privacy risks worse compared to earlier data collection?
AI enables inference of sensitive attributes from non-sensitive data: location history can reveal religious practice, purchase history can predict pregnancy, and social network data can infer sexual orientation. These inferences are possible even when the sensitive information was never explicitly shared.
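A minimal rule-based sketch can show students how inference from non-sensitive data works. The location history, place labels, and threshold below are all invented; real systems use far more sophisticated models, but the logic is the same.

```python
from collections import Counter

# Invented (weekday, place) pairs from a hypothetical location history.
pings = [
    ("Sun", "house of worship"), ("Sun", "house of worship"),
    ("Sun", "house of worship"), ("Mon", "school"), ("Tue", "gym"),
]

# Count where this person is on Sundays; a repeated pattern supports
# an inference about religious practice that was never explicitly shared.
sunday_places = Counter(place for day, place in pings if day == "Sun")
place, visits = sunday_places.most_common(1)[0]
if visits >= 3:
    print(f"Pattern: visits '{place}' most Sundays")
```

The point for discussion: no single ping is sensitive, but the aggregate pattern is, which is why consent framed around individual data points understates the actual privacy risk.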
How does active learning help students understand AI privacy risks?
Personal data audits and policy drafting exercises make abstract risks concrete because students examine systems they actually use. When students trace their own data through app permission screens or draft a privacy policy from scratch, the gap between legal language and actual data practice becomes immediately visible.