Computer Science · 9th Grade

Active learning ideas

Supervised and Unsupervised Learning

Active sorting, discussion, and role-play make the abstract difference between labeled and unlabeled data concrete for 9th graders. When students physically handle cards or act out training scenarios, they move from memorizing definitions to seeing how data shapes a model’s task. These kinesthetic and social experiences build durable understanding that passive listening cannot.

Standards: CSTA 3A-AP-13
20–30 min · Pairs → Whole Class · 4 activities

Activity 01

Stations Rotation · 25 min · Small Groups

Sorting Activity: Label or No Label?

Give groups two sets of cards: one set has images of animals with labels, one set has images without labels. Groups first use the labeled set to learn a classification rule, then use the unlabeled set to find their own groupings. Class compares the two approaches and identifies what was harder and easier in each.
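For teachers who want to see the same contrast in code, here is a minimal Python sketch. The feature values and animal cards are invented for illustration; this is not part of the student activity.

```python
# Toy "animal cards": each card is described by (legs, has_feathers).
# Labeled set: the cards come with answers, like the first card deck.
labeled = [
    ((2, 1), "bird"),
    ((2, 1), "bird"),
    ((4, 0), "mammal"),
    ((4, 0), "mammal"),
]

# Supervised: use the labels to classify a new card by copying the
# label of its nearest labeled example (1-nearest-neighbor).
def classify(card):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled, key=lambda ex: dist(ex[0], card))
    return nearest[1]

# Unsupervised: the second deck has no labels, so all we can do is
# group cards that look alike and see what clusters emerge.
unlabeled = [(2, 1), (4, 0), (2, 1), (4, 0), (4, 0)]
groups = {}
for card in unlabeled:
    groups.setdefault(card, []).append(card)

print(classify((2, 1)))  # -> "bird" (the label came from the teacher)
print(len(groups))       # -> 2 groups, but with no names attached
```

The key point mirrors the card activity: the supervised path produces named answers because someone supplied labels, while the unsupervised path only discovers structure and leaves the naming to a human.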

Differentiate between supervised and unsupervised learning paradigms.

Facilitation Tip: During the sorting activity, circulate and ask each pair to justify one of their placements aloud so thinking becomes public and audible.

What to look for: Present students with scenarios: 'A system that identifies pictures of cats and dogs' and 'A system that groups news articles by topic'. Ask them to write 'S' for supervised or 'U' for unsupervised next to each, and briefly explain why.

Remember · Understand · Apply · Analyze · Self-Management · Relationship Skills

Activity 02

Think-Pair-Share · 20 min · Pairs

Think-Pair-Share: Real-World Application Matching

Present 8–10 real AI applications (spam filter, Netflix recommendations, medical diagnosis, market segmentation, fraud detection). Students individually sort each as supervised or unsupervised, then compare with a partner. Pairs that disagreed share their reasoning with the class.

Explain the role of training data in supervised learning models.

Facilitation Tip: During the matching task, provide a sentence stem on the board such as 'This task is supervised because ______' to scaffold early explanations.

What to look for: Facilitate a class discussion: 'Imagine you have a dataset of customer purchase histories. How could you use supervised learning to predict future purchases? How could you use unsupervised learning to discover new customer segments?'

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Activity 03

Stations Rotation · 30 min · Pairs

Role-Play: Human as Training Data

One student plays a learning algorithm, one plays the teacher. The teacher shows 10 labeled examples (index cards with drawings and labels), then tests the algorithm on 5 unlabeled examples. Debrief: what made a good training example? What confused the algorithm? Connect to how real models fail when training data is limited or biased.
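The debrief point about limited or biased training data can be made concrete with a short sketch. The weights and animals below are invented for illustration; the "algorithm" simply memorizes labeled cards and answers with the label of the nearest example, much like the student playing the model.

```python
# Biased training data: the "teacher" only showed small dogs and
# (unrealistically) only large cats. Weights are in kg, invented.
training = [
    (5, "dog"), (7, "dog"),
    (20, "cat"), (25, "cat"),
]

# The "algorithm" answers with the label of the nearest training card.
def predict(weight):
    return min(training, key=lambda ex: abs(ex[0] - weight))[1]

# A 30 kg dog gets labeled "cat": the model learned "heavy means cat"
# because the examples were biased, not because the rule was silly.
print(predict(30))  # -> "cat"
print(predict(6))   # -> "dog"
```

This is the same failure the role-play surfaces: a confident wrong answer that traces back to what the teacher chose to show, not to the learner.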

Predict appropriate applications for each type of machine learning.

Facilitation Tip: In the role-play, remind students to exaggerate their feedback (e.g., an enthusiastic thumbs-up or head-shake) to make the supervised feedback loop visible to observers.

What to look for: On an index card, have students define 'training data' in their own words and provide one example of a real-world application that relies heavily on it.

Remember · Understand · Apply · Analyze · Self-Management · Relationship Skills

Activity 04

Stations Rotation · 30 min · Small Groups

Case Study Discussion: When Labels Are Not Available

Groups receive a short scenario where collecting labeled data is expensive or impossible (e.g., rare disease detection, archival document clustering, social network anomaly detection). Groups decide whether supervised or unsupervised learning fits and explain the trade-offs. Each group presents their reasoning in two minutes.

Differentiate between supervised and unsupervised learning paradigms.

Facilitation Tip: For the case study discussion, cold-call one student to summarize the previous group's point before opening the floor to new ideas to keep everyone accountable.

What to look for: Present students with scenarios: 'A system that identifies pictures of cats and dogs' and 'A system that groups news articles by topic'. Ask them to write 'S' for supervised or 'U' for unsupervised next to each, and briefly explain why.

Remember · Understand · Apply · Analyze · Self-Management · Relationship Skills

A few notes on teaching this unit

Teachers should anchor the lesson in experiences students already understand, such as a teacher grading papers (supervised) versus a detective finding patterns in unsorted evidence (unsupervised). Avoid diving into algorithmic details too early; instead, focus on the purpose of the task and the role of labels. Research on contrasting cases suggests that presenting extremes first (fully labeled vs. no labels) helps students build precise mental models before adding complexity.

Students will confidently label new datasets as supervised or unsupervised and explain the role of labels in each. They will connect real-world tasks to the correct paradigm and recognize when labels are absent or unnecessary. Success looks like clear reasoning, not just correct answers.


Watch Out for These Misconceptions

  • During Sorting Activity: Label or No Label?, watch for students who equate more data with better accuracy in all cases.

    While sorting, hand a pair a set of animal pictures with only two labeled correctly and ask them to reflect: ‘If a supervised model trained on this set made a confident mistake, what went wrong?’ This redirects focus from quantity to quality and representation.

  • During Role-Play: Human as Training Data, listen for students who assume supervised learning always needs 100% perfect labels.

    Have the ‘teacher’ give intentionally noisy feedback (e.g., occasional thumbs-down on correct answers) and then ask the ‘model’ how its confidence changes after each example, making the impact of imperfect labels visible.

  • During Think-Pair-Share: Real-World Application Matching, notice students who claim unsupervised learning is less accurate than supervised learning.

    During the pair phase, give one group a dataset with clear clusters and another group the same data with shuffled labels. Challenge them to explain which task allows a meaningful accuracy metric and why.


Methods used in this brief