Activity 01
Demo: Supervised Image Classifier
Provide printed animal images; students label half as training data, then use those examples to sort the remaining images as a test set. Groups discuss matches and 'retrain' by adding more labeled examples, recording accuracy before and after.
Objective: Explain how a machine 'learns' from data without explicit programming.
Facilitation Tip: During the Supervised Image Classifier demo, show students the exact parameters the algorithm adjusts so they connect the math to the visual output.
What to look for: Provide students with three scenarios: 1) identifying spam emails (labeled), 2) grouping news articles by topic (unlabeled), 3) predicting house prices (labeled). Ask students to write which type of learning (supervised or unsupervised) would be best for each and briefly explain why.
Understand · Apply · Analyze · Self-Management · Self-Awareness
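For classes ready to see the same idea on a screen after the paper demo, the sketch below is a minimal nearest-neighbor classifier in Python. Everything in it is invented for illustration: the two features (snout length, weight) stand in for the printed images, and all values are made up; treat it as a sketch of the concept, not a prescribed implementation.

    # Toy supervised classifier mirroring the card-sorting demo.
    # Each 'image' is reduced to two invented features: (snout_cm, weight_lb).

    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def predict(train, features):
        # 1-nearest neighbor: copy the label of the closest training example
        return min(train, key=lambda ex: squared_distance(ex[0], features))[1]

    def accuracy(train, test):
        return sum(predict(train, f) == label for f, label in test) / len(test)

    train = [((4, 30), "cat"), ((5, 25), "cat"), ((9, 60), "dog")]
    test = [((4, 28), "cat"), ((6, 40), "dog"), ((10, 70), "dog")]

    print("before retraining:", accuracy(train, test))   # 2 of 3 correct

    # 'Retrain' exactly as the groups do: add more labeled examples.
    train += [((7, 45), "dog"), ((3, 22), "cat")]
    print("after retraining:", accuracy(train, test))    # 3 of 3 correct

Printing accuracy before and after the extra examples mirrors the chart the groups record on paper.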
Activity 02
Hands-on: Unsupervised Clustering
Give students unlabeled data cards with customer purchase traits. In pairs, they group the cards into clusters without prior labels, then compare their groupings to a 'model' output and reflect on the patterns they found.
Objective: Differentiate between supervised and unsupervised learning with simple examples.
Facilitation Tip: During the Unsupervised Clustering activity, ask groups to compare their clusters to a peer group's, highlighting how different starting points affect results.
What to look for: Present students with a simple dataset, perhaps a list of fruits with their colors and sizes. Ask them to imagine training a model to identify apples. What kind of data would they need (labeled or unlabeled)? What would be the 'label' for supervised learning? How might they evaluate whether the model is 'learning' well?
Understand · Apply · Analyze · Self-Management · Self-Awareness
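As an optional digital extension, the sketch below replays the card grouping with a minimal k-means-style loop in Python. The two purchase traits (visits per month, average spend) and all numbers are invented; note that nothing in the data carries a label.

    # Toy clustering pass mirroring the card-grouping activity.
    # Each customer is (visits_per_month, avg_spend); values are invented.

    customers = [(2, 15), (3, 20), (2, 18), (9, 80), (10, 95), (8, 85)]

    def nearest_center(point, centers):
        return min(range(len(centers)),
                   key=lambda i: sum((p - c) ** 2
                                     for p, c in zip(point, centers[i])))

    centers = [customers[0], customers[3]]   # starting guesses matter
    for _ in range(5):                       # a few refinement rounds
        clusters = [[], []]
        for customer in customers:
            clusters[nearest_center(customer, centers)].append(customer)
        centers = [tuple(sum(v) / len(cluster) for v in zip(*cluster))
                   for cluster in clusters]

    print(clusters)   # light spenders vs. heavy spenders, found without labels

Changing the two starting centers and rerunning echoes the facilitation tip: different starting points can produce different groupings.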
Activity 03
Challenge: Data Quality Impact
Distribute biased and balanced datasets for predicting fruit ripeness. Small groups train simple paper models, test predictions, then swap datasets to observe how performance drops. Chart the results class-wide.
Objective: Predict how the quality and quantity of training data impact a machine learning model's performance.
Facilitation Tip: During the Data Quality Impact challenge, provide a dataset with duplicates to demonstrate how noise disrupts pattern detection.
What to look for: Pose the question: 'If you were building a system to recommend music, would you use supervised or unsupervised learning? What are the pros and cons of each for this specific task?' Encourage students to consider the type of data available and the desired outcome.
Remember · Understand · Analyze · Self-Management · Relationship Skills
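To chart the same effect digitally, the sketch below trains one simple nearest-neighbor model twice: once on clean labels and once with a single mislabeled example. The ripeness features (days since picking, softness score) and all values are invented for illustration.

    # Toy demonstration: one model, trained on clean vs. noisy labels.
    # Features are invented: (days_since_picking, softness_score).

    def predict(train, features):
        return min(train, key=lambda ex: sum((a - b) ** 2
                                             for a, b in zip(ex[0], features)))[1]

    clean = [((1, 2), "unripe"), ((2, 3), "unripe"),
             ((6, 8), "ripe"), ((7, 9), "ripe")]
    noisy = list(clean)
    noisy[2] = ((6, 8), "unripe")   # one mislabeled card poisons the pattern

    test = [((2, 2), "unripe"), ((6, 9), "ripe"), ((7, 8), "ripe")]

    for name, train in [("clean", clean), ("noisy", noisy)]:
        correct = sum(predict(train, f) == label for f, label in test)
        print(name, "accuracy:", correct, "of", len(test))

One flipped label out of four training examples is enough to drop accuracy here: the class-wide chart in miniature.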
Activity 04
Whole Class: Prediction Relay
Project a simple ML flowchart; in relay fashion, teams call out training-data examples, predict outputs, and vote on model improvements. Adjust the model based on class feedback.
Objective: Explain how a machine 'learns' from data without explicit programming.
Facilitation Tip: During the Prediction Relay, rotate roles so every student plays both predictor and verifier, reinforcing the feedback loop.
What to look for: Use the same check as Activity 01: students sort the three scenarios (spam filtering, news-topic grouping, house-price prediction) into supervised or unsupervised learning and briefly justify each choice.
Understand · Apply · Analyze · Self-Management · Self-Awareness
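The relay's predict-check-adjust cycle can be shown literally with a one-parameter model. In this illustrative sketch the 'model' is a single weight threshold that gets nudged after every wrong guess; the animal weights and step size are invented.

    # Toy feedback loop matching the relay: predict, check, adjust.
    # The whole 'model' is one number: a weight threshold for cat vs. dog.

    examples = [(3, "cat"), (5, "cat"), (30, "dog"), (25, "dog"), (8, "cat")]

    threshold = 50.0                          # a deliberately bad first guess
    for _ in range(20):                       # several trips around the relay
        for weight, label in examples:
            guess = "dog" if weight > threshold else "cat"
            if guess != label:                # the class's feedback
                threshold += 2 if label == "cat" else -2   # nudge the parameter

    print("learned threshold:", threshold)    # settles between cats and dogs

This also answers the Activity 01 facilitation tip concretely: the threshold is the exact parameter the algorithm adjusts.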
A few notes on teaching this unit
Teachers should focus on the role of data first, then introduce algorithms as tools that optimize based on examples. Avoid starting with code or complex math; instead, use sorting, grouping, and labeling tasks to build intuition. Research shows that students grasp supervised learning faster when they physically tag data, while unsupervised learning clicks when they see how grouping emerges without prior labels. Keep explanations grounded in concrete examples before abstracting.
Successful learning looks like students correctly labeling data for supervised tasks, identifying groupings in unsupervised sets, and articulating why data quality matters. They should explain the difference between labeled and unlabeled data and justify their choices with evidence from their activities.
Watch Out for These Misconceptions
During the Supervised Image Classifier demo, listen for students saying the computer 'understands' cats the way humans do.
Redirect by asking students to trace the algorithm’s steps: it counts pixel patterns, not meanings. Have them compare their own labeling process to the computer’s, highlighting the difference between human intuition and pattern matching.
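For teachers who want to make that redirect concrete in code, the sketch below is a working 'cat detector' that is nothing but arithmetic over pixel counts. The 4x4 grids of 0/1 pixels and the dark-pixel heuristic are invented stand-ins for real images.

    # The redirect, made literal: a classifier that only counts pixels.

    def dark_ratio(image):
        return sum(map(sum, image)) / (len(image) * len(image[0]))

    cat_images = [[[0, 1, 1, 0],
                   [1, 1, 1, 1],
                   [0, 1, 1, 0],
                   [0, 1, 1, 0]]]
    dog_images = [[[1, 1, 1, 1],
                   [1, 1, 1, 1],
                   [1, 0, 0, 1],
                   [1, 1, 1, 1]]]

    cat_avg = sum(dark_ratio(i) for i in cat_images) / len(cat_images)
    dog_avg = sum(dark_ratio(i) for i in dog_images) / len(dog_images)

    def classify(image):
        r = dark_ratio(image)
        return "cat" if abs(r - cat_avg) <= abs(r - dog_avg) else "dog"

    print(classify([[0, 1, 1, 0],
                    [1, 1, 1, 1],
                    [0, 1, 1, 0],
                    [1, 1, 0, 0]]))   # 'cat' -- from pixel counts, not meaning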
During the Data Quality Impact challenge, watch for students assuming more data always improves results.
Ask groups to test a clean dataset, then a noisy one, and measure error rates. Have them present findings to the class, showing how duplicates or mislabels degrade performance.
During the Unsupervised Clustering activity, watch for students who think the algorithm needs no data to find patterns.
Have students physically shuffle cards and observe how groupings emerge only after data is introduced. Ask them to explain why the algorithm needs unlabeled data to self-organize.
Methods used in this brief