Computer Science · 11th Grade

Active learning ideas

Supervised Learning: Classification and Regression

Active learning works for this topic because students need concrete experience to grasp the difference between classification and regression. Handling real datasets—even small ones—builds intuition about when to predict categories versus numbers, and why the problem framing matters as much as the algorithm choice.

Standards: CSTA 3B-AP-09 · CSTA 3B-DA-07
20–50 min · Pairs → Whole Class · 4 activities

Activity 01

Problem-Based Learning · 20 min · Pairs

Card Sort: Classification vs. Regression

Give pairs a set of problem cards (predict tomorrow's high temperature, diagnose a tumor as benign or malignant, estimate a car's resale value, classify a review as positive or negative). Partners sort them into classification or regression and write a one-sentence justification for each. Debrief addresses any cards that prompted disagreement.
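If the class wants to see the distinction in code after the sort, a minimal scikit-learn sketch can help: the same features feed either a regressor or a classifier depending on how the target is framed. The car data below is invented for illustration, not taken from the cards.

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

# Features: [engine size in liters, car age in years] (invented values)
X = [[1.6, 2], [2.0, 5], [3.0, 1], [1.2, 10], [2.5, 7], [1.8, 3]]

# Regression framing: resale value in dollars (a continuous number)
y_price = [14000, 11000, 22000, 4000, 9000, 13000]
reg = LinearRegression().fit(X, y_price)

# Classification framing: passes inspection? (a category: 0 or 1)
y_pass = [1, 1, 1, 0, 0, 1]
clf = LogisticRegression().fit(X, y_pass)

print(reg.predict([[2.0, 4]]))  # a number on a continuous scale
print(clf.predict([[2.0, 4]]))  # a discrete label
```

The point to draw out: the features are identical; only the target changes the task type.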

Explain the difference between classification and regression tasks in supervised learning.

Facilitation Tip: During the Card Sort, give each pair a timer and require them to justify each card's placement aloud before moving to the next card.

What to look for: Present students with three scenarios: predicting if a customer will click an ad, estimating a car's fuel efficiency, and identifying a handwritten digit. Ask students to label each as either a classification or regression problem and briefly justify their choice.

Analyze · Evaluate · Create · Decision-Making · Self-Management · Relationship Skills

Activity 02

Problem-Based Learning · 35 min · Small Groups

Decision Tree Construction Activity

Provide groups with a small labeled dataset (e.g., 20 animals with features like size, diet, habitat) and ask them to build a decision tree by hand, choosing splits that best separate the classes. Groups compare their trees and discuss which features they chose and why. Connect to how algorithms like ID3 make these choices systematically.
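For teachers who want to mirror the hand-built trees in code during the debrief, a sketch along these lines can work; the tiny animal dataset is invented for illustration, and the entropy criterion is chosen to echo the ID3 connection.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [size (0=small, 1=large), diet (0=herbivore, 1=carnivore)]
X = [[0, 0], [0, 1], [1, 0], [1, 1], [0, 0], [1, 1]]
y = ["rodent", "cat", "deer", "bear", "rodent", "bear"]

# criterion="entropy" uses information gain, the quantity ID3 maximizes
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)

# Print the learned splits so groups can compare with their hand-drawn trees
print(export_text(tree, feature_names=["size", "diet"]))
```

Comparing the printed splits with the groups' hand-drawn trees makes the "systematic choice" point concrete.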

Analyze how algorithms like Decision Trees or Linear Regression make predictions.

Facilitation Tip: When constructing decision trees, have students start with a pencil and large paper so they can physically draw splits and revise without undoing digital work.

What to look for: Provide students with a small, pre-cleaned dataset (e.g., housing features and prices). Ask them to identify the target variable and state whether this is a classification or regression task. Then, have them write one sentence describing how a Decision Tree might approach this problem.

Analyze · Evaluate · Create · Decision-Making · Self-Management · Relationship Skills

Activity 03

Think-Pair-Share · 20 min · Pairs

Think-Pair-Share: When Does the Algorithm Fail?

Show students an example where a decision tree overfits a small dataset (perfect accuracy on training, poor on test). Ask partners to explain in their own words what went wrong and propose one fix. Share explanations with the class. This activity surfaces overfitting intuition before formally defining it.
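One way to generate the failure example is a deliberately noisy synthetic dataset; the sketch below uses invented parameter choices (sample size, noise level, split) and should show perfect training accuracy alongside a visibly lower test score.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Small dataset with 30% label noise, so memorization cannot generalize
X, y = make_classification(n_samples=60, n_features=5, flip_y=0.3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=0)

# An unconstrained tree grows until it fits every training point
deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("train accuracy:", deep.score(X_tr, y_tr))
print("test accuracy: ", deep.score(X_te, y_te))
```

Showing only these two numbers, then asking "what went wrong?", sets up the pair discussion before the term overfitting is introduced.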

Construct a simple supervised learning model using a given dataset.

Facilitation Tip: For the Think-Pair-Share on failure modes, assign the 'fail' role to one partner so both students actively look for weaknesses rather than only successes.

What to look for: Facilitate a class discussion: 'Imagine you are building a model to predict whether a student will pass a course. What kind of data would you need? Would this be a classification or regression problem? What are the potential ethical considerations if your model is biased?'

Understand · Apply · Analyze · Self-Awareness · Relationship Skills

Activity 04

Problem-Based Learning · 50 min · Pairs

Live Coding: Sklearn Supervised Model

Students follow along building a simple classification or regression model using scikit-learn on a provided dataset (e.g., iris flowers or housing prices). At three points, the instructor pauses and students predict what the next line of output will be before it runs. Pairs discuss predictions, then see the result. Debrief covers what the metrics mean.
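A possible skeleton for the live-coding session, assuming the iris option from the activity; the three pause points are marked as comments, and the 70/30 split and depth-3 tree are illustrative choices, not prescribed by the brief.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42)

model = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_tr, y_tr)

# Pause 1: "How many flowers are in the test set?"
print(len(X_te))

# Pause 2: "Which species will the model predict for the first test flower?"
print(model.predict(X_te[:1]))

# Pause 3: "Will test accuracy land above or below 90%?"
print(accuracy_score(y_te, model.predict(X_te)))
```

Each pause is a prediction moment: pairs commit to an answer before the line runs, then discuss any surprises.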

Explain the difference between classification and regression tasks in supervised learning.

Facilitation Tip: During the live coding session, pause after every few lines of code and ask students to predict what the next line will do before revealing it.

What to look for: Present students with three scenarios: predicting if a customer will click an ad, estimating a car's fuel efficiency, and identifying a handwritten digit. Ask students to label each as either a classification or regression problem and briefly justify their choice.

Analyze · Evaluate · Create · Decision-Making · Self-Management · Relationship Skills

A few notes on teaching this unit

Teachers approach this topic by anchoring every concept in a tangible decision students can visualize or code. Avoid abstract lectures about bias-variance tradeoffs; instead, show how a small change in tree depth affects accuracy on a test set. Research suggests that students grasp supervised learning faster when they alternate between building models by hand and debugging code, so balance unplugged activities with live coding. Warn students that accuracy alone can be misleading, and that model evaluation should always include a held-out test set.
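To make the tree-depth point concrete in class, a short depth sweep is one option; the dataset and split below are placeholder choices, not part of the brief.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Sweep depth and compare training vs test accuracy side by side
for depth in (1, 3, 5, None):
    t = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(depth,
          round(t.score(X_tr, y_tr), 3),   # training accuracy
          round(t.score(X_te, y_te), 3))   # test accuracy
```

The gap between the two columns as depth grows is the entire bias-variance lecture, shown rather than told.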

Successful learning looks like students confidently labeling tasks as classification or regression, constructing decision boundaries with clear rules, identifying failure modes in their own models, and writing code that trains and evaluates a supervised learner. They should also articulate why simple models often outperform complex ones on small data.


Watch Out for These Misconceptions

  • During the Card Sort activity, watch for students who assume all numeric outcomes are regression problems without considering categorical targets disguised as numbers.

    Have them revisit the sorted cards and compare examples like ‘customer churn (yes/no)’ versus ‘daily sales in dollars’; prompt them to explain why one is classification and the other regression using the card labels.

  • During the Decision Tree Construction Activity, watch for students who build overly deep trees to reach perfect training accuracy.

    Stop the group after 10 minutes and ask: ‘Does your tree make sense to a human?’ Have them prune the tree to three splits and evaluate on a small test set to see performance drop, then discuss overfitting.

  • During the Think-Pair-Share discussion on failure modes, watch for students who blame data quality rather than model choice or evaluation practices.

    Guide them back to the test set they created earlier and ask: ‘If the test error is high, is it the data or the way we measured success? What could we change—data, algorithm, or evaluation method?’


Methods used in this brief