Technologies · Year 10

Active learning ideas

Machine Learning and Predictive Modeling

Active learning works for machine learning because students need to experience the gap between human intuition and algorithmic reasoning firsthand. When they train models themselves, students confront misconceptions immediately through concrete results, not abstract explanations.

ACARA Content Descriptions: AC9DT10K01, AC9DT10P02
25–45 min · Pairs → Whole Class · 4 activities

Activity 01

Simulation Game · 30 min · Pairs

Pairs Activity: Bias Detection in Datasets

Provide pairs with a sample hiring dataset showing gender bias in promotions. Students chart patterns, calculate error rates for subgroups, and propose balanced alternatives. Pairs present findings to the class.
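The subgroup error-rate calculation can be modelled for students in a few lines of plain Python; the records, field names, and rates below are invented for illustration, and a spreadsheet works just as well:

```python
# Hypothetical hiring records: (group, model_prediction, actual_outcome).
# All values are made up to illustrate the per-group calculation.
records = [
    ("male",   "promote", "promote"),
    ("male",   "promote", "promote"),
    ("male",   "reject",  "promote"),
    ("female", "reject",  "promote"),
    ("female", "reject",  "promote"),
    ("female", "promote", "promote"),
]

def error_rate(records, group):
    """Fraction of predictions that disagree with the actual outcome."""
    subset = [(pred, actual) for g, pred, actual in records if g == group]
    errors = sum(1 for pred, actual in subset if pred != actual)
    return errors / len(subset)

print(f"male error rate:   {error_rate(records, 'male'):.2f}")    # 0.33
print(f"female error rate: {error_rate(records, 'female'):.2f}")  # 0.67
```

Pairs can then discuss why the model errs twice as often for one group and trace the answer back to the imbalance in the training data.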

How does biased training data lead to discriminatory algorithmic outcomes?

Facilitation Tip: During the Pair Activity on Bias Detection, assign each pair one dataset variant so they compare outputs and argue which skew causes the most unfair classifications.

What to look for: Pose the question: 'Imagine a hiring algorithm that consistently favors male candidates. What steps could a developer take to identify and correct the bias in the training data or the algorithm itself?' Facilitate a class discussion on potential solutions.

Apply · Analyze · Evaluate · Create · Social Awareness · Decision-Making

Activity 02

Simulation Game · 45 min · Small Groups

Small Groups: Train Your Own Classifier

Using Teachable Machine or Scratch extensions, groups gather images or text data for categories like fruits or sentiments. They train models, test on new data, and log accuracy drops from poor training sets. Groups swap models to evaluate.
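For groups who want to quantify the accuracy drop outside Teachable Machine, here is a stdlib-only sketch using a 1-nearest-neighbour classifier as a stand-in; every feature, label, and point is invented for illustration:

```python
# Toy "fruit" features: (size, redness). A model trained only on red apples
# (narrow set) misclassifies a green apple that a broader set handles fine.

def predict(train, point):
    """Label of the closest training example (1-nearest-neighbour)."""
    nearest = min(
        train,
        key=lambda ex: (ex[0][0] - point[0]) ** 2 + (ex[0][1] - point[1]) ** 2,
    )
    return nearest[1]

def accuracy(train, tests):
    """Fraction of test points the model labels correctly."""
    return sum(predict(train, p) == label for p, label in tests) / len(tests)

broad  = [((3, 9), "apple"), ((8, 4), "apple"), ((2, 2), "lime"), ((3, 3), "lime")]
narrow = [((3, 9), "apple"), ((2, 2), "lime")]   # only red apples photographed

tests = [((3, 8), "apple"), ((8, 3), "apple"), ((2, 1), "lime"), ((3, 2), "lime")]

print("broad training set accuracy:", accuracy(broad, tests))    # 1.0
print("narrow training set accuracy:", accuracy(narrow, tests))  # 0.75
```

The narrow model fails on exactly the kind of example it never saw, which is the pattern groups should log when they swap models.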

What are the limitations of using historical data to predict future behavior?

Facilitation Tip: When students Train Your Own Classifier in small groups, circulate and ask: 'What would happen if this feature disappeared? How would your prediction change?' to push critical analysis.

What to look for: Ask students to write down one example of a prediction an algorithm might make (e.g., predicting movie preferences). Then, have them list one potential limitation or ethical concern related to that prediction.

Apply · Analyze · Evaluate · Create · Social Awareness · Decision-Making

Activity 03

Simulation Game · 35 min · Whole Class

Whole Class: Ethical Algorithm Debate

Divide the class into teams to argue for or against the proposition 'Algorithms can make ethical decisions alone.' Teams prepare with case studies such as facial recognition errors, then debate using evidence from the prior activities. End with a vote and reflection.

Is an algorithm capable of making an ethical decision without human intervention?

Facilitation Tip: For the Ethical Algorithm Debate, give students exactly 60 seconds to prepare their opening point after the prompt is revealed to sharpen concise reasoning under time pressure.

What to look for: Present students with two short descriptions of datasets: one that is diverse and representative, and another that is skewed towards a particular demographic. Ask them to identify which dataset is more likely to lead to a fair predictive model and explain why.

Apply · Analyze · Evaluate · Create · Social Awareness · Decision-Making

Activity 04

Simulation Game · 25 min · Individual

Individual: Prediction Journal

Students select a personal dataset like sports scores, build a simple prediction rule, test on new data, and journal limitations like overfitting. Share key insights in a class gallery walk.
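A journal entry can be backed by a tiny experiment like the following sketch (all scores invented): a rule that memorises past results looks perfect on the data it came from but transfers poorly, while a cruder averaging rule generalises better.

```python
# Invented sports scores: the "rule" is built from past games only.
past = [18, 22, 20, 24, 16]   # points scored in past games
new  = [21, 19, 23]           # games the rule has never seen

def memorise_rule(history):
    """Overfit rule: repeat each past score exactly; fall back to the last one."""
    def predict(i):
        return history[i] if i < len(history) else history[-1]
    return predict

def average_rule(history):
    """Crude rule: always predict the historical average."""
    mean = sum(history) / len(history)
    return lambda i: mean

def error(predict, offset, actual):
    """Mean absolute error of predictions against a block of actual scores."""
    return sum(abs(predict(offset + i) - a) for i, a in enumerate(actual)) / len(actual)

mem = memorise_rule(past)
avg = average_rule(past)
print("memorise rule, past games:", error(mem, 0, past))          # 0.0, looks perfect
print("memorise rule, new games:", error(mem, len(past), new))
print("average rule, new games:", error(avg, len(past), new))
```

The memorising rule's zero error on past data is exactly the illusion of overfitting that the journal asks students to record.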

Why do prediction rules built from past data often fail on new data?

Facilitation Tip: In the Prediction Journal, require a visual sketch of one failed prediction to make abstract errors visible and discussable.

What to look for: Check that journal entries name a specific limitation, such as overfitting to a small sample, and explain why the rule failed on new data rather than simply recording the miss.

Apply · Analyze · Evaluate · Create · Social Awareness · Decision-Making

A few notes on teaching this unit

Teach this topic by making students confront model failures early and often. Avoid starting with definitions of supervised learning; instead, let them work with training data first. Confronting misconceptions through active testing embeds more durable understanding than lectures alone. Focus on small, iterative steps: collect data, train, test, reflect, then revise.

Successful learning looks like students recognizing the limits of raw data, questioning model outcomes, and proposing ethical fixes. They articulate why bias persists, how predictions fail, and what fairness requires beyond technical accuracy.


Watch Out for These Misconceptions

  • During the Train Your Own Classifier activity, watch for students assuming the algorithm understands meaning like a human does.

    As groups train models, have them intentionally add nonsense features (e.g., ‘number of vowels in email’) and observe how the model still uses them. Then prompt: ‘Why did the model treat this as important? What does this show about how algorithms learn?’

  • During the Bias Detection in Datasets activity, watch for students believing that neutral data produces neutral models.

    Pairs analyze a loan dataset with missing demographic labels filled in. They recalculate predictions after balancing gender representation, then present how fairness scores changed. Ask: ‘Did the data alone fix the problem? What else matters?’

  • During the Prediction Journal activity, watch for students assuming more data always improves predictions.

    Students select one prediction (e.g., house price) and graph accuracy versus dataset size. They must explain why accuracy drops when noisy data is added, connecting to real limits like historical redlining data.
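The accuracy drop in that last bullet can be demonstrated with a toy predictor; all prices and records below are invented for illustration:

```python
# A predictor that always outputs the training-set mean: adding a few noisy
# or corrupted records drags the mean away and the error grows.

def mean_predictor(train):
    """Return a predictor that always outputs the training mean."""
    mean = sum(train) / len(train)
    return lambda: mean

def avg_error(train, tests):
    """Mean absolute error of the mean-predictor on unseen prices."""
    predict = mean_predictor(train)
    return sum(abs(predict() - t) for t in tests) / len(tests)

clean = [300, 320, 310, 290, 330]   # clean historical prices ($k)
noisy = clean + [900, 50]           # two corrupted/outlier records added
tests = [305, 315, 295]             # unseen sales

print("error with clean data:", round(avg_error(clean, tests), 1))
print("error after adding noise:", round(avg_error(noisy, tests), 1))
```

Graphing this error as more noisy records are added gives students the accuracy-versus-dataset-size curve the activity calls for.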


Methods used in this brief