Computing · JC 2 · Abstract Data Structures and Algorithms · Semester 1

Algorithm Design Paradigms and Amortised Analysis

Students will understand abstraction as simplifying complex ideas by focusing on essential details and hiding unnecessary ones.

MOE Syllabus Outcomes: Computational Thinking

About This Topic

Algorithm design paradigms give students tools to tackle computational problems systematically. In JC 2 Computing, students compare divide-and-conquer for problems like merge sort, where recursion splits tasks before merging results; greedy for activity selection, which builds an optimal schedule by always choosing the compatible activity that finishes earliest; and dynamic programming for 0/1 knapsack, which tabulates subproblem solutions to avoid recomputation. They explain why other paradigms fail: greedy overlooks future costs in knapsack, while divide-and-conquer handles activity selection poorly because intervals can straddle any split point, so the subproblems are not independent.
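The earliest-finish greedy rule is short enough for students to implement directly. A minimal sketch (the interval data here is made up for illustration, not taken from the syllabus):

```python
# Hypothetical (start, finish) intervals for demonstration only.
activities = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]

def select_activities(intervals):
    """Greedy activity selection: sort by finishing time and
    repeatedly take the next activity compatible with the schedule."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities(activities))  # [(1, 4), (5, 7), (8, 11)]
```

Sorting dominates the running time, so the whole algorithm is O(n log n), which students can contrast with the O(n · W) table a DP formulation would build.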

Amortised analysis complements this by evaluating average costs over operation sequences, which is vital for dynamic data structures. Students apply the aggregate, accounting, or potential method to dynamic arrays, proving that append costs O(1) amortised despite rare O(n) resizes. This highlights abstraction: users see constant-time inserts while the resizing machinery stays hidden.
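The aggregate-method claim can be checked empirically: count every element copy that resizing triggers and compare the total with the number of appends. A sketch, assuming the usual capacity-doubling policy and a starting capacity of 1:

```python
def total_copy_cost(n_appends, initial_capacity=1):
    """Aggregate method, empirically: count element copies caused by
    capacity-doubling resizes over a sequence of appends."""
    size, capacity, copies = 0, initial_capacity, 0
    for _ in range(n_appends):
        if size == capacity:
            copies += size        # resize: copy every existing element
            capacity *= 2
        size += 1
    return copies

for n in (1_000, 10_000, 100_000):
    print(n, total_copy_cost(n), total_copy_cost(n) / n)
```

The copies form the geometric series 1 + 2 + 4 + … < 2n, so the ratio printed in the last column stays below 2 no matter how large n grows: total O(n) work for n appends, hence O(1) amortised per append.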

Active learning suits this topic well. Students code paradigms, run benchmarks on datasets, and debate failures collaboratively. Such hands-on comparison makes abstract trade-offs concrete, builds debugging skills, and reveals when mathematical analysis trumps intuition.

Key Questions

  1. Compare divide-and-conquer, greedy, and dynamic programming paradigms, providing a canonical problem for each and explaining why the other two paradigms fail or are suboptimal for that problem.
  2. Apply amortised analysis (aggregate, accounting, or potential method) to a dynamic array to explain why the amortised cost of append is O(1) despite occasional O(n) resizing operations.
  3. Evaluate whether a greedy algorithm yields an optimal solution for the activity-selection problem and the 0/1 knapsack problem, and explain the structural difference between these two problems that accounts for the different outcomes.

Learning Objectives

  • Compare and contrast the suitability of divide-and-conquer, greedy, and dynamic programming paradigms for solving canonical computational problems.
  • Apply amortised analysis using the aggregate, accounting, or potential method to demonstrate the O(1) amortised cost of appending to a dynamic array.
  • Evaluate the optimality of greedy algorithms for activity selection and 0/1 knapsack problems, explaining the structural differences that lead to different outcomes.
  • Design an algorithm using one of the discussed paradigms to solve a novel problem, justifying the choice of paradigm.
  • Analyze the time complexity of algorithms implemented using divide-and-conquer, greedy, and dynamic programming approaches.

Before You Start

Basic Data Structures (Arrays, Linked Lists)

Why: Students need a foundational understanding of how arrays store data and the concept of contiguous memory allocation to grasp dynamic array resizing.

Algorithm Analysis (Big O Notation)

Why: Understanding Big O notation is essential for analyzing the time complexity of algorithms and for comprehending amortised analysis, which describes average-case performance.

Recursion

Why: The divide-and-conquer paradigm heavily relies on recursion, so students must be comfortable with recursive function calls and base cases.

Key Vocabulary

Divide-and-Conquer: An algorithm design paradigm that recursively breaks down a problem into two or more smaller sub-problems of the same type, solves them independently, and then combines their solutions.
Greedy Algorithm: An algorithm design paradigm that makes the locally optimal choice at each stage with the hope of finding a global optimum. It does not reconsider previous choices.
Dynamic Programming: An algorithm design paradigm that solves complex problems by breaking them down into simpler subproblems, solving each subproblem only once, and storing their solutions to avoid redundant computations.
Amortised Analysis: A method for analyzing the average performance of an operation over a sequence of operations, even if some individual operations are expensive.
Dynamic Array: A data structure that can grow or shrink in size as elements are added or removed, often implemented using an underlying array that is resized when capacity is reached.
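The divide-and-conquer definition above is exactly the shape of merge sort, which makes a good first worked example; a minimal sketch:

```python
def merge_sort(items):
    """Divide-and-conquer: split the list, sort each half recursively,
    then combine by merging the two sorted halves."""
    if len(items) <= 1:                   # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])        # divide + conquer
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0               # combine: two-pointer merge
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Each of the three definition clauses (break down, solve independently, combine) maps to a labelled line, which helps students connect vocabulary to code.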

Watch Out for These Misconceptions

Common Misconception: Greedy algorithms always yield optimal solutions.

What to Teach Instead

Students confuse local optima with global ones, as in 0/1 knapsack, where even greedy selection by value-to-weight ratio can miss the optimum. Having students code both greedy and DP on sample instances exposes the suboptimal greedy results. Group debates highlight structural differences such as overlapping subproblems.
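A three-item instance is enough to make the failure concrete. The weights, values, and capacity below are the standard textbook-style illustration, not from the syllabus; greedy-by-ratio loses to DP:

```python
# Illustrative (weight, value) items; greedy by ratio picks 10 then 20.
items = [(10, 60), (20, 100), (30, 120)]
CAPACITY = 50

def greedy_knapsack(items, capacity):
    """Take whole items in decreasing value-to-weight ratio."""
    value = 0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if w <= capacity:
            capacity -= w
            value += v
    return value

def dp_knapsack(items, capacity):
    """Tabulate the best value achievable for every capacity 0..capacity."""
    best = [0] * (capacity + 1)
    for w, v in items:
        for c in range(capacity, w - 1, -1):   # downwards: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(greedy_knapsack(items, CAPACITY), dp_knapsack(items, CAPACITY))  # 160 220
```

Greedy commits to the 10 kg item, leaving no room for the 20 + 30 combination worth 220; the DP table considers both futures and keeps the better one.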

Common Misconception: Amortised cost means every operation is O(1), ignoring worst cases.

What to Teach Instead

Learners overlook sequence analysis, assuming the resize cost dominates every append. Hands-on simulations with cards reveal that the total cost spreads evenly. Peer teaching of the methods clarifies potential as stored 'energy' for future work.
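The stored-energy idea can itself be simulated as the accounting method: charge every append a flat fee of 3 units, pay all actual costs from the accumulated credit, and check the balance never goes negative. A sketch under the usual capacity-doubling assumption:

```python
def simulate_accounting(n_appends, charge=3):
    """Accounting method, simulated: each append deposits a flat charge;
    actual costs (1 per insertion, 1 per element copied during a resize)
    are withdrawn. Returns the lowest balance ever observed."""
    size, capacity, bank = 0, 1, 0
    min_bank = 0
    for _ in range(n_appends):
        bank += charge
        if size == capacity:
            bank -= size          # resize: pay 1 credit per element copied
            capacity *= 2
        bank -= 1                 # pay for placing the new element
        size += 1
        min_bank = min(min_bank, bank)
    return min_bank

print(simulate_accounting(1_000_000))  # never negative: the charge of 3 suffices
```

Students can rerun with `charge=2` to see the balance dip below zero, which motivates why the textbook charge is 3: one unit for the insertion itself and two banked for future copying.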

Common Misconception: All paradigms perform equally on any problem.

What to Teach Instead

Students undervalue problem structure. Benchmark races in pairs expose runtime gaps, like greedy's speed on activity selection versus DP's necessity elsewhere. Collaborative whiteboarding cements paradigm-problem fits.


Real-World Connections

  • Financial analysts use greedy algorithms to optimize investment portfolios by selecting assets that offer the best immediate return, though this may not always yield the highest long-term profit.
  • Software engineers developing real-time operating systems for autonomous vehicles use dynamic programming to efficiently plan complex sequences of actions, ensuring timely responses to environmental changes.
  • Game developers employ divide-and-conquer strategies to render complex 3D environments, breaking down large scenes into smaller, manageable parts for faster processing and display.

Assessment Ideas

Quick Check

Present students with three problem descriptions: one suited for divide-and-conquer (e.g., finding the maximum element in an unsorted array), one for greedy (e.g., making change with fewest coins), and one for dynamic programming (e.g., Fibonacci sequence). Ask students to identify the most appropriate paradigm for each and briefly justify their choice.
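The Fibonacci item in this quick check doubles as a live demonstration of why the DP answer is right: memoisation turns the exponential recursion into linear work. A sketch using the standard library cache:

```python
from functools import lru_cache

def fib_naive(n):
    """Plain recursion: recomputes the same subproblems exponentially often."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    """Memoised (top-down DP): each subproblem is solved exactly once."""
    return n if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)

print(fib_dp(90))  # instant; fib_naive(90) would not finish in any usable time
```

Asking students to time `fib_naive(35)` against `fib_dp(35)` makes the overlapping-subproblems justification tangible rather than asserted.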

Discussion Prompt

Facilitate a class discussion using the following prompt: 'Consider the 0/1 knapsack problem. Why does a greedy approach of always picking the item with the highest value-to-weight ratio fail to guarantee an optimal solution, unlike in the activity selection problem? What is the key structural difference?'

Exit Ticket

Provide students with a scenario involving a dynamic array that needs to resize. Ask them to explain, using one of the amortised analysis methods (aggregate, accounting, or potential), why the average cost of adding an element remains constant over many additions, despite occasional costly resizing operations.
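For the potential-method variant of this exit ticket, the standard textbook choice of potential function is Phi = 2·size − capacity. A sketch (assuming capacity doubling from 1) verifying that actual cost plus change in potential is the same constant for every append, resize or not:

```python
def potential(size, capacity):
    """Phi = 2*size - capacity: credit stored up for the next resize."""
    return 2 * size - capacity

def amortised_costs(n_appends):
    """Amortised cost of each append = actual cost + (Phi_after - Phi_before)."""
    size, capacity = 0, 1
    costs = []
    for _ in range(n_appends):
        actual = 1                       # placing the element
        phi_before = potential(size, capacity)
        if size == capacity:
            actual += size               # resize: copy every existing element
            capacity *= 2
        size += 1
        costs.append(actual + potential(size, capacity) - phi_before)
    return costs

print(set(amortised_costs(1_000)))  # a single value: every append costs 3 amortised
```

Seeing the expensive resize appends come out at exactly the same amortised cost as cheap ones is what convinces students the O(1) claim is not averaging away the worst case but genuinely paying for it in advance.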

Frequently Asked Questions

How to teach algorithm design paradigms in JC2 Computing?
Start with canonical problems: merge sort for divide-and-conquer, activity selection for greedy, knapsack for DP. Have students code each, benchmark on datasets, and analyze failures. This builds intuition for paradigm choice through direct comparison, aligning with MOE computational thinking standards.
What is amortised analysis for dynamic arrays?
Amortised analysis averages costs over many operations. For dynamic arrays, an individual append looks O(n) during a resize, but with capacity doubling the total work for n appends is O(n), so each append is O(1) amortised. The aggregate method sums total costs, the accounting method precharges appends to pay for future resizes, and the potential method tracks the array's stored imbalance. Students apply this to justify resizable arrays in practice.
Why does greedy work for activity selection but not knapsack?
Activity selection has the greedy-choice property: the earliest-finishing compatible activity can always be extended to a globally optimal schedule. 0/1 knapsack lacks this because items are indivisible, so even taking the highest value-to-weight item first can strand unused capacity and miss better combinations. Students see this by solving small instances manually, then scaling with code.
How can active learning help students grasp amortised analysis?
Simulations with manipulatives like cards let students act out resizes, tallying costs manually before coding. Pair programming to implement and test methods reveals patterns empirically. Class data pooling shows O(1) amortised consistently, making math tangible and countering worst-case fixation.