Definition
Formative assessment is any assessment activity undertaken during the learning process with the explicit purpose of informing instruction and improving student learning while that process is still under way. It is not a test or a grade: it is a continuous conversation between teacher and learner about where learning stands and where it needs to go.
The canonical definition comes from Paul Black and Dylan Wiliam's 1998 synthesis: assessment is formative when evidence of student learning is elicited, interpreted, and used to make decisions about the next steps in instruction. Three actors participate in that feedback loop — the teacher, peers, and the learner — and all three can initiate it. A Class VII science teacher scanning exit slips after a lesson on photosynthesis, two Class X students comparing their reasoning during a think-pair-share on quadratic equations, and a Class XII learner checking their essay draft against a success criteria checklist are all enacting formative assessment.
The word "formative" captures the temporal logic: this assessment forms the learner while formation is still possible. By contrast, summative assessment measures what a learner achieved after instruction ends. Both serve essential purposes, but confusing them — grading formative work, or treating term-exam scores as actionable feedback — weakens both. India's National Education Policy 2020 and the National Curriculum Framework 2023 both call explicitly for this distinction: moving toward "competency-based assessment" that privileges formative, process-oriented evidence over high-stakes, one-shot examinations.
Historical Context
The intellectual foundation of formative assessment runs through several decades of cognitive and educational research, beginning well before the term itself became common.
Benjamin Bloom's work on mastery learning, beginning with "Learning for Mastery" in 1968, introduced the core insight: if students receive corrective feedback at regular checkpoints during instruction, achievement improves substantially. In his later "two sigma" studies (1984), Bloom observed that one-on-one tutoring produced results two standard deviations above conventional classroom instruction. He attributed the gap largely to the tutor's constant monitoring and real-time adjustment, a dynamic familiar to the Indian tradition of the guru-shishya relationship, in which the teacher continuously reads the learner and adjusts accordingly. Formative assessment is, in Bloom's framing, a classroom approximation of that feedback loop scaled to a class of forty.
Michael Scriven coined the term "formative evaluation" in 1967, applying it to curriculum development rather than student assessment. Benjamin Bloom extended the concept to student learning shortly afterwards. But it was the 1998 work of Paul Black and Dylan Wiliam at King's College London that elevated formative assessment to a research priority in classroom practice. Their research review "Assessment and Classroom Learning", summarised for practitioners as "Inside the Black Box" in the Phi Delta Kappan, synthesised roughly 250 studies and found effect sizes ranging from 0.4 to 0.7: enough to move an average student from the 50th to approximately the 70th percentile. The scale and accessibility of the review made it one of the most-cited works in educational research.
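The percentile claim is simply the normal cumulative distribution function applied to the effect size. A minimal sketch of the arithmetic, assuming normally distributed scores (the standard Cohen's d interpretation; the function name is illustrative, not drawn from the research literature):

```python
from math import erf, sqrt

def percentile_after_shift(effect_size: float) -> float:
    """Percentile reached by an average (50th-percentile) student whose
    score improves by `effect_size` standard deviations, assuming scores
    are normally distributed (the usual Cohen's d interpretation)."""
    # Normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 100 * 0.5 * (1 + erf(effect_size / sqrt(2)))

print(round(percentile_after_shift(0.4)))  # lower bound of the review: ~66th percentile
print(round(percentile_after_shift(0.7)))  # upper bound: ~76th percentile
print(round(percentile_after_shift(2.0)))  # Bloom's two-sigma tutoring result: ~98th
```

A mid-range effect of about 0.55 lands near the 71st percentile, which is where the "50th to approximately the 70th percentile" summary comes from.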
Wiliam continued to develop the framework through the Assessment Reform Group in the United Kingdom, and his 2011 book Embedded Formative Assessment translated the research into structured classroom routines that practitioners could adopt without wholesale curriculum revision. In the Indian context, the Continuous and Comprehensive Evaluation (CCE) framework, mandated for the elementary stage by the Right to Education Act, 2009, and introduced at the secondary level by CBSE in the same year, drew on precisely this research tradition, emphasising scholastic and co-scholastic formative indicators alongside term-end summative examinations. The NCF 2023 carries that work forward, explicitly naming formative assessment as a cornerstone of the new competency framework.
Key Principles
The Feedback Loop Closes
Formative assessment only works when evidence of learning actually changes what happens next. Collecting data and filing it away is monitoring, not formative assessment. The defining characteristic is that the information loops back to instruction: the teacher reteaches a confusing concept, accelerates past content students have already mastered, or redesigns a task that failed to generate the intended thinking. If the feedback loop does not close, the assessment was not formative — regardless of the tool used.
Feedback Targets the Gap
Effective formative feedback identifies the gap between a student's current understanding and the learning goal, then provides information that helps bridge it. This is the framework articulated by Sadler (1989) and later systematised by Hattie and Timperley's 2007 model of feedback. Feedback that simply marks an answer wrong gives students a verdict without a direction. Feedback that names what the student did, what the learning objective requires, and what a concrete next step looks like gives them traction. NCERT textbook activities that include "Think and Discuss" or "Let Us Check" sections are designed with this gap-targeting logic.
Learning Goals Must Be Transparent
Students cannot self-assess, respond to feedback, or learn from peers unless they know what they are aiming for. Formative assessment depends on clear, specific, and student-accessible learning objectives. In CBSE schools, where the curriculum is structured around chapter-wise learning outcomes, teachers make formative feedback actionable rather than confusing when they share not just the topic but the specific competency expected, and help students see what "meeting the standard" looks like through worked examples or model answers.
Peer and Self-Assessment Extend the Feedback
No teacher can generate meaningful, individualised feedback for every student on every task — particularly in Indian classrooms that may have forty to sixty students. Peer assessment and self-assessment scale the feedback system without scaling teacher workload, and carry their own learning benefits. When students assess a peer's Class IX history project against shared criteria, they practise the analytical thinking the learning objective requires. When students evaluate their own work honestly, they build the metacognitive awareness that predicts long-term academic success (Zimmerman, 2002).
Low Stakes Protect Honest Evidence
If students believe their formative responses will be graded and added to their term record, they perform rather than reveal. In a system where board examination results carry significant consequences — for Class X and XII students especially — the pressure to protect marks is acute. Research consistently shows that removing grades from formative activities, and being explicit with students that errors are expected and useful, improves both the quality of evidence collected and students' willingness to take intellectual risks (Butler, 1988). Framing in-class checks as "practice" rather than "assessment" helps establish the psychological safety that honest formative evidence requires.
Classroom Application
Primary Classes (Class 1–5): The Traffic Light Check
A Class III mathematics teacher introduces a new concept on fractions using the NCERT Maths textbook. At a natural pause in the lesson, she asks students to hold up a red, yellow, or green card (or show one, two, or three fingers): green means "I understand," yellow means "I'm not sure," red means "I need help." She scans the classroom in seconds. She calls the red-card students to sit closer to her for additional modelling while the green-card students work on the "Try These" practice problems from the textbook. The yellow-card students pair up to compare their approaches. The teacher has differentiated instruction in under a minute using evidence from the room rather than intuition.
Middle School (Class 6–8): Slates and Whiteboards for Simultaneous Response
A Class VII science teacher asks each student to write their answer to a comprehension question on a small writing slate or mini whiteboard and hold it up simultaneously. She walks the row and sees immediately that about a third of the class has confused photosynthesis with respiration. Rather than marking every notebook, she selects three anonymous responses — one accurate, one partially correct, one misconceived — and leads a brief class analysis. Students revise their answers. The teacher has collected evidence from every student and corrected the misconception without any grading. Writing slates, common in many Indian primary and middle school classrooms, are particularly well suited to this technique.
Secondary and Senior Secondary (Class 9–12): The One-Minute Exit Slip
A Class XI economics teacher pauses five minutes before the period ends and asks two questions: "What is the most important concept from today's lesson on demand elasticity?" and "What is one thing you are still unsure about?" Students write on a half-sheet of paper and hand it in. The teacher reviews them that evening and opens the next class by addressing the three most common points of confusion — without naming individuals. Students learn that their uncertainty is expected and valued; the teacher learns precisely where to begin the next period. In schools where board exam preparation dominates Class XI–XII, framing these slips as "helping me help you before the exam" normalises the practice.
Exit slips like this are among the most practical implementations of this principle across all classes: a structured end-of-period prompt that generates actionable evidence in under five minutes.
Research Evidence
Black and Wiliam's 1998 review established the foundational evidence base. Synthesising approximately 250 studies, they found that well-implemented formative assessment produced effect sizes between 0.4 and 0.7, with the strongest effects observed for low-achieving students. This is notable: formative assessment is not a strategy that primarily benefits already high-performing learners. In the Indian context, where learning-level gaps within a single classroom can span several years of instruction — as documented in ASER (Annual Status of Education Report) surveys — this finding carries particular significance. The mechanisms identified included clearer learning goals, richer feedback, and greater student ownership of learning.
Hattie and Timperley's 2007 paper "The Power of Feedback," published in Review of Educational Research, meta-analysed 196 studies involving 6,972 effect sizes and found an average effect size of 0.79 for feedback — one of the strongest instructional influences in the entire synthesis. Critically, they found that feedback addressed to the self ("you are a good student") was largely ineffective. Feedback addressed to the task, the process, and the learner's self-regulation strategies produced the strongest gains.
Kingsley and Grabner-Hagen (2015) examined digital formative assessment tools in K–12 classrooms and found that immediate feedback — available through classroom response systems — produced stronger learning outcomes than delayed written feedback, when students had sufficient guidance to act on what they received. Speed of feedback matters, but only when paired with clarity.
Kingston and Nash's 2011 meta-analysis, published in Educational Measurement: Issues and Practice, is worth noting for intellectual honesty: it found smaller effect sizes (approximately 0.20) than the Black and Wiliam synthesis. Kingston and Nash attributed the difference to study quality and implementation fidelity. Formative assessment with weak implementation produces weak results. The research supports the practice, but not uncritically — execution matters.
Common Misconceptions
Formative assessment means giving more tests. In the context of CBSE's earlier CCE framework, "formative assessment" was sometimes operationalised as a series of graded FA1–FA4 tasks that still carried marks toward the term total. This is precisely the misconception the research warns against. Formative assessment is defined by what happens with the evidence, not the format of the tool. A five-question quiz whose results are filed in the register and forgotten is not formative. A rich question-and-answer exchange in which the teacher adjusts the period's direction based on what students say is highly formative — no quiz required.
Formative assessment is only the teacher's job. This misconception reduces formative assessment to a monitoring task performed on students rather than a collaborative process involving them. When students learn to assess their own understanding, set learning goals, and give useful feedback to peers, they become active participants in their own learning progress. Peer assessment in particular generates feedback at a volume and frequency that no single teacher — managing a class of fifty — can match, and the act of evaluating another's work deepens the assessor's own understanding.
Formative assessment results should go in the marks register. Recording formative work in the marks register — a practice that persisted in some schools under the CCE system — conflates its diagnostic purpose with the evaluative purpose of summative assessment. When students know that every response will be scored, they protect their marks rather than reveal their thinking. The most useful formative evidence often comes from incomplete understanding, wrong turns, and half-formed ideas — exactly what grades punish. The NCF 2023 is explicit on this point: holistic assessment should separate formative records from the summative grades that appear on report cards.
Connection to Active Learning
Formative assessment and active learning are mutually reinforcing: active learning generates observable evidence of thinking, and formative assessment gives teachers and students a mechanism to use that evidence. Without formative feedback, active learning can be engaging but directionless; without active learning structures, formative assessment lacks the rich evidence it needs to be useful.
Think-pair-share is one of the most powerful formative assessment vehicles in common use. When students pair up to discuss a question — on a Class VIII civics concept, a Class X geometry proof, or a Class XII chemistry reaction mechanism — before sharing with the class, the teacher circulates and listens, collecting real-time evidence of what students understand, what they confuse, and what they find genuinely difficult. The sharing phase reveals which ideas are widespread and which are idiosyncratic. The teacher can adjust instruction on the spot based on what the pairs surfaced.
Gallery walk transforms formative evidence into a physical artefact the class can examine collectively. When groups post their project posters or solved problems on the classroom walls and rotate through, both the teacher and the students can see the range of responses across the class. This works especially well for Class IX–X project work, science fair preparation, and social science map activities. The teacher gains rapid assessment data on the whole group; students calibrate their own understanding against their peers'. The annotations students add during the walk are themselves formative evidence.
Chalk talk — the silent collaborative writing protocol — generates a visible record of student thinking without the social pressures of verbal discussion. Students write questions and responses directly on shared chart paper or a section of the blackboard. The teacher can photograph the conversation and review it as formative data, while students see where their peers' thinking converges and diverges from their own. In large Indian classrooms where verbal participation can be inhibited by social dynamics, chalk talk provides a low-pressure alternative route to honest formative evidence.
The concept of assessment for learning provides the broader philosophical framework that unites these practices. Where formative assessment names the technical practice, assessment for learning names the orientation: assessment used not to sort or certify, but to support the learner in making progress — an orientation fully consistent with the vision of the NEP 2020 and the NCF 2023.
Sources
- Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–148.
- Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.
- Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
- Wiliam, D. (2011). Embedded Formative Assessment. Solution Tree Press.
- Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.
- Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and performance. British Journal of Educational Psychology, 58(1), 1–14.
- Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70.
- Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and a call for research. Educational Measurement: Issues and Practice, 30(4), 28–37.