Ask most teachers what they think of standardized testing and you'll get a look. Ask them what they think of testing in general and you might get the same look. The frustration is understandable — many schools now run benchmark assessments every six to eight weeks on top of state testing, progress monitoring for intervention students, curriculum-embedded unit tests, and the state summative at the end of the year. For some students, that's five or six distinct rounds of formal testing per year, and none of it is the kind of feedback that helps them in the moment.

This is the testing fatigue problem. And it's getting conflated, unfairly, with formative assessment — which is a different thing entirely and which doesn't have to look like a test at all.

The Distinction That Gets Collapsed

Summative assessment measures what a student learned after the learning is complete. A unit test, a state exam, a quarterly benchmark — these are snapshots taken after the fact. Their primary audience is administrators, curriculum teams, and accountability systems. They're useful for big-picture evaluation, but they arrive too late to change instruction for the students being assessed.

Formative assessment measures what a student is learning while the learning is in progress. Its primary audience is the teacher — and ideally, the student themselves. A well-designed formative assessment doesn't feel like a test because its purpose is feedback, not evaluation. The question isn't "can you show me what you know?" It's "where are you right now, and what do you need next?"

The problem in most districts is that formative assessment has been operationalized as more frequent summative assessment — shorter versions of the same kind of high-stakes evaluative instrument, administered more often. That's not formative assessment. That's just more testing. And it produces testing fatigue without producing the actionable, moment-to-moment feedback that formative assessment is supposed to generate.

What Real Formative Assessment Looks Like

Effective formative assessment is woven into instruction, not added on top of it. It happens through the practice students are already doing, through the questions teachers ask during discussion, through the patterns of errors a student makes as they work through a problem set.

A teacher who gives students three problems at the start of class to check yesterday's understanding, then uses what she sees to adjust her opening instruction — that's formative assessment. It took three minutes and didn't feel like a test to any student in the room.

A teacher who watches where students pause and backtrack during independent practice, then pulls a small group to address the specific concept they're stuck on — that's formative assessment too. It didn't require a formal instrument at all.

The challenge is doing this consistently and systematically across a class of 30 students. A teacher who's skilled at reading individual student performance in the moment can do it informally. But doing it for every student, tracking patterns over time, and connecting those patterns to specific instructional needs — that's where the cognitive load becomes significant.

How Technology Can Support This (Without Adding More Tests)

The promise of adaptive learning platforms, when they're working well, is that they transform the practice students are already doing into a continuous stream of formative data. Students aren't taking an extra assessment. They're doing their math practice, and the system is simultaneously inferring which skills they've mastered, which they're developing, and which they haven't yet reached.

That inference is what makes the difference between a digital worksheet and an adaptive platform. A worksheet produces a score. An adaptive system produces a skill-level picture — not just "she got 7 out of 10" but "she's solid on adding fractions with like denominators, still developing with unlike denominators, and hasn't been exposed to mixed numbers yet."

That distinction matters for teachers trying to plan small-group instruction. A score tells you how a student performed. A skill map tells you what to teach next.
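To make the score-versus-skill-map contrast concrete, here's a minimal sketch in Python. Everything in it is hypothetical — the skill names, the mastery cutoffs, and the data shape are illustrations, not any particular platform's schema. It summarizes the same ten practice items both ways: as a single score, and as a per-skill picture.

```python
from collections import defaultdict

# Hypothetical practice log: (skill, answered_correctly) for ten items.
results = [
    ("add_fractions_like_denominators", True),
    ("add_fractions_like_denominators", True),
    ("add_fractions_like_denominators", True),
    ("add_fractions_unlike_denominators", True),
    ("add_fractions_unlike_denominators", False),
    ("add_fractions_unlike_denominators", False),
    ("add_fractions_unlike_denominators", True),
    ("add_fractions_like_denominators", True),
    ("add_fractions_unlike_denominators", False),
    ("add_fractions_like_denominators", True),
]

# Worksheet view: everything collapses into one number.
score = sum(correct for _, correct in results)
print(f"{score} out of {len(results)}")  # 7 out of 10

# Skill-map view: group attempts by skill, then bucket accuracy
# into rough mastery bands (cutoffs are invented for illustration).
by_skill = defaultdict(list)
for skill, correct in results:
    by_skill[skill].append(correct)

def band(rate):
    if rate >= 0.8:
        return "solid"
    if rate >= 0.4:
        return "developing"
    return "not yet reached"

skill_map = {skill: band(sum(c) / len(c)) for skill, c in by_skill.items()}
print(skill_map)
# Skills with no attempts at all (e.g. mixed numbers here) simply
# don't appear — the system knows the student hasn't reached them.
```

Both views come from identical data; only the second one tells a teacher which small group this student belongs in tomorrow.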

The Student Side of Formative Assessment

There's an aspect of formative assessment that doesn't get enough attention: what it does for students when it's designed right.

When feedback arrives quickly and feels low-stakes, students are more willing to take risks on harder problems. A student who knows that getting something wrong in practice is information — not a grade, not a judgment — approaches difficult material differently. They're more likely to try, which means they're more likely to encounter the difficulty that produces learning.

This is the psychologically interesting part. Testing anxiety is real and well-documented in the research literature. But the anxiety is largely about stakes and evaluation — about being judged. Formative feedback in a practice context, when it's framed correctly, doesn't trigger the same response. Students who use adaptive platforms report, consistently, that they don't feel like they're being tested. They feel like they're practicing. That's exactly right, and it's exactly why this kind of embedded assessment can generate data that high-stakes tests cannot.

What This Means for Assessment Calendar Planning

A district that's running rich formative assessment throughout the year — through adaptive practice, through classroom observation tools, through teacher-embedded checks for understanding — is in a position to ask hard questions about what formal benchmark assessments are actually adding.

Some benchmarks are required. State accountability systems don't disappear. But for districts with some control over their internal testing calendar, the question is worth asking: if teachers already have weekly skill-level data on every student, what is the six-week benchmark telling us that we don't already know?

In several districts we work with, the answer has been "less than we expected." Where adaptive data is being used actively, teachers often find that benchmark results confirm what they already knew from day-to-day observation. The benchmark becomes a validation exercise rather than a discovery exercise. That's a useful outcome — and it opens the door to reducing the number of formal assessments while actually improving the quality of the formative data teachers have access to throughout the year.

The goal isn't fewer assessments as an end in itself. It's more useful feedback, arriving faster, without the overhead that makes students dread sitting down to take another test.