When Clayton County Unified School District's math coordinator, Terrence Howell, first presented the idea of a district-wide adaptive learning pilot to his principals, two of the fourteen raised immediate concerns. Not about the technology — about timing. "We'd just finished implementing a new pacing guide," one principal told him. "Teachers weren't ready for another change layer."
Howell's team made a deliberate decision: they would position the pilot not as a replacement for the pacing guide but as a support layer beneath it. Same standards, same sequence — different level of responsiveness to individual students. That framing mattered. By the end of the semester, even the skeptical principals were asking to expand.
The Setup
The pilot ran across all fourteen elementary schools in the district, focused on grades 3 through 5 math. Teachers were given a half-day onboarding session and a single printed reference card. Howell was emphatic that the tool needed to work for teachers who had thirty other things on their minds. "If it takes more than two minutes to pull up the information I need, I won't pull it up," one of his fifth-grade teachers told him early on. That feedback shaped how they configured the dashboard view.
Students used the platform for 25 to 30 minutes per day, four days a week, during independent work time. The fifth day was reserved for direct instruction and small group work. This structure — tech time as a complement to teaching, not a substitute — was a deliberate part of the design from the start.
What the Numbers Showed
At the end of the semester, district math scores on the Georgia Milestones benchmark showed a 23% improvement over the same period the prior year. That figure comes with context worth examining.
The prior year had been a difficult one for the district — post-COVID recovery was still affecting attendance and instructional continuity — so some portion of the gain reflects a rebound from an unusually low baseline. Howell acknowledges this. But the district also ran a quasi-experimental comparison: eight schools that used the platform more consistently (students averaging 22 or more minutes per session) outperformed the six that used it less consistently (averaging under 15 minutes per session) by a statistically significant margin on the same benchmark. The correlation between usage fidelity and score improvement was the clearest signal in the data.
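For districts that want to run a similar fidelity check on their own platform exports, the sketch below shows one way the comparison could be set up. To be clear about what's assumed: the per-school figures are invented for illustration, only the 22-minute threshold comes from the district's account, and Welch's t-test plus a Pearson correlation are stand-ins, since Clayton County's actual statistical method wasn't published.

```python
# Hypothetical sketch of a usage-fidelity comparison like the one described above.
# All per-school values are invented for illustration; the district's real data
# and significance test were not published.
from scipy import stats

# Per-school (avg_session_minutes, benchmark_score_change) -- invented values
schools = [
    (24.1, 8.2), (26.5, 9.1), (22.8, 7.4), (23.9, 8.8),
    (25.2, 7.9), (22.3, 6.9), (27.0, 9.5), (24.8, 8.0),  # higher-usage schools
    (14.2, 4.1), (12.7, 3.5), (13.9, 5.0), (11.4, 2.8),
    (14.8, 4.6), (10.9, 3.1),                             # lower-usage schools
]

HIGH_USAGE_MIN = 22  # the district's consistency threshold: 22+ minutes/session

high = [change for minutes, change in schools if minutes >= HIGH_USAGE_MIN]
low = [change for minutes, change in schools if minutes < HIGH_USAGE_MIN]

# Welch's t-test (no equal-variance assumption) is a reasonable default for
# comparing two small, unequal groups of schools.
t_stat, p_value = stats.ttest_ind(high, low, equal_var=False)
print(f"higher-usage mean gain: {sum(high) / len(high):.1f}")
print(f"lower-usage mean gain:  {sum(low) / len(low):.1f}")
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")

# A correlation across all schools captures the "usage fidelity" signal more
# directly than a two-group split, since it uses the full range of minutes.
minutes, changes = zip(*schools)
r, r_p = stats.pearsonr(minutes, changes)
print(f"Pearson r(minutes, gain) = {r:.2f}, p = {r_p:.4f}")
```

With only fourteen schools, either test is underpowered, which is one reason the district treated the usage-fidelity pattern as a signal rather than proof.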
For students who entered the pilot at least one grade level below proficiency, the gains were steeper. That subgroup showed an average improvement of 1.6 grade levels on the platform's internal skill assessments over 18 weeks — roughly twice the expected trajectory for that population under previous interventions.
What Teachers Said
Numbers tell part of the story. The qualitative data told a different part.
Teachers in the pilot consistently described the same shift in their preparation routine. Before the pilot, most teachers reviewed class-level benchmark data once per month, using it to plan their next unit's emphasis. After the pilot, they were checking the dashboard two or three times per week — because the data was specific enough to act on.
"Before, I knew my class was weak on fractions. After, I knew that seven specific students hadn't mastered comparing fractions with different denominators," said a fourth-grade teacher at Jonesboro Elementary. "Those are very different problems. One requires you to reteach a whole unit. The other requires you to pull a small group for twenty minutes."
That shift in data granularity — from topic level to skill level — was consistently identified as the most practically useful change. Teachers weren't just seeing that students were behind. They were seeing exactly where, in enough detail to do something about it immediately.
What Surprised the District
Two things caught Howell's team off guard.
The first was the effect on above-grade-level students. The pilot was designed primarily as a support tool for struggling students, and those results were strong. But the highest-performing students also showed stronger growth than expected. When students who had already mastered grade-level content were given access to above-grade material through the adaptive pathway, many took it without prompting. A handful of fifth-graders ended the semester working on seventh-grade content independently. "We hadn't really thought about ceiling effects," Howell said. "It turned out removing them mattered too."
The second surprise was the effect on homework completion. The district's homework completion rates in math increased by 17% during the pilot semester. The team traced this to confidence: students doing adaptive practice during school arrived at homework tasks having already practiced the relevant skills at the right level, rather than being asked to apply skills they hadn't fully developed yet.
What Didn't Go Smoothly
Not everything worked. Three schools experienced hardware challenges — older devices that ran the platform slowly enough to affect student engagement. Two teachers left the pilot after the first month, citing concerns about screen time (see our separate piece on screen time and edtech evidence). And the onboarding, while brief by design, turned out to be insufficient for a handful of teachers who needed more time to feel comfortable interpreting the dashboard data.
Howell's team added a monthly data review session midway through the semester — a 30-minute structured look at each class's skill performance with a curriculum coach. Participation was voluntary. Eleven of the fourteen schools sent at least one teacher. The schools whose teachers attended those sessions showed stronger score gains than those that didn't, though because participation was voluntary, some of that gap may reflect which schools chose to engage rather than the sessions themselves.
Where the District Goes Next
Clayton County is expanding the platform to grades 6 and 7 in the fall. They're also building out a more structured teacher support track — not longer onboarding, but better-timed check-ins at weeks two, six, and twelve, based on where the adoption research suggests teachers typically hit friction. The coaching model is something they're building themselves, which is probably how it should be: the technology is a platform, not a program.
Howell's takeaway from the semester is straightforward: "The gains are real, but they're not automatic. You need teachers who trust the data and leaders who give them time to use it. The tool accelerates good practice. It doesn't substitute for it."