
Through-Year Assessment: Are We Asking Too Much?

Mar 29, 2023

A Q&A with Center President Scott Marion

The Center for Assessment just published a major paper about through-year assessment. It’s a hot topic in states right now: more than a dozen are designing, piloting, or implementing versions of this model to replace their current end-of-year tests for federally mandated accountability.

Co-authors Nathan Dadey, Carla Evans and Will Lorié write that through-year programs may provide benefits, but they also pose many conceptual, technical and practical challenges, and will demand significant tradeoffs.

Scott Marion, the Center’s president and executive director, spoke with Catherine Gewertz, its editorial director, about what led the Center to elevate through-year assessment as a key issue facing education leaders.

Why is the Center highlighting through-year as an important topic right now?

It was one of our highest priorities for a state-of-the-field paper because there’s so much state activity going on with through-year. There’s a lot of uncertainty about what exactly through-year is. People are using different definitions, and they’re adopting different types of models that they’re all calling “through-year.”

We were obsessed with some of the technical issues early on, but there’s a whole host of practical, policy, and technical issues that we think states need to be thinking about. We did a virtual convening about through-year in November 2021, and 18 months have gone by, so we figured it was a good time to see where things are now.

Let’s pause here so you can remind everybody what through-year is.

Through-year, as we’ve been defining it, is basically multiple assessment events spread over the course of the year, where those test events contribute in some way to the overall score or determination—achievement level—of a kid, or a school or a subgroup, or whatever the case may be, at the end of the year.

I noticed that you said that through-year is a model in which the within-year results factor in some way into the summative determinations. But not everyone uses that definition. Some people call it through-year even if they don’t factor within-year results into a summative determination. Right?

Well, I think they’re being fast and loose with the terminology then, to be honest. I think if the various tests don’t contribute to the overall determination, then, in my view, they just have a collection of assessments. I’ve referred to such tests as being “loosely coupled.” 

They’re built like the summative assessment. They use the same type of item formats and the same test platform. They might be moving towards a balanced system of assessment. But that doesn’t make it a through-year. In my mind—and not everyone agrees with me, I know—a through-year means that the pieces are supposed to come together in some way for an accountability score.

In our paper, we use the broader bucket definition—through-year can include systems that don’t factor the within-year results into a summative determination—just so that we could include a list of states that are using or exploring different sorts of models. 

Most of the current interest and action on through-year is in the context of accountability, right? 

Correct. States are using through-year to replace ESSA-required accountability tests. They are also trying to listen to folks from districts who are complaining that they are not getting useful information from the current tests.

The paper lays out the Center’s key concerns about this trend and outlines important issues states should consider in a possible move to the through-year model. What would you say is your main overarching concern about the upsurge of interest in through-year?

My main concern is that we’re asking tests designed for an accountability purpose to serve some other purpose as well. That other purpose is most often about learning, about informing instruction. 

We have a long history of trying to combine instruction and accountability, and accountability always wins. 

So my biggest question is: If people all of a sudden feel like they can build a better mousetrap than has ever been built, I sure hope the field can build a body of evidence to back it up. I haven’t seen any empirical evidence, or even logic, that suggests that what they’re trying to do is possible.
