Making Sense of Recent COVID-19 Impact Studies by Interim Assessment Vendors

Mar 25, 2021

Part 1: The Practical Implications of COVID-19 “Learning Loss” Studies

It has been a year since COVID-19 made it impossible for states to administer their spring 2020 assessment programs. Since then, multiple studies using data from commercial “interim” assessments have been put forward to help the field understand the pandemic’s impact on student learning. Through these COVID-19 impact studies, assessment vendors have attempted to shed light on the pandemic’s effects on learning nationally. In this three-part CenterLine series, we address the practical implications of these studies for learning recovery. We believe this series is relevant to the work of state- and district-level policymakers, assessment directors, and specialists at all levels responsible for interpreting data to inform learning recovery efforts.

Several interim assessment vendors have published studies attempting to quantify the effects of COVID-19 school disruptions on student learning in reading and mathematics. These studies include: NWEA using its MAP assessment, Curriculum Associates using i-Ready, Renaissance using STAR, and Amplify using DIBELS. These four studies use different samples of schools and students. They employ different measures, and they vary in their methodological approaches and depth of documentation. However, they all attempt to answer basic questions about the likely effect of COVID-19 on student learning outcomes.

As noted elsewhere on CenterLine (Lorié, 2020) and as conceptualized in these studies, the term “learning loss” is defined as the difference between student achievement during the pandemic and what it would have been if there had been no disruption. We use the terms COVID impact and COVID effect synonymously as alternatives to the term “learning loss.”
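To make this counterfactual framing concrete (using our own shorthand, not notation from any of the studies), the quantity these studies try to estimate for a group of students can be written as

\[ \text{COVID effect} = A_{\text{observed}} - A_{\text{expected}} \]

where A_observed is measured achievement during the pandemic and A_expected is the achievement the same students would have been expected to show absent the disruption, typically projected from pre-pandemic performance or from prior-year cohorts.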

A summary of these studies’ research questions, findings, and samples is available here.

These Studies Are Shaping Perceptions of “Learning Loss” Nationally

These studies provide good models for exploring the impact of COVID-19 locally, using local data. However, absent official state assessment results, these interim assessment studies have been thrust into the limelight amidst a massive demand for information about the pandemic’s effects on learning on a larger scale. EdWeek, the Wall Street Journal, and CNN are among the outlets that have cited them. Because of this exposure, the studies are inevitably shaping a national picture of student learning during the pandemic, the COVID impact, the quality of remote education, and the grades and subjects most in need of attention.

But an incomplete picture can be misleading, especially if that picture depicts a situation different from the context and conditions of the original studies. Without attending to these studies’ stated limitations, it is all too easy to arrive at an erroneous picture of the effects of the pandemic on student learning. That picture could lead to unwarranted conclusions and poorly informed decisions at the school, district, and state levels. The two most critical limitations of these studies are:

  1. Self-Selection. Data for these studies come from schools and districts that purchase and use interim products; none of these client bases represent any specific state or region – much less the nation.
  2. Missing Data. These studies’ analytic samples exclude schools and students who did not test due to the very disruption whose effect these studies attempt to quantify. In one analysis, NWEA was missing 2020 test data from 50% of the students tested in 2019.

Taking the Findings of These Impact Studies Out of Context Carries Risks

Although the limitations of these COVID-19 impact studies are fully noted within each report, general statements of the kind likely to appear in headlines often imply a national context that can be misleading.

For example, the Wall Street Journal headline referenced above reads, “Student Test Scores Drop in Math Since COVID-19 Pandemic,” with the secondary headline, “Reading skills are modestly behind in some grades in an analysis of widely used tests for elementary and middle school students.” However, self-selection and missing data in the Renaissance study undermine any claim that, nationally, students have been affected less in reading than in math.

Actions based on these studies’ findings may not be warranted for districts or schools other than those that participated; and those that did participate have their own vendor data to guide them. As the Curriculum Associates study acknowledges, “All results should be interpreted as only representative of the students included in this analysis” (p. 16).

These Studies Are Starting Points for Conversations to Guide Learning Recovery

Rather than precise, authoritative pictures of the pandemic’s impact on student learning, these studies are best understood as starting points for conversations to guide learning recovery efforts. They provide a framework for conceptualizing the COVID effect and asking the right questions about it.

The studies provide excellent templates for exploring the impact of the pandemic locally, using only local data. Of course, local analyses are still subject to the second critical limitation identified above: the lack of test data for students and schools that did not test. But they can yield more actionable pictures to guide learning recovery.
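As a rough illustration of what such a local analysis might look like, the sketch below (in Python, using a hypothetical export file and column names such as local_interim_scores.csv, student_id, grade, subject, term, and scale_score) compares mean fall 2020 interim scores with mean fall 2019 scores by grade and subject, and reports what share of fall 2019 testers have no fall 2020 score. It is a simplified starting point under those assumptions, not a replication of any vendor’s methodology.

import pandas as pd

# Hypothetical export of local interim assessment results:
# one row per student per test event, with columns
# student_id, grade, subject, term, scale_score.
scores = pd.read_csv("local_interim_scores.csv")

fall19 = scores[scores["term"] == "Fall 2019"]
fall20 = scores[scores["term"] == "Fall 2020"]

# Change in mean scale score by grade and subject across the two falls.
mean_change = (
    fall20.groupby(["grade", "subject"])["scale_score"].mean()
    - fall19.groupby(["grade", "subject"])["scale_score"].mean()
)

# The second limitation noted above: how many fall 2019 testers
# are missing from the fall 2020 data?
tested_2019 = set(fall19["student_id"])
tested_2020 = set(fall20["student_id"])
pct_missing = 100 * len(tested_2019 - tested_2020) / len(tested_2019)

print(mean_change)
print(f"{pct_missing:.1f}% of fall 2019 testers have no fall 2020 score")

Even this simple comparison keeps the question of missing students in view, which is the focus of the next two posts in this series.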

Our second post in this series will describe how developing robust recovery plans requires districts and states to understand which students are missing from these kinds of analyses. In the third post, we will describe how districts and states can use information about students who tested and students missing from testing to gain a better understanding of the recovery task ahead of them.
