Including Missing Data in the Estimate of the Impact of the Pandemic on Student Learning

Part 3: The Practical Implications of COVID-19 Learning Loss Studies

It has been a year since COVID-19 made it impossible for states to administer their assessment programs in spring 2020. Since then, multiple studies using data from commercial “interim” assessments have been put forward to help the field understand the pandemic’s impact on student learning. Through these “learning loss” studies, assessment vendors have attempted to shed light on the pandemic’s effects on learning nationally. In this three-part CenterLine series, we address the practical implications of these studies for learning recovery. We believe this series is relevant to the work of state- and district-level policymakers, assessment directors, and specialists at all levels responsible for interpreting data to inform learning recovery efforts.

“How bad is it?” This question has been repeated ad nauseam since the start of the COVID-19 pandemic. In education, the “how bad is it” question is often aimed at the state of student learning. The COVID-19 impact studies discussed in this series offer a much-needed look at student learning. Those studies, however, provide an incomplete answer to our question because, as noted by Kuhfeld et al., “Student groups especially vulnerable to the impacts of the pandemic were more likely to be missing from our data. Thus, we have an incomplete understanding of how achievement this fall may differ across student groups and may be underestimating the impacts of COVID-19” (p. 2).

State summative assessments administered in spring 2021 will likely face the same problem of missing students. As we discussed in our previous post, for a district or state leader, answering the question of “how bad is it?” largely depends on how similar, or not, their students who are missing from testing are to those who have been tested. In this post, we offer simple examples to demonstrate how state and district leaders can use what they know about the students missing from testing to estimate the pandemic’s effect on student learning.

Identify Relevant Groups of Students and Establish Their Baseline Performance

The first step is to identify the subgroups of students that are your highest priority as you investigate the effects of the pandemic. These may be students who have been disengaged or unable to consistently access remote instruction, or student groups targeted for support before the pandemic. After identifying these subgroups, determine the percentage of your student population that they represent.

The next step is to establish the baseline performance for those students, as well as your remaining students. As described in this recent Center guidance, baseline performance may be based on prior performance in a single year or combined across multiple years or tests.  

For simplicity, our examples use three equally sized groups based on student performance on the 2019 state test. We further assume that the means of adjacent groups differ by one standard deviation. The baseline performance and representation of the groups are shown in Table 1 and sketched in code below.
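To make this setup concrete, here is a minimal sketch (in Python) of the Table 1 baseline. The specific numbers, a 2019 state mean of 500 and a standard deviation of 50 scale-score points, are our assumptions, chosen to be consistent with the score changes described below (where a 5-point drop equals one-tenth of a standard deviation).

```python
# Baseline (2019) setup from Table 1: three equally sized groups whose
# means differ by one standard deviation. The state mean of 500 and the
# SD of 50 points are assumed values consistent with the post's figures,
# not numbers read from the original table.
SD = 50  # one standard deviation, in scale-score points

baseline_2019 = {
    "Low Performing":    {"share": 1 / 3, "mean": 500 - SD},  # 450
    "Middle Performing": {"share": 1 / 3, "mean": 500},
    "High Performing":   {"share": 1 / 3, "mean": 500 + SD},  # 550
}

# With equal shares, the 2019 state mean is simply the average of the
# three group means.
state_mean_2019 = sum(g["share"] * g["mean"] for g in baseline_2019.values())
print(state_mean_2019)  # 500.0
```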

Examine Current Test Performance to Understand Effects on Performance and Missing Data

Spring 2021 state test results (presented in the table below) show a mean score of 495, a drop of only 5 points, or one-tenth of a standard deviation, from 2019. Upon closer inspection, however, performance in each group actually dropped by 25 points, a far more substantial drop of half a standard deviation.

The 2021 data further reveal that the state mean score based on tested students was inflated: 50% of the students tested in 2021 were from the High Performing group, 40% were from the Middle Performing group, and only 10% were from the Low Performing group. When the group means are instead weighted by each group’s equal one-third share of the student population, the adjusted 2021 mean is 475, which better reflects the 25-point drop in each group.
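The arithmetic behind both numbers is a simple weighted average, sketched below. The 40% Middle Performing share of tested students is inferred as the remainder after the 50% and 10% shares given above.

```python
# 2021 group means after the uniform 25-point (half SD) drop from baseline.
means_2021 = {"Low Performing": 425, "Middle Performing": 475, "High Performing": 525}

# Shares of tested students in 2021: 50% High and 10% Low are given in
# the post; the 40% Middle share is inferred as the remainder.
tested_shares = {"Low Performing": 0.10, "Middle Performing": 0.40, "High Performing": 0.50}

# In the population, each group is one-third of all students.
population_shares = {group: 1 / 3 for group in means_2021}

observed_mean = sum(tested_shares[g] * means_2021[g] for g in means_2021)
adjusted_mean = sum(population_shares[g] * means_2021[g] for g in means_2021)

print(observed_mean)  # 495.0, inflated by the overrepresented High Performing group
print(adjusted_mean)  # 475.0, reflecting the 25-point drop in every group
```

Weighting each group by its true population share, rather than its share of tested students, is what recovers the 475.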

Account for Differential Impact Across Groups

In the example above, the impact of the pandemic was the same for all three groups. In reality, the impact may differ across groups. Table 3 presents a case in which there was no change in performance for the High Performing group, a 25-point drop for the Middle Performing group, and a 50-point drop for the Low Performing group. Note that the adjusted state mean in this case is still 475, unchanged from the previous example.
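A short continuation of the sketch reproduces this result. The group-level drops (0, 25, and 50 points) come from the Table 3 scenario described above.

```python
# Table 3 scenario: the pandemic's impact differs by group.
baseline = {"Low Performing": 450, "Middle Performing": 500, "High Performing": 550}
drops    = {"Low Performing": 50,  "Middle Performing": 25,  "High Performing": 0}

means_2021 = {g: baseline[g] - drops[g] for g in baseline}
# {'Low Performing': 400, 'Middle Performing': 475, 'High Performing': 550}

# Equal one-third weights again yield an adjusted mean of 475, so the
# same statewide number can mask very different group-level impacts.
adjusted_mean = sum(mean / 3 for mean in means_2021.values())
print(adjusted_mean)  # 475.0
```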

Although the adjusted state mean is the same in both cases, the impact of the pandemic on student learning differs greatly from district to district. The example shown here for High Performing vs. Low Performing districts is extreme and simplified for the purposes of this post. Differences in the impact of the pandemic across districts, however, are quite likely.

Analyses like these help us estimate a best-case scenario of the COVID-19 impact. Here is why: the analyses assume that once we know a student’s group (in this case, pre-pandemic performance), whether or not that student is missing makes no difference to their 2021 score (or to what it would have been); in other words, they assume the data are missing at random within groups. In reality, we know that being missing from testing is related to other factors, such as not attending school regularly, that are associated with lower test scores. A safer assumption is that missing students, as a group, have lower scores than their group membership alone would suggest.
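To see why this makes the adjusted mean a best-case estimate, consider a rough sensitivity sketch. Both the 60% overall participation rate and the 10-point penalty for missing students are hypothetical values chosen only to illustrate the direction of the bias; they are not figures from the studies.

```python
# Sensitivity check on the "missing at random within groups" assumption.
# PARTICIPATION and PENALTY are hypothetical values for illustration only.
PARTICIPATION = 0.60  # assumed share of all students tested in 2021
PENALTY = 10          # assumed points by which missing students trail
                      # tested peers within the same group

tested_means = {"Low Performing": 425, "Middle Performing": 475, "High Performing": 525}
tested_shares = {"Low Performing": 0.10, "Middle Performing": 0.40, "High Performing": 0.50}

adjusted_mean = 0.0
for group, tested_mean in tested_means.items():
    # Fraction of this group's own students who tested: the group's share
    # of tested students, scaled by overall participation, divided by the
    # group's one-third share of the population.
    tested_frac = tested_shares[group] * PARTICIPATION / (1 / 3)
    group_mean = tested_mean - (1 - tested_frac) * PENALTY
    adjusted_mean += group_mean / 3

print(round(adjusted_mean, 1))  # 471.0, below the 475 "best case"
```

Under these assumptions the estimate falls from 475 to 471; larger penalties or lower participation rates would push it lower still.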

Concluding Thoughts on the Practical Implications of the COVID-19 Learning Loss Impact Studies

Our aim with this series has been to help states and districts build on the findings of the COVID-19 learning loss impact studies to better understand how relevant aggregate interim results, and eventually state summative assessment results, are to their local context. Our premise is that these studies are not an end point for drawing conclusions about the impact of the pandemic, but rather a starting point for further investigation.

Whether examining the results of these studies, state-level results from spring 2021 testing, or their own local test results, the key question policymakers should be asking themselves is: “How likely is it that these results will hold true for my school, district, or state?”

Districts whose students differ widely from the samples included in reported test results are likely to see results that vary from those reported, sometimes significantly. Schools, districts, and states also vary widely on many dimensions that might affect estimates of the pandemic’s impact. Our goal is to provide some context for how variable the findings can be: for some states and districts, student performance may be much worse than reported, while in others it may be better.

The relevant student subgroups are likely not entirely missing from testing, but rather underrepresented. So the goal is first to define the degree and type of missingness (i.e., how many students are missing and who they are), then to understand the impact of the pandemic on the learning of the students missing from testing, and finally to use all of this information to improve our estimate of the pandemic’s impact on student learning. Even then, it is safest to treat that estimate as a best-case scenario.

As we discussed in the previous post, information about factors influencing students’ opportunity to learn, including information on their social and emotional wellness, will be critical to understanding the impact of the pandemic and recovery planning.
