Getting Ahead of the Curve: Planning for Accurate Equating in 2021
We cannot know what classrooms and teaching will look like in the coming school year, but that need not prevent states from planning ahead for accurately equating spring 2021 summative assessments. In making that statement, we assume those tests will happen with unaltered blueprints and administration conditions, while acknowledging that the conditions of learning may be very different from those in the past. In fact, it is those expected differences that require us to consider their potential to degrade equating accuracy and to get ahead of the curve by planning for them now.
The Window of Opportunity to Prepare for Post-Equating Challenges
The primary goal of equating is to adjust for differences in the difficulty of operational test forms so that the resulting scaled scores mean the same thing as they did in previous years. Even under the most favorable conditions, equating is intended to adjust for only small differences in difficulty between test forms. The conditions in 2021 may be anything but favorable, and accurate equating is not a given under the extreme circumstances in which we find ourselves. It is therefore important to plan now for checking equating accuracy in 2021.
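To make the adjustment concrete, here is a minimal sketch of mean/sigma linking, one standard method for placing a new form's scale onto the base scale under a common-item (anchor) design. All difficulty and ability values below are hypothetical, and an operational program would use its chosen calibration and linking software; this illustrates the idea rather than prescribing a procedure.

```python
import numpy as np

# Hypothetical Rasch difficulty estimates for the same anchor items,
# calibrated separately on the base (old) scale and on the new 2021 form.
b_base = np.array([-1.20, -0.45, 0.10, 0.65, 1.30])
b_new  = np.array([-1.05, -0.30, 0.22, 0.80, 1.41])

# Mean/sigma linking: find A and B so that A * b_new + B matches the base scale.
A = b_base.std(ddof=1) / b_new.std(ddof=1)
B = b_base.mean() - A * b_new.mean()

# Any ability or difficulty estimate from the new calibration can now be
# placed on the base scale with the same linear transformation.
theta_new = 0.50                       # a hypothetical 2021 ability estimate
theta_on_base = A * theta_new + B
print(f"A = {A:.3f}, B = {B:.3f}, theta on base scale = {theta_on_base:.3f}")
```

The key point is that the quality of A and B depends entirely on the anchor items behaving as expected, which is exactly what 2021 conditions put at risk.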
The primary risk is that COVID-19 may influence student performance on some items in ways that make estimated item difficulty (and other statistical parameters) anomalous in spring 2021. Because we cannot predict all possible shifts in item difficulty, and because removing items from the equating set itself risks inaccuracies, states have a limited window of opportunity to consider embedding an external anchor as a backup to current equating plans. The function of this anchor is to provide a pool of extra items that could be used should the currently planned anchor items not function well enough to remain appropriate for anchoring to the test scale.
These extra anchor items could replace previously planned field-test items, or they could be distributed across test forms and students as additional items. This design change would give content and psychometric teams the flexibility to swap in items with more stable statistical characteristics if needed, while maintaining the blueprint coverage of the equating item block.
Pre-Equating
It is reasonable to assume that the operational forms most states will administer in 2021 were constructed from items whose data were collected before the COVID-19 disruptions. This implies that the pre-equated psychometric characteristics of the 2021 operational forms, including their difficulties, should be unaffected by COVID-19. This bodes well for testing programs that use a pre-equating model: the scoring tables developed during test construction for the spring 2021 forms will be completely uninfluenced by the disruptions we are experiencing in learning and assessment.
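To see why, consider a minimal sketch of how a pre-equated raw-score-to-theta table can be produced entirely from bank parameters. It assumes a Rasch model and hypothetical item difficulties already on the base scale; nothing in it touches spring 2021 response data.

```python
import numpy as np

def rasch_expected_score(theta, b):
    """Expected raw score at ability theta for Rasch items with difficulties b."""
    return np.sum(1.0 / (1.0 + np.exp(-(theta - b))))

def raw_to_theta_table(b, lo=-6.0, hi=6.0, tol=1e-6):
    """Invert the test characteristic curve by bisection, mapping each
    achievable raw score to a theta estimate (extreme scores excluded,
    since their maximum-likelihood estimates are unbounded)."""
    table = {}
    for raw in range(1, len(b)):       # interior raw scores only
        a, c = lo, hi
        while c - a > tol:
            mid = 0.5 * (a + c)
            if rasch_expected_score(mid, b) < raw:
                a = mid
            else:
                c = mid
        table[raw] = round(0.5 * (a + c), 3)
    return table

# Hypothetical bank difficulties, placed on the base scale from pre-COVID data
b = np.array([-1.5, -0.8, -0.2, 0.4, 1.0, 1.7])
print(raw_to_theta_table(b))
```

Because every input to this table predates the pandemic, the resulting scores inherit none of the 2021 disruption.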
Even so, there is potential utility in planning post-equating analyses in 2021 to assess the stability of item parameters and to evaluate possible changes in measurement model fit and person fit. The findings from such analyses can substantively contribute to a state's understanding of any "COVID effect" on both student performance and measurement stability. That information can also inform our level of confidence in the statistics of items field-tested in 2021.
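One common form of such a stability analysis is a robust-z check on item difficulties, comparing bank values against fresh 2021 estimates. The sketch below uses hypothetical difficulties, the conventional 0.74 × IQR scaling, and a 1.645 flagging cutoff; actual flagging criteria vary by program.

```python
import numpy as np

def robust_z(b_bank, b_2021):
    """Robust-z drift statistic per item: the bank-to-2021 difficulty
    difference, centered at the median difference and scaled by
    0.74 * IQR (a robust stand-in for the standard deviation)."""
    d = b_2021 - b_bank
    iqr = np.subtract(*np.percentile(d, [75, 25]))
    return (d - np.median(d)) / (0.74 * iqr)

# Hypothetical anchor difficulties: bank values vs. spring 2021 estimates
b_bank = np.array([-1.30, -0.60, 0.00, 0.50, 1.10, 1.80])
b_2021 = np.array([-1.25, -0.48, 0.60, 0.58, 1.25, 1.90])  # third item drifts

z = robust_z(b_bank, b_2021)
flagged = np.where(np.abs(z) > 1.645)[0]
print("robust z:", np.round(z, 2), "flagged item indices:", flagged)
```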
Post-Equating
For testing programs that use a post-equating model, the scoring tables for the operational forms will be produced from the spring 2021 data. It is therefore vital to identify the steps in the post-equating process that could be affected by the COVID-19 disruptions and to devise mitigation strategies that support the comparability of scores resulting from the equating. For most programs, the anchor set used to link the scores on the spring 2021 forms to the base scale deserves attention. It is entirely possible that a larger number of items will be flagged by the post-equating stability check in 2021, potentially compromising the viability of the anchor set. This is where embedding an external anchor in the spring 2021 forms can serve as a backup plan. If adding items to the test is not possible, the state may instead identify which existing items on the form could serve as equating items, if necessary, in anticipation of greater instability in 2021.
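A minimal sketch of that backup logic, with hypothetical item IDs and an illustrative minimum-anchor-count rule: flagged items are dropped, and items from the embedded external pool are swapped in only if the surviving anchor set falls below the program's minimum.

```python
MIN_ANCHOR_ITEMS = 15   # hypothetical program rule for a viable anchor set

def build_final_anchor(planned_ids, flagged_ids, backup_ids):
    """Drop flagged anchor items; if the surviving set falls below the
    program's minimum, draw replacements from the embedded backup pool.
    All IDs and the minimum-count rule here are illustrative."""
    surviving = [i for i in planned_ids if i not in set(flagged_ids)]
    replacements = []
    for item in backup_ids:
        if len(surviving) + len(replacements) >= MIN_ANCHOR_ITEMS:
            break
        replacements.append(item)
    final = surviving + replacements
    if len(final) < MIN_ANCHOR_ITEMS:
        raise RuntimeError("Anchor set still too small; revisit equating design.")
    return final

planned = [f"item_{k:03d}" for k in range(1, 19)]        # 18 planned anchors
flagged = ["item_003", "item_007", "item_011", "item_012", "item_015"]
backup  = ["backup_01", "backup_02", "backup_03"]
print(build_final_anchor(planned, flagged, backup))
```

A real substitution would also need to respect blueprint coverage and the statistical targets for the anchor set, which this sketch omits.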
Alternatively, states could consider transitioning to a pre-equated process for 2021. Although this decision requires consideration of item bank size and depth, test design, field-testing plans, and other design elements and conditions, it could be a viable option for states concerned about the risks of post-equating in 2021.
A Note on Using Existing Tests in Spring 2021
Since the tests planned for spring 2020 were built in the months preceding school closures and were never used, states have the very reasonable option of administering them in 2021 instead. For states using fixed forms, the same may be true of previously administered tests (e.g., spring 2019). The benefit of using previously developed tests is obvious: they already exist. This is the logical choice for states that do not wish to change their test blueprints, as it minimizes effort and cost.
From both a trend-analysis and a psychometric perspective, re-using the 2019 tests in 2021 is an appealing option. It offers the added benefits of supporting direct raw score comparisons between 2019 and 2021 and of allowing the stability of item parameters to be checked for every item on the test. While the latter may serve further psychometric research, the former allows states to directly evaluate student performance pre- and post-pandemic. Because it can serve these dual purposes, administering the 2019 tests, if feasible, is an option states should seriously consider.
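Because an identical form yields directly comparable raw scores, the pre/post comparison can be as simple as a standardized mean difference. The sketch below uses simulated scores purely for illustration; a real analysis would use observed distributions and account for shifts in the tested population.

```python
import numpy as np

rng = np.random.default_rng(2021)

# Simulated raw scores on the identical form (illustrative values only)
scores_2019 = rng.normal(loc=32.0, scale=8.0, size=5000).clip(0, 50)
scores_2021 = rng.normal(loc=29.5, scale=8.5, size=5000).clip(0, 50)

# Cohen's d: mean difference scaled by the pooled standard deviation
pooled_sd = np.sqrt((scores_2019.var(ddof=1) + scores_2021.var(ddof=1)) / 2)
d = (scores_2021.mean() - scores_2019.mean()) / pooled_sd
print(f"Standardized change in mean raw score (2021 - 2019): d = {d:.2f}")
```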
Equating in 2021 was one of the topics we discussed on September 1 as part of the Center for Assessment's RILS distributed conference on the Implications of the COVID-19 Pandemic on Assessment and Accountability.
The September session, Spring 2021 Summative Assessment, focused on implications for administering statewide summative assessments in Spring 2021. We discussed issues and technical considerations related to test design, administration, scoring, field testing, scaling, equating, standard setting, and reporting. Our goal was to provide states and their assessment providers with a list of questions and practical guidance to support them as they develop or refine their operational plans for summative assessments in 2021.