It’s In the Details: Let’s Be Specific About the Uses of Assessment Results
How Assessment Results Can Inform Instruction
“We have selected Assessment XYZ to improve teaching and learning in our district.” This is a common refrain among school and district leaders. However, such refrains must be translated into actionable guidance: all of those involved in designing, implementing, and leading programs of assessment need to do a better job of explaining how assessment results can and should be used.
Often, district assessments take the form of off-the-shelf interim or benchmark assessments, but district-developed assessments or assessment batteries are also common. Even when the instructional purpose(s) of these assessments are further articulated, they generally remain vague – e.g., to “identify students’ strengths and weaknesses and subsequently modify instruction to enhance student learning” (Abrams, McMillan, & Wetzel, 2015).
This lack of specificity is a hallmark of many district-led assessment programs (cf. Abrams et al., 2015; Clune & White, 2008). Simply saying that assessment results are meant to guide instruction does little to help educators understand the ways in which the information can, and cannot, be used to inform their instruction (and, at the district level, how administrators can better support students and educators).
Educators are often expected to take assessment results and knowledge about their students, add a dash of magic, and then know exactly what to do to modify instruction for each and every child. This is challenging, to say the least. To support educators, all of those involved in implementing district-led assessment, and those leading the programs in particular, need to do a better job of defining the specific ways in which assessment results can and should be used.
Using Assessment Results to Make Instructional Modifications
There are a number of ways in which classroom instruction might be modified in light of students’ strengths and weaknesses, as identified through district-led assessment. Instructional modification may be quite proximal to assessment administration – immediately after or, say, the next class day (assuming assessment results are returned in real time) – or quite distal, say in the following academic year. Examples of the latter include shifts to the scope and sequence of instruction for the same course in the next academic year. In addition, instructional modifications may be implemented for an individual student, a group of students, or the entire class. Moreover, the modifications could include changes to instruction within regular class periods or through the addition of supplemental instruction during the school day, after school, on weekends, or even during the summer.
Proximal, within-class-period instructional modifications could take the form of a new one-day lesson that addresses conceptual misunderstandings pointed out by assessment results. Within this new lesson, educators could engage in formative assessment conversations to better understand and address these misconceptions and end the lesson with student exit tickets. The next class period could then start with a 20-minute mini-lesson built on the exit tickets (akin to a model suggested by Penuel, Frumin, Van Horne, & Jacobs, 2018). Instructional modification could also come in the form of shifting homework to target areas of student weakness and providing students with classroom time to get help on the homework from one another and the teacher.
Supporting Educators
These two example approaches to proximal, within-class-period instructional modification could both be useful in a given context. Clearly, educators should not be forced into one mold, but school and district leaders could support them by providing a menu of instructional options tailored to the assessment their students take – particularly if the options are carefully aligned to common instructional scopes and sequences. Developing concrete exemplars of assessment use would help not only educators but also those who are designing or selecting assessments at the district level.
Thus, those implementing district-led assessments need to work with educators and other partners to develop guidance that gets beyond simply stating that assessments should be used to “inform instruction.” This type of guidance can also help prompt conversations around the timing and content of the assessment(s) and how those assessment features interact with curriculum and instruction. Better yet, implementers should work hand in hand with educators and others to first determine their priorities for district-wide improvement and then decide whether a district-led assessment program will help reach those goals. Such determinations should carefully define and consider the ways in which the use of assessment information will lead to meeting the given goals. In educational lingo, implementers should develop a theory of action that connects the near- and long-term intended outcomes of a policy with the inputs and actions intended to bring about those outcomes.
At minimum, those leading assessment initiatives should drill down at least one more layer to concretely define the ways in which assessment results can and should be used. Ideally, these uses are connected to and informed by goals for district-wide improvement. Doing so allows assessments to become a useful part of the educational landscape and not just another hill for educators to go around.
1 Examples of such batteries include the Woodcock–Johnson Tests of Cognitive Abilities or the Dynamic Indicators of Basic Early Literacy Skills (DIBELS).
2 My critique extends to all purposes and uses, but I focus on instruction here.
3 Such uses presuppose that assessment information (e.g., scores on each item, sub-standard, standard or whole test) is specific enough to actually guide instruction.
4 One example of a theory of action is provided by a brief, oriented towards the development of state innovative systems of assessment, developed by the Center for Assessment and KnowledgeWorks. Another example is provided by the district data team toolkit from the Massachusetts Department of Education. Neither of these examples lines up perfectly with district-led assessment programs, but both are easily adaptable.
References
Abrams, L. M., McMillan, J. H., & Wetzel, A. P. (2015). Implementing benchmark testing for formative purposes: Teacher voices about what works. Educational Assessment, Evaluation and Accountability, 27(4), 347–375.
Clune, W. H., & White, P. A. (2008). Policy effectiveness of interim assessments in Providence Public Schools (WCER Working Paper No. 2008-10). Madison, WI: Wisconsin Center for Education Research, University of Wisconsin–Madison.
Penuel, W. R., Frumin, K., Van Horne, K., & Jacobs, J. K. (2018). A phenomenon-based assessment system for three-dimensional science standards: Why do we need it and what can it look like in practice? Paper presented at the Annual Meeting of the American Educational Research Association, New York, NY. Available online at: http://learndbir.org/resources/A-Phenomenon-based-Assessment-System-for-Three-dimensional-Science-Standards.pdf