What’s Even Going On?
Collecting Data on Student Experiences to Understand Student Learning in Light of COVID-19
As the school year gets underway, one question on everyone’s mind is: What impact will all of the disruptions have on student learning? Answering this question requires collecting data on student experiences in addition to data on student achievement.
Data on student experiences – data that really lets us dig into what students’ day-to-day school lives are like – has largely been absent from statewide collection to date.
When policymakers consider collecting data about students, federally-mandated large-scale state assessments understandably command a great deal of their attention. Much of the current conversation is focused on the administration of a statewide summative assessment, not for the purposes of accountability, but instead to quantify where students are in their learning relative to prior years. The administration of summative assessments looks to be challenging for the 2020-2021 school year, particularly because many students will likely still be engaged in hybrid and remote instruction when it comes time for testing. At the present time, it is largely unknown whether (and if so, how) statewide summative assessments can be successfully administered under these conditions.
Even with successful administration, however, assessment data alone leaves a lot out. Student achievement data on its own cannot answer questions like:
- What did instruction look like for students?
- What types of instruction are associated with higher or lower student achievement?
- Are there specific types of instruction that are working (or worrying)?
Collecting data on student experiences helps us address these questions and also provides a lens to understand what is going on, even if the administration of a state summative test is less than successful.
Considering the Data That We Need
Answering these kinds of questions requires the collection of additional data to understand the on-the-ground experience of students, teachers and school leaders. In a perfect world, such data would detail the experience of each student, capturing key features of both instruction and the support students receive as they learn.
Such student-level data could include details on the following (a rough sketch of how such a record might be structured appears after this list):
Face-to-Face Instruction
- How many days of face-to-face instruction per week the student received, as well as how many days were available to the student
- The size of the student’s class
- Whether additional educators, e.g., paraprofessionals, were available to support the student during instruction
Remote Instruction
- Whether remote instruction was online or through other means (e.g., pick up and drop off paper packets)
- Whether the student’s entire class was remote, or whether a subset of students opted for remote instruction
- How many hours of remote synchronous and asynchronous online instruction the student participated in each week, as well as how many hours were available to the student (initial research suggests that many students spent far less time in online instruction than they would have in face-to-face instruction)
- The types of interactions occurring during remote learning:
- Frequency of direct interaction between the student and his or her teacher(s) (e.g., daily, weekly)
- Types of interactions between the student and his or her teacher(s):
- Messaging via a learning management system
- Video conferencing
- Access to appropriate technology, including whether the student has:
- Access to sufficient technology to engage in online learning
- Sufficient familiarity with technology to engage in instruction
- Support from teachers and others to use technology
- Whether the student has:
- A quiet space to learn
- Support from adults (e.g., a guardian or other adult) with using technology, staying on track with instruction, and understanding the instruction itself
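To make the list above more concrete, the sketch below shows, in Python, what a weekly student-level record built around these fields might look like. It is a minimal illustration only; the class and field names (InstructionMode, StudentWeekRecord, and so on) are assumptions made for the example, not an existing data standard or reporting format.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class InstructionMode(Enum):
    """Illustrative instruction modes; a real collection may need finer categories."""
    FACE_TO_FACE = "face_to_face"
    HYBRID = "hybrid"
    REMOTE_ONLINE = "remote_online"
    REMOTE_PAPER = "remote_paper"  # e.g., pick-up/drop-off paper packets


@dataclass
class StudentWeekRecord:
    """One student's instructional experience for one week (hypothetical schema)."""
    student_id: str
    week_start: str  # ISO date, e.g., "2020-10-05"
    mode: InstructionMode

    # Face-to-face instruction
    face_to_face_days_received: int = 0
    face_to_face_days_available: int = 0
    class_size: Optional[int] = None
    additional_educators_present: Optional[bool] = None  # e.g., paraprofessionals

    # Remote instruction
    whole_class_remote: Optional[bool] = None  # or only a subset of students opted for remote
    synchronous_hours: float = 0.0
    asynchronous_hours: float = 0.0
    remote_hours_available: float = 0.0
    teacher_contact_frequency: Optional[str] = None  # e.g., "daily", "weekly"
    interaction_types: List[str] = field(default_factory=list)  # e.g., ["lms_message", "video_conference"]

    # Access, technology, and supports
    has_sufficient_technology: Optional[bool] = None
    familiar_with_technology: Optional[bool] = None
    receives_technology_support: Optional[bool] = None
    has_quiet_space: Optional[bool] = None
    has_adult_support: Optional[bool] = None
```

A district could, in principle, populate one such record per student per week, or relax the granularity (e.g., monthly) if weekly reporting is not feasible.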
In addition to that student-level data, details would also be collected on:
Teacher Training
- The degree to which educators are prepared and trained for remote instruction, including:
- Access to and familiarity with a learning management system
- Experience and training with remote or online instruction
- Sufficient experience with technology
Disruptions to Instruction
- Whether, and if so when, interruptions and shifts to instruction occurred, including the type of shift (e.g., a shift from hybrid to remote) and the reason for the shift (e.g., a COVID-19 outbreak or natural disaster)
The information above would help paint a rich picture of student experiences, but it still misses a number of important aspects of students’ socio-emotional wellbeing and learning environments. It also omits specifics about the scope and sequence of instruction, such as whether, and if so how, instruction was modified to allow for measures like social distancing. In considering data collection, states may be well served by first identifying the overarching categories of data that are important, then working to collect data within those categories.
Considering the Data That We Can Actually Collect
Fully capturing data on the above aspects of student experiences this school year is likely impossible. Educators and administrators are stretched thin simply making instruction happen for their students this year. Ultimately, bad data collection may be worse than no data collection at all. Those considering collecting this type of data should carefully weigh the burden additional data collection may place on teachers and leaders, and work to ensure that any data collected is of high quality and ultimately useful. Doing so will likely require explaining to those collecting the data, in detail, why the data is important and how it will be used in real and meaningful ways. Any attempt that does not connect to the real needs of those on the ground will be met with, at best, limited engagement.
Thus, when considering the above list, it’s important to prioritize what data matters most and is actually possible to collect. At a minimum, districts and states should consider capturing (a minimal sketch of one way to structure such a collection follows this list):
- The number of days of face-to-face instruction each student received per week
- The hours of remote synchronous and asynchronous online instruction in which the student participated each week
- Whether, and if so when, interruptions and shifts to instruction occurred, including the type of shift
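As a minimal sketch of how this prioritized collection could be structured, assuming a flat weekly file is the least burdensome thing for schools or districts to produce, the fields above could be captured in a simple CSV. The column names and example values below are placeholders, not a prescribed format.

```python
import csv

# Hypothetical column names for a minimal weekly, student-level collection.
MINIMAL_FIELDS = [
    "student_id",
    "week_start",                 # ISO date for the first day of the week
    "face_to_face_days",          # days of in-person instruction received
    "remote_synchronous_hours",   # hours of live online instruction
    "remote_asynchronous_hours",  # hours of self-paced online instruction
    "instruction_interrupted",    # yes/no
    "shift_type",                 # e.g., "hybrid_to_remote"; blank if no shift
]

with open("minimal_weekly_collection.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=MINIMAL_FIELDS)
    writer.writeheader()
    # One illustrative row; the values are made up.
    writer.writerow({
        "student_id": "S12345",
        "week_start": "2020-10-05",
        "face_to_face_days": 2,
        "remote_synchronous_hours": 4.5,
        "remote_asynchronous_hours": 6.0,
        "instruction_interrupted": "no",
        "shift_type": "",
    })
```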
Getting even this data at the student level may be daunting, given that it would likely require weekly data collection from schools that accounts for the multiple instructional options they may provide. The data could instead be collected at the school level, reducing the response burden, though also limiting the analyses that could be conducted. One way such a school-level data collection could look is as follows (a sketch of a corresponding record appears after this list):
- A qualitative description of each type of instruction (e.g., what does hybrid instruction look like for this school?)
- How many students are engaged in this type of learning?
- How many days of face-to-face instruction do students typically receive?
- How many hours of remote instruction do students typically receive a week?
- Was this type of instruction interrupted? If so, when did that interruption occur and what type of instruction replaced it?
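A minimal sketch of what this school-level version might look like, again using hypothetical names: one record per school, per type of instruction offered, per reporting period.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SchoolInstructionTypeReport:
    """One school's report on one type of instruction it offers (hypothetical schema)."""
    school_id: str
    reporting_period: str   # e.g., "2020-10" for October 2020
    instruction_type: str   # e.g., "face_to_face", "hybrid", "fully_remote"
    description: str        # brief qualitative description of what this looks like at the school
    students_enrolled: int  # how many students are engaged in this type of learning
    typical_face_to_face_days_per_week: float
    typical_remote_hours_per_week: float
    was_interrupted: bool = False
    interruption_date: Optional[str] = None             # ISO date, if interrupted
    replacement_instruction_type: Optional[str] = None  # e.g., "fully_remote"
```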
The above approach to school-level data collection, like all of the prior suggestions for types of data to capture, is meant to be illustrative. Any and all data collection should be tailored to the specific reason(s) the data is being collected. In addition, the actual collection of data will involve much greater specificity than is provided here.
Finally, the instructional experiences of students can change rapidly, and it is likely that student classroom experiences will continue to shift as the 2020-2021 academic year unfolds. These shifts will be important to capture. For example, an unplanned two-week shift from a hybrid model to remote instruction, followed by a rapid return, is very different from a planned shift to remote instruction lasting several months.
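One way to capture such shifts, sketched below under the assumption that schools can report each change of instructional mode as a dated event, is to log the shift, whether it was planned, and the reason, then derive durations at analysis time. The event structure and summary function are illustrative, not an established reporting format.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, List


@dataclass
class InstructionalShift:
    """A dated change in a school's instructional mode (hypothetical event record)."""
    school_id: str
    shift_date: date
    from_mode: str  # e.g., "hybrid"
    to_mode: str    # e.g., "remote"
    planned: bool   # planned transition vs. emergency response
    reason: str     # e.g., "COVID-19 outbreak", "scheduled phase-in"


def days_in_each_mode(
    shifts: List[InstructionalShift], start: date, end: date, initial_mode: str
) -> Dict[str, int]:
    """Summarize how many calendar days a school spent in each mode between start and end."""
    totals: Dict[str, int] = {}
    current_mode, current_start = initial_mode, start
    for shift in sorted(shifts, key=lambda s: s.shift_date):
        totals[current_mode] = totals.get(current_mode, 0) + (shift.shift_date - current_start).days
        current_mode, current_start = shift.to_mode, shift.shift_date
    totals[current_mode] = totals.get(current_mode, 0) + (end - current_start).days
    return totals
```

With an event log like this, a two-week unplanned move to remote instruction and a months-long planned one are easy to tell apart when the data is analyzed.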
Concluding Thoughts
If there were ever a time that education data can help us to understand and improve student learning, it is now. By all estimates, the COVID-19 pandemic is a Katrina-like event, with significant and long-lasting impacts for a substantial group of students. Ameliorating this impact will take deliberate and effective planning using the best evidence (i.e., data) available. The time to start collecting this data is right now.
The need for collecting this kind of data is not new, but the pandemic has made that need an imperative. In tackling this challenge, states should work to build an approach that not only addresses this school year, but also provides a basis for years to come. That is, work done this year should not be a one-off response, but instead the start or continuation of data collection and analysis that goes beyond outcomes and investigates student experiences.