Why Have Many (All?) Approaches to Improve K-12 Educator Assessment Literacy Failed to Produce Significant Changes in Teacher Practices at Scale?
Making Teachers Active Agents in Improving Assessment Literacy
Most of us would agree that if the problem of improving K-12 educator assessment literacy has not been solved, it is not because of a lack of trying. A quick search of books with “assessment literacy” in the title, or a more general search for books on formative assessment or classroom assessment, returns many excellent resources written by esteemed researchers, practitioners, and scholars. And for those who don’t want to read books, there is now a plethora of online assessment literacy professional learning modules, including some from our own organization. With all of these resources available, we have to ask: why have many (all?) approaches to improve K-12 educator assessment literacy failed to produce significant changes in teacher practices at scale?
To answer this question, let’s first consider what previous education reform efforts can teach us about why some reforms fail to change teaching practices. Then let’s examine the meaning of the word ‘scale’ and consider how clarifying what we mean by scale may be essential to promoting reforms that better align both with the lessons of the past and with current views of assessment literacy.
A History Lesson: Top Down, One-Size-Fits-All Approaches Don’t Work
In his 2002 book on American curriculum reform, Herbert Kliebard has a chapter about successes and failures in educational reform and the extent to which general historical “lessons” can be derived to explain why some common educational practices are so resistant to change. He uses three reform efforts as examples: (1) Horace Mann’s system of common schools; (2) progressive and child-centered education reforms; and (3) the educational objectives movement.
His key point for this discussion of failed assessment literacy reforms is that education reforms aiming to reconfigure patterns of teaching and learning (e.g., child-centered reforms) often fail because they attempt to change the authority and control structures in classrooms. Teachers are highly resistant to giving up control over classroom learning spaces; they may hybridize such reforms, but reforms of this type rarely change teaching practices writ large.
Educational reforms that aim to use social science research to change educational practice (e.g., educational objectives movement) often result in pro forma activity. Kliebard argues this outcome occurs because these reforms fail to account for the supremely contextual nature of educational practice. These reforms try to dictate certain behaviors and superimpose rules on the craft of teaching rather than providing a set of intellectual tools or principles to guide practice in context.
What Does it Mean to ‘Scale’?
What does it mean when we say that an education reform scales? Scale is simply the outcome of scaling up; that is, applying a process or product that has been successful with a small number of teachers, often in a controlled or highly supported setting, to all teachers in a school, district, or state. But what type of outcome? Morel, Coburn, Catterson, and Higgs (2019) posit four different ways of conceptualizing scale for innovations in education:
- Adoption: widespread use of an innovation without conceptualizing the expected outcomes.
- Replication: widespread implementation with fidelity that produces expected outcomes.
- Adaptation: widespread use of an innovation that is modified according to the needs of local users.
- Reinvention: innovations serve as a catalyst for further innovation; local actors remix and create something new.
Simple adoption of a program without building capacity and understanding among educators is rarely a good idea. Similarly, replication assumes that the same context or conditions exist across all classrooms, schools, or districts, and that outcomes in one setting should generalize across all settings.
Shifting Views of Assessment Literacy
These multiple meanings of scale map well onto the different ways in which assessment literacy has been understood and conceptualized over the past 30-40 years (DeLuca et al., 2019).
Assessment literacy was initially viewed as a practical professional skill: the set of knowledge and skills teachers need to be ‘literate’ in the area of assessment. This view focused on teachers’ technical knowledge and skills in assessment, with a substantial emphasis on psychometric principles and test design (see, for example, the 1990 Standards for Teacher Competence in Educational Assessment of Students). Brookhart (2011) critiqued the 1990 standards for their lack of attention to formative assessment processes, as well as to the social and theoretical aspects of assessment.
In 2015, the Joint Committee on Standards for Educational Evaluation released the Classroom Assessment Standards for PreK-12 Teachers. These standards defined assessment literacy through sixteen standards in which teachers exercise “the professional judgment required for fair and equitable classroom formative, benchmark, and summative assessments for all students” (p. 1). The critique of this approach is that it treats what teachers need to know and be able to do as a single set of knowledge and skills that applies equally well across contexts, conditions, situations, and settings, without accounting for the role of context in how assessment knowledge and skills are enacted within schools.
DeLuca and colleagues (2019) among others describe assessment literacy as negotiated, situated, and differential across teachers, scenarios, and contexts. In this viewpoint, assessment literacy is understood as a negotiated professional aspect of teachers’ identities where teachers integrate their knowledge of assessment with their knowledge of pedagogy, content, and learning context (e.g., Xu & Brown, 2016).
In other words, teachers must be able to adapt their assessment literacy tools to the students they teach, the content they teach, their teaching style and preferences, the setting in which they teach, and so on. Overly prescribing how teaching and learning must take place, as the child-centered reforms did, or failing to account for the supremely contextual nature of the craft of teaching, as the educational objectives movement did, has perhaps contributed to the failure of previous K-12 educator assessment literacy reforms.
Conclusion
Returning to our original question, “Why have many (all?) approaches to improve K-12 educator assessment literacy failed to produce significant changes in teacher practices at scale?” I offer three summary considerations.
First, perhaps education reformers have focused too much on assessment literacy as a set of discrete knowledge and skills rather than as intellectual tools that require capacity building around assessment and assessment system principles.
Second, perhaps education reformers have failed to recognize the “supremely contextual” nature of teaching and learning and the ways teachers resist reforms that superimpose rules on the craft of teaching.
Third, perhaps education reformers have focused on scaling in the sense of adoption and replication rather than scaling as adaptation, which is better suited to the first two summary considerations. Shifting the focus to adaptation may allow us to better conceive of the conditions that must be in place for professional development to be successful and what that success looks like in different settings.