We Make Assessment Literacy Too Hard!
The Lessons of Behavioral Economics Can Help
I’m a big fan of behavioral economics, the field created by psychologists Daniel Kahneman and Amos Tversky and popularized within economics by Richard Thaler. I’ve learned a lot from reading their books and articles about how we think and behave. And I can’t stop thinking about how their insights apply to our work in assessment and accountability.
Sadly, Daniel Kahneman died earlier this year. This sparked many articles and podcasts about his life and work. One was an interview with Richard Thaler, in which he was asked to sum up behavioral economics. This was his response:
“Humans aren’t stupid! Life is hard!”
It was a bit of a eureka moment for me. This is the issue I’ve been wrestling with for 30 years when it comes to assessment literacy.
Our Irrational Choices
To clarify what I mean, let me back up and explain a bit about behavioral economics. Before Tversky, Kahneman, Thaler, and others, economists and economic models assumed that humans were rational beings and would make sensible choices when faced with decisions. Behavioral economists recognized that humans do not always act as those models might predict. Our histories, psychologies, cultures, beliefs, and many other influences keep us from acting like economic robots. A few examples might help.
The phrase “throwing good money after bad” is another way to describe what Kahneman and Tversky termed the sunk-cost fallacy. Many of us have experienced it when trying to keep an old car running longer than we should because we’ve already invested so much money in it. I fall into the same trap when a tempting chocolate dessert is on the menu. Even though I might be full after a few enjoyable bites, I will likely keep eating because I’ve already paid for the dessert (the sunk cost). But my pleasure soon turns to discomfort.
A more striking example comes from Richard Thaler’s work developing nudges to increase participation in 401(k) plans. Many people are fortunate to work for companies that offer generous matching contributions. Before Thaler and others came along, most plans required employees to opt in. The thinking was that rational humans would, of course, take advantage of this free money. But they didn’t!
The Power of the Default Opt-in
When employees had to check a box (or something like that) to enroll and let their employer contribute to their 401(k) plan, fewer than 30 percent took advantage of the program. The other 70-plus percent were “irrationally” leaving that money on the table. This changed when employers implemented a simple nudge: They enrolled employees in these programs by default. Employees could certainly opt out, but they had to make an active choice to do so. Guess what? With the default in place, enrollment in these plans typically approaches 95 percent.
I had read about these results years ago, but when I listened recently to Thaler recounting them in that interview, it hit me. We treat assessment literacy like the opt-in option to 401(k) plans. We make it hard, and we adopt a deficit mindset toward teachers and other assessment users.
I’ve lost count of the times I’ve heard measurement professionals say, “An assessment course should be required for all pre-service teachers.” Yes, learning more about assessment is great, but that puts all the burden on teachers and leaders to learn what we know. We must make interpreting and using assessment results much easier, like the default opt-in option for 401(k)s.
Making assessment literacy easier, though, does not mean dumbing it down. My colleague Caroline Wylie wrote a terrific pair of blogs explaining why common approaches to assessment literacy, which break complex topics into short video clips or otherwise oversimplify them, amount to “cruel optimism.” Yes, we need to make assessment literacy easier, but we cannot make it so simplistic that it becomes useless, nor can we hand teachers these bite-sized tutorials and imagine that’s sufficient.
Consider test score reports from state or interim assessments. They are often designed by measurement experts who try to pack as much information as possible into the report. I’m happy to see more teachers and other users of this test information involved in the report-design process. However, we still have a long way to go to maximize the likelihood that educators and other users interpret student assessment information appropriately and use it to make productive educational decisions.
Helping Interpret the Data
If we apply Thaler’s thinking, we would offer some “default” interpretations based on the data in the report instead of just presenting teachers with tables of numbers, even ones dressed up with nice graphics. Expecting teachers to wrestle alone with test data, perhaps bringing in other supporting or conflicting information, and then to interpret and act on that information accurately is a form of cruel optimism, and an even heavier lift than getting employees to actively opt in to a 401(k) plan.
For example, curriculum and assessment experts could identify common—and even some uncommon—patterns of results, provide easy-to-interpret narrative descriptions of students’ performance, and offer suggestions for potential next curricular and instructional actions.
I know some interim and summative assessment providers try to do things like this now. Unfortunately, they usually do so without situating their suggestions in the actual curriculum students are experiencing. These curriculum-agnostic approaches might have some benefit, but as Carla Evans and I documented in our recent book, they offer limited support for instruction and learning.
It’s unlikely that we’ll see large-scale and interim assessments more closely connected to the enacted curriculum. But I am hopeful that we can capitalize on the power of artificial intelligence to provide accurate and actionable interpretations of student assessment results. This should be done in ways that support teachers’ learning about their students, their discipline, and assessment literacy.
Teachers must be involved in evaluating the veracity of the interpretations and recommended uses. Having teachers in that loop should increase the likelihood that we can design systems that improve their knowledge and skills as they interact with them.
What We Can Do Right Now
These ideas are, I hope, within reach in the near future. But there’s plenty we can do right now. We should design all reports and related materials with communications experts close at hand. In fact, they should lead the process, with assessment and content experts brought in as resources. This way, we’d be more apt to design reporting systems with the user in mind and talk about them in ways people can understand.
Even then, we must test any potential report designs with some type of cognitive-laboratory methodology. These approaches generally ask the user or examinee to think aloud as they navigate the report, which gives designers insight into how users make sense of the information and where they struggle.
Far too often, we’re guilty of bringing a deficit mindset to assessment literacy. Rather than seeing our own failures to provide actionable information, we believe that teachers, leaders, and other users lack important knowledge and skills. This belief can lead to the “cruel optimism” of our current assessment literacy practices.
It’s unfair to expect all teachers to become assessment experts, particularly when assessments are far removed from their day-to-day instruction. It’s even worse to assume that teachers are the source of the “problem with assessment literacy.” To put that idea in Thaler’s terms:
Teachers aren’t stupid! Assessment literacy is hard!
We need to learn the lessons of behavioral economics to better support teachers and their students. Hopefully, we’ll soon be able to say, “Assessment literacy is hard, but teachers now have the tools to make smart decisions about student learning.”