According to Nobel Laureate Daniel Kahneman, it is human nature to want to improve by “trying a little harder.” For programs aiming to improve foundational literacy, this might translate into adding more teacher training, redesigning textbooks, or applying new types of assessment. However, we often discover that adding more program components does not yield significantly better learning outcomes.
Why is this? Looking at the challenge through a behavioral science lens offers insights that traditional research and evaluation do not. As we highlighted in this post, understanding how best to improve learning outcomes is all the more important in the wake of the widespread learning losses stemming from COVID-related school closures.
A traditional impact evaluation presents a generalized impact between two time points: we learn the average effect an education intervention has on student learning outcomes by comparing the gains in program schools to those in control schools. Typically, we collect additional information to explain why there is impact (or not).
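As a rough illustration, that headline estimate boils down to a difference in mean gains between the two groups. Here is a minimal sketch in Python; the data, column names, and point values are hypothetical, not from any actual program:

```python
import pandas as pd

# Hypothetical evaluation data: one row per school, with the school's mean
# student score at baseline and endline, and a flag for program schools.
df = pd.DataFrame({
    "school":   ["A", "B", "C", "D", "E", "F"],
    "program":  [True, True, True, False, False, False],
    "baseline": [41.0, 38.5, 44.2, 40.1, 39.8, 43.0],
    "endline":  [52.3, 47.9, 50.1, 45.2, 44.0, 47.5],
})

# Gain per school, then the average gain within each group.
df["gain"] = df["endline"] - df["baseline"]
mean_gains = df.groupby("program")["gain"].mean()

# The headline impact estimate: program-school gain minus control-school gain.
impact = mean_gains[True] - mean_gains[False]
print(f"Average impact: {impact:.1f} points")
```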
However, a simple reshaping of this data leads us to ask very different questions.
An example from Nepal
If we take the average learning outcome at two time points for each program school and order the schools from greatest to least gain, we see very different outcomes by school.
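In code, this reshaping is little more than sorting schools by their individual gains. A minimal sketch, reusing the hypothetical `df` from above:

```python
# Rank program schools from greatest to least gain on the assessment,
# using the hypothetical `df` built in the earlier sketch.
by_school = (
    df[df["program"]]
      .sort_values("gain", ascending=False)
      [["school", "baseline", "endline", "gain"]]
)
print(by_school.to_string(index=False))
```

Sorting by gain surfaces the school-to-school variation that a single average impact estimate hides.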