Why scaling "what works" usually doesn't work for foundational literacy programs

Data on the foundational learning programs that achieve the best results don't fully explain their success. Applying a behavioral science lens sheds light on what makes the difference.

July 31, 2023 by Simon King, School-to-School International, and Amber Gove, RTI International
4 minute read
Class 10 students at Shree Krishna Ratna School in Chautara, Sindhupalchowk District, Nepal. June 2019
Credit: GPE / Kelley Lynch

According to Nobel Laureate Daniel Kahneman, it is human nature to want to improve by “trying a little harder.” For programs aiming to improve foundational literacy, this might translate into adding more teacher training, redesigning textbooks or applying new types of assessment. However, we often discover that adding more program components does not translate into significantly improved learning outcomes.

Why is this? Viewing the challenge through a behavioral science lens yields insights that traditional research and evaluation do not. As we highlighted in this post, understanding how best to improve learning outcomes is all the more important in the wake of widespread learning losses stemming from COVID-related school closures.

A traditional impact evaluation presents a generalized impact between two time points: we learn the average effect an education intervention has on student learning outcomes by comparing the gains in program schools to those of control schools. Typically, we collect additional information to explain why there was impact (or not).
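To make that generalized view concrete, here is a minimal sketch in Python. It assumes a hypothetical dataset with one row per student: a school identifier, a program/control flag, and oral reading fluency scores at baseline and endline. The file and column names are illustrative, not from the actual Nepal dataset.

import pandas as pd

# Hypothetical student-level data: school_id, group ("program" or
# "control"), and correct words per minute (cwpm) at two time points.
df = pd.read_csv("assessments.csv")
df["gain"] = df["cwpm_endline"] - df["cwpm_baseline"]

# The single number a traditional evaluation reports: the average
# gain in program schools versus control schools.
print(df.groupby("group")["gain"].mean())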

However, a simple reshaping of these data leads us to ask very different questions.

An example from Nepal

If we take the average learning outcome at two time points for each program school and order the schools from greatest to least gain, we see very different outcomes from school to school.

[Figure: Impact on learning outcomes by school, USAID Nepal Early Grade Reading Program, 2016 to 2018]

The figure above shows the impact on learning outcomes by school for the USAID Nepal Early Grade Reading Program between 2016 and 2018. The school with the greatest impact increased the average number of words read per minute by 32; at the lowest-performing school, scores decreased by 13 words per minute.

The Nepal program had a significant impact on learning outcomes in 2018: the average third-grade student in a program school gained 8.3 correct words per minute, while students in non-program schools lost 2.4 words per minute. However, the graphic above shows that just 15% of the schools contributed 80% of that generalized impact.
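Continuing the sketch above (again with hypothetical file and column names), the reshaping itself is a small computation: average the gain per school, sort the schools from greatest to least, and measure how concentrated the total gain is. On data shaped like the Nepal results, a computation like this is what surfaces the 15%-of-schools, 80%-of-impact pattern.

import pandas as pd

df = pd.read_csv("assessments.csv")  # same hypothetical file as above
df["gain"] = df["cwpm_endline"] - df["cwpm_baseline"]

# Average gain per program school, ordered from greatest to least impact.
school_gains = (
    df[df["group"] == "program"]
    .groupby("school_id")["gain"]
    .mean()
    .sort_values(ascending=False)
)

# Share of schools needed to account for 80% of the total gain,
# counting only schools with positive gains.
positive = school_gains.clip(lower=0)
cum_share = positive.cumsum() / positive.sum()
n_top = int((cum_share < 0.80).sum()) + 1
print(f"{n_top / len(school_gains):.0%} of schools account for 80% of the total gain")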

If we can replicate these schools' success, can we increase overall program impact on learning outcomes?

Findings when applying a behavioral science lens

Applying a behavioral science framework, the Nepal program studied the characteristics that set these high-performing schools apart. It found that in these schools, particular individuals, such as teachers, head teachers and community leaders, were making the difference. These individuals had personality characteristics aligned with Rogers' concept of "early implementers": rationality, empathy, good communication and tolerance for uncertainty.

This observation is why scaling "what works" usually does not work: personality traits cannot simply be transferred to individuals in other schools.

So, if replicating success is challenging, what are the alternatives?

Daniel Kahneman says we create positive behavior change by diminishing the restraining forces, that is, by removing the barriers that hold people back from doing what is asked of them. To scale a foundational literacy program, we need to look at where the program has had little or no impact and identify the barriers that impede positive change.

Our immediate response might be to point to large class sizes, teacher experience, student characteristics or other factors that are typically beyond a program's scope to change. While these issues are serious and should be addressed, we also know that some of our program schools found success in similar circumstances.

To really make a difference in improving learning outcomes for all students, we must understand how humans respond to change and apply that understanding to education programming. There is no convenient, universal model for how teachers react to change: responses vary with individual personality characteristics and environmental influences, and we also look to our peers and influencers to see how they adapt.

Most educational change is mandated, and when anything is mandated, we respond in different ways. We might embrace the change, resist it or adapt it to make it work for us, trying just a little harder than we had been before the change was mandated.

Research tells us that teacher resistance to implementing these programs is usually not the issue. What tends to happen instead is that many teachers adapt the programming in different, and typically ineffective, ways to make the new curriculum and pedagogical approaches more familiar and less mentally demanding to implement.

These are predictable behaviors that we should factor into educational program design as we seek to help education systems recover from COVID-related learning losses.
