Towards Nuts and Bolts for a Data Revolution
We need to address the lack of education data more vigorously. Here are a few concrete suggestions about what to do and maybe not do to help developing countries get this "data revolution" right.
August 21, 2014 by Luis Crouch, RTI International

I recently read a blog post by Jean-Marc Bernard, “Data Are Not Just to Please Statisticians”, in which he makes a strong case for getting the “data revolution” done.

I was intrigued by many good points he makes and strongly agree that we need to address the lack of education data more vigorously. Let me add a few concrete suggestions about what to do and maybe not do.

I’ll do it in the two main areas he suggests: household surveys and learning assessments.

On household surveys:

Many countries already do surveys, but the data are not always used. So we need to make sure the data can help resolve controversial issues. For example, we need to find out why children don’t complete primary school. Is it because they drop out, or because they never start school in the first place? We should create public debate around the whole issue of a “crisis in the foundation years”, including early childhood development.

We should generate less data per survey, but do surveys more often. It would also be useful to include questions on children with disabilities, on school safety, on the interaction between poverty, ethnicity, and gender, and so forth. We could add elements or rotate them as we go along, rather than trying to capture everything at once. Some of the household surveys RTI has done are massive, and even I wonder whether all those data get used.

Surveys need to detect overall progress. Sub-national data are useful for local planning purposes and should be managed locally. But what we really need is a picture of overall progress, especially for groups at sociological disadvantage. This requires reasonable sample sizes, but not of the size needed for sub-national surveys.

Part of the limitation is cost. While Jean-Marc’s estimate seems about right, we need to work harder to optimize costs in general. We should convene the experts who have done education-oriented household surveys and look for ways to lower the cost per survey so we can do more of them.

Surveys need to be more frequent. Some countries change quickly, and doing a survey every 5-10 years is not enough. Large surveys could be done less frequently, but smaller, cheaper, more specialized ones should be done more often.

More emphasis on capacity building. We need more local capacity in data collection, data analysis, and the use of data for more rational decision-making.

Innovate with the use of technology. Mobile phone surveys, for example, have limitations (but so do face-to-face surveys), yet they can be remarkably fast and inexpensive and can produce automatic tabulation. While not every household has a mobile phone, we can still take advantage of their speed and low cost. We could also integrate GIS technology into the survey design, implementation, and even analysis stages.

Linking to Education Management Information Systems (EMIS) and joining forces is key. EMIS are the standard administrative systems that gather data on enrollments and the like directly from schools. EMIS staff can learn to use surveys, and EMIS can help develop the approaches to sampling.

Repository of data: We need to develop and maintain a repository of information about methodology and results for each survey.

The last couple of points bring up a hugely important issue: what is the institutional base of such surveys? Do we need a loose partnership or, as Jean-Marc suggests, some kind of network, panel, or task force of experts? Ownership and leadership are important, as successful surveys such as the Multiple Indicator Cluster Surveys (MICS), done by UNICEF, and the Demographic and Health Surveys (DHS), funded by USAID, have shown. While a loose network or panel of experts can provide technical support, real action probably requires funding, institutionalization, and leadership. How this is to be sorted out remains an open question.

On learning assessments:

Assessments are key for global analysis and comparison. Learning assessments alone are not going to fix the problem, but it is obvious that without them we won’t know much. We need to encourage local measurement, but experience around the world suggests that local experts derive a lot of support and skill from participating in internationally comparable, rigorously constructed assessments.

Common metrics are useful. As noted by Jean-Marc, ACER, in Australia, has proposed methods and approaches whereby one could have common constructs and metrics, even if actual assessments are carried out by different institutions.

Assessments help to improve teaching and learning. While international comparisons are useful, the real purpose of assessments is to support teachers. Consider Latin America: countries there do increasingly use assessments to drive classroom practice, teacher coaching, and support, but it was a long process getting there. The actual “transmission belt” between assessment and teaching and learning did not receive sufficient attention at the beginning. Hence, it is important to create successful examples of how assessments can be used; otherwise they may wither or even regress.

It will also be important to avoid a situation where assessments create a “closed loop of measurement” and very narrow forms of support, such as demand for specific textbooks and techniques that may help improve performance on the assessments. We should avoid letting assessments be used to create a market for books that just help “answer” the questions in the assessment, and there should not be markets for specific forms of teacher training either.

Overall, I think it would be worthwhile to support Jean-Marc’s call on both topics. Maybe we should start by getting an expert group together to develop concept notes that can be floated to funders, and to coordinate with initiatives already under way in these areas.

Comments

Thanks to you both, Luis and Jean-Marc, for your clear and cogent thoughts on data and the data revolution in education. Luis, I want to focus my comment on your conclusion that "...the real purpose of assessments is to support teachers" and, presumptuously, complete your thought beyond the period (".") by proposing to add "... to elevate the learning and outcomes of all of their students." Let me further mention two other points on which I am confident that we agree: that student learning does not depend just on teachers and the classroom, and that not all student learning is captured by formal assessments.

I focus on these to make two points of my own. One, the notion of separating out Household Surveys and Student Assessments silos the data in ways that risk making action at both the system and the classroom level less relevant and therefore less effective. Your "transmission belt" also connects the school with the community and families.

Second, I worry about your statement that "... what we really need is a picture of overall progress, especially for groups at sociological disadvantage." For a system to obtain "overall progress," teachers and others at the micro level need to obtain individual progress one student at a time. I just listened to an old TED Talk by Nina Tandon, an electrical and biomedical engineer at Columbia University, that offers a rich metaphor. Talking about prescribing drug treatments for cancer and other illnesses, she says that "In the past we’ve just taken a sort of statistical approach. We say, 'okay, 50% of people with this condition respond to this drug, so let’s prescribe it to 100% of the people....' if you consider that everyone’s cancer is different, those kind of statistical methods don’t work as well. They just don’t." Isn't education like that? System data are fine, but they can both look widely different in the hands of different teachers and mean very different things for how different teachers react to them and act.

While I agree that there are big issues for which broad-brush big data solutions may be useful, I fear that the revolution is leading us to exaggerate dangerously how many such big issues there truly are. In addition, by taking systemic views of certain phenomena - e.g., ICT in education - we may come to conclusions that are very different from those we would take by studying such phenomena through an individualistic lens. Finally (for now), the system approach to data, data analysis and data use naturally begets systemic, homogeneous solutions, while what we need to get "true progress" are individualistic, heterogeneous solutions that each teacher in each classroom and school can use for each student.

This is not an argument against "big data." It is an argument to look seriously at the balance between "big data" and "small data," with a strong suggestion that we are heading precipitously and perilously to an ever greater disequilibrium in favor of the big.

In reply to Joshua Muskin

Great comments, Josh. All I'd add in reply is that if / when one decides to get really serious about these things, I hope that the nuances you suggest get fully discussed. I think in particular the balance between the homogeneous and heterogeneous needs to be aired out. Certainly, we all make some assumption of *some* homogeneity, or else one could not even talk about any concept such as "education" or "schools," I suppose. But on the other hand it is a mistake to let the need for classification and categorization get away from one, and lead to thinking that just because one uses the concept "schools," they are all the same.

In reply to Luis

I perfectly agree, Luis.
