Evidence

Africa has made unprecedented gains in school enrolment in recent years. In 1999, only 59% of primary school-age children were enrolled in school in Sub-Saharan Africa. By 2016, 80% were enrolled.1

Due to the tremendous efforts of governments, local communities, non-profit organisations, and the international community, school enrolment rates in Africa are converging on universal primary enrolment. Yet learning levels remain low: in 2017, over 80% of Grade 2 students in Ghana, India, and Malawi could not read a simple word, and over 60% of Grade 2 students in Ghana, India, and Uganda could not perform two-digit subtraction.2 J-PAL affiliated researchers have conducted over 200 evaluations in more than 40 countries to test the effectiveness of a wide variety of education programmes, with the aim of improving learning outcomes.3

Inputs alone do not improve learning outcomes

Many programmes aiming to improve the quality of education in Africa have been ineffective because they fail to address the needs of all students.

Not only is business-as-usual failing to improve learning outcomes in Africa, but many new, innovative programmes from governments and non-profits have also been unsuccessful. Because schools tend to be overcrowded and have fewer resources than in developed countries, many programmes have sought to reduce class size, add inputs such as flipcharts or textbooks, or provide schools with cash to independently purchase inputs. Unfortunately, rigorous impact evaluations from J-PAL affiliates show that interventions of this type in Kenya, Sierra Leone, Niger, and The Gambia were not effective.4–10

Teachers tend to teach to the top of the class

Despite the success in getting children to school, learning outcomes are still desperately low in many contexts.

A study which assessed the impact of textbook provision in Kenya found, like other input evaluations, no evidence that textbooks increased average test scores or reduced grade repetition or dropout rates.5 However, textbooks did benefit students who were already performing well: those in the top 40% of the class before the programme increased their test scores by between 0.14 and 0.22 standard deviations after one year, compared to a control group. Combined with several other studies, this insight helped illuminate a key reality common to many contexts: teachers teach to the top of the class – the few students who are at the level of the curriculum – while most students are left behind. Given the structure of education systems across many parts of the world, this is unsurprising. Many teachers face classes with a wide variety of learning needs, dense and ambitious curricula, and high-stakes primary leaving exams which incentivise them to move at the pace of the fastest learners.

Tailoring instruction to the level of the child improves learning outcomes

Pratham’s mission statement is: “Every child in school and learning well.” Driven by this mission, Pratham began designing and implementing programmes which provide children with basic skills by tailoring teaching to children’s learning levels.11 In 2001, J-PAL and Pratham partnered to investigate the impact of Pratham’s “Balsakhi” programme. Balsakhis (“children’s friends” – female secondary school graduates) pulled Grade 2 to 4 children who were struggling with the curriculum out of class for two hours a day to focus on basic reading and mathematics skills. The programme improved children’s learning outcomes by 0.14 standard deviations in the first year and 0.28 standard deviations in the second year.12 This was the beginning of a long learning partnership between Pratham and J-PAL, and the start of what we now know as Teaching at the Right Level (TaRL). Since then, Pratham has partnered with J-PAL affiliated professors to rigorously evaluate, adapt, and improve TaRL models which can be efficiently scaled. This process began with early proof-of-concept randomised evaluations which showed the effectiveness of TaRL, and continued with subsequent iterations of the approach to understand how best to implement it at scale with government teachers during the school day.
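Learning gains like those above are conventionally reported in standard deviations: the treatment–control difference in mean test scores, divided by the control group's standard deviation. As an illustrative sketch only – using made-up scores, not data from the Balsakhi study – the calculation looks like this:

```python
# Illustrative only: hypothetical test scores, NOT data from the Balsakhi study.
# A standardised effect size compares the treatment-control difference in mean
# scores to the spread (standard deviation) of the control group's scores.
from statistics import mean, stdev

control = [35.0, 42.0, 38.0, 51.0, 44.0, 40.0, 47.0, 39.0]    # hypothetical
treatment = [41.0, 45.0, 43.0, 55.0, 48.0, 44.0, 52.0, 42.0]  # hypothetical

# Difference in means, expressed in control-group standard deviation units.
effect_size = (mean(treatment) - mean(control)) / stdev(control)
print(round(effect_size, 2))
```

So an effect of "0.14 standard deviations" means the average treated child scored 0.14 control-group standard deviations higher than the average comparison child, a metric that allows effects to be compared across tests and contexts.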

Multiple TaRL delivery models have been tested

Through this series of evaluations, researchers discovered that targeted instruction could be successful when delivered by tutors, volunteers, and government teachers, both in school and out of school.12–16 Through this process, key programme components have been identified and strengthened. A recent paper by Banerjee et al.14 identifies two particularly strong models which work well at scale.

  • Tutor- or volunteer-led TaRL learning camps held in periodic bursts were effective in Uttar Pradesh, India, a location with relatively weak government support structures. This model involves local instructors leading TaRL activities for forty days, with supplementary support in summer camps.
  • Government teacher-led TaRL instruction throughout the school year was effective in Haryana, India, a state with relatively strong government systems. This intervention included a dedicated time for TaRL during the school day and support for teachers through strong mentoring and monitoring.

Impact of programmes targeting instruction to the level of the child

In addition to the extensive evidence from India, rigorous evaluations from Ghana and Kenya have also found that creating homogeneous learning-level groups allows instructors to target instruction and help children learn.15–17 Evaluations of programmes involving adaptive computer-assisted learning, which adjusts to children’s current learning levels, and targeted tutoring also show positive outcomes.18–20

Interested in learning more about how researchers, policymakers, and implementers conduct research and use rigorous evidence to guide programmes and policymaking? Explore these resources:


1) The World Bank. n.d. “Adjusted net enrollment rate, primary (% of primary school age children).” Accessed September 18, 2018. https://data.worldbank.org/indicator/SE.PRM.TENR?locations=XM&view=chart&year_low_desc=false

2) World Bank. 2018. “World Development Report 2018: Learning to Realize Education’s Promise.” Washington, DC: World Bank.

3) The Abdul Latif Jameel Poverty Action Lab. n.d. “Evaluations.” Accessed September 18, 2018. https://www.povertyactionlab.org/evaluations?f[0]=field_themes:2

4) Glewwe, Paul, Michael Kremer, Sylvie Moulin, and Eric Zitzewitz. 2004. “Retrospective vs. Prospective Analyses of School Inputs: The Case of Flip Charts in Kenya.” Journal of Development Economics 74: 251–268.

5) Glewwe, Paul, Michael Kremer, and Sylvie Moulin. 2009. “Many Children Left Behind? Textbooks and Test Scores in Kenya.” American Economic Journal: Applied Economics 1(1): 112–135.

6) Sabarwal, Shwetlena, David K. Evans, and Anastasia Marshak. 2014. “The Permanent Input Hypothesis: The Case of Textbooks and (No) Student Learning in Sierra Leone.” Washington, DC: World Bank.

7) Barrera-Osorio, Felipe, and Leigh L. Linden. 2009. “The Use and Misuse of Computers in Education: Evidence from a Randomized Controlled Trial of a Language Arts Program.” Working Paper, Columbia University.

8) Beasley, Elizabeth, and Elise Huillery. 2016. “Willing but Unable? Short-Term Experimental Evidence on Parent Empowerment and School Quality.” The World Bank Economic Review: lhv064.

9) Blimpo, Moussa Pouguinimpo, David Evans, and Nathalie Lahire. 2015. “Parental Human Capital and Effective School Management: Evidence from The Gambia.” Policy Research Working Paper No. 7238. Washington, DC: World Bank.

10) Borkum, Evan, Fang He, and Leigh Linden. 2013. “The Effects of School Libraries on Language Skills: Evidence from a Randomized Controlled Trial in India.” Working Paper, March 2013.

11) Banerji, Rukmini, and Madhav Chavan. 2016. “Improving Literacy and Math Instruction at Scale in India’s Primary Schools: The Case of Pratham’s Read India Program.” Journal of Educational Change 17(4): 453–475.

12) Banerjee, Abhijit, Shawn Cole, Esther Duflo, and Leigh Linden. 2007. “Remedying Education: Evidence from Two Randomized Experiments in India.” The Quarterly Journal of Economics 122(3): 1235–1264.

13) Banerjee, Abhijit V., Rukmini Banerji, Esther Duflo, Rachel Glennerster, and Stuti Khemani. 2008. “Pitfalls of Participatory Programs: Evidence from a Randomized Evaluation in Education in India.” Washington, DC: World Bank.

14) Banerjee, Abhijit, Rukmini Banerji, James Berry, Esther Duflo, Harini Kannan, Shobhini Mukherji, Marc Shotland, and Michael Walton. 2016. “Mainstreaming an Effective Intervention: Evidence from Randomized Evaluations of ‘Teaching at the Right Level’ in India.” NBER Working Paper No. 22746.

15) Duflo, Esther, Pascaline Dupas, and Michael Kremer. 2011. “Peer Effects, Teacher Incentives, and the Impact of Tracking: Evidence from a Randomized Evaluation in Kenya.” American Economic Review 101(5): 1739–1774.

16) Innovations for Poverty Action. 2018. “Evaluating the Teacher Community Assistant Initiative.” Accessed July 19, 2018. https://www.poverty-action.org/study/evaluating-teacher-community-assistant-initiative-ghana

17) Duflo, Annie. 2017. “TaRL Webinar Series: Session 1.” Accessed July 24, 2018. https://www.povertyactionlab.org/sites/default/files/event/TaRL-Webinar-Session-1.pdf

18) Muralidharan, Karthik, Abhijeet Singh, and Alejandro J. Ganimian. 2016. “Disrupting Education? Experimental Evidence on Technology-Aided Instruction in India.” NBER Working Paper No. 22923.

19) Cabezas, Verónica, José I. Cuesta, and Francisco A. Gallego. 2011. “Effects of Short-Term Tutoring on Cognitive and Non-Cognitive Skills: Evidence from a Randomized Evaluation in Chile.” Santiago, Chile.

20) Fryer Jr., Roland G., and Meghan Howard Noveck. 2017. “High-Dosage Tutoring and Reading Achievement: Evidence from New York City.” NBER Working Paper No. 23792.

21) Bates, Mary Ann, and Rachel Glennerster. 2017. “The Generalizability Puzzle.” Stanford Social Innovation Review 15(3).