Date: June 27, 2022

 

We are excited to announce that long-time Khulisa evaluator Margie Roper was recently appointed to lead Khulisa’s new Education & Development Division. Margie is a technical evaluation expert with more than 25 years’ experience leading and executing monitoring and evaluation (M&E), social research, learning and development, and capacity-building and training assignments.

 

Margie Roper, leader of Khulisa’s new Education & Development Division.

 

As part of this new division, Margie and her team provide technical expertise and leadership on program development, monitoring, evaluation, and knowledge-sharing in the areas of education, human trafficking, and social development. The Education & Development Division also has expertise in performance, implementation, and impact evaluations across multiple educational and development sectors, including:

 

- early childhood education;
- formal schooling;
- adult education;
- career development;
- youth development;
- social justice;
- civic engagement;
- active citizenship; and
- community development.

 

The Division develops innovative, feasible strategies and frameworks based on evidence for improving access to, and quality of, educational and developmental programs. These strategies and frameworks help program implementers, decision-makers, and beneficiaries to take ownership of their empowerment trajectories.

 

Four Trends in Evaluating Education Interventions

 

Education evaluations have evolved quickly and significantly over the past few years, especially as the Covid-19 pandemic has transformed the educational landscape across the globe. Below, Margie shares four trends in evaluating education interventions, which she has observed through Khulisa’s recent evaluations in the education sector.

 

1. Use benchmarks and criteria.

Being an evaluator requires evaluative thinking and critical analysis skills, and an evaluator’s judgements must be based on sound criteria and benchmarks. This may not sound new. However, Khulisa’s recent educational evaluations and research have deepened our understanding of sound evaluative criteria and the best ways to measure change over time using benchmarks.

In diverse socio-economic and geographic settings, such as in South Africa, evaluation cannot assume a “one size fits all” approach. Therefore, benchmarks are critical to determining performance and creating strategies to address equity differences.

For example, the African Language and English First Additional Language benchmarks, which Khulisa helped develop in 2022 as part of the USAID PERFORMANCE contract, mark a significant step forward in our ability to determine learner achievement in the classroom. The benchmarks also enable teachers, schools, and early grade reading programme developers to respond with targeted pedagogies and materials to support learners who have not yet reached the benchmark. These benchmarks give evaluators across the country standardized criteria for determining the success of early grade reading programmes.
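To make the idea concrete, here is a minimal sketch of how a fluency benchmark can be used to classify learner performance. The threshold values and the words-correct-per-minute (wcpm) measure below are illustrative assumptions, not the actual PERFORMANCE benchmarks.

```python
# Illustrative sketch: placing learners relative to a grade-level
# reading fluency benchmark. The cut-offs are hypothetical.

BENCHMARK_WCPM = 35  # assumed benchmark: words correct per minute


def classify(wcpm: int, benchmark: int = BENCHMARK_WCPM) -> str:
    """Place a learner's score relative to the benchmark."""
    if wcpm >= benchmark:
        return "meets benchmark"
    elif wcpm >= benchmark * 0.5:
        return "approaching benchmark"
    return "below benchmark"


# Classify a small (hypothetical) cohort of learner scores.
scores = {"learner_a": 42, "learner_b": 20, "learner_c": 8}
summary = {learner: classify(s) for learner, s in scores.items()}
```

Aggregating such classifications across schools is what lets programme developers target pedagogies and materials to the learners who need them most.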

2. Move beyond single data sets; link data sets to allow for meta-analysis.

A stand-alone data set, collected as part of a specific evaluation, can be far more useful when it becomes part of a larger resource that evaluators can use for meta-analysis. The importance of data for development was the focus of the 2021 World Development Report entitled “Data for Better Lives”, highlighting how the data revolution is transforming the world.

Even at the micro-evaluation level, we can still contribute to these larger data sets. For example, data from single evaluations of early childhood development (ECD) interventions conducted for corporate social investment campaigns or foundations might be useful as part of a broader, national ECD database, such as the South African Department of Basic Education ECD Census supported by the LEGO Foundation and UNICEF.

 

Although still in their infancy, initiatives like the ECD Census provide crucial opportunities to build a national data set. Using clear linking fields is important: schools’ individual Education Management Information System (EMIS) numbers and unique learner identities were critical in linking Khulisa’s data across the five waves of the Early Grade Reading Study. This data, available in the USAID Development Data Library, allows for future meta-analysis; interventions can draw on existing secondary data sources and contribute back to them once new data sets are complete.
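The linking described above can be sketched as a simple join on shared identifiers. The field names, identifier formats, and scores below are hypothetical illustrations, not actual Early Grade Reading Study data.

```python
# Hypothetical sketch: linking two waves of evaluation data on shared
# identifiers (a school's EMIS number plus a unique learner ID).

wave_1 = [
    {"emis": "900111", "learner_id": "L001", "orf_wcpm": 18},
    {"emis": "900111", "learner_id": "L002", "orf_wcpm": 25},
]
wave_2 = [
    {"emis": "900111", "learner_id": "L001", "orf_wcpm": 31},
    {"emis": "900222", "learner_id": "L009", "orf_wcpm": 40},
]


def link_waves(a, b, keys=("emis", "learner_id")):
    """Inner-join two lists of records on the linking fields."""
    index = {tuple(r[k] for k in keys): r for r in a}
    linked = []
    for r in b:
        match = index.get(tuple(r[k] for k in keys))
        if match is not None:
            linked.append({**{k: r[k] for k in keys},
                           "wave_1_wcpm": match["orf_wcpm"],
                           "wave_2_wcpm": r["orf_wcpm"]})
    return linked


# Only learners present in both waves can be tracked over time.
linked = link_waves(wave_1, wave_2)
```

Without stable, shared identifiers such as EMIS numbers and learner IDs, records from different waves or different evaluations cannot be joined, and the opportunity for meta-analysis is lost.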

 

3. Evaluate scale for systemic education reform.

Scaling is more complex than it appears. Scaling an intervention, such as expanding a small, successful developmental intervention across a larger, more diverse geographic or cultural area, is not simply a case of replication. Every district, school, and teacher is unique, and what works in one place, or for one person, may not work in another area or for another person.
 
Khulisa’s recent Reading Support Project has given us insight into scaling pathways, the enablers and obstacles for teaching early grade reading, changing classroom practices, and sustaining those changes in the future. Program designers seeking to scale an intervention can plan to disseminate teacher and learner materials for replication, hold teacher training events, and advocate for school and district support in a new area. But this approach does not recognize difference, and program designers might wind up duplicating efforts.
 
If one plans correctly for scale and its potential challenges, then project elements such as change management and leadership, pedagogic design, dynamic evaluation, and mediating classroom practices can be adapted, embedded, and sustained across schools and educational systems.

 

4. Behavioural science is key to understanding change processes.

Change is driven and sustained by individuals. To achieve substantive change, evaluators must understand:

 

- the program pathways to change;
- how individuals (whether children or adults, individuals or communities) learn;
- the different social and cultural influences on behaviour;
- the thoughts and motivations underpinning change processes; and
- broader contextual influences.

For example, Khulisa recently conducted a study on the psychosocial effects of Covid-19 on teachers, principals, learners, and parents, and on the teaching and learning practices of these groups. The study indicated high levels of stress during the pandemic and also illustrated factors that contributed to these groups’ wellbeing and resilience. To make substantive recommendations to the client, our evaluators needed to fully understand the range of challenges and emotions that Covid-19 caused for each of the groups studied.

 

Now more than ever, classrooms function as multi-grade classes, given the considerable differences in learning experiences over the past two years and ongoing backlogs in achieving learning outcomes. Teachers need skills to provide inclusive, differentiated, and pedagogically sound learning experiences for children. Behavioural science therefore needs to be integrated into our evaluation frameworks, indicators, analysis, and evaluative judgements.

 

As Khulisa’s Education & Development Division grows and evolves, we will continue to identify and monitor new trends in education evaluation and other related fields. Keep an eye on our News, EvalTuesdayTips, and Blog sections for future updates.

