Date: February 17, 2020

True or false? One of the easiest ways to measure the effectiveness of a development program or behavior change intervention is through the number of people reached by the program or intervention.




If you’ve answered ‘true’ to this question, you are one of many who underestimate the complexity of measuring reach.


Reach is often defined as ‘the number of people trained’, the ‘number of people who attended a conference’ or the ‘number of people whose awareness has increased’. Sometimes it becomes even more complex, with those numbers augmented by indirect measures. For example, ‘30 teachers were trained and on average each has a class of 30 pupils, therefore 900 pupils were reached by the program’.
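To make that arithmetic explicit, here is a minimal sketch of how such an indirect estimate is typically built up; the figures and variable names are purely illustrative, not from any particular program:

```python
# Minimal sketch of an indirect reach estimate (all figures are illustrative assumptions).
teachers_trained = 30            # direct beneficiaries: a number you can actually count
assumed_pupils_per_class = 30    # an assumption, not an observed headcount

estimated_pupils_reached = teachers_trained * assumed_pupils_per_class
print(f"Estimated indirect reach: {estimated_pupils_reached} pupils")  # 900

# The estimate moves one-for-one with the assumption:
# at 25 pupils per class it falls to 750; at 40 it rises to 1,200.
```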


Although this kind of calculation sounds accurate, Khulisa’s evaluators often come across programs where the rigor and generalizability of data are compromised because program implementers did not clearly define reach.



For example, when implementers refer to the ‘number of people trained’, do they mean those who registered, enrolled for the first day, or completed the training? If there was a testing component, are implementers referring only to individuals who passed?


Unless one defines reach upfront, it can be difficult to determine afterwards.


Once there is agreement on the direct or indirect measures used to determine reach to primary and secondary beneficiaries, the next question is: ‘Are these real or estimated numbers, and are you using an evidence-based formula to calculate them?’


Taking our example of the teachers (primary beneficiaries) and pupils (secondary beneficiaries), do you use government school and class statistics? Or the regulations (which may say that there can be no more than 30 pupils per class)? Or teachers’ self-reported numbers of pupils? Or do you send a fieldworker out to count the pupils?


In a recent Khulisa education evaluation, a teacher reported pupil numbers using the norms and standards figure of 30 pupils per class, but during a classroom observation for the evaluation we counted 62 pupils in that one class.


Once reach is defined, the next difficulty is agreeing on the source documentation. What evidence must be kept in order to prove reach?


If it is an attendance register, for example, how is the data captured? What data quality issues need to be prevented, such as double counting attendees or misspelled names (which can result in duplicate entries)? And where will hard-copy registers be safeguarded so that they are not lost?
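As a rough illustration of the kind of automated check that can surface these problems, the sketch below flags exact and near-duplicate names in a captured register. The names and the similarity threshold are assumptions made for this example only:

```python
from difflib import SequenceMatcher

# Hypothetical names captured from a hard-copy attendance register.
captured_names = ["Thandi Dlamini", "Thandi Dhlamini", "Sipho Nkosi",
                  "Sipho Nkosi", "Lerato Mokoena"]

def normalise(name: str) -> str:
    """Lower-case and collapse whitespace so trivial differences don't hide duplicates."""
    return " ".join(name.lower().split())

names = [normalise(n) for n in captured_names]
flagged = []
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        similarity = SequenceMatcher(None, names[i], names[j]).ratio()
        if similarity >= 0.9:  # threshold is an assumption; tune it to your own data
            flagged.append((captured_names[i], captured_names[j], round(similarity, 2)))

for a, b, score in flagged:
    print(f"Possible duplicate entry: {a!r} vs {b!r} (similarity {score})")
```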


With these questions in mind, we drafted five steps that can help implementers overcome some of the difficulties in measuring reach:

1. Define upfront what you mean by reach, and stick consistently to that one measurement approach throughout the program.


Indicator protocols are a good way to define what is meant by reach, guide how data is collected, and ensure enough detail for improved data quality. An indicator protocol template is available from www.measureevaluation.org.

2. Be clear about the secondary beneficiaries your program is reaching and agree on the approach and formula that you’ll use to consistently measure this across implementation sites.


In a teacher training intervention, for example, secondary beneficiaries may include the learners, the learners’ parents or the school’s leadership. Be very specific about these definitions and who is included and excluded. Is a ‘parent’ defined as the mother, father or guardian? Are stepparents or child-headed households excluded? Also decide on a standard approach or formula. It’s important that secondary beneficiaries are defined in your indicator protocol.
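One way to keep that definition and formula consistent across sites is to write both down in a form that leaves no room for interpretation. The sketch below does this in code, with hypothetical inclusion rules and a simple ‘sum of verified learner headcounts’ formula; none of the field names or rules come from a real protocol:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, simplified definition of secondary beneficiaries for a teacher
# training program. In practice this lives in the indicator protocol document.
SECONDARY_BENEFICIARY_DEFINITION = {
    "included": ["learners taught by a trained teacher",
                 "guardians of those learners (mother, father or legal guardian)"],
    "excluded": ["school leadership", "learners of teachers who did not complete training"],
}

@dataclass
class TrainedTeacher:
    name: str
    learners_verified: int  # verified headcount, not the norms-and-standards figure

def secondary_reach(teachers: List[TrainedTeacher]) -> int:
    """The agreed formula, applied identically at every site: sum of verified headcounts."""
    return sum(t.learners_verified for t in teachers)

# Example with made-up figures:
sites = [TrainedTeacher("Teacher A", 62), TrainedTeacher("Teacher B", 28)]
print(secondary_reach(sites))  # 90
```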

3. Put in place primary data collection monitoring tools and systems that support your definition.


If you’re using an attendance register, for example, do the fields speak to the data you want to collect? If there are three columns to sign for each day of training, how do you prevent people from signing retrospectively (‘back-signing’)? Ensure that your primary data collection tool speaks to your definition of reach. (Better Evaluation provides resources on various primary data collection methods).
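For teams that capture registers digitally, one minimal way to make back-signing visible is to record when each entry was captured alongside the training day it claims to cover. The sketch below assumes a simple in-memory register and hypothetical field names:

```python
from datetime import date

register = []  # hypothetical digital attendance register

def sign_in(attendee_id: str, training_day: date) -> None:
    """Record attendance, flagging entries captured on a different day than they claim."""
    captured_on = date.today()
    register.append({
        "attendee_id": attendee_id,
        "training_day": training_day.isoformat(),
        "captured_on": captured_on.isoformat(),
        "captured_late": captured_on != training_day,  # flag back-signing rather than silently accept it
    })

sign_in("T-017", date.today())
print(register[-1])
```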

4. Provide data quality training for data collectors, data entry staff and others.


Do the data capturers have a shared understanding of the data they are collecting before the start of the program? If you have different people implementing the program, ensure data consistency by training them in the definitions and data audit trails that you’ll use to justify the validity and reliability of the data.


Khulisa presented a workshop on this topic, entitled “Fundamentals of data collection – What I wish I had known at the start!”, at the South African Monitoring and Evaluation Association (SAMEA) Conference in Johannesburg in October 2019.

5. Accept that reach is complex – from planning to dissemination of findings.


When you’re starting the dialogue with stakeholders and users of the evaluation, ensure that you have a shared understanding of the definitions you have selected upfront, and confirm that the specific definitions are useful for end-users to improve program effectiveness and impact.


