Date: July 13, 2021

#EvalTuesdayTip #3: Piloting/Pre-testing the Tools

By: Zamokuhle Thwala, David Ndou, and Leticia Taimo

Over the years, Khulisa has developed many tools for our evaluations. This development process often involves a number of steps before a tool is validated and ready to be used in the field. Last week, we took you through how to think about tools and indicators for data collection and how to administer them.

This week, we will take you through piloting/pre-testing those tools.

Once you have designed and set up your tools (paper-based or digital), you need to pilot/pre-test them. It is always tempting to skip the pilot/pre-test to save time, but that is a false economy. A pilot/pre-test will actually save time, because it surfaces hiccups and misunderstandings in the data collection process before fieldwork begins. It is also an opportunity to test your analysis plan. The pilot/pre-test answers these questions:

  • Are you actually getting analyzable data?
  • Are the questions and answer options clear?
  • Are there floor or ceiling effects? (See the sketch after this list.)
  • Will the data assist in answering the evaluation questions?
  • Is the tool too long or irrelevant, such that the respondent loses attention or focus?
  • Can the fieldworkers collect the data in the time allotted?
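
On floor and ceiling effects specifically, a quick mechanical check is to look at what share of respondents score at the minimum or maximum of a task. Below is a minimal Python sketch; the scores, scale, and 15% threshold are hypothetical illustrations, not Khulisa's actual data or analysis:

```python
# Minimal floor/ceiling check on pilot assessment scores.
# Hypothetical data: per-learner scores on a task scored 0-20.
scores = [0, 0, 1, 3, 0, 2, 5, 0, 1, 4, 0, 2, 0, 6, 1]
min_score, max_score = 0, 20

floor_share = sum(s == min_score for s in scores) / len(scores)
ceiling_share = sum(s == max_score for s in scores) / len(scores)

# Rule of thumb (a judgment call, not a fixed standard): flag an
# item when more than ~15% of respondents sit at either extreme.
THRESHOLD = 0.15
print(f"floor: {floor_share:.0%}, ceiling: {ceiling_share:.0%}")
if floor_share > THRESHOLD:
    print("Possible floor effect: the task may be too difficult.")
if ceiling_share > THRESHOLD:
    print("Possible ceiling effect: the task may be too easy.")
```

A floor effect of this kind is exactly the signal that surfaced in Pilot 1 below, where the first set of passages proved too difficult for learners.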

Consider how many pilots you need to finalize the data collection tools and collect quality data. For Khulisa’s current impact evaluation in the North West province of South Africa, we conducted three pilots of learner assessments and contextual tools to be administered in schools. These tools were refined after each pilot, based on: 1) feedback from fieldworkers; 2) observations of the fieldwork process; and 3) analysis of the pilot data.

These three sources answer questions such as:

  • Are there any misspellings?
  • Are the questions understood by respondents? Are they clear?
  • Are the questions culturally sensitive?
  • If translated, is the translation clear and appropriate?
  • Does the sequence of the questions flow?
  • Do the questions elicit the information we need?
  • How long does it take to administer each tool? Are we able to administer all tools within the expected timeframe?
  • Are we missing any critical information?

If the tools are on a mobile data collection platform, there are additional questions:

  • Are the correct skip patterns for questions showing up? (See the sketch after this list.)
  • Are the questions in the digital questionnaire the same as in the paper version?
  • Do the results upload to the server as expected?
  • Do the fieldworkers understand how to administer the tools correctly (e.g., do they avoid giving the respondent hints)?
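
Skip patterns in particular lend themselves to a mechanical check once pilot submissions are exported from the platform. Here is a minimal sketch under one hypothetical rule (if a learner answers "no" to q5, the follow-up q5a should be blank); the field names and records are illustrative, not from Khulisa's actual tools:

```python
# Verify one hypothetical skip rule against exported pilot records:
# if a respondent answers "no" to q5, the follow-up q5a must be blank.
records = [
    {"id": "L001", "q5": "yes", "q5a": "reads at home"},
    {"id": "L002", "q5": "no", "q5a": ""},
    {"id": "L003", "q5": "no", "q5a": "unexpected answer"},  # violation
]

violations = [
    r["id"] for r in records
    if r["q5"] == "no" and r["q5a"].strip()
]

if violations:
    print(f"Skip-pattern violations in records: {violations}")
else:
    print("All records respect the q5 -> q5a skip pattern.")
```

The same pattern extends to any relevance rule the digital form encodes; running such checks over every pilot submission catches configuration errors before fieldwork begins.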

The answers to these questions inform the changes made to the tools.

When planning a pilot, ensure that the pilot population is truly representative of the future evaluation respondents. In our case, we ensured that the pilot schools had characteristics similar to the evaluation sample schools. Because the evaluation is designed to assist the government in establishing reading benchmarks in Setswana and in English as a First Additional Language (EFAL), the tools needed to be tested with a range of learners with different reading abilities.

Khulisa’s pilot process went as follows:

  1. Pilot 1: a very small sample of three schools in Gauteng Province, with 40 learners per school, identified in conjunction with the government and conducted by Khulisa evaluation team members. This pilot showed that the passages intended for the learner assessments were too difficult and that tasks were taking too long to complete. It also helped us understand which types of COVID-19 wellbeing-related questions worked well with the learners and which did not. The evaluators identified which passages would work best for the assessments and which questions to add to the learner COVID-19 questionnaire.
  2. Pilot 2: six schools with 45 learners per school, identified in conjunction with the government, conducted in North West Province by Khulisa evaluation fieldwork team supervisors. This pilot helped the evaluation team see how the two passage options (Option A and Option B) performed, how the Setswana passages worked with the dialects spoken in that province, and how all contextual tools functioned. During the pilot, the evaluation team found that there was insufficient time to complete all the required tools with a team of two fieldworkers; we adapted by adding a third fieldworker to each team to ensure that all tools could be completed. The pilot also helped the evaluators select the passages for the final tools, decide how the tasks should be presented to learners, and identify which items to prioritize in the contextual tools given the time constraints in schools, which a third fieldworker would not completely resolve.
  3. Pilot 3: conducted in the same schools and with the same learners as Pilot 2[1], in North West Province, by Khulisa evaluation fieldwork team supervisors. This pilot confirmed that the changes made after Pilot 2 were implementable and effective. Only minor additional issues were identified.

Overall, these pilots and the associated statistical analysis of the results mean that the learner assessments are ready to be administered to 10,298 learners and that all other tools are ready to collect contextual data from 229 schools.
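
The post does not detail which statistics were run, but one standard component of such an analysis is an internal-consistency reliability check, for example Cronbach's alpha across assessment items. A minimal sketch with hypothetical item scores (not Khulisa's data):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of per-item score lists
    (each inner list holds one item's scores across learners)."""
    k = len(item_scores)
    # Total score per learner across all items.
    totals = [sum(learner) for learner in zip(*item_scores)]
    item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical pilot data: 4 items scored for 6 learners.
items = [
    [2, 3, 3, 1, 2, 3],
    [1, 3, 2, 1, 2, 3],
    [2, 2, 3, 1, 1, 3],
    [1, 3, 3, 2, 2, 2],
]
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```

Values around 0.7 or higher are commonly treated as acceptable internal consistency, though the appropriate cutoff depends on the purpose of the assessment.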

Even though we have gone through this extensive pilot process, we run one more pre-test during fieldworker/data collector training. During training, we are able to pick up further mistakes, understand the context, and ensure that the tools we are using will yield the intended results.

During fieldworker training, we schedule a simulation day on Day 4 of the 5-day training. In the first three days, we train all fieldworkers on all tools and provide opportunities for role-plays and for practicing the tools. On the simulation day, fieldworkers go into a school to administer the tools. The simulation day has a number of benefits: fieldworkers gain an idea of what a day of data collection entails, familiarize themselves with using the tablet or paper forms, and get comfortable with the tools. It also allows the evaluation team to detect edits that need to be made to the paper or software versions of the tools.

We always schedule a week between training and the actual pilot or fieldwork, during which we can make any final changes to the tools and send them to print or upload them to the required software.

Over the years, Khulisa has learned the importance of piloting tools to determine their validity and reliability. In other words, during the piloting phase, we evaluate whether the tools actually measure what they are intended to measure and whether they yield consistent results. In the final #EvalTuesdayTip of this series, we unpack how to incorporate feedback loops throughout the entire tool development process and how to analyze data to make informed decisions on all changes made to the tools.


[1] One school from Pilot 2 was unavailable during Pilot 3, so the tools were tested in a replacement school with characteristics similar to the original school, identified by the government.
