Date: July 6, 2021

#EvalTuesdayTip: Design and set up data collection tools

By: Zamokuhle Thwala, David Ndou and Leticia Taimo

In the first #EvalTuesdayTip in this How to Develop and Validate Tools For Fieldwork series, we explained that in planning an evaluation, Khulisa first checks if primary data collection is required. If your evaluation requires primary data, you then need to consider the tools and indicators needed to answer your questions.

Follow these steps:

Write everything down. At first, it is better to have more questions/indicators than fewer. That way you can see which ones will work, which will not, and which are most relevant to answering your evaluation questions. Once the tools are piloted, the indicators can be cut down to the most appropriate, relevant and realistic ones. In some cases, your client might have pre-determined indicators that must be collected during fieldwork, and your tool development should take this into account.

The process of identifying questions/indicators involves consultations with the different stakeholders involved in the evaluation. This is crucial, as it provides different lenses and views from different contexts. We will talk more about this consultation process in a future #EvalTuesdayTip, to be published on Tuesday, 13 July 2021.

Consider the types of tools that will allow you to triangulate data. For example, an observation checklist may confirm an interviewee's responses; other options include a group or individual interview template, a questionnaire filled out by the respondent, a test, video/photos, etc.

At this stage you also decide the languages in which to administer the tools and whether translation is needed. Because Khulisa is developing language learner assessments, we have a team of two local Setswana Specialists who assist with the tool development process. However, you can hire a translation service. It is always crucial to employ two separate translation services: one to 1) translate and another to 2) back-translate into English. This ensures a high-quality translation.

While developing the tools, you also decide how you will collect the data: for example, on paper-based forms or via mobile data collection using tablets. If you choose paper-based tools, you need to plan the logistics of getting the forms back to head office, data entry and translation (if necessary), quality assurance, and storage of the paper-based forms. Khulisa tends to prefer offline tools that allow data collection in low-connectivity environments (and also save on data costs); see a handy list of offline tools here.

Enter the tools into the required software (or prepare the tools for printing if using a paper-based method). Always take time to test that the tools work in the software as planned (e.g. check that skip logic works and that all questions and response options show up), and do a test print of each tool to confirm it comes out correctly.
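To illustrate what checking skip logic involves, here is a minimal sketch in Python. This is not Khulisa's actual software, and the question names are hypothetical; it simply shows the idea that each question can carry a relevance condition, and that you can verify which questions appear for a given set of responses before going to the field.

```python
# Minimal sketch of automated skip-logic checking (hypothetical tool/questions).
# Each question may carry a "relevant" predicate; a question is shown only
# when its predicate is satisfied by the responses collected so far.

def visible_questions(questions, responses):
    """Return the names of questions that should be shown, given responses so far."""
    shown = []
    for q in questions:
        relevant = q.get("relevant")
        if relevant is None or relevant(responses):
            shown.append(q["name"])
    return shown

# Hypothetical tool: the follow-up question should only appear when the
# respondent answered "yes" to the screening question.
tool = [
    {"name": "attended_training"},  # yes/no screening question
    {"name": "training_useful",
     "relevant": lambda r: r.get("attended_training") == "yes"},
]

# Pre-test the skip logic with sample responses:
assert visible_questions(tool, {"attended_training": "yes"}) == [
    "attended_training", "training_useful"]
assert visible_questions(tool, {"attended_training": "no"}) == [
    "attended_training"]
```

Running checks like these against a handful of sample responses is a quick way to catch skip-logic mistakes before piloting.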

Map each question in your tools against the evaluation questions; this mapping is your data analysis plan. There are always questions that do not map perfectly. Examine each of them: do you need it to provide context or to analyze other data? For example, asking a respondent's gender may not answer an evaluation question, but it does allow you to disaggregate results by gender. Eliminate questions that do not contribute to your evaluation and may cause respondent fatigue.
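The mapping step above can be sketched as a simple lookup. This is a hypothetical example (the question and evaluation-question names are invented): each tool question maps to the evaluation question(s) it answers, and any question mapped to nothing is flagged for review, where you decide whether it earns its place (e.g. for disaggregation) or should be cut.

```python
# Minimal sketch of a data analysis plan (hypothetical question names):
# map each tool question to the evaluation question(s) it helps answer.

analysis_plan = {
    "q1_services_received": ["EQ1: Was the program delivered as intended?"],
    "q2_satisfaction":      ["EQ2: How satisfied were participants?"],
    "q3_gender":            [],  # answers no evaluation question directly,
                                 # but kept to disaggregate results by gender
}

# Flag questions that map to nothing, so each can be reviewed:
# keep it for context/disaggregation, or eliminate it to reduce fatigue.
unmapped = [q for q, eqs in analysis_plan.items() if not eqs]
print(unmapped)
```

Even in a spreadsheet rather than code, keeping this mapping explicit makes the review of unmapped questions systematic instead of ad hoc.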

Other hints: 1) Eliminate double-barreled questions. 2) Ensure there is an escape clause for the respondent (e.g. response options such as “not applicable”, “unsure”, “I don’t remember”, “other, please specify”). 3) Ask a few open-ended questions; our favorite is “If you could change one thing about the project/program, what would it be?” This is handy for coming up with evaluation recommendations!

After the tools are developed and the pre-tests are done in the software and in print, they are ready to be piloted. Find out more about how to pilot tools in our #EvalTuesdayTip of June 13, 2021.