Date: July 2, 2020

When is the right time to commission an external evaluation? What do organizations need to consider before commissioning one? What are the benefits and the pitfalls to look out for?




We are often bombarded with questions like these when organizations first consider commissioning an independent organization to conduct their evaluation.


Khulisa recently completed an external evaluation of the Allan Gray Orbis Foundation’s programmes. To capture some of the lessons of this exciting journey, we sat down with the Foundation’s M&E Specialist, Asgar Bhikoo, who reflected on their experiences of working with an independent evaluator.


In this Q&A, Asgar shares tips and lessons from the entire evaluation process.


How did the evaluation come about and how was it received at the Allan Gray Orbis Foundation?



The Foundation had its first encounter with evaluation in 2014, when an evaluation of the Scholarship Programme was done as a Master's dissertation. This led us to think, 'This is interesting, but we need more.' Several review processes followed, and in 2015 there was a need for a 10-year review of the Foundation's programmes.


The reviews were useful and provided a favorable perspective on impact, but an independent review of the Foundation's impact was needed, and the Board supported this. That's where the first talks about an external evaluation began. The organization was in transition, with changes in leadership and personnel, which paused the commissioning and carrying out of the external evaluation. However, once positions were filled and there was clear direction on the requirements for the evaluation and on the team managing the process, the commissioning of an external evaluation was restarted.


All this aided the evaluation process with Khulisa – an external company coming in to say, 'you do a lot of things well, but there are a lot of things you could refine'. Critical external feedback helps set the tone to say, 'okay, cool, we do a lot of good work, but from time to time we need to check ourselves against best practices, and we can't always do that ourselves'.


It is also good to take a step back, review the progress made, and ask whether the changes made have resulted in the intended impact. Moreover, if you're running a programme, you tend to look at it from a participant-facing point of view. Sometimes having an external partner to make sense of delivery from a neutral perspective is very important.


One of my colleagues gave good feedback when we started this evaluation process. He said: 'You know, whatever comes after this evaluation, it's going to take some adjustment for the Foundation for the first 10 months, as this type of feedback would not have been experienced before.


And then afterwards, people are going to wake up and start cross-referencing and quoting this as the baseline or the point from which to act.' And this is what we're seeing now, with evaluation findings and recommendations being taken forward despite some initial resistance.


What was the highlight of the evaluation process for you?


It was the participatory, multi-stakeholder approach used by Khulisa. For example, our programme participants could give input into the Theory of Change, and we could find out what happened to those scholars that did not become fellows, which we weren't able to do in the past. Multiple stakeholders were included: current staff members, Exco, ex-staff members, our Board, E-squared, programme participants who dropped off the programme, as well as some applicants who nearly made it onto the programme. This was an excellent way of arriving at a common perspective on how the programme works and how impact has occurred.


I think the fact that Jocelyn [one of the youth entrepreneurship specialists on the evaluation team] added the entrepreneurial environmental context model was really valuable, as we could start talking more about how we are impacting context. The verification, through the data collection processes, of the impact we are having was also important for us. This coincided with internal projects to understand what it takes to aid the development of a successful entrepreneur.


Were there any unexpected or unanticipated outcomes of the evaluation process?



The biggest surprise came from the quasi-experimental part of the evaluation – an entrepreneurial mindset survey administered to our beneficiaries and to a comparison group of those who almost made it into the programmes – which found no difference between the two groups.


I think it was a sobering experience for all of us, but it left us with a number of important questions about the tests used, how long it takes to develop an entrepreneurial mindset, and how to plan future evaluations. It also helped us understand that there may be different motivations for applying to, and participating in, our programmes.


We had uncovered similar insights earlier in the year with three groups of MBA students from Harvard and Henley. The external evaluation validated those pieces of work and further helped us understand how the programme's perception, design, and roll-out affect programme participants, and what draws individuals to apply. It also highlighted the importance of focusing on our selection process.


How did the evaluation add value to the work the Foundation does?



The evaluation validated a lot of our own internal findings in terms of how we structured our programmes. In addition, it brought everyone in the Foundation together and made people feel included in the process. It has served as the basis on which to plan our future strategy, and it is an often-cited reference point for ensuring internal alignment on budgeting and on programme adaptations and iterations.


Was there anything unique about Khulisa’s approach?



Khulisa followed a hybridized approach. At times it was empowerment evaluation, at times a bit of appreciative inquiry, and at times developmental evaluation. Khulisa was flexible in the approach, and we appreciated this. We felt part of the process, and our concerns and input were included in the data collection process. This proved useful: the analysis considered the Foundation's perspective in interpreting the results, yet the team remained independent in how the results were ultimately communicated.


The first thing that was very useful in this evaluation process, besides reviewing the terms of reference and interrogating the evaluation questions to ask 'Is this what we want?', was the initial inception workshop Khulisa held with Exco to clarify expectations and narrow the focus of the evaluation from the initial laundry list of questions. This enabled Exco to see how the evaluation fits into the grander scheme of things.


Secondly, the evaluability assessment helped a lot in framing what was possible and what was not, so it was another important exercise for us.


A third thing that Khulisa did, which a lot of organizations don't, was to follow an inclusive and participatory process with all stakeholders from the beginning. Many organizations would just dive straight in and get to the end only to say, 'There was no data to answer this question'. Khulisa, by contrast, did the evaluability assessment and communicated every step of the way.


This also helped the organization realize what it needs to do to ready itself for another evaluation in future. It underscored the importance of data management and knowledge management, and it aligned with current projects to enhance the Foundation's capability in both areas.


The fact that Khulisa fielded a well-thought-out team that included youth entrepreneurship specialists really added value to the evaluation.


What did you appreciate the most about working with Khulisa?



From an overall project management perspective, it was a breeze. At no stage did I think, 'This is due and I don't think we are going to meet the deadline'. That is the feeling you want when commissioning an evaluation. The composition of the Khulisa team was perfect and, in general, the flexibility in methodology and the constant communication were what made the evaluation process run smoothly.


You have received the evaluation findings, and we have workshopped the recommendations. What's next for the Foundation? Where to from here?



We've presented the findings to the team, but COVID-19 has been a stumbling block in adopting all of the recommendations. What we all acknowledged is that by the time we get to the status sessions for each programme in July/August, these recommendations will come back into discussion. Another step is to check our Theory of Change again based on the findings, and to align the different pieces of work we've been engaging in to inform a new M&E Framework for the Foundation.


Can you share any lessons with organizations thinking about commissioning evaluations?



The starting point is to think about the team composition you want for your evaluation. What experts do you want to see?


Secondly, make sure that you allocate a budget for the evaluation. Because the evaluation was supported and endorsed by the Board, a budget was set aside for it. This budget was determined by the scale of the evaluation process, based on the ToR that was proposed. It was ring-fenced specifically for the evaluation and took into account the methodology that might be deployed.


Although the final evaluation questions changed, the fact that the planning process accounted for a large-scale evaluation meant that the Foundation was flexible enough to accommodate those changes.


The learning for us during this process is that if you know what strength of evidence you need for decision-making, it is important to allocate a budget that matches that expectation, and to be aware that the proposals you receive in the call for proposals may allocate budgets in line with what you have asked for in the ToR.


Often this step causes great concern for organizations, as the expectation of rigorous evidence does not match the budget that is available. However, this does not mean that you need to stop the evaluation process.


It means that the organization commissioning the evaluation needs to work with someone with evaluation expertise to determine what methodology may be fit for purpose. This opens up the conversation between evaluand and evaluator about what is feasible, appropriate, value for money, and credible enough.


Thirdly, you need an evaluation person on the task team or steering committee to screen proposals and assess what's worth pursuing and what's not.


Fourthly, you need a dedicated person inside your organization to manage and implement the evaluation, to make sure the process flows more smoothly.


Lastly, I think what also helped is that our CEO saw the importance of the evaluation and supported it from the beginning, and I don't think that's always the case. The fact that senior leadership was on board and that the CEO was driving the evaluation was a key success factor. Positioning evaluation as a key organizational task that is of interest to everyone is therefore important. This facilitates buy-in and reduces the friction organizations may experience when going through the process. It also helps the evaluator coordinate evaluation activities better.

