It’s a message we’ve heard time and time again: technology and big data are the way forward in the 21st century. But what happens when we introduce extensive, rigorous (and primarily quantitative) data collection and analysis into the development sector?
Donors – particularly in the international development sector, where some of the most prominent funders are large institutions and corporations based in the West – have high expectations regarding program effectiveness. They want every quantitative data point, every case study, and every interview imaginable in order to know that their money was well-invested. However, this can place a huge strain on organizations, particularly those that lack the human capital or financial resources to collect data to an “adequate” degree.
Two of donors’ biggest fears are 1) spending money on interventions that don’t work, and 2) not finding out an intervention doesn’t work until years later. The history of the development sector is rife with horror stories of programs being tested, scaled up, and replicated based on positive results reported by participants, only for it to emerge a decade or so later that those outcomes were not attributable to the program at all.
As of now, the most feasible way to verify program effectiveness is through data collected by organizations working on the ground or by independent evaluators. And while there is nothing wrong with donors requesting technical data, requests must be made in the context of what resources the program actually has to spare.
Picture this: you are an employee at a small development NGO. You may have a myriad of interests, but you were hired to implement a program. With donor deadlines fast approaching, however, your implementation duties shift, and you find yourself pounding away on a laptop, trying to develop and translate questionnaires that adhere to the evaluation metrics set forth by the donor while also making sense to your participants. As you look through all of your files, you take note of your upcoming deadlines.
One donor has asked for quarterly reports, another for monthly reports, and a third for six-month and yearly reviews. All three donors have different metrics by which they expect you to evaluate the program.
What do you do? You know this information is important, but there aren’t enough hours in the day, nor does your organization have the capacity to turn all of this information out in time, even though you needed the money to make your program a reality.
So, you make the difficult, but understandable, choice. You get the numbers that you can and leave the rest. It’s not possible, you say. The demands are too high, given what we have to work with.
Due to a lack of data, misinterpretations of what was needed, or similar gaps, the program is deemed unsuccessful. If an NGO is working with a repeat donor, it will hopefully have cultivated enough of a relationship to know exactly what is being asked of it, and the donor will know what is a feasible ask given the timelines. But this is an ideal scenario, and it often doesn’t play out in reality.
Rinse and Repeat
It doesn’t have to be this way. It has become a rule of practice in development to always listen to the community and trust its knowledge. However, we have yet to extend the same credit to local NGO workers when it comes to data collection. By establishing common expectations around data, we can move toward a system that is more trusting, more transparent, and delivers positive results for all involved.