The donors of the world invested $132 billion last year in global development projects, but most donors and implementers are still unable to account for results. And most development workers in the field still don't have the tools they deserve. The solution is not to keep trying to re-create commercially available software in-house.
Some history: A few years back, before we made DevResults, we were a web development shop.
Much of our work involved building custom systems for development organizations. Project budgets would often include a line item for a computer system to keep track of the work. The budget for these systems varied widely, but was usually in the six figures. One system I was shown - for a single project in a single country - had cost two million dollars to build.
These systems were expensive, but at least they were thoughtfully designed and easy to use.
Just kidding! They were none of those things. They were terrible and people hated to use them. And it seems like the more expensive they were, the worse they were; the two-million-dollar thing was particularly bad.
And even the systems that weren't all that terrible were thrown away when the project ended, only to start the cycle all over again when the next project came along.
What made all these software projects fail?
Why did all of these expensive efforts end up going nowhere? Every unhappy software project is unhappy in its own way, but there are two common factors that I'd like to point out.
1. Good software takes a long time and can't be rushed
Software projects almost always take longer than expected. It's been shown again and again that you can't hurry things along by spending more money and hiring more programmers. "Agile" development can get you more realistic expectations, but not fundamentally shorter schedules.
Another truth about software is that first versions are generally disappointments. It takes many cycles of prototyping, testing, and going back to the drawing board before an application or a new feature is really mature.
One thing these systems all had in common is that they were bound to a typical development program's lifecycle of five years or so. In the best of circumstances, work would start as soon as the project was awarded; but often it didn't start until long after the system was already needed, because time that could have gone into producing software instead went to building political support, finding money, and getting the project through the bureaucracy.
A project that starts out under unrealistic time pressure is often already doomed. When a programmer tries to do too much in too little time, the result is sloppy and un-maintainable code. Software that starts its life this way rarely gets better.
2. Organizations that are good at managing aid are rarely good at software
Good software development teams are rare to begin with, and they're not often found in non-technical organizations.
NGOs don't design their own customized computers or phones or pens or clipboards or projectors or cars and then contract to have them built from scratch.
If you want a phone, you might want something Apple designed, not something described in a 40-page RFP and built in 6 months.
Software is not so different. If you're not a software company, it's usually a bad idea to try to build software.
This logic seems obvious, but it's ignored often enough in government IT departments that the Office of Management and Budget has had to enshrine it in a series of rules and policies that specifically promote commercially available software and cloud-based systems.
A better alternative: Software as a service
We figured that we could do better by everyone. Instead of throwing together a disposable single-use system, we'd build a platform that we'd keep improving over the years. Instead of baking project-specific logic into the system, we'd make everything configurable, so that it could be used in any context.
And instead of burdening one project with the entire cost of creating and maintaining the software, we'd change the model so that lots of customers could share a lower cost, and pay annually just for the time they needed it.
This model has worked out really well - well enough that we're now starting to see other companies providing software to the development community as a subscription-based service.
Will development organizations learn from history?
So now that there's a lively ecosystem of stable, robust, field-tested commercial systems available, surely no one is still trying to reinvent the wheel in-house, right? Surely development organizations have realized that they're free to focus on development, and don't need to try to be software shops as well, right? Surely we've at least learned that lesson?
Spoiler alert: Nope.
Organizations at all scales are still partying like it's 1999, announcing plans to build their own results data systems from scratch, with bewilderingly optimistic schedules.
The latest example: Last summer a large organization we're all familiar with released an RFQ for a performance management system. The solicitation explicitly called for a system to be prototyped and built from scratch. The scope of the proposed system was essentially DevResults' feature list.
And the timeframe? The system is due six months after the contract award.
So there are programmers out there, as I write this, who are endeavoring to re-create DevResults from scratch in six months.
This is a lot harder than it looks
Here's the thing: Everything looks simple from the outside - especially when it's described in broad strokes in an RFP.
But the specific problem that my colleagues and I have spent years working to solve is much more complicated than it looks on the surface.
I'm going to really go into the weeds now, and you can safely skip this list of bullets if you're convinced.
But if you think that something like DevResults can be whipped up in a few months, here's just a small sampling of the problems that we've wrestled with in the process of building DevResults:
Disaggregation Most indicators required by donors are disaggregated – not only by standard demographic dimensions (gender, age), but also along program-specific dimensions that have to be customized for individual programs: by crop, by ethnicity, by HIV status, etc. How do users define these dimensions? How are standard dimensions shared between indicators?
Computed indicators Many indicators are defined as functions of other indicators: There are numerator/denominator ratios, sums, and more complicated formulas. What does the UI look like for allowing users to define these formulas? How do you ensure that formulas are valid, and don’t result in circular definitions? What attributes and disaggregations of the underlying indicators pass through to the computed indicator? (One way of catching circular definitions is sketched after this list.)
Indicators drawing from raw data Much indicator data is drawn from granular datasets. For example, multiple training indicators might draw from a set of training logs. How do you incorporate the granular data into the system, so that tabulated and aggregated results feed directly into indicator data? How would staff define these relationships through the UI?
Geographic roll-up Indicator data is reported at different levels of geographic specificity: Some data is reported at the national level, some is reported at sub-national levels (provinces, districts, counties, oblasts, etc.), and some is reported at the level of specific facilities with precise geographic coordinates. How do you make it possible for decision-makers to roll data up to the level of granularity they need? How do you build enough geographic intelligence into the system so that it automatically knows, for example, that Metuge Health Center is in Pemba District, Cabo Delgado Province, Mozambique? (A toy version of this roll-up appears after this list.)
Approval workflows for data submission Incoming data needs to be reviewed by staff before it is accepted, and there needs to be a permanent record of the submission and approval process. Who does the reviewing? Where does the data live in the meantime? How is everyone in the process notified as data is submitted, returned with questions, re-submitted, or accepted? (A bare-bones version of such a workflow is sketched after this list.)
Defining who does what, where, when Every implementing mechanism has a technical scope and a geographic scope. Both may change over the life of a project. For example, one year a project may be doing HIV treatment work in one province, while doing prevention work in two other provinces; the next year the project might expand its treatment work to cover all three provinces. The system needs to have this information in order to provide submission forms that make sense. How do you make it easy for managers to keep these mappings up to date, especially when dealing with dozens of indicators and hundreds of locations?
Mechanics of data submission Development results data is collected in a variety of ways: Some users will input data directly through the web interface; others will use offline mobile data collection tools, like iFormBuilder, Magpi, or ODK. Most will want to download Excel templates that are generated by the system according to their specific reporting requirements (the indicators they report on, in the places where they work). How do you ensure that all of these users have a smooth workflow and an interface that makes sense for their needs?
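To make the computed-indicators point concrete, here's a minimal sketch of one way to catch circular formula definitions: treat formulas as a dependency graph and check it for cycles. The class and function names are illustrative assumptions, not DevResults' actual data model, and this ignores everything else a real formula engine has to handle (parsing, disaggregation pass-through, missing data):

```python
# Minimal sketch: validating computed-indicator formulas by checking for
# circular definitions. Names and structure are illustrative only.

class Indicator:
    def __init__(self, code, formula_refs=None):
        self.code = code                         # e.g. "POSITIVITY_RATE"
        self.formula_refs = formula_refs or []   # codes of indicators this formula reads

def find_cycle(indicators):
    """Return a list of indicator codes forming a cycle, or None if the
    formula graph is acyclic (i.e. safe to evaluate)."""
    by_code = {i.code: i for i in indicators}
    WHITE, GREY, BLACK = 0, 1, 2                 # unvisited / in progress / done
    color = {code: WHITE for code in by_code}

    def visit(code, path):
        color[code] = GREY
        for ref in by_code[code].formula_refs:
            if ref not in by_code:
                raise ValueError(f"{code} refers to unknown indicator {ref}")
            if color[ref] == GREY:               # back edge: circular definition
                return path + [ref]
            if color[ref] == WHITE:
                cycle = visit(ref, path + [ref])
                if cycle:
                    return cycle
        color[code] = BLACK
        return None

    for code in by_code:
        if color[code] == WHITE:
            cycle = visit(code, [code])
            if cycle:
                return cycle
    return None

# Example: a ratio indicator built from two raw indicators, plus a bad pair
indicators = [
    Indicator("PEOPLE_TESTED"),
    Indicator("PEOPLE_POSITIVE"),
    Indicator("POSITIVITY_RATE", ["PEOPLE_POSITIVE", "PEOPLE_TESTED"]),
    Indicator("A", ["B"]),
    Indicator("B", ["A"]),                       # circular: A -> B -> A
]
print(find_cycle(indicators))                    # ['A', 'B', 'A']
```

And that's only the validation step; deciding what the formula-building UI looks like, and which disaggregations flow through to the computed indicator, is a separate set of problems.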
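The geographic roll-up problem is, at its simplest, a walk up a location hierarchy. Here's a toy version using the example locations from the bullet above; the dictionaries and function names are illustrative only, not how DevResults actually stores its geographic data:

```python
# Minimal sketch: rolling reported values up a location hierarchy.
# Each location knows its parent; a facility sits under a district,
# the district under a province, the province under a country.
PARENT = {
    "Metuge Health Center": "Pemba District",
    "Pemba District": "Cabo Delgado Province",
    "Cabo Delgado Province": "Mozambique",
    "Mozambique": None,
}

LEVEL = {
    "Metuge Health Center": "facility",
    "Pemba District": "district",
    "Cabo Delgado Province": "province",
    "Mozambique": "country",
}

def ancestor_at_level(location, level):
    """Walk up the hierarchy until we reach the requested admin level."""
    while location is not None and LEVEL[location] != level:
        location = PARENT[location]
    return location

def roll_up(data_points, level):
    """Aggregate (location, value) pairs to the requested level, summing values."""
    totals = {}
    for location, value in data_points:
        target = ancestor_at_level(location, level)
        if target is None:
            continue  # reported above the requested level; a real system must decide what to do here
        totals[target] = totals.get(target, 0) + value
    return totals

reports = [("Metuge Health Center", 120), ("Pemba District", 340)]
print(roll_up(reports, "province"))   # {'Cabo Delgado Province': 460}
```

Even this toy version punts on the hard parts: what to do with data reported above the requested level, how to handle facilities whose administrative parents change over time, and where the hierarchy itself comes from in the first place.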
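And the approval workflow is, at heart, a state machine with a permanent audit trail. Here's a bare-bones sketch; the states, roles, and field names are assumptions for illustration, not DevResults' actual workflow:

```python
# Minimal sketch: an approval workflow as an explicit state machine, so that
# every transition is recorded and illegal transitions are rejected.
from datetime import datetime, timezone

TRANSITIONS = {
    ("draft", "submit"):      "submitted",
    ("submitted", "return"):  "returned",    # sent back with questions
    ("returned", "submit"):   "submitted",   # re-submitted after corrections
    ("submitted", "approve"): "approved",
}

class Submission:
    def __init__(self, dataset):
        self.dataset = dataset
        self.state = "draft"
        self.history = []                    # permanent record of who did what, when

    def apply(self, action, user, comment=""):
        key = (self.state, action)
        if key not in TRANSITIONS:
            raise ValueError(f"Cannot {action!r} a submission in state {self.state!r}")
        self.state = TRANSITIONS[key]
        self.history.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": user,
            "action": action,
            "new_state": self.state,
            "comment": comment,
        })
        # A real system would notify the relevant people here.

s = Submission("Q3 training results")
s.apply("submit", "field officer")
s.apply("return", "M&E manager", "Please check the gender disaggregation")
s.apply("submit", "field officer")
s.apply("approve", "M&E manager")
print(s.state)                               # approved
```

A real system layers notifications, permissions, and versioned data on top of this skeleton, and that's exactly where the months go.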
This list isn’t even close to comprehensive – I could go on. The point I'm trying to make is that it’s simply not possible to build, in a few months, from scratch, a system that addresses all of these requirements and that is fast, secure, reliable, and user-friendly. But this is the bare minimum of what’s required for a reporting system to be adopted in the field.
And we have this functionality ready to go, off the shelf. With all its imperfections, DevResults is the organization-wide standard for several giants in this field.
And later this year, instead of apologizing for missing another arbitrary deadline from someone's fantasy world, we'll be announcing features that are really going to make people's jobs easier: One-click IATI output. Robust beneficiary tracking. Anonymization of personal information. Easy visual analytics and more useful dashboards.
So what?
I want to see this problem solved right, for everyone who works in development. My father worked for USAID, and I grew up in Latin America surrounded by people trying to move the needle towards health and prosperity. I know that it really sucks for development workers in the field to try to do their jobs with crappy tools - whether they're making do with Excel and email and lots of copying and pasting, or they're beating their heads against unwieldy home-grown systems.
And I'm proud that my team and I have come up with one way of making things easier for the people doing the good work.
It's frustrating to see this problem space trivialized this way. This isn't a simple problem to solve. If it were, it would have been solved thirty years ago, around the time my father was teaching himself about databases so he could account for the results of his water projects in South America.
Neither donors nor implementers need to try to be software shops any more. The public interest is better served by investing in off-the-shelf systems and strengthening a robust market of software firms that are focused on the needs of global development workers. If DevResults isn't a good fit for you, that's fine; there are other options out there.
But please don't be the latest development organization to decide to reinvent this particular wheel.