
Proposal: Universal Indicator Library

There are lots of obstacles inhibiting the flow of development data. One of the most important is the lack of common standards for performance indicators. We would like to start simply by creating a clearinghouse of existing indicator lists, and then using crowdsourcing tools to cross-reference them.

Apples to apples

It's difficult, even within a relatively small and self-contained universe (like a single international NGO or a single USAID mission), to agree on standard metrics - let alone across different donors or NGOs.

There have been several attempts at doing this, none of them really successful. Large donors like USAID and the World Bank have created standard frameworks of indicators (the F Framework and the World Development Indicators, respectively), but even within those institutions, it's been hard to strike a balance between making indicators specific enough to be useful and making them generic enough to be shared.

And no one has cross-referenced these large libraries of indicators - which you'd have to do in order to compare (for example) USAID's progress with the World Bank's. Consider for example these two indicators: Are they the same?

3.1.6.4-4 Percent of children 12-23 months old who have received measles vaccine by 12 months
USAID's F Framework

SH.IMM.MEAS Immunization, measles (% of children ages 12-23 months)
World Bank World Development Indicators

The IATI standard

The International Aid Transparency Initiative (IATI) data standard includes a simple schema for reporting results on indicators. But the schema doesn't even include a unique identifier that would make it possible to compare results reported against a single standard indicator. So while IATI makes it easy to compare financial flows, it's currently practically useless for comparing results.
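
To make the gap concrete, here's a minimal Python sketch of what a shared identifier buys you. The records, the values, and the universal ID (100234) are invented for illustration; none of the field names come from the actual IATI schema:

    # Illustrative sketch only - these records and field names are invented,
    # not real IATI data or schema elements.
    from collections import defaultdict

    # Each dict stands for one result reported by one activity.
    results = [
        {"reporter": "USAID", "indicator_code": "3.1.6.4-4", "universal_id": 100234, "value": 0.81},
        {"reporter": "World Bank", "indicator_code": "SH.IMM.MEAS", "universal_id": 100234, "value": 0.84},
    ]

    # With a shared universal_id, comparing results on the "same" indicator is a
    # simple group-by; without it, "3.1.6.4-4" and "SH.IMM.MEAS" are unrelated strings.
    by_indicator = defaultdict(list)
    for r in results:
        by_indicator[r["universal_id"]].append((r["reporter"], r["value"]))

    print(by_indicator[100234])
    # [('USAID', 0.81), ('World Bank', 0.84)]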

Practitioners at all levels are resistant to having standard indicators imposed on them, and no single organization is really in a position to sort out all the different frameworks out there and decide what's worth measuring and how the metrics should be defined.

A universal, crowd-sourced repository

Before trying to standardize, a very useful step is simply to provide a comprehensive database of indicators used by different organizations. Each indicator will be given a neutral, numeric universal indicator ID.

At the beginning, the indicator definition will include:

  • Name
  • Definition
  • Denominator, if applicable
  • Unit
  • Display format (whole number, decimal number, currency, percentage, rate per X)
  • Possible disaggregation dimensions and values
  • Organization(s) using the indicator, along with code(s) they use to refer to it

Eventually the indicator record could also include (see the sketch below):

  • Specific IATI activities using the indicator, along with the geographic scope of their reporting
  • Geolocated results data reported on the indicator by these activities
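
To show the shape such a record might take, here's a rough Python sketch using the measles indicator from above, covering both the initial fields and the eventual additions. The field names, the ID value, and the disaggregation values are hypothetical, shown only to illustrate the structure:

    # Rough sketch of one indicator record - field names, the ID value, and the
    # disaggregation values are hypothetical.
    measles_coverage = {
        "universal_id": 100234,        # neutral, numeric ID assigned by the repository
        "name": "Measles immunization coverage",
        "definition": ("Percent of children 12-23 months old who have received "
                       "measles vaccine by 12 months"),
        "denominator": "Children ages 12-23 months",
        "unit": "children",
        "display_format": "percentage",
        "disaggregations": {"sex": ["female", "male"], "location": ["urban", "rural"]},
        "organization_codes": {
            "USAID F Framework": "3.1.6.4-4",
            "World Bank WDI": "SH.IMM.MEAS",
        },
        # Eventual additions:
        "iati_activities": [],         # IATI activities reporting on this indicator
        "geolocated_results": [],      # geolocated results reported by those activities
    }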

Crowdsourcing Tools

The repository will be self-governed using social software techniques similar to those used by the Stack Exchange family of Q&A sites and the Discourse forum platform.

In general:

  • Users will build a reputation score on the site by making contributions that other users consider useful.
  • Users with higher reputation scores will have more privileges on the site, with the most respected users acting as de facto moderators.

Specific privileges might include (from the lowest reputation required to the highest; a sample ladder is sketched after the list):

  • Voting indicators up or down
  • Bookmarking the indicators that their organization uses
  • Commenting on indicators
  • Responding to comments
  • Voting comments up or down
  • Adding indicators to the repository
  • Noting related indicators
  • Flagging duplicate indicators
  • Tagging indicators (e.g. Health, HIV, ARV)
  • Adding alternative codes used by organizations to refer to an indicator
  • Creating new tags
  • Merging duplicate indicators
  • Rewording indicator definitions and titles
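
As a sketch of how reputation might map onto this ladder, here's one possible encoding in Python. The threshold numbers are invented, not a proposal for the actual values:

    # A hypothetical privilege ladder - the reputation thresholds are invented,
    # purely to illustrate the Stack Exchange-style mechanic described above.
    PRIVILEGES = {
        "vote_on_indicators": 15,
        "bookmark_indicators": 15,
        "comment_on_indicators": 50,
        "respond_to_comments": 50,
        "vote_on_comments": 125,
        "add_indicators": 250,
        "note_related_indicators": 500,
        "flag_duplicates": 500,
        "tag_indicators": 750,
        "add_alternative_codes": 1000,
        "create_tags": 1500,
        "merge_duplicates": 3000,
        "edit_titles_and_definitions": 5000,
    }

    def allowed_actions(reputation):
        """Return the actions a user with this reputation score may perform."""
        return [action for action, threshold in PRIVILEGES.items()
                if reputation >= threshold]

    print(allowed_actions(600))
    # ['vote_on_indicators', 'bookmark_indicators', 'comment_on_indicators',
    #  'respond_to_comments', 'vote_on_comments', 'add_indicators',
    #  'note_related_indicators', 'flag_duplicates']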

Benefits for the community

  • The universal ID provides a common point of reference, which is essential in order for interchange standards like IATI to be useful for reporting results.
  • Certain indicators will naturally rise to the top, providing a starting place for standards to emerge from practitioners, instead of being imposed from above.
  • As with Stack Exchange, all content on the site would be available for sharing, adaptation and reuse under a Creative Commons license.
  • The software for the site itself could be open-sourced as well.

A first step

It would be lovely to have a single, de-duplicated list of indicators that everyone agreed on. That's unlikely to happen anytime soon. Meanwhile, I think just having a globally unique identifier for indicators is a crucial first step.

Would this be useful for your organization? Please let us know what you think in the comments!