
Using data to inform philanthropy decision-making

Stanford professor Jenna Davis explains using a counterfactual approach to assess the effect of a district-scale safe water strategy.
April 3, 2020

Image: Africa and Europe at night. (Credit: NASA)

Story also available in French: Utiliser les données pour éclairer la prise de décisions philanthropiques

Effective philanthropy depends on regular access to data and information to inform strategic decision-making. For many foundations, the practice of capturing and applying data and information is broadly called Measurement, Evaluation and Learning (MEL). Stanford’s Program on Water, Health & Development (WHD) is the Strategy MEL partner for the Conrad N. Hilton Foundation’s Safe Water strategy, which invests in efforts aimed at increasing access to reliable and affordable safe water services in several sub-Saharan African countries. In this role, researchers from WHD conduct regular MEL assessments and are developing data and information to inform decision-making by the Foundation’s Safe Water team.

In this Q&A, Jenna Davis, a professor of civil and environmental engineering at Stanford and director of Stanford’s Program on Water, Health & Development, discusses the counterfactual approach the team is using to inform its Strategy MEL assessments.

What does it mean to be a Strategy MEL partner?  

Our role is to assess how the Hilton Foundation is delivering its strategy to achieve impact; we are not looking to evaluate the impact of any individual grant or program. In practice, individual grant evaluation is the responsibility of the Foundation, its grantees and country partners, and reflects the diversity of programs and approaches they have decided to fund. Importantly, our role also began after the Foundation started making grants under its new strategy.

We are not interested in duplicating evaluation work individual grantees are already doing or plan to do. However, we do want to track the rate of change in water services in each of the 12 focus districts across Burkina Faso, Ethiopia, Ghana, Mali, Niger and Uganda. We also seek to understand whether any change that does happen in the focus districts is part of a wider trend, which is why we need to compare districts using a counterfactual.

What is a counterfactual, and how can it contribute to effective philanthropy practices? 

We often want to know how much a community has benefited from a particular program or policy, compared to how it would have fared without it. The challenge is that the community can’t serve as its own control group — it either receives the program or it doesn’t. So, we have to find some other point of comparison that represents what would have happened in the community if the program had not been implemented. We refer to that as the counterfactual.

Through its Safe Water strategy, the Hilton Foundation is funding efforts to improve water supply services for households, schools and health care facilities in 12 districts across six countries in sub-Saharan Africa. We refer to these as the Foundation’s “focus” districts. The Foundation wants to know if water access is expanding faster in these districts as a result of its investments.

One way to answer that question is to compare the rate of change in water access in each of those 12 districts in the years before and after the Hilton Foundation began investing. The problem with that approach is that we would have to assume that any increase in the rate of expansion is the result of the Foundation’s involvement, when in fact it could be due to external factors, such as a general uptick in a country’s economy or a technological advance that reduces the cost of service provision. Our approach will instead track how water service levels are faring both in the focus districts and in comparison districts that do not currently receive Foundation support. In this way, our analysis will ‘net out’ any broader trends that might affect water service delivery.
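To make the ‘netting out’ concrete, here is a minimal sketch in Python of the before-and-after comparison Davis describes, for one focus district and one comparison district. The coverage figures and column names are made up for illustration; they are not data from the study.

```python
import pandas as pd

# Hypothetical safe water coverage (%) for one focus district and one
# comparison district, measured before and after the Foundation's
# investments began. All values are illustrative only.
data = pd.DataFrame({
    "district": ["focus", "focus", "comparison", "comparison"],
    "period":   ["before", "after", "before", "after"],
    "coverage": [52.0, 61.0, 50.0, 54.0],
})

# Change over time within each district.
wide = data.pivot(index="district", columns="period", values="coverage")
change = wide["after"] - wide["before"]

# "Netting out" the broader trend: the comparison district's change (+4
# points) approximates what would have happened anyway, so the remainder
# (+5 points) is the change plausibly attributable to the strategy.
net_effect = change["focus"] - change["comparison"]
print(f"focus: {change['focus']:+.1f} pts, "
      f"comparison: {change['comparison']:+.1f} pts, "
      f"net effect: {net_effect:+.1f} pts")
```

This treated-versus-comparison, before-versus-after contrast is the standard difference-in-differences logic that a set of well-chosen comparison districts makes possible.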

What approach are you using to select comparison districts? 

We started by laying out the broad dimensions that we believe determine the rate of change in water access. These include everything from population density and household wealth to groundwater depth and the availability of low-cost surface water. After a careful review we ended up with a total of 15 dimensions, or factors, which we binned into three main categories: the demand for improved service, the cost of service provision, and other contextual features that facilitate service improvements (such as existing infrastructure, accessibility or regional economy).
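As a rough way to picture that binning, the sketch below arranges the handful of factors named in this interview into the three categories. The placement of each factor is our illustrative reading, not the study’s published taxonomy, and most of the 15 factors are omitted.

```python
# Illustrative taxonomy of factors thought to drive the rate of change in
# water access. Only factors mentioned in the interview are listed, and
# their placement in a category is an assumption made for illustration.
FACTOR_CATEGORIES = {
    "demand for improved service": [
        "population_density",
        "household_wealth",
    ],
    "cost of service provision": [
        "groundwater_depth",
        "low_cost_surface_water_availability",
    ],
    "contextual features": [
        "existing_infrastructure",
        "accessibility",
        "regional_economy",
    ],
}

# Flattened list of factor names, e.g. to use as the columns of a
# district-level dataset when computing similarity scores.
ALL_FACTORS = [f for group in FACTOR_CATEGORIES.values() for f in group]
```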

Next, we identified one or more specific measures to analyze each of these factors. For example, social scientists use different measures to gauge household wealth, such as income, land tenure, or even the type of materials used for roofing on a house. We identified potential measures based on a review of the scholarly literature, to see how others define these measures, and on our experience working on water services over several decades. Then we explored existing data from dozens of different sources, at both the national and international levels, that we judged to be reliable. It was an iterative process, in which we adapted measures given the limitations of data at the local and national levels.

Further, in an effort to hold some institutional features constant, we limited each country’s dataset to districts located in the same region as the focus districts. Once we compiled all the data, we computed a “similarity score” for each candidate comparison district that indicates how close it is to the relevant focus district across all 15 factors. We identified five candidate comparators for each focus district. Currently we’re arranging discussions with experts in each country so we can share these results and incorporate their feedback into the final selection.
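As a rough sketch of how such a similarity score might be computed; the interview does not specify the formula, so the factor names, values and the choice of a standardized Euclidean distance below are all our assumptions. One common approach is to standardize each factor and rank candidates by their distance to the focus district:

```python
import numpy as np
import pandas as pd

# Hypothetical factor data for one focus district and six candidate
# comparison districts. Only three of the 15 factors are shown, and all
# names and values are made up for illustration.
districts = pd.DataFrame(
    {
        "pop_density":       [120, 95, 300, 110, 88, 140, 101],
        "household_wealth":  [0.42, 0.39, 0.75, 0.40, 0.35, 0.48, 0.41],
        "groundwater_depth": [35, 40, 15, 33, 45, 30, 37],
    },
    index=["focus", "cand_A", "cand_B", "cand_C", "cand_D", "cand_E", "cand_F"],
)

# Standardize each factor (z-scores) so differing units and scales do not
# let any single factor dominate the distance.
z = (districts - districts.mean()) / districts.std()

# Similarity score: Euclidean distance to the focus district in
# standardized factor space (smaller = more similar).
scores = np.sqrt(((z - z.loc["focus"]) ** 2).sum(axis=1)).drop("focus")

# Keep the five closest candidates for in-country expert review.
print(scores.nsmallest(5))
```

Standardizing first matters because the factors live on very different scales, say people per square kilometer versus meters to groundwater, and a raw distance would otherwise be dominated by whichever factor has the largest numbers.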

How does this approach differ from other evaluation work you’ve done?

Our approach reflects what is feasible and practical given the many constraints presented by working in real-world conditions. For example, to make the strongest possible case for impact using evaluation, we would use a managed experiment. That means we would identify a set of households (or communities, or districts) and then randomly assign an intervention to, say, half of them. They would become our “treatment” group, and the rest would form the “control” group. Then we would collect information from both groups, before the intervention and again afterward. Comparing how the two groups change over time would tell us what incremental difference the intervention has made in the treatment group. This is what researchers mean when they say they’re conducting a randomized controlled trial or “RCT.” 
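The defining feature of an RCT is that chance alone decides who is treated. As a toy illustration of the random assignment Davis describes (the district names are hypothetical):

```python
import random

# Toy illustration of random assignment in an RCT. District names are
# made up; in practice the units might be households, communities or
# districts.
units = ["district_1", "district_2", "district_3", "district_4",
         "district_5", "district_6", "district_7", "district_8"]

random.seed(7)         # fixed seed so the example is reproducible
random.shuffle(units)  # chance alone decides who gets the intervention

half = len(units) // 2
treatment, control = units[:half], units[half:]
print("treatment:", sorted(treatment))
print("control:  ", sorted(control))
```

Because assignment is random, the two groups are statistically comparable at the outset, which is exactly the property the comparison-district approach has to approximate by other means.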

We are not doing this for a number of reasons. For one, the Safe Water strategy is not investing in a single ‘intervention’; it is a constellation of activities involving many organizations and approaches, and RCTs often require a level of control that is unrealistic to expect in everyday life. Second, while RCTs can generate compelling evidence of impact, they tend to be expensive, and in development work they often do not yield a simple conclusion or determination for, say, a foundation deciding what to invest in. Finally, many of the interventions we work on, such as infrastructure projects, can be difficult to assign randomly, which means the results of an RCT may not reflect what a decision-maker could reasonably expect if a policy were actually implemented.

The Hilton Foundation wants evidence that is both credible and reflective of the real world in which it invests. That’s why we are incorporating some elements of a managed experiment, including multiple rounds of data collection and a group of comparison districts (our “controls”). The reason we are putting so much effort into identifying good comparison districts is that the “intervention” of the Foundation’s investment was not randomly assigned.

What advice do you have for other funders who are interested in incorporating measurement, evaluation & learning into their strategy and investment cycles?  

In my experience, it is still the exception rather than the rule for the water community to take this question of developing a credible counterfactual seriously. I do think that both funders and implementing organizations are very interested in knowing what impact they are having. But generating evidence takes both forethought and resources. My high-level advice would be to incorporate learning goals into the regular rhythm of work-planning. Identify what evidence is essential to inform priority decisions, and have someone, whether inside the organization or as a collaborator, develop a plan for generating evidence in a cost-effective way throughout the investment cycle. Equally important, build time for reflection on that learning into your work plan, so that it actually has the potential to improve your investment approach. That, I think, is at the heart of effective philanthropy.


Related resources:

Go to the Selecting Comparison Districts page.

Read the Selecting Comparison Districts briefing.

Watch the narrated video presentation about selecting comparison districts (also available with French subtitles).

Contact Information

Rob Jordan
Associate Editor, Environment and Sustainability, Woods Institute
rjordan@stanford.edu