Money and good intentions are not enough to fight poverty effectively. We also need data about what works and what doesn’t.
- Philanthropies often give away their money to projects without really knowing if they are successful.
- Microloans, for instance, are not effective at increasing income on average for the poorest people on the planet.
- Social scientists have begun to marshal the tools of big data to find out what works and what doesn’t. The goal is to turn philanthropy into a science, directing money to programs backed by strong evidence of social effectiveness.
- Evidence-based programs are no panacea for poverty, but they are an important step forward.
You can’t make money without money. That was the exciting and intuitively obvious idea behind microloans, which took off in the 1990s as a way of helping poor people out of poverty. Banks wouldn’t give them traditional loans, but small amounts would carry less risk and allow entrepreneurs to jump-start small businesses. Economist Muhammad Yunus and Bangladesh’s Grameen Bank figured out how to scale this innovation and won the 2006 Nobel Peace Prize for their work.
The trouble is that although microloans do have some benefits, recent evidence suggests that on average they increase neither income nor household and food expenditures—key indicators of financial well-being.
That a program could be celebrated for more than 20 years and lavished with money and still fail to help people out of poverty underscores the paucity of evidence in antipoverty programs. Individual Americans, for instance, spend $335 billion a year on charity, yet most people give on impulse or a friend’s recommendation—not because they have evidence that their giving will do any good. Philanthropies also often give money to projects without really knowing if they are successful.
Fortunately, we are living in the age of big data: decisions that used to be made on instinct can now be based on solid evidence. In recent years social scientists have begun to marshal the tools of big data to ask the hard questions about what works and what doesn’t. The goal is to turn philanthropy into a science, where money gets directed to programs with strong evidence of effectiveness.
I learned about microloans in 1992, on what was supposed to be a short detour from a career in hedge funds. As a 22-year-old intern in El Salvador for one of the largest microlenders, I was struck by how little the organization knew about its effect on clients—usually women—and the local economy.
The lender knew that many customers were coming back for more loans and saw “client retention” as proof of success. Why else would customers keep borrowing if the loans were not helping? But the microlenders had no serious evidence that the loans were helping women lift their families out of poverty. When I asked about evidence on impacts, I was directed to a perfunctory questionnaire. I wondered: maybe repeat borrowing is not a good sign if the client’s business does not keep growing. Perhaps true success would be providing one loan to someone in need and discovering, down the road, that the borrower is stable enough not to need another.
Here was a huge nongovernmental organization pulling in large grants to help the poor, with no real measurement of whether their efforts were working. For-profit businesses have benchmarks to know how they are performing, but most donors are not accustomed to asking charities about their results. Sometimes they ask what proportion of money goes to overhead, but that number is mostly meaningless. The question that needs to be asked—and that needs to be asked every time someone writes a check to a charity or a government commits to a multimillion-dollar aid project—is, Will this actually work to alleviate poverty? In other words, how will people’s lives change, compared with how their lives would have changed without the program?
This question knocked me off my Wall Street track and into graduate school for economics. One of my professors, Michael Kremer, had just started conducting randomized controlled trials to learn what programs work to help kids stay in school and improve the education they receive. He was borrowing this method from health and other sciences—randomly assigning schools to either receive a particular resource (the treatment group) or remain as they would have been otherwise (the control group) and then comparing school performance across these two groups.
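The logic of Kremer’s method can be sketched in a few lines of code. This is a minimal illustration, not code from any actual study: the school list, the `outcome_fn` measurement function, and the constant treatment effect are all hypothetical stand-ins.

```python
import random
import statistics

def run_trial(schools, outcome_fn, seed=0):
    """Randomly split units into a treatment group and a control
    group, then estimate the program's effect as the difference
    in mean outcomes between the two groups.

    outcome_fn(school, treated) is a hypothetical stand-in for
    measuring an outcome such as test scores or attendance.
    """
    rng = random.Random(seed)
    shuffled = list(schools)
    rng.shuffle(shuffled)          # random assignment is the key step
    half = len(shuffled) // 2
    treatment, control = shuffled[:half], shuffled[half:]
    mean_t = statistics.mean(outcome_fn(s, treated=True) for s in treatment)
    mean_c = statistics.mean(outcome_fn(s, treated=False) for s in control)
    return mean_t - mean_c

# Toy example: every school starts at a score of 10, and the
# (hypothetical) program adds 5 points, so the estimate is 5.0.
effect = run_trial(range(10), lambda s, treated: 10 + (5 if treated else 0))
```

Because assignment is random, neither group is systematically different before the program starts, so the difference in means isolates the program’s effect rather than preexisting differences between schools.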
His approach gave me an idea about how to return to the microlending questions that had brought me to academia in the first place. When I presented my questions and described a simple experiment that could address them, I thought that I was proposing a side project, not a dissertation. I had just finished reading complicated papers for two years, papers that often tackled empirical questions with fancy econometrics, and I assumed a dissertation must do the same. But I still remember Kremer’s response: ask an important question and do not worry about whether your method is complicated and demonstrates “smarts.” Just worry about answering the question well.
So off I went in my fourth year of graduate school to South Africa to set up my first experiment on the question of whether microlending is effective. I trained a team that would seek individuals who wanted a loan from a microlender. Of the ones who qualified, I randomly assigned them into treatment and control groups and provided the lender with the list of those assigned to treatment. The lender would approach them and offer them loans. It seemed fairly straightforward.
Instead the research project failed miserably. Each time I passed names to the lender, it took months to find the potential clients, and sometimes they were never found. Then the lender poached my best team member, killing my best shot at recruiting more participants for the project.
It turns out to be difficult for academics at universities to carry out studies far away with the level of detail that good scientific trials require. You need reliable staff on the ground who understand the science but who also have the social skills to work with partners and manage field operations.
In 2002, as I was starting out as a professor, I founded a nonprofit called Innovations for Poverty Action (IPA) to help fill these knowledge gaps in finance, health, education, food, and peace and postconflict recovery. IPA connects my curious number-crunching academic colleagues at the Massachusetts Institute of Technology, Yale University, and the like, with a trained staff of more than 500 people working in 18 countries on randomized controlled trials. We have now conducted upward of 500 trials. A chief insight has been that simple interventions that take human behavior into account can have outsized effects. Putting chlorine dispensers right next to water sources, making chlorination easy to remember and publicly observable, increases use of clean water sixfold. Adding a simple bag of lentils to a convenient monthly immunization camp for families in India roughly sextuples rates of full immunization for kids (while making the entire process cheaper because more families show up). And cheap and simple text message reminders can be effective in helping people accomplish their goals, from saving money to completing their medication regimens. Naturally, not everything works, which is precisely why rigorous testing matters.
We have also learned that information is only part of the solution. Having strong relationships with local governments, nonprofits, businesses and banks keeps the academic experts working on questions that matter and gets answers into the hands of the people who can use them.
Over the years microloans kept nagging at my colleagues and me. Fifteen years after my first study attempt in South Africa, we now have seven randomized trials completed on traditional microloans and one on consumer lending back in South Africa. The seven projects are spread out around the world and have been conducted by different researchers with similar research designs: in Bosnia and Herzegovina, Ethiopia, India, Mexico, Mongolia, Morocco and the Philippines. These studies found some benefits of microloans, such as helping families weather hard times, pay off goods over time and even make small investments in businesses. But there was no average impact on the main financial well-being indicators—income and household and food expenditures. To the chagrin of microloan critics, there also were no big negative effects.
So what does work to increase income for the world’s poorest?
We just recently studied another program that addresses some of the shortcomings of microloans. One sad failure of many programs (including microloans) has been in reaching the poorest of the poor—known in the field as the ultrapoor. They live each day on less than what $1.25 would buy in the U.S., and they account for more than a billion people, or one seventh of the world’s population. The things keeping them poor are usually complicated enough that no single fix will help, but one program stands out: run in Bangladesh by BRAC, the world’s largest nonprofit organization, and in a few other places, it treats extreme poverty as a complex problem deserving of a complex solution. Its “graduation” approach, designed to move the extreme poor out of their current conditions, offers a package of six items:
- A “productive asset,” that is, a way to make a living (livestock, beehives to make honey or supplies to start a simple store).
- Technical training on how to use the asset.
- A small, short-term regular stipend, to meet immediate needs for daily living so the individual does not have to sell the asset while learning how to use it.
- Access to health support, to stay healthy enough to work.
- A way to save money for the future.
- Regular (usually weekly) visits from a coach, to reinforce skills, build confidence and help participants handle any challenges they encounter.
The Ford Foundation and Consultative Group to Assist the Poor in Washington, D.C., came to me with an ambitious idea: test an identical program, implemented by different organizations in multiple places. We ended up conducting similar studies in six places: Ethiopia, Ghana, Honduras, India, Pakistan and Peru. What we found was unprecedented: the program worked everywhere, and it worked well. When we came back a year after the program had ended, we found the impact had lasted: people had more money to spend and food to eat. When we calculated the costs (labor, asset costs, transportation and overhead) as compared with the benefits, the overall returns were positive in five out of six countries—ranging from 133 percent in Ghana to 433 percent in India. In other words, every dollar invested in India yielded $4.33 more food and spending for ultrapoor households.
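The return figures above follow a simple benefit-cost convention: benefits as a percentage of costs, so that a 433 percent return means each program dollar produced $4.33 in extra household consumption. A tiny sketch makes the arithmetic explicit; the dollar amounts below are hypothetical round numbers chosen to reproduce the reported percentages, not the studies’ actual budget figures.

```python
def benefit_cost_return(total_benefits, total_costs):
    """Program return expressed as a percentage of costs.

    Convention assumed from the article: return = benefits / costs * 100,
    so 433% means $4.33 of added consumption per dollar spent.
    """
    return 100.0 * total_benefits / total_costs

# Hypothetical: $1,000 of program cost yielding $4,330 in added
# household food and spending reproduces India's 433 percent figure;
# $1,330 in benefits reproduces Ghana's 133 percent.
india = benefit_cost_return(4330, 1000)   # 433.0
ghana = benefit_cost_return(1330, 1000)   # 133.0
```

Note that on this convention any return above 100 percent means the program generated more in measured benefits than it cost to run.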
The one exception was Honduras, where the productive asset most used by the local organization—chickens—was an outside breed that was not resistant to local disease and so became sick and died. This was a humanitarian failure, but it demonstrated that the asset is an essential component of the program: without it, the other five components did not generate positive impacts on their own. As the programs expand in Ethiopia, India and Pakistan, we hope to learn more about how to make this program work better, either by reducing costs or by improving the services.
There is no panacea in the fight against poverty. Even a graduation program for the ultrapoor, which is ready to scale and yields an excellent return for a charitable buck, is not going to transform the ultrapoor into car-buying middle-class households. The vision statement for Innovations for Poverty Action is appropriately modest: more evidence, less poverty. We are not going to end poverty, but with proper evidence we can make important strides.