Analyze This

Demonstrating that giving makes a difference is one of the biggest challenges facing philanthropy. In business, profit provides a good (though not perfect) guide to whether your activities are working. But in philanthropy there is no such standard unit of measurement for impact – though plenty of efforts are under way to change this. Here’s the good news: a new organisation may soon take the lead in turning these disparate efforts into a comprehensive framework of performance analysis for philanthropy and the organisations it funds.

The creation of ANA, the Association of Non-profit Analysts, was proposed on May 19th by Martin Brookes, the chief executive of New Philanthropy Capital, a British philanthropy research firm. It was enthusiastically endorsed by most of the 200 attendees at the “Valuing Impact” conference in London, hosted by NPC and Germany’s Bertelsmann Foundation (which, incidentally, is hatching ambitious plans to promote better performance data and research for non-profits/philanthropy).

Brookes cited the precedent of the Investment Analyst Society founded in Chicago in 1925, which spread to other financial centres and eventually resulted in the creation of the profession of Chartered Financial Analyst, with standards of best practice and a qualification for those who wanted to analyse firms for a living. Brookes hopes that the role of non-profit analyst will one day be similarly professionalised.

The importance of rigorous analysis of the impact of non-profits/philanthropy was underscored by the man in charge of America’s most widely used performance rater, Charity Navigator – slogan “your guide to intelligent giving”. In a speech at the conference, Ken Berger said that sometimes he cannot sleep for worrying that Charity Navigator’s ratings (of up to 4 stars) “may do more harm than good”. Its stars are awarded for “financial resilience”, which largely means the ratio of costs to money raised. This is widely recognised to be a lousy measure of effectiveness: as anyone in business knows, it is costs (such as spending on recruiting the best talent, marketing and so on) that often make success possible.

In “Forces for Good: The Six Practices of High-Impact Non-Profits”, Leslie Crutchfield and Heather McLeod Grant note that many of the non-profits rated most highly by their peers score very poorly on Charity Navigator, precisely because they are willing to invest in being successful. Yet, as Dan Pallotta points out in his terrific attack on traditional charity, “Uncharitable”, many leading charities that know it is misleading nonetheless trumpet a four-star rating from Charity Navigator on their websites and marketing material.

So well done, Ken Berger, for being so honest, and for committing to improve the quality of Charity Navigator’s ratings. He says it will not be as hard as many people think to create useful measures of impact. The biggest problem may be the lack of cooperation from non-profits themselves, not least because shockingly few of them actually collect meaningful data on their own performance. Berger recently asked the 100 biggest charities with a four-star rating to provide him with performance data, and only 10% did.

Hopefully, this will start to change, with the leadership of Berger, Brookes, Bertelsmann and the soon-to-be-created ANA. There is no time to lose. Especially in these tough economic times, the need to know whether charitable money is being put to good use could not be greater or more urgent.

3 replies on “Analyze This”

From my own experience after the tsunami, I’d say that rating aid agencies based on the amount spent on administration is doing more harm than good. It places far too much importance on administration costs as an indicator of aid agency quality when in reality that number tells you nothing about the impact, usefulness, and quality of aid agency projects.

To ensure that their charity ratings stay high, aid agencies feel pressured to implement projects with inherently low administration percentages – such as construction projects or handing out student scholarships – regardless of whether or not these projects address the greatest need. Aid agencies are also more likely to skimp on hiring qualified staff and to spend too little time in the field learning about the local realities before implementing a project.

I understand that charity watchdogs are in a difficult position, as the only reporting required by the federal government is the IRS Form 990, and that’s not even required for religious charities. And as Ken Berger pointed out, far too few aid agencies really evaluate their work, and when an evaluation is not positive it is often deep-sixed or kept for “internal use only”.

Unfortunately, the average donor does not understand all of this. With so many charity watchdogs – not just Charity Navigator – focusing on administration costs, and with no other readily accessible way to judge an aid agency, many people I speak with really do think low administration costs are a key indicator of the quality of an aid agency.

Ideally, aid agencies would be evaluated on the quality and appropriateness of their work. Barring that, however, I would much rather see aid agencies rated on their financial transparency: do they make their expenditures available to both donors and aid recipients, and are recent audit findings available on request?


Comments are closed.