In Part I of this two-part article, we summarized the features and criteria of six of the best-known "charity raters," including Charity Navigator, the Better Business Bureau, and others. Many readers added thoughtful comments and noted lesser-known raters as well; we encourage you to read Part I and the posted comments.
Here in Part II, we offer advice to nonprofits on managing their ratings, and comment on the impact of the raters as a whole.
Just two weeks after we published Part I, an unusual threesome of Guidestar, Charity Navigator, and the Better Business Bureau issued an unexpected but welcome joint statement denouncing "the overhead ratio as *the sole measure* of nonprofit performance" (emphasis added; overheadmyth.com). The statement also defends overhead to an extent: "Overhead costs include important investments charities make to improve their work: investments in training, planning, evaluation, and internal systems -- as well as their efforts to raise money so they can operate their programs. When we focus solely or predominantly on overhead ... we starve charities of the freedom they need to best help the people and communities they are trying to serve."
Despite getting widespread and welcome play in the nonprofit press and blogosphere, the Overhead Myth campaign seems to have gotten little traction in the mainstream press or with its intended audience. And the campaign is almost impossible to find on either the Charity Navigator site or the BBB's Wise Giving site; one might almost think they are burying it on purpose. Even more important, Charity Navigator has not said it will change its proprietary (and secret) formulas for analyzing a nonprofit's finances.
Overhead Myth campaign
The Overhead Myth campaign does not address the profound flaw in all the rating sites: none of them focus on impact or effectiveness. And when they do mention effectiveness, they see it almost exclusively through a lens of factories producing human services. We believe that effectiveness is also about long-term advocacy campaigns, strengthening community institutions, prevention of harm, and the inspiration of the spirit: all goals that resist results metrics just as surely as time resists the metrics of weight or inches.
We applaud Guidestar for making data available and letting readers apply their own assumptions, and we applaud GreatNonprofits for giving a vehicle for ordinary people to speak out about a particular nonprofit.
The net on the rest: the rating sites are here to stay, and they will continue to perpetuate untested assumptions that inappropriately hurt individual nonprofits and the nonprofit community as a whole. We all have a responsibility to raise questions about them publicly and in our boardrooms and newsletters. And at the same time, we each have to protect ourselves in their arenas.
Why should you pay attention to your own rating?
We couldn't find good data on how many people use the rating sites, or on who those users are. It's generally thought that the large brand-name nonprofits such as the Red Cross or the Heart Association are the ones most frequently viewed on the rating sites. (Most of the sites tend to rate only larger or only national nonprofits.)
Journalists reporting on a nonprofit will often look up what is said about it on the rating sites -- after all, it's an easy way to get information. The ostensible purpose of most sites -- to inform donors -- probably only "works" when a donor is already interested. For example, a person who receives a mail appeal from, say, Partners in Health or OXFAM, and who feels inclined to make a donation, might look them up on one of the rating sites before doing so.
What's wrong with the financial metrics used?
Almost all the sites rely heavily on analysis of financial information found on Form 990, but there are serious drawbacks to doing so:
- Form 990 does not allow inclusion of non-cash (in-kind) donations as income, although such donations are allowed in audited statements if certain guidelines are followed. As a result, nonprofits such as UNICEF, food banks, hospices and community theatres that obtain great amounts of donated food and services appear to be "inefficient" or as having overly high overhead on Charity Navigator and other sites.
- Generally Accepted Accounting Principles (GAAP) allow allocation of joint costs on items such as newsletters that contain both educational material and a donation form. Charity Navigator re-assigns 100% of such joint costs to fundraising, making many nonprofits appear to spend far more on fundraising than they do.
- The 990s are important but flawed sources of data. As just one example, one study showed that 50% of 990s had mathematical errors of greater than $5,000.
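To make the joint-cost point concrete, here is a small sketch with entirely made-up numbers showing how reassigning a jointly allocated newsletter cost changes a nonprofit's apparent fundraising ratio:

```python
# Made-up figures for illustration only: a $1,000,000 budget, a $50,000
# newsletter allocated 80% to education (program) and 20% to fundraising
# under GAAP, plus $100,000 of other fundraising expense.

total_expenses = 1_000_000
newsletter = 50_000
other_fundraising = 100_000

# GAAP-style joint-cost allocation: only 20% of the newsletter
# counts as fundraising.
gaap_fundraising = other_fundraising + 0.20 * newsletter
print(gaap_fundraising / total_expenses)   # 0.11 -> an 11% fundraising ratio

# Rater-style reassignment: 100% of the joint cost moved to fundraising.
rater_fundraising = other_fundraising + newsletter
print(rater_fundraising / total_expenses)  # 0.15 -> a 15% fundraising ratio
```

With the same underlying spending, the reassignment makes fundraising look more than a third larger as a share of the budget.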
What's wrong with the rating sites as a whole?
When a nonprofit is extremely badly managed or is run by crooks, the charity raters are typically the last to know. Paradoxically, the raters wait for the New York Times to identify bad apples, and then they jump in to call the apple rotten. The Central Asia Institute (Three Cups of Tea) was found to be crooked by 60 Minutes at a time when it boasted four-star ratings (the top available) from the raters. Susan G. Komen for the Cure had only the highest ratings on Charity Navigator -- and still does -- because pinkwashing, elimination of funding to Planned Parenthood, its widespread reputation as an exploitive and mean-spirited place to work, and the strong scent of private benefit are not part of Charity Navigator's metrics.
Stanford Social Innovation Review sums it up: "Our review of methodologies indicates that these sites [Charity Navigator and other quantitative rating sites] individually and collectively fall well short of providing meaningful guidance for donors ...
"The major weaknesses are threefold:
- "They rely too heavily on simple analysis and ratios derived from poor-quality financial data
- "They over-emphasize financial efficiency while ignoring the question of program effectiveness
- "They generally do a poor job of conducting analysis in important qualitative areas such as management strength, governance quality, or organizational transparency."
So given all this, what do we do?
Six tips to manage your ratings
1. Look up your organization on all the key sites. It's important to know what they are saying about you, or that you aren't listed.
(While you're at it, set up a Google Alert with your organization's name to get emails when something about your organization gets onto the web.)
2. If you think your rating is unfair and inaccurate, file a protest and ask for a correction. We've heard from some readers that they have gotten reviews revised after months and months of arduous effort; others have never even gotten a response. But at least you have your protest letter. If a donor, board member, or the press asks you about your poor rating, you have your protest letter to show them.
Post your protest letter on GreatNonprofits as well. Don't forget that readers can click through to GreatNonprofits reviews directly from the reviews on other sites.
And the raters should have to respond. All of them describe themselves as valuing accountability, so make them accountable to you.
3. Make a minor effort to keep your financial metrics within the conventional bounds. We often hear about the "charities with the lowest percentage of administrative costs." Perhaps we should call them the "charities with the best accountants."
If you have an auditor, go over the system of allocation he or she is using. It may be a simple matter of, for example, re-classifying the executive director's salary from 100% administration to 50% administration and 50% program (which it probably is!).
If you don't have an audit, talk with the person who prepares the Form 990. Discuss how functional expenses are reported (the functional expense section is where the raters take much of their "efficiency" data). Many smaller nonprofits mistakenly assume, for instance, that 100% of the executive director's salary or the rent belongs under "management and administration." In fact, you can allocate both partly to admin and partly to programs based on a time analysis and square-footage use.
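As a rough sketch of the allocation logic described above, with assumed (hypothetical) salary, rent, and usage figures:

```python
# Hypothetical illustration of splitting shared costs between program
# and administration, using a time study and square footage as the bases.

def allocate(total, program_share):
    """Split a cost into (program, admin) portions by a fractional share."""
    program = total * program_share
    admin = total - program
    return program, admin

# Executive director's salary: a time analysis (assumed) shows half of
# her hours go to direct program work.
salary_program, salary_admin = allocate(80_000, 0.50)

# Rent: say 1,200 of 1,500 square feet (assumed) are used for programs.
rent_program, rent_admin = allocate(24_000, 1_200 / 1_500)

print(salary_program, salary_admin)              # 40000.0 40000.0
print(round(rent_program), round(rent_admin))    # 19200 4800
```

The point is simply that a documented, defensible basis (hours, square feet) lets you report these as program expenses rather than lumping everything into administration.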
4. Pay attention to the "Program Accomplishments" section of Form 990. With more people going to the 990s through Guidestar and the rating sites, it's more important to use that space to talk about impact and effectiveness.
5. Make a minor effort to get some good reviews on GreatNonprofits. Ask your staff, board members, volunteers, audience members, and clients. If you have a gallery or a theatre, set up a table with a computer and ask people to take a moment and write a review. And if you are a client, audience member, or staff or board member, take a moment to help your nonprofit by writing a quick review.
6. Consider boycotting Charity Navigator and the other sites, even if you have a good rating. When you publicize your good rating, you add undeserved legitimacy to the rating sites.
If you feel you simply must have some kind of seal or official-looking medal on your site, use the one from Guidestar. It has a nice official look but it doesn't state that it has rated you. We like Guidestar because it provides information about nonprofits and lets readers draw their own conclusions.
The good news is that donors rarely make a decision to give or not to give based on a charity rating. As evidence, just look at the two largest areas of individual giving: churches/congregations and universities. People give to their alma maters and their churches for reasons in a universe completely separate from the one where the charity raters reside.
And a last thought: isn't it ironic that recent history has shown us that the much-vaunted rating agencies in the for-profit world not only failed to predict the failure of the big Wall Street firms, but actually contributed to the onset of the Great Recession by misleading the market? And so why do we think rating agencies are a good idea for nonprofits?
Our thanks in particular to the many readers who posted comments to Part I of this article, to the readers who allowed us to interview them (anonymously), and to Gayle Gifford in particular.
See also in Blue Avocado:
The Ratings Game, by Stephanie Lowell, Brian Trelstad, & Bill Meehan, in Stanford Social Innovation Review