As any nonprofit client or prospective client of mine can tell you, I am committed
to nonprofits providing documented proof of legitimacy to donors.
I don't work with or support scammy appeals or
organizations, and I qualify my clients on many of the same criteria that
donors use. Why? Because if they can't meet minimum reporting requirements, they aren't going to find
funding. I do a lot of research to ascertain whether they have a realistic shot at succeeding
with their funding plans.
Unfortunately, some of the research resources available to
me don't cover most of the clients I serve.
I'm talking about the so-called charity rating or evaluation
sites and organizations. Most of them will not rate any charity that
files anything other than the full Form 990. They don't accept audited financial
statements, annual reports, or the 990-EZ, and certainly not the 990-N, the so-called
postcard return.
In effect, that means that if your revenues aren't already
well into six or even seven figures, you aren't going to get rated.
For instance, one of the best-known and most highly
respected evaluation organizations is Charity
Navigator (CN). I use them where applicable, and I support their mission to
bring transparency to the world of charitable giving, so I definitely don't
have an axe to grind with them. You can access information about some smaller
charities on their site; you just can't get a rating for them.
CN's rating criteria are an excellent example of the
shortcomings of the methods used to rate public charities. They don't rate
every public charity, or even close to it. They rate about 7,500 charities as
of July 2014, according to their own reporting. By contrast, the National Center for
Charitable Statistics (NCCS) lists
over 1 million 501(c)(3) IRS-approved organizations in the U.S. as of 2010.
You can't even get a rating from these organizations
unless your nonprofit meets certain criteria. For example, CN's website lists this
information about the qualifying revenue amounts and length of time in
business required to receive a rating:
"We do not
evaluate organizations that file the Form 990-EZ. The Form 990-EZ requires less
financial reporting than the Form 990, and as such, we would lack important
data needed in our analysis.
Sources of Revenue: Because
our goal is to help individual givers, we evaluate only those charities that
depend on support from individual givers. Specifically, we require public
support to be more than $500,000 and total revenue more $1,000,000 in the most
recent fiscal year. And we do not review charities that receive most of their
funding from government grants, or from the fees they charge for their programs
and services.
Length of Operations: We require 7 years of
Forms 990 to complete an evaluation."
In theory, I don't have a problem with revenue or time in
business being used as qualifiers. The 990 is a legal document, which implies
that the data in it is reliable, and that's a good thing. These aren't only
the criteria that the rating sites use; they're the criteria used by both grantors
and individual donors as well.
Substantial existing revenue and longevity certainly affect
the effectiveness and scalability of programs, so I understand why they are
important to rating organizations, grantors, and donors alike. It's good in
theory, as far as it goes.
When theory meets reality
Between theory and practice there often exists a wide gulf. In
practice, these ratings almost guarantee that substantial funding for smaller
nonprofits is, if not nonexistent, certainly drastically curtailed.
Again, it isn't the concept of ratings I oppose. Anything
that protects donors from the all-too-prevalent con artists using charity as a
cover story is a good idea. But the current approach leaves some serious gaps that put both
smaller nonprofits and the general public at risk.
As someone who works with both start-up and small but
established nonprofit clients, I can attest that this approach does nothing to
maximize diversity in problem solving.
In general, small nonprofits aren't particularly attractive
grantee prospects until they have been around for a while. Typically, about three
years is the point at which they are somewhat stable and have results to
show. That still leaves a big gap between three and seven years, precisely
the period when a nonprofit needs to be actively growing.
Obviously, these ratings favor larger charities. But is
larger always better?
Large organizations tend to be cumbersome for front-line
staff working at the grassroots level. There are so many layers of bureaucracy
to penetrate that some of the best solutions to problems never make their way
into the communities or demographics that the parent organization purports to
serve.
Still, size does matter. The larger the organization, the
larger the revenue line gets, and that supposedly equates to greater effectiveness.
For charities working at the local level, that simply isn't true. In a way,
when we donate only to the big, visible, and well-funded charities, we may in
fact be perpetuating problems rather than solving them.
At the same time, we want to know that the money we donate
isn't going to pay for a fancy car or a private island in the Bahamas for the
CEO or the charity founder. The only somewhat objective way we have to assess
legitimacy is through the rating organizations. It's a vicious cycle: you can't
get rated without money, and you can't get money without being rated.
Why should we care about some little grassroots charity?
All industries need new ideas and fresh approaches. The
communications industry did not develop the personal computer, the shoe-box-sized
mobile phone, or the smartphone. They exist because someone saw a need to
approach old problems in new ways. Every big company was once a small company.
Not every charity should survive, any more than every other
small business should. Be that as it may, there is a huge gap between 7,500
rated charities and 1,000,000-plus registered organizations. Somewhere in that gap
are good small organizations with the potential to become much greater forces for good.
If not this way, then how?
There has to be a better way to evaluate start-ups and
smaller organizations for effectiveness than revenue alone.
When revenue is the main criterion for even selecting a
charity for review, it does a disservice to everyone.
If a small or newer charity is taking in $50,000, growing
that figure by 10% year over year, and achieving maximum impact with that money, it may very well have discovered or
implemented an approach that could be repeated elsewhere if it can expand. It
can't build capacity without more money, and it can't get more money if
prospective donors can't research it. Capacity-building grants are rarely
awarded without considerable research by the grantor, and
most grantors start with the ratings organizations.
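To put rough numbers on that gap, here is a minimal back-of-the-envelope sketch in Python. It assumes the hypothetical $50,000 charity above growing at a steady 10% a year, measured against the $1,000,000 total-revenue threshold CN quotes; it is an illustration, not a claim about any real organization.

    # Hypothetical figures only: $50,000 starting revenue, 10% annual growth,
    # compared against CN's stated $1,000,000 total-revenue threshold.
    revenue = 50_000
    threshold = 1_000_000
    years = 0
    while revenue < threshold:
        revenue *= 1.10   # 10% year-over-year growth
        years += 1
    print(years)  # about 32 years before this charity would even qualify for review

At that pace, the charity wouldn't clear the revenue bar for roughly three decades, which dwarfs the seven years of Forms 990 that CN already requires.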
Instead of growing, many
of these smaller NPOs struggle along at the same level year after year, never realizing
their full potential. They can't hire better, more effective staff or attract
capital, because from a ratings standpoint they simply don't exist.
If the goal of charity evaluators is only to keep existing
organizations in business and stifle competition for already scarce support
dollars, they do a very good job. If they exist to help donors make informed
decisions about supporting new blood and better ways to solve problems, I rate
them at less than one star.
What's the answer?
There has to be a way for these evaluators to include a
"small organization" component.
Perhaps they could reduce or eliminate the revenue
requirement in favor of an effectiveness rating for smaller organizations.
Smaller nonprofits could receive a rating like
"effectiveness increasing," "effectiveness adequate," or "effectiveness
insufficient." The financial reporting might be accomplished by accepting
audited or accountant-reviewed financials or annual reports together with the
990-EZ. Or they could be put on a progress watch list: if they continue to report
stable performance for a period of time, they could receive an "OK to
donate" stamp.
What are your thoughts on the matter?