Wednesday, June 26, 2013

How to start a successful nonprofit

Thinking of starting a nonprofit?  Good for you!  Successful nonprofits provide goods and services to target populations that large-scale, one-size-fits-all solutions simply do not reach.

The key word is successful. No one starts a nonprofit organization intending to fail. Still, many do fail, or they fail to grow enough to actually achieve their mission. Here are some strategies to make sure that your nonprofit is not one of them.

1. Have a plan. Failure to plan is the number one reason new business ventures fail, and the skeleton or framework of a nonprofit is the same as that of any other business. Start with a strategic plan, and be honest when you assemble the facts and financial data for it. This is the time to exchange the rose-colored glasses for a microscope. Don't gloss over any problems or threats to success. Everything else will depend on how well you do this one step. If you don't know how to gather the data, or don't know what data to include, spend the time and/or money to find someone who does. A strategic plan is more than a template; it is up to you to make it a useful document. Step one of the plan should be step two of this article.

2. Assemble an effective board. The board is responsible for governance and guidance. It is not, and should not be, the staff that does the day-to-day tasks after the initial formation of the nonprofit. An effective board should be passionate about mission delivery, but it should also include people who have some expertise in the nuts-and-bolts processes of running a nonprofit or business.

3. Formulate a funding plan to meet your financial needs. Know from the beginning that even after you receive your 501(c)(3) determination letter, grants are going to be an unlikely source of funding for the first two or three years. You can't help anyone if you can't pay the bills.

4. Hire a qualified executive director/CEO. This should be the only employee that the board directly hires or engages. This person is the general contractor, if you will, for the organization. The ED is both the public face of the organization and the person responsible for the operational success of the mission. Many founders try to be both a board member and the ED. While that may be viable for a few months, it is an emotional conflict of interest, and it generally results in neither job being done well. Even if the ED is initially a volunteer, have clear performance expectations and communicate them well.

5. Market your mission. No one is going to support you if they don't know about you. Hold an open house or an event to involve your community in your goals.

6. Do frequent assessments. Your strategic plan should have outlined the steps for success. Where are you in terms of achieving those steps?
   
7. Don't ignore negative trends. It is very tempting to focus only on the good things. Seemingly minor issues can quickly grow into major ones that can destroy your organization. If a problem seems to be present at every board meeting, it probably is a major issue. No problem should be present for more than two meetings without raising concern. Be proactive in addressing the issue and expect results, not excuses.

8. Be involved without resorting to daily micro-managing. That might seem to be in direct opposition to number seven, but it really isn't. If you created an effective strategic plan, you should have developed job descriptions and goals for your staff that will support the mission. If you review those goals often, you will know if something is wrong. Until then, let the people you have engaged do their job.

Following these steps will put your nonprofit on the road to success.  If you have questions, feel free to email me at granthelp@ida.net.  

Wednesday, June 19, 2013

AHA! - It costs money to run a nonprofit.

GuideStar, Charity Navigator, and the BBB Wise Giving Alliance recently put out a letter cautioning donors against placing too much emphasis on artificially low administrative costs. It seems that they finally recognized that ranking the effectiveness of nonprofits by how little they spend to keep their business operations running might be harmful to the health of nonprofits.

The problem is, they have spent many years carefully and repetitively convincing (dare I say brainwashing?) donors that there was some magic percentage that nonprofits should not exceed for administrative costs. It might be just about too late for these revered ranking organizations to have their "AHA" moment.

The truth is, just like any other business, nonprofits have to have a certain amount of money to keep their operations going, and that number, or more properly the ratio of administrative (overhead) to program costs, is going to differ according to the operational needs.

Nonprofits with large staffing requirements, expensive equipment like trucks or warehouse racking, or large buildings to house and distribute goods are going to spend a higher ratio of their funds maintaining those assets than a nonprofit that runs, say, a suicide hotline in the basement of someone's home. Both should be evaluated equally for funding, according to how effectively they accomplish their mission.

Very few businesses can operate efficiently and effectively while keeping overhead below 35%. All the ranking organizations accomplish by pressuring nonprofits to report those expenses at 10 or 20% of revenue is to promote creative bookkeeping. The result is that many good, useful, and effective nonprofits go under because they simply don't have the infrastructure to maintain the organization at even minimum operating efficiency.
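
To see how quickly a realistic cost structure crosses those "magic" thresholds, here is a minimal back-of-the-envelope sketch in Python. All of the dollar figures are invented for illustration; they are not drawn from any real organization.

    # Hypothetical annual expenses for a small nonprofit (all figures invented).
    program_costs = 150_000      # direct mission delivery
    admin_costs = 60_000         # rent, bookkeeping, insurance, utilities
    fundraising_costs = 25_000   # events, donor mailings

    total_expenses = program_costs + admin_costs + fundraising_costs
    overhead_ratio = (admin_costs + fundraising_costs) / total_expenses

    print(f"Overhead ratio: {overhead_ratio:.1%}")  # about 36.2% - already "too high"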

Another truly irritating metric that is often evaluated is whether key staff salaries are "too high". Too high compared to what?  It is undeniably true that some charitable organizations are simply a way for the founders and their cronies to get paid an exorbitant amount of money. But to say that the CEO of a nonprofit with 100 million dollars in revenue is overpaid just because of the dollar amount of their salary is ludicrous. If the CEO of a comparably sized for-profit is making $500K a year, why should the CEO of the nonprofit be expected to make a third of that figure?

When the salaries of nonprofit key staff are considered as a part of the whole, they often amount in the aggregate to less than 3% of total revenue. Try hiring a competent CEO, CFO, and COO for a for-profit while staying at that level. The turnover and retraining costs at nonprofits are the subject of much hand-wringing. Perhaps if the staff were compensated at levels comparable to for-profit industry standards and evaluated based on results instead of cost, that wouldn't be such a problem.

No one is advocating that nonprofits should spend the bulk of their donor support on overhead. That's as ridiculous as spending too little. There is a point where overhead, including fundraising costs, exceeds an acceptable level when compared to results. But starving any operation of reasonable operating capital is a sure way to put it out of business, and far too many nonprofits are facing that reality every day.


I'm ecstatic that these three organizations have finally recognized that their effectiveness metric is flawed. Hopefully they can apply some common sense to their operations and provide better guidance to donors in the future.

Thursday, June 13, 2013

Outcome Measurement on a budget

Several people have contacted me asking what software I would recommend for measuring outcomes. Some of them indicated they thought I didn't understand that small nonprofits have small budgets. I definitely do understand cost parameters. This doesn't have to be an expensive process.

Every mission focus and program is going to have different goals, and there are myriad testing standards for each. In the example I gave in the post "Measuring outcomes means more money for you", the nonprofit actually used the state testing criteria, which it got (free) from the state Department of Education website. You will have to decide what data collection method will produce the most credible results. For instance, if you deal with child hunger and nutrition, you might want to download the USDA nutrition tables (again, free) and structure your data collection so your results use the same headings. You may have to pay someone to administer standardized tests if those tests require some sort of certification of the results. Sorry, but not everything is free.

Other folks asked about how to present (report) the data. Again, there are several data reporting software packages out there, and yes, some of them are rather expensive. You don't need to spend a lot of money to design a viable report. You just have to understand the concept of concrete data, define the data report points, and capture the data.

Here is a simple example that incorporates tables you can easily create in any document program. Hopefully this takes some of the angst out of outcome reporting on a budget.

Easy Outcome Measurement Example

Goal - Description | Applicable Program | Target Population | Goal Met % | Goal Not Met % | Notes
Improve reading score to grade level after 13-week program is completed | Youth Reading Skills Tutoring - Session IA, June-Sept 1999 | 20 low-income youth, aged 6 to 12 | 84% | 16% | See Data Measurement Results

Sample of Data Capture Sheet at end of program

Participant ID | Pre-Test Reading Grade Level (State Competency Test XX-XXX) | Required Grade Level | Post-Tutoring Grade Level | Improvement Goal Met
YTP-0613-0101 | 3rd grade, 4 months | 5th grade, 8 months | 5th grade, 7 months | Yes, within margin
YTP-0613-0102 | 4th grade, 3 months | 6th grade, 8 months | 6th grade, 8 months | Yes
YTP-0613-0103 | 5th grade, 8 months | 6th grade, 8 months | 6th grade, 1 month | No
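
If you keep the capture sheet above as a simple CSV file exported from any spreadsheet or document program, a few lines of free scripting will produce the summary figures for you. This is a minimal sketch; the file name capture_sheet.csv and the column heading it reads are assumptions matching the sample table, not a prescribed format.

    import csv

    # Tally the "Improvement Goal Met" column of a hypothetical capture_sheet.csv
    # whose columns match the data capture sheet above.
    met = 0
    total = 0
    with open("capture_sheet.csv", newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            # "Yes" and "Yes, within margin" both count as met.
            if row["Improvement Goal Met"].strip().lower().startswith("yes"):
                met += 1

    print(f"Goal met: {met / total:.0%}")               # feeds the Goal Met % column
    print(f"Goal not met: {(total - met) / total:.0%}")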

Monday, June 10, 2013

Why do you need audited financials?

This is a common question that I answer for prospective clients several times a month. 

1. Audited financials provide assurance that you have an actual bookkeeping system, have properly and honestly entered data, and have approved financial controls in place. That helps to reassure granting agencies that you will use their funds properly.

2. Audits are not cheap. If you can afford annual audited financials, you are more likely to be financially solvent and organized enough to meet other grantor criteria.

3. Financial statements give the grantor an idea of your organizational costs vs. program costs.

4. Financials provide a rough idea of what other types of income you are receiving. No grantor wants to see that grants are the sole source of support for your nonprofit.

5. Grantors, like any other donor, should use due diligence (verifying that your agency is financially solid and effective) to decide if you are a safe place to invest their funds. Organized granting agencies, be they government or private, are required to do due diligence. Audited financials are part of that due diligence.

Many of the questions I get also ask why their in-house bookkeeper or an accountant on the board can't do the audit. The purpose of the audit is, quite simply, to be sure you are telling the truth, the whole truth, and nothing but the truth regarding your financial position, to ensure that you have financial control procedures in place, and to confirm that those controls are actually used. Therefore, audited financials must be prepared by an accountant who is removed from the day-to-day operation of your agency and can provide objectivity.

The financial statements also provide the input for deciding whether you should file the long-form 990 or the so-called postcard (the 990-N, actually an online form). Having audited statements makes filing the 990 much easier. The accountant who does the audit can also file the 990, although there is usually a separate charge for doing so.
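
As a rough guide, the cutoffs are driven mainly by gross receipts and total assets. The thresholds in this sketch are the IRS figures as of this writing; verify them at irs.gov before filing, since they do change.

    def which_990(gross_receipts, total_assets):
        """Rough 990 form selector using IRS thresholds as of 2013 (verify at irs.gov)."""
        if gross_receipts <= 50_000:
            return "990-N (e-Postcard, filed online)"
        if gross_receipts < 200_000 and total_assets < 500_000:
            return "990-EZ"
        return "990 (long form)"

    print(which_990(gross_receipts=45_000, total_assets=20_000))    # 990-N
    print(which_990(gross_receipts=350_000, total_assets=100_000))  # 990 (long form)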

Rarely, a granting agency will accept an accountant's review letter from a comparatively new nonprofit (less than three years old). A review also has to be done by an outside accountant or accounting firm and requires much of the same raw data from you.

Audits typically take two to three months to complete for most agencies, although audits of large nonprofits can take much longer. This isn't a case of picking up the phone and hiring a firm to produce audited financials in a week. As soon as you close your current fiscal year's books, you should plan on starting the audit. If you don't have a formal bookkeeping procedure, most firms will not even bid on the job. And by all means check the firm's or individual's qualifications. You definitely want someone who has done nonprofit audits before.

When requested by a grantor, audited financials are not optional. If you don't have them, you will not be funded, and that may be the easiest way to understand the necessity of the process.

Monday, June 3, 2013

Measuring outcomes means more money for you.

If you have been in the nonprofit field for even a short length of time, you have probably had at least one grant turned down for "failure to provide sufficient outcome measurements". So what is different in reporting results today versus say, ten years ago?

There is considerably more emphasis being placed on verifiable results now. If the recession did nothing else, it made everyone aware that there isn't any money to waste. Grantors not only want to hear a good story, but they want to know they got the best possible results for their investment in your mission.

 Let's do a hypothetical study of the difference between then and now, using a remedial tutoring program as an example. The "old" way of reporting program results was more about how many people you served than what long term gains were achieved by the participants. This is sometimes referred to as "head count" results.
 
The old way might have been to present some figures related to how many children in say grades three through six in a certain school system were not reading at grade level. This was the basis of the statement of need.

Two case studies of results reporting
The goal of the program might be stated like this: "The XYZ Reading Improvement program will teach 40 children how to read more effectively by incorporating phonics into a remedial reading program."

To "prove the results" the old way,  the organization might say something like " In FY 20xx we presented the program to  40 children at risk of failing a class due to poor reading skills. By the end of the program the children  reported that they enjoyed reading more and were able to sound out new words themselves without asking for assistance. Report cards showed improvement in reading comprehension by every student, and grade equivalency improved at least one grade level for all students."   Sounds pretty good, right? 

Wrong, at least when measured against today's standards for judging success.
   
Today, you need to be much more precise in documenting both the before and after results, and ideally you will do some sort of follow-up to see if the student improvement was maintained one, two, or three years into the future.

Your new outcome-based success explanation might look like this:

Our program enrolled 40 fifth-grade students in January of 20xx. The children were selected through referrals from the (County) District case worker, drawn from fifth-grade classes county-wide. Upon enrollment we tested the children's reading comprehension using the (state) reading equivalency test as used by the (county) school district. The test was administered by Mrs. Doe, a state-certified testing moderator.

Our initial results, shown in the accompanying table, were that 13 children were reading at a third-grade level, 22 at a first-semester fourth-grade level, and 5 at a second-grade level. The children were divided into five eight-person groups, with one group meeting each weekday for one hour in the school library conference room.

In addition, we interviewed each child separately and asked them to tell us what part of reading was the hardest for them. All of them commented that the words were hard and that they just skipped over the words they didn't know. When asked whether they sought help pronouncing the words, 80% said they preferred not to ask so they wouldn't "look stupid". All of them said they hated to read out loud. The beginning test results are shown as Part One of Exhibit B.

During the ten-week session, we started with phonics, helping the children to understand how letters sound, why they sometimes sound different, and how to sound out the letters when they are combined into a word. Each week we tested the children with a list of ten words that they had never seen before (see representative sample). Every child had to read out loud at least five minutes a week.

In addition, all of the children showed poor comprehension levels, and this was confirmed by parents' comments on student report cards and during parent-teacher conferences. These were compared with each student's teacher evaluations to gauge whether parents understood the challenges their children faced. In short, because the children were skipping words they couldn't read, they did not understand what information was being presented.

Following the lesson plan (Exhibit A), we tested the children again at the end of the ten weekly sessions. Thirty-seven had improved their reading skills by at least one grade level, and all of the children had improved their comprehension skills by 38 to 64 percent, as shown by the graph in Exhibit B, Part 2.

Using interviews and class discussions, we asked the children to tell us whether reading was any easier for them now. Their collected responses are shown in Exhibit C.

In December of 20xx, at the end of the first semester of the next school year, 36 of the original 40 students were retested to see if they had retained the material and techniques taught the previous year (four children had left the county and could not be contacted). All of the children were testing at their grade level, and all reported that they now either enjoyed reading or found it less difficult than before they took the class. See Exhibit D for the one-year test results.

The Obvious Differences

Example one is a general statement not backed up with facts. At best it presents anecdotal commentary, and basically says "we got paid to teach 40 children and we did that."

Example two presents a clear view of not only why the program was needed, but statistical evidence based on testing that can be used to both justify the need and prove that the program had a positive impact on the children not just during the program, but on into the next school year. The qualifications of the test administrator show that the tests were standardized, relative to the actual school environment, and not subject to being designed to make the nonprofit look good. Note the constant references to charts and graphs. While not absolutely necessary, presenting complex information in graphical formats makes it very easy for the grantor to see that they invested their money wisely, and that means this nonprofit may very well receive support again.

Does method two require a lot more investment of time, and probably money? Absolutely. The problem has to be better defined, the methods more detailed, and above all the results must show some degree of real, ongoing change in a condition. Nonprofits that can't step up and embrace better outcome reporting are the ones that will be out of business very quickly.
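
To make "provable results" concrete, here is a minimal sketch of the arithmetic behind example two. The student records are invented for illustration, with grade equivalents expressed as grade.month decimals (3.4 means third grade, fourth month):

    # Hypothetical pre/post/follow-up scores in grade.month form (data invented).
    students = [
        {"id": "S01", "pre": 3.4, "post": 4.6, "follow_up": 4.8},
        {"id": "S02", "pre": 4.3, "post": 5.5, "follow_up": None},  # moved away
        {"id": "S03", "pre": 2.8, "post": 4.1, "follow_up": 4.5},
    ]

    improved = sum(1 for s in students if s["post"] - s["pre"] >= 1.0)
    followed = [s for s in students if s["follow_up"] is not None]
    retained = sum(1 for s in followed if s["follow_up"] >= s["post"])

    print(f"Improved >= 1 grade level: {improved} of {len(students)}")
    print(f"Reached at follow-up: {len(followed)} of {len(students)}")
    print(f"Held or extended gains: {retained} of {len(followed)}")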


Every program can have provable results. Think about what you want to accomplish, and then design a way to prove the results and, preferably, show that the results provided some sort of ongoing improvement in the problem. No one wants to keep throwing good money after bad, and it's your job to show grantors that their money was not wasted.