If you have been in the nonprofit field for even a short
length of time, you have probably had at least one grant turned down for
"failure to provide sufficient outcome measurements". So what is
different in reporting results today versus, say, ten years ago?
There is considerably more emphasis being placed on
verifiable results now. If the recession did nothing else, it made everyone
aware that there isn't any money to waste. Grantors don't just want to hear a good story; they want to know they got the best possible results for their investment in your mission.
Let's do a
hypothetical study of the difference between then and now, using a remedial
tutoring program as an example. The "old" way of reporting program results focused more on how many people you served than on the long-term gains the participants achieved. This is sometimes referred to as "head count" reporting.
Two case studies of results reporting
The old way might have been to present some figures related to how many children in, say, grades three through six in a certain school system were not reading at grade level. This was the basis of the statement of need.
The goal of the program might be stated like this: "The
XYZ Reading Improvement program will teach 40 children how to read more
effectively by incorporating phonics into a remedial reading program."
To "prove the results" the old way, the organization might say something like " In FY 20xx we presented the program to 40 children at risk of failing a class due to
poor reading skills. By the end of the program the children reported that they enjoyed reading more and
were able to sound out new words themselves without asking for assistance. Report
cards showed improvement in reading comprehension by every student, and grade
equivalency improved at least one grade level for all students." Sounds pretty good, right?
Wrong, at least when measured against today's standards for
judging success.
Today, you need to be much more precise in documenting both the before and after results, and ideally you will do some sort of follow-up to see whether the students' improvement was maintained one, two, or three years into the future.
Your new outcome-based success explanation might look like this:
Our program enrolled 40 fifth-grade students in January of 20xx. The children were selected through referrals from the (County) District case worker, drawn from fifth-grade classes county-wide. Upon enrollment we tested the children's reading comprehension with the (state) reading equivalency test used by the (county) school district. The test was administered by Mrs. Doe, who is a state-certified testing moderator.
Our initial results, shown in the accompanying table, indicated that 13 children were reading at a third-grade level, 22 at a first-semester fourth-grade level, and 5 at a second-grade level. The children were divided into five eight-person groups, with one group meeting each weekday for one hour in the school library conference room.
In addition, we interviewed each child separately and asked them to tell us what part of reading was hardest for them. All of them commented that the words were hard and that they just skipped over the words they didn't know. When asked whether they sought help pronouncing the words, 80% said they preferred not to ask so they wouldn't "look stupid". All of them said they hated to read out loud. The beginning test results are shown in Exhibit B, Part 1.
During the ten-week session, we started with phonics, helping the children understand how letters sound, why they sometimes sound different, and how to sound out letters when they are combined into a word. Each week we tested the children with a list of ten words they had never seen before (see representative sample). Every child had to read out loud for at least five minutes a week.
In addition, all of the children showed poor comprehension levels. This was confirmed by asking the parents for comments, as recorded on student report cards or during parent-teacher conferences, which we then compared with the teachers' evaluations to gauge whether parents understood the challenges their children faced. In short, because the children were skipping words they couldn't read, they did not understand the information being presented.
Following the lesson plan (Exhibit A), we tested the children again at the end of the ten weekly sessions. Thirty-seven had improved their reading skills by at least one grade level, and all of the children had improved their comprehension skills by 38 to 64 percent, as shown by the graph in Exhibit B, Part 2.
Using interviews and
class discussions, we asked the children to tell us whether reading was any easier
for them now. Their collected responses are shown in Exhibit C.
In December of 20xx, at the end of the first semester of the next school year, 36 of the original 40 students were retested to see if they had retained the material and techniques taught the previous year (four children had left the county and could not be reached). All 36 were testing at grade level, and all reported that they now either enjoyed reading or found it less difficult than before they took the class. See Exhibit D for the one-year test results.
The Obvious Differences
Example one is a general statement not backed up with facts.
At best it presents anecdotal commentary,
and basically says "we got paid to teach 40 children and we did
that."
Example two presents not only a clear view of why the program was needed, but also statistical evidence, based on testing, that can be used both to justify the need and to prove that the program had a positive impact on the children, not just during the program but on into the next school year. The qualifications of the test administrator show that the tests were standardized, tied to the actual school environment, and not designed to make the nonprofit look good. Note the constant references to charts and graphs. While not absolutely necessary, presenting complex information in graphical formats makes it easy for the grantor to see that they invested their money wisely, and that means this nonprofit may very well receive support again.
Does method two require a lot more investment of time, and probably money? Absolutely. The problem has to be better defined, the methods more detailed, and above all the results must show some degree of real, ongoing change in a condition. Nonprofits that can't step up and embrace better outcome reporting are the ones that will be out of business very quickly.
Every program can have provable results. Think about what you want to accomplish, then design a way to prove the results and, preferably, to show that they produced some ongoing improvement in the problem. No one wants to keep throwing good money after bad, and it's your job to show grantors that their money was not wasted.