Rebecca Vermillion Shawver, MPA, GPC
The Evolution of Evaluation: Part 2
In Part 1, I discussed how the evolution of evaluation is creating quite a stir within the grant community. Vast changes have taken place; what was once acceptable no longer is. In this part, I will address the changing duties of applicant organizations, the key relationship between evaluation and program logic models, and how logic models are being required more and more often by federal funding sources.
Just a Reminder
As a reminder, the biggest differences in the new federal evaluation requirements center on a change in philosophy. Under the new Uniform Grant Guidance, there is intensified scrutiny of both performance indicators and fiscal accountability.
In past years, the federal government claimed that both were the foci of their program evaluations and audits. However, government reviewers seldom scrutinized an awardee’s performance of contracted deliverables. Now, they will no longer be willing to ignore noncompliance. As grant recipients, we will be expected to meet full compliance with what we promised to perform in our original grant applications.
Changing Obligations of Grantees
By being more forthright about what they want to see in proposals, in many ways, the federal government is making our jobs easier. Federal funding agencies are no longer leaving much doubt as to what they expect grant professionals to tell them before an award is made and what we will be expected to comply with after an award is made. In essence, this really is a step forward in leveling the playing field amongst applicants.
Choices of Program Strategies
We are now being required to clearly describe and define what delivery methodologies we will employ in our proposed programs and why these specific strategies were selected.
- Evidence of the effectiveness of the selected strategies – If we want to include our own past strategies and activities, we will now need to document proven past successes with verifiable data in order to justify their use.
- Identification of best practices – We will need to be up-to-date on the most successful and effective best practices in our respective fields. New programs will need to be developed around these proven strategies. We need to remember that the federal government is not typically a funding source that supports unproven hypotheses. References and citations of published reports will be expected to support decisions to include particular strategies.
- Plan for replication of successful strategies – We are now being expected to plan for future replication of our successful programs and the specific components that have made them so. This means that we will need to name potential partners for future replication and/or internal expansion plans.
- Ambitious but attainable – Just recently, while attending an online webinar for Talent Search reviewers, I heard the words “ambitious but attainable” repeated several times. We were instructed that in this year’s competition, if applicants’ projected outcomes are not both ambitious and attainable, that section will be given a score of zero. This is just one example of how serious the federal government is getting regarding the prudent use of taxpayers’ dollars.
- Cost efficiency and effectiveness of strategies – New proposals will need to describe and compare the cost associated with the strategies we have selected, as well as discuss comparison costs of other possible strategies.
With a new emphasis on cost savings and increased “bang for their bucks,” the federal government will be looking much more closely at our requested budgets and how they relate to our program implementation plans. Thus, we will each need to create budgets that are directly related to the achievement of specific benchmarks and deliverables.
A few years ago, the U.S. Department of Labor required that a logic chart be included in a training grant in which my college was a partner. This chart linked the budgetary requests to specific outcomes and activities. This would have allowed the department to fund only one or two of our outcomes because the chart provided the information needed to know exactly how much each outcome was expected to cost. Luckily, our collaboration was fully funded.
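The budget-to-outcome linkage described above can be sketched in a few lines of code. This is a minimal illustration only: the outcome names, line items, and dollar figures below are invented for the example and are not drawn from any actual federal chart or template.

```python
# Hypothetical sketch of a budget linked to specific outcomes, in the spirit
# of the logic chart described above. All names and figures are illustrative.

budget_by_outcome = {
    "Outcome 1: participants complete training": [
        ("Instructor salaries", 60_000),
        ("Training materials", 8_000),
    ],
    "Outcome 2: participants obtain employment": [
        ("Job placement coordinator", 45_000),
        ("Employer outreach events", 5_000),
    ],
}

# A funder reading this structure can see exactly what each outcome costs,
# and could choose to fund only some outcomes.
for outcome, line_items in budget_by_outcome.items():
    total = sum(cost for _, cost in line_items)
    print(f"{outcome}: ${total:,}")
```

Structuring the budget this way makes the "cost per outcome" question trivial to answer, which is exactly the information the department wanted from the chart.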
We are now being required to include both grant management and fiscal management plans in our grant proposals. In fact, to be in compliance with the new Uniform Grant Guidance rules and regulations, applicants must have written procedures in place before applying for funds. In other words, we will need to have approved written procedures separate from the summary of our implementation plans that we will include in our proposals.
I realize that most of us are groaning at the thought of writing a formally approved set of fiscal and management plans. However, having these written documents actually supports an organization’s ability to stay in compliance with private, foundation, state, and local government regulations. So even if your organization doesn’t receive federal grant dollars, you need to have written procedures in place and readily available for any auditors who may come your way.
Program Logic Charts
If you aren’t a lover of program logic charts, it’s time to get to know them better and learn to at least like them. Federal funding agencies are embracing them more and more each day – and requiring them, in one form or another, to be included in grant proposals.
Besides, once you see how they connect your program outcomes and deliverables to your budget and your evaluation plans, you will most certainly see their value. They simply tie everything together in one neat package that is easily understood by your administrators, coworkers, and most importantly, your funders.
So, what are the benefits of program logic charts to you, me, and our organizations? Well, in my opinion, the most important benefits include:
- They illustrate the causal links between program activities and project outcomes and benchmarks.
- They strengthen relationships between project outcomes and implementation strategies.
- They clarify and support evaluation methods and results.
- They provide a direct tie between program strategies and budgetary requests.
Below, I have provided examples of three different types of program logic charts that my office and our partners have used in recent grant proposals. Each is quite different in format because they were designed to address the specific needs of the funding agency to which we were applying. Note that some of them are actually based on forms that were provided by the funder as a required attachment.
Example 1 is a simple logic chart. Note that there are no linkages between activities and outcomes to the budget. While this one is quite common, it will not meet the criteria of most federal funding agencies since the implementation of the new Uniform Grant Guidance.
Simple Program Logic Chart
| Inputs | Activities | Outputs | Short-Term Outcomes | Intermediate Outcomes | Long-Term Outcomes | Impact |
| --- | --- | --- | --- | --- | --- | --- |
| What you need for the program (money and other inputs) | What you will do to bring about desired changes | Measurements of activities | Initial changes in behavior or achievements (outcomes, success indicators) | Changes that are the focus of the program | Usually changes outside the time period of the grant | Overall goal |
Example 2 is the chart that I most commonly have used. Note that it has a formula inserted into the objectives fields. I have used this formula for more than twenty years. I learned it from a United Way of Indianapolis workshop that I took when I worked in the social service field.
It has proven immensely valuable because it contains a space for every required component of a comprehensive objective statement. Using it makes it nearly impossible not to write an objective that your funders will be pleased with.
More Common Program Logic Chart
|Program Goal Statement: ________________________________________|
|1. Outcome Objective: By ___, a minimum of ___ (__%) of program participants will have ________ as measured by ____________.|
|1.1 Process Objective: By ___, a minimum of ___ (__%) of program participants will have ________ as measured by ____________.|
|Measurement Tools & Data Collection:|
|1.2 Process Objective: By ___, a minimum of ___ (__%) of program participants will have ________ as measured by ____________.|
|Measurement Tools & Data Collection:|
Example 3 is based on a chart that the U.S. Department of Labor provided a few years ago for inclusion in a collaborative job training grant for which my college was a partner. I have modified it slightly to fit more general needs of any federal funding agency. Note that it is much more complicated at first glance. But upon closer examination, you will find that it includes all the pertinent information to show the relationship between requested funds and specific strategies and outcomes.
Program Logic Chart for Federal Proposals
|Strategy 1: Milestones and Costs|Year 1:|
|Strategy 2: Milestones and Costs|Year 1:|
If your organization is funded by any federal grant programs (or you hope to be in the future), I strongly recommend that you review these examples and learn how to complete them. I’m confident that your peer reviewers will be impressed by your ability to clearly provide all the needed information in one concise and clear table.
The Uniform Grant Guidance also addresses what types of data should be included in our evaluation plans. And while we all know that we must include formative and summative components, it is becoming clearer that the federal government wants more from us than a cursory overview of program impacts and documented best practices.
Below are some general evaluative measures that you should consider when creating your future evaluation plans. It is certainly not an exhaustive list; it is just a starting point to get us all thinking about how we will address the increased demand by federal agencies for more meaningful evaluation plans and reports.
- Quantitative measurements
  - What information will be collected?
  - How often will data be collected and reported?
  - How will the data collected be used to measure specific performance measures?
  - Will the data document the success of specific implementation strategies employed by your program? If so, how?
  - Who will manage and analyze the data? Will they be impartial evaluators?
  - How will the results be used to improve future program impact and to document the value of replicating program strategies?
- Qualitative questions
  - Will you develop specific questions that pertain to the overall impact of your program? If so, what questions will you address in your analysis?
  - In what format will you collect qualitative information?
    - Participant surveys or self-reports?
    - Partner surveys?
    - Focus groups?
  - How will results be used to improve future outcomes and to replicate program successes?
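For the quantitative side, the objective formula used throughout this article ("a minimum of ___ (__%) of program participants will have ________") lends itself to a simple automated check. The sketch below is a hypothetical illustration, assuming a bare-bones record format of participant IDs and achievement flags; it is not part of any federal template or reporting system.

```python
# Hypothetical sketch: checking a benchmark objective of the form
# "a minimum of N (P%) of program participants will have <achieved X>".
# The record format and example numbers are illustrative assumptions.

def objective_met(participants, achieved_flags, minimum_count, minimum_percent):
    """Return True if at least minimum_count participants AND at least
    minimum_percent of all participants achieved the outcome."""
    total = len(participants)
    if total == 0:
        return False
    achieved = sum(1 for p in participants if achieved_flags.get(p, False))
    return achieved >= minimum_count and (achieved / total) * 100 >= minimum_percent

# Example objective: "a minimum of 40 (80%) of participants will have
# completed training." Here 42 of 50 participants (84%) completed it.
participants = [f"p{i}" for i in range(50)]
achieved = {p: i < 42 for i, p in enumerate(participants)}
print(objective_met(participants, achieved, 40, 80))  # prints True
```

Requiring both the raw count and the percentage to be met mirrors the template's "a minimum of ___ (__%)" wording, where both blanks state part of the benchmark.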
In Part 3, I will cover common risk factors that could increase the likelihood of your agency being audited, common grant myths not to fall for, and the top audit findings that you need to avoid.