Elena Harman, PhD


Nonprofit Evaluation: How to Shift the Conversation from ROI to True Impact

You pursued development work to help a nonprofit you love keep the lights on and deliver programming to change the world. Your skill is in building relationships and communicating with individuals and institutions about why the nonprofit you love is amazing and why they, too, should support it. Everything is going well; you are doing a great job of securing funds for your nonprofit. All of a sudden, you start to see questions on your grant applications like, “What are the measurable outcomes your nonprofit is expecting to achieve?” and hear questions from donors such as, “How will I know my dollars are making a difference?”


Elena Harman, PhD, is the author of The Great Nonprofit Evaluation Reboot: A New Approach Every Staff Member Can Understand (2019, CharityChannel Press).

Ugh. Evaluation has thrown a wrench in your approach. But it doesn’t have to be all bad. With a few tweaks to how you think about, talk about, and use evaluation, you can transform it from a thorn in your side into a powerful tool that sets your funding requests apart from all the rest.

Below are my top tips for making evaluation work for you. I’ll include tips for answering tricky funder and donor questions, ways to incorporate evaluation in your external messaging, how to think about evaluation in funder requests, and some best practices for reporting your progress.

Read Between the Lines

Grant application and donor questions about evaluation can be a nightmare. Here are a few more examples of the many prompts that can rattle your cage:

  • What is your overall approach to evaluation?
  • How do you measure impact?
  • What are some examples of key evaluation results that demonstrate your impact?
  • Can you give me clear and specific anticipated outcomes that would result from my contribution?
  • How much of my donation will go towards programming versus overhead?

It’s hard to figure out what is being asked for overall, let alone how these questions differ from one another. The first thing to recognize is that funders and donors are not evaluation experts. They know impact matters, but they aren’t always skilled at framing questions about it.

The good news is, you are the expert in your organization’s work. You’ve seen the impact you’re having and the lives you’ve touched. If you provide high-quality, substantive responses—even if they are not quite answering the question that was asked—it can go a long way toward setting your funding request apart.

Remember, even though funders and donors often ask you to quantify things that are too hard (or impossible!) to measure, those questions share a common goal: to understand how strategic and functional an organization is. What they really want to know is how you use evaluation strategically to inform and improve your programming. Tell that story. Talk about how evaluation is integrated into your organization and how the findings are integrated into your program planning.

Conversations that focus on how much social good a program yields from a certain number of dollars are an uphill battle because the world is not that simple. You get stuck trying to wedge data into a formula that just doesn’t quite fit right. In my experience, the most successful evaluation conversations steer the conversation away from return on investment. When you focus on what you are learning, you bring more authenticity to your organization’s story, building trust with funders and donors alike.

Here are three areas of “low-hanging fruit” to spark your creativity about how to incorporate evaluation into your fundraising messaging. While they apply easily to individual donors, they work for conversations with funders too.

Support an Emotional Plea with Evaluation

Emotional staff pleas and client testimonials are unlikely to disappear from individual donor asks. But they aren’t enough. As donors grow more sophisticated and younger, impact-focused donors increase their charitable contributions, we are seeing a higher degree of skepticism about these emotional messages year after year. Once you’ve established the need for your services through emotion, use evaluation evidence to justify that you are the right organization to address the issue and to demonstrate that you are making progress. The ideal message for individual donors includes both: emotion gets you in the door, and evidence seals the deal.

Emphasize Evaluation as a Part of Programming

Sometimes I get pushback from development staff who don’t want to talk about evaluation because individual donors don’t like the idea that some of their money goes to non-programming expenses. My contention is that evaluation is a programming expense. Evaluation helps make sure the program is improving and achieving what the donors are supporting you to do. And that’s the message to use with individual donors: evaluation is a critical piece of our program, just like staffing and facilities. Without evaluation, we won’t be able to deliver on the expectations you have for us.

Be Prepared to Provide Some Light Evaluation Education

Some individual donors have taken the return on investment question a little bit too far. And unfortunately, being on the leading edge of how nonprofits use and communicate about evaluation requires some ongoing education for those who aren’t quite there yet. The overarching response to any pushback you get from individual donors about the specific return on their investment is that social impact doesn’t work like that. Complex issues require complex evaluation and messy work generates messy answers. Take this opportunity to help your donors understand a little more about how you think about evaluation. Be sure to emphasize the ways in which you do use evaluation strategically and the information that you can share about the impact of their investment when combined with other investments.

But What about Those Grant Applications?

At its core, evaluation is about learning and generating information to drive program improvement and monitor what difference the program is making. But when asked to identify “measurable indicators of impact,” our brains move toward things that are easy to measure, which is not the same as what really matters. The same thing happens when funders ask nonprofits to list expected outputs and outcomes. Does the number of people served really capture the impact of your program? What about the changes in the lives of those that you’ve served—where is that captured?

Avoid the Indicator Rabbit Hole

When you see a request for specific measurable indicators, or to distinguish between outputs and outcomes, don’t take the bait! Start first with figuring out what you really want to learn about your program and how evaluation can help with that. What do you wonder about your program at night? Which components of the program matter most? Which are you least sure about? Start by developing key evaluation questions to keep the focus of an evaluation on what matters. To help you get started, I cover key evaluation questions in my new book, The Great Nonprofit Evaluation Reboot: A New Approach Every Staff Member Can Understand.

Also, remember to stay high-level when crafting evaluation language for a proposal. Focus on your key evaluation questions and a general sense of what types of data will help answer those questions. Avoid promising specific methods and results, especially the kitchen-sink approach of listing everything that you could do.

Align Evaluation Expectations and Resources

I see a chronic mismatch between the evaluation nonprofits want and the resources they have. Evaluation is not free, whether done by an external evaluator or internally by the organization. Internally run evaluations cost the organization more in staffing time, whereas external evaluations require contracting dollars. And the two are not mutually exclusive: external evaluations still require staff time, most often for planning and data collection, and it can be wise to engage external support to build capacity for internal evaluation.

Consider the capacity of your team, both their evaluation expertise and their bandwidth, before deciding which route to go. The rule of thumb is that evaluation should be 10 percent of the program budget—not the grant budget—or $5,000, whichever is larger. I regularly hear, “But Elena, we don’t have that kind of money for evaluation,” to which I say, “If it is important enough to do, it is important enough to evaluate, and part of your role is advocating for the resources your nonprofit needs.” Funders are asking you for evaluation, so it’s time to ask them to put money behind it.
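The rule of thumb above is simple arithmetic, so it is easy to check against your own numbers. Here is a minimal sketch in Python (the function name is mine, not from the article):

```python
def evaluation_budget(program_budget):
    """Rule of thumb: evaluation should be 10 percent of the
    program budget (not the grant budget) or $5,000, whichever
    is larger."""
    return max(0.10 * program_budget, 5000)

# A $30,000 program falls below the floor, so the $5,000 minimum applies.
print(evaluation_budget(30_000))   # 5000.0
# A $200,000 program: 10 percent is $20,000, above the floor.
print(evaluation_budget(200_000))  # 20000.0
```

The point of the floor is that even small programs cannot evaluate meaningfully on a token budget; below a certain spend, the 10 percent rule alone would not buy useful data.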

Treat Evaluation as a Team Effort

Evaluation is a team sport. It cannot be planned, executed, or reported well alone. Be sure to involve program staff, grant managers, your executive director, and perhaps even your board in shaping your evaluation plans. The most important people to collaborate with are those responsible for implementing the program, as well as those charged with making sure the evaluation happens. You need to understand what capacity they have for evaluation, and they need to understand what is being promised in the grant. I also strongly recommend that you consult with an evaluator before submitting any plans. Evaluators can provide an important “reality check” on whether your plans and your resources are aligned.

You’ve Got Their Investment: How Do You Report Your Findings?

Once the funds are successfully delivered, you’ll want to share what you’ve learned in your grant report, donor communications, or annual report…and here come the evaluation questions again.

At this point, there is pressure not only to convince funders that you are worthy of funding, but that their support makes a difference: the framing of grant reports presupposes positive evaluation results! It feels like a losing battle: no evaluation has entirely good findings but presenting less-than-ideal evaluation results feels the same as saying that your organization is not effective.

More good news: By shifting the return-on-investment conversation to one of continually improving your impact and programming, you have set yourself up for successful reporting.

Every evaluation has results to be proud of and results for you to learn from, and that is exactly the point: negative results are not a bad thing. I know it doesn’t feel like that when you first see the data. You’ve worked hard and believe in the work you are doing. Finding out that things are not going as well as you had hoped can be soul crushing.

After you have seen the data, take one day before writing anything about it for a report. Then, with your program and leadership colleagues, capitalize on the opportunity that less-than-perfect results present: to learn, grow, and improve your programs. Because at the end of the day, funders do not want or expect perfect evaluation results. Funders want to hear what didn’t work and how you will learn from it in the future.

One simple way to start is by giving your data context. A report no-no I see constantly in nonprofit grant reports is what I call “data vomit”: taking a jumble of data and evaluation findings and scattering them throughout a report without any context or interpretation. This technique puts the burden on the reader to determine why those numbers matter. It also often keeps the nonprofit from sharing negative results, which is a missed opportunity to show how your program is learning and improving.

A better strategy is what I call a “data sandwich,” a concept I introduce in my book. A data sandwich has three parts: a conclusion, the supporting data, and a pretty picture. First, share one sentence about what you took away from the data; that is the conclusion. The second sentence should support your conclusion with data. The last element is a pretty picture related to your finding, so readers who are visual learners, or who want to see the full supporting data, can take it in easily.

[Figure: The Data Sandwich]

The data sandwich highlights the message and allows you to frame your insights to show what you are learning. It is a practical and transformative practice for making your written evaluation results as impactful and authentic as possible.

Make the Shift Towards Learning

When you infuse evaluation into your work, you improve the impact of your organization and present a compelling case to funders. So, as a fundraising professional, what are the three most important things you can do to advance your impact and make friends with evaluation?

  1. Push back on indicator/output/outcome/ROI questions. Shift your conversations, both internally and externally, to focus on what you are learning and how you are improving.
  2. Own your negative findings. You need negative findings to improve your work and show how functional and strategic your organization can be. Use data sandwiches to add context to what you’ve learned and build a stronger relationship with your funders and donors.
  3. Advocate for appropriate evaluation funding. Remember, evaluation should be integrated with any of the work you do to ensure that your work can continue to improve and make an impact.

Copyright © 1992-2019 CharityChannel LLC.

