
Making Sense of Summative Evaluation: Three Tips for Making Those “Strings” Work in Your Favor

Summative evaluation (a complement to formative evaluation) answers the question, “How are people’s lives (or communities, or animals, or the environment) different because of our program?” Some people refer to summative evaluation as impact or outcomes evaluation. Its purpose is to determine if your program is effective in making meaningful change for participants over the short and long term.

Summative evaluation looks at the changes among program participants or community conditions. Typically, these include changes in knowledge, attitudes, and behaviors. In practical terms, it can tell us:

  • If participants who quit smoking are still non-smokers one year later.
  • The change in students’ aggregate body mass index scores after a fitness program.
  • The decrease in stray or abandoned animals picked up by municipal agencies.

Sadly, summative evaluation is usually what administrators are referring to when they bemoan the “strings” attached to grant funding. But every nonprofit should implement comprehensive summative evaluation regardless of its funding sources. Outcome measures shouldn’t be considered “strings”; they should be standard operating procedure for nonprofits.

Commonly, nonprofits that do not implement formal evaluation systems suffer in fundraising compared to better-prepared organizations that can concisely and compellingly communicate the meaningful results of their programs. How else could you document how well your programs help people in the ways you think they do?

Elements of Summative Evaluation

Summative evaluation can be explained most easily using a logic model—with which many grant writers are familiar. A logic model is a tool nonprofit leaders, program managers, and grant writers use to illustrate the relationship between program activities and intended outcomes. These models provide direct and systematic ties between the goal, inputs, participants, and meaningful outcomes. These charts clearly illustrate a program’s plan and activities for team members, funders, and donors. Additionally, logic models are invaluable communication tools when advocating through public channels (such as social media, in-person events, publications, etc.).

If you google the words “logic model,” you’ll find a variety of templates in Excel and Word available on the Internet. While there are some minor differences, most include the elements that I’ve listed below. Please note: a number of the elements included in the logic model are gathered through formative evaluation, which may seem to have no place in a discussion of summative evaluation. However, formative and summative evaluation are complementary. Formative evaluation ensures we’re allocating resources appropriately and identifies areas for program improvement. Thus, I have identified elements of both below.

Inputs: Costs, time, and personnel

This is an element of formative evaluation and is assessed by piloting the program before full-scale implementation. This section of the model answers the question, “What resources were used to implement the program?”

Activities: Workshops, training programs, case management, outreach, etc.

Another element of formative evaluation, this is simply a description of your project plan. In what activities did the participants engage through the program? This section of the logic model provides concise, step-by-step descriptions of the activities in your program and how participants move through them.

Participants (also called outputs): number of people, their characteristics

This is where summative evaluation begins. Outputs are just the beginning of your evaluation, and many writers depend on them far too much in grant proposals. Outputs tell us how many people participated in your program or how many brochures you distributed. They do not, however, tell us how your program made changes that brought meaningful results to your constituents.

For example, let’s say we recruit fifty people to participate in a smoking cessation class. That sounds great, right? But what does that really tell us? From this output, we know that fifty smokers showed up in a room on the first night of class. How many finished? How many people who finished the class actually quit smoking? And, perhaps more importantly, how many of those who quit didn’t resume smoking within six months, one year, or two years after completing the class? These are the summative measures that provide meaningful points for evaluation.
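The smoking cessation example above comes down to simple arithmetic: each summative measure is a ratio taken further along the chain than the raw output. Here is a minimal sketch with hypothetical counts (the fifty enrollees come from the example; every other number is illustrative only):

```python
# Hypothetical smoking-cessation cohort: the output vs. summative outcomes.
enrolled = 50            # output: smokers in the room on night one
completed = 38           # illustrative: finished the class
quit_at_completion = 22  # illustrative: not smoking at the final session
still_quit_6mo = 15      # illustrative: still not smoking six months later

completion_rate = completed / enrolled
quit_rate = quit_at_completion / completed
sustained_rate = still_quit_6mo / quit_at_completion

print(f"Completed the class: {completion_rate:.0%}")            # 76%
print(f"Quit at completion: {quit_rate:.0%}")                   # 58%
print(f"Still not smoking at six months: {sustained_rate:.0%}") # 68%
```

Note how each rate answers one of the questions in the paragraph above; the output alone (fifty enrollees) answers none of them.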

Reactions of participants: degree of interest, likes and dislikes

Satisfaction is not an inherently strong measure for summative evaluation, but it is important to formative evaluation. Participant satisfaction surveys tell us how recipients reacted to a program. You can use these results to modify the program and to improve your ability to meet participant needs. You can also use satisfaction survey results as a summative evaluation measure if one of your outcome objectives is to attain a certain level of participant satisfaction with the program. Again, it’s not an inherently strong measure unto itself, but in tandem with other changes it can be revealing.
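To make the dual role of satisfaction data concrete, here is a small sketch of summarizing survey responses two ways: a mean rating for formative feedback, and a pass/fail check against a stated satisfaction objective for summative use. The responses and the 80% target are illustrative assumptions, not figures from any real program:

```python
# Hypothetical 5-point Likert responses (1 = Strongly Disagree,
# 5 = Strongly Agree) to the statement "This program met my needs."
responses = [5, 4, 4, 3, 5, 2, 4, 5, 4, 3]

mean_score = sum(responses) / len(responses)
pct_favorable = sum(r >= 4 for r in responses) / len(responses)

print(f"Mean rating: {mean_score:.1f} / 5")      # formative: 3.9 / 5
print(f"Rated 4 or 5: {pct_favorable:.0%}")      # 70%

# Summative use only works against a pre-stated outcome objective,
# e.g. "at least 80% of participants rate the program 4 or higher."
met_objective = pct_favorable >= 0.80            # False here
```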

Changes in knowledge, perceptions, attitudes, skills, and behaviors: short-term outcomes

Some grant professionals refer to the evaluation of short-term outcomes as impact evaluation. This is an assessment of how a program or intervention changed participants’ knowledge, perceptions, attitudes, skills, and/or behaviors immediately following the implementation of a program’s activities. These changes are generally assessed through pre-testing (before the activity), post-testing (immediately after the activity), and retention testing (six months or more after the activity). Here are some examples of how you can assess short-term outcomes.

  • Knowledge – Multiple choice tests (such as proficiency testing in schools)
  • Perceptions and Attitudes – Surveys with Likert scales (such as those used in online quizzes, where participants rate statements from “Strongly Agree” to “Strongly Disagree,” generally on scales offering five to seven choices)
  • Skills – Observed assessments (such as a driver’s test or physical display of skills to attain CPR certification)
  • Behaviors – Most often self-reported behavior changes (such as quitting smoking), but may be observed under controlled conditions (such as in drug rehab programs)
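The pre-/post-/retention pattern described above reduces to comparing average scores at each testing point. Here is a minimal sketch with invented knowledge-test scores; the participant counts and numbers are illustrative only:

```python
# Hypothetical pre-/post-/retention knowledge-test scores (0-100)
# for five participants; all numbers are illustrative.
scores = {
    "pre":       [45, 52, 60, 38, 70],  # before the activity
    "post":      [72, 80, 85, 66, 88],  # immediately after
    "retention": [65, 74, 82, 60, 85],  # six or more months later
}

def mean(xs):
    return sum(xs) / len(xs)

immediate_gain = mean(scores["post"]) - mean(scores["pre"])
retained_gain = mean(scores["retention"]) - mean(scores["pre"])

print(f"Immediate gain: {immediate_gain:+.1f} points")            # +25.2
print(f"Gain retained at six months: {retained_gain:+.1f} points") # +20.2
```

The gap between the immediate and retained gain is exactly what retention testing exists to reveal: whether the short-term change holds after the program ends.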

As grant writers, we use hypotheses to craft outcome objectives: statements that predict what we expect to result from planned activities. For example, an outcome for a smoking cessation program could be as follows: by December 31, 2014, at least fifty people who complete the 12-week smoking cessation course will quit smoking for at least one year following the program, as evidenced by pre-, post-, and retention testing.

Social, economic, and environmental change: mid-term and long-term outcomes

This is the point at which the term “meaningful results” comes into play. Mid-term and long-term outcomes describe the enduring changes that result from participation in a program. For example, mid-term outcomes for a health project may assess changes in body mass index, cholesterol levels, or hemoglobin A1C levels in diabetes management. (These measurements are called indicators.) Related long-term outcomes look at the broader, population-based changes to public health that result from more people having a lower body mass index or regulating their blood sugar.

Creating an Effective Summative Evaluation Feedback Loop

Creating a logic model may be viewed as a simple process, much like painting by numbers. But there is a lot of creativity that can (and should) go into developing a comprehensive model that addresses the summative evaluation process. The international NGO World Vision has developed simple summative evaluation measures that facilitate clearer internal and external communication. As CEO Larry Probus says, “Measures can motivate.”

Their process consists of three steps:

  • Everything in the evaluation plan ties to the mission. When they decided to revise their evaluation plan, World Vision started with their mission statement and boiled down their work to its finest point, creating a logic model goal “to improve access to clean water for five million people in need over five years.” If what is in your evaluation plan is not easy for a reader to tie directly to your mission, you should eliminate it.
  • Knowing your capacity and limitations makes for more reasonable evaluation plans. Looking at what is feasible in a given locale or in a given time period can help nonprofit professionals make better evaluation plans. Is it feasible to collect written pre-/post-/retention tests from a population of people who are homeless? What are the best methods to assess knowledge among a transient student population? Is it feasible to collect extensive data over three to five years? Be reasonable and adjust your methods accordingly. Based on local capacity, World Vision developed the following indicator: access to clean water means “having a protected clean water source available twelve months of the year within a thirty-minute round-trip walk from a person’s household.”
  • Craft outcome objectives that are meaningful. This should be obvious, but sometimes nonprofits are bogged down in the minutiae of running programs. World Vision scrapped all their old outcome objectives and replaced them with three concise, easy-to-visualize objectives:
      • Reduce the daily walk for clean water to fifteen minutes each way.
      • Reduce the incidence of diarrhea and dysentery.
      • Decrease the number of girls who can’t go to school because they spend all day collecting water for their families.
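A well-defined indicator like World Vision’s is precise enough to score directly against survey data. The sketch below shows that idea with invented household records; the field names and numbers are assumptions for illustration, but the two thresholds come straight from the indicator quoted above:

```python
# Hypothetical household survey rows, scored against World Vision's
# indicator: a protected clean water source available twelve months
# of the year within a thirty-minute round-trip walk.
households = [
    {"walk_minutes_round_trip": 20, "months_available": 12},  # meets it
    {"walk_minutes_round_trip": 45, "months_available": 12},  # walk too long
    {"walk_minutes_round_trip": 25, "months_available": 9},   # source dries up
]

def meets_indicator(h):
    return h["walk_minutes_round_trip"] <= 30 and h["months_available"] == 12

with_access = sum(meets_indicator(h) for h in households)
print(f"{with_access} of {len(households)} households have clean-water access")
```

Because the indicator is binary and observable, two surveyors scoring the same household should reach the same answer, which is what makes the measure reportable to funders.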

World Vision’s simple but elegant summative evaluation measures are a good example of how to do summative evaluation right.

Formative and summative evaluations form the basis of any comprehensive evaluation plan. Consider how other organizations use these methods and research best practices in evaluation from government sources and private foundations. Use logic models because they are helpful tools for discussion, planning, and strategic visioning. Then apply the principles to your programs, and use the results of your evaluation for both fundraising and continuous quality improvement.


About the Contributor: Heather Stombaugh

Heather Stombaugh, MBA, CFRE, GPC is a nonprofit expert with 16 years of experience in leadership, program development, marketing, and fundraising. Heather practices and teaches integrated grant seeking, a process by which the strategy and tactics of fundraising, marketing, and grant seeking are coordinated to increase engagement, donations, and grant awards. Heather believes nonprofit writing requires an astute blend of art (pathos) and science (logos) to compel giving. Using this philosophy, Heather has secured more than $73 million for nonprofits across the country.

Heather is a nonprofit expert for About.com’s Nonprofit Charitable Orgs Channel, CharityHowTo, CharityChannel, and Thompson Interactive. She presents nearly every month as a plenary or keynote speaker at conferences around the country. She is the Vice Chair of the national Grant Professionals Foundation (as well as the chair of the Marketing and Impact Survey committees), the President of the Board of Baskets of Care (Toledo-based), the Scholarship Committee Chair of the Association of Fundraising Professionals of Northwest Ohio, and a member of the Social Value United States (national) Board. As an active member of the Grant Professionals Association, Heather serves as a peer-reviewer, the editor of the GPA Weekly Grant News, a webinar presenter, and a regular regional and national conference presenter. She is one of fewer than 400 Grant Professionals Certified (GPC) in the United States and one of fewer than 50 professionals in the world who hold both the GPC and CFRE certifications.

Through her clinical research experience, Heather is a published author in the peer-reviewed medical literature, with publications in the Journal of Emergency Medicine, American Surgeon, Prehospital Emergency Care, Journal of Community Health, and Journal of Nursing Administration. Heather’s experience in clinical research informs the Grant Professionals Foundation’s national Impact Survey, which led in 2015 to the publication of a Journal of the Grant Professionals Association article titled “Collective Value, Collective Power: One and Four-Year Comparative Analyses of National Grant Professionals Impact Survey Data.” Heather served as a Certified Clinical Research Professional for ProMedica Health System for five years.

Heather has committed her professional life to working with nonprofits, but she believes service starts at home. From creating an HIV prevention program as a Bowling Green State University freshman to helping coordinate her extended family’s team for the Findlay Walk MS, Heather teaches her children that service is a part of daily life. Her continuous dedication made her the 2012 Social Change Scholar at Walden University, where she completed her MBA in Leadership in 2014 (magna cum laude). Heather graduated with honors with a Bachelor’s degree in Education/Health Promotion from Bowling Green State University, where she has served as a mentor for the Honors Program.

Heather lives in Northwest Ohio with her husband and their two ginger-haired children, where they raise a flock of heritage birds and harvest bushels of vegetables and herbs on a mini-farm. In her spare time, Heather loves reading science fiction, learning about her family’s genealogy and genetic history, and watching documentaries about anthropology and archeology. Connect with Heather on LinkedIn.
