Six Steps to Effective Program Evaluation: Step 5 – Plan Your Methods

I just attended the 2016 European Evaluation Society conference in Maastricht, the Netherlands. One of the themes focused on designing evaluation methods to fit the purpose of the evaluation.

It may sound obvious, but many evaluations are not as useful as they could be because their methods don’t fit their purpose.

(For this article, I use the word “methods” in the nontechnical sense of what exactly the evaluators will do to conduct the evaluation.)

Better-fitting Methods Provide More Useful and Convincing Results

Too many evaluations fail to provide useful and convincing results because they don’t use the right kind of methods to answer the important evaluation questions.

One example is an evaluation I saw of a nonprofit organization’s program to help people increase their home safety. The organization’s leaders wanted to know two things:

  • Was the program having the effects they thought?
  • What would help to strengthen the program?

Yet they chose to conduct a comparison group study, which could address only the first question of measuring effects. They did not include methods that can reveal ways to strengthen a program, such as key informant interviews. So they didn’t get useful answers to the second, very important question.

To get convincing and useful results, your methods must also fit a realistic understanding of how your program functions. For example, a while back I saw some studies of a health care policy that increased costs in the first couple of years but led to cost savings after a few more years. Studies that looked at cost effects only for the first two years showed only the initial cost increases. Only longer-term studies were able to show the policy’s true effect: cost savings.

Strengths and Weaknesses of Six Frequently Used Methods

When you need a solid evaluation to get practical, relevant information for managers and funders, no one method works for all purposes. Each method has its strengths.

Here’s a list of strengths of six frequently used evaluation methods, to help you determine which method(s) might be right for you. For more information about these methods, see our “Evaluation Methods Cheat Sheets.”

Surveys

Surveys are good at…

  • Answering many evaluation questions at once
  • Getting perspectives of all stakeholders
  • Quickly collecting information from a large number of people

Administering surveys two or more times, such as before and after a new intervention, can show changes in knowledge, abilities, or beliefs.
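
If you want a quick look at how much scores changed between two survey administrations, a simple paired comparison can help. Here is a minimal sketch in Python, assuming a hypothetical CSV file in which each row holds one respondent’s pre- and post-intervention scores:

```python
# A minimal pre/post survey comparison. The CSV file and column names
# below are hypothetical; adapt them to your own data.
import pandas as pd
from scipy import stats

responses = pd.read_csv("survey_responses.csv")  # one row per respondent

pre = responses["pre_score"]    # scores before the intervention
post = responses["post_score"]  # scores after the intervention

print(f"Mean score before: {pre.mean():.1f}")
print(f"Mean score after:  {post.mean():.1f}")

# Paired t-test: did the same respondents' scores change significantly?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```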

Focus Groups

Focus groups are good at…

  • Answering many evaluation questions at once
  • Getting perspectives of all stakeholders
  • Getting in-depth understanding of topics
  • Getting input from a small group who can be brought together

Key Informant Interviews

Key informant interviews are good at…

  • Answering many evaluation questions at once
  • Getting perspectives of all stakeholders
  • Getting in-depth understanding of topics
  • Getting the perspectives of key people with knowledge of your program and topic

Observation

Participant observation (in which the evaluator participates in the program) is good at…

  • Gaining insider perspective on people’s experiences
  • Collecting information in situations when people would be uncomfortable with an outside observer

Unobtrusive observation (“fly on the wall”) is good at…

  • Gaining insight into contexts and processes behind outcomes
  • Discovering any differences between stated behavior (from surveys or interviews) and actual behavior
  • Exploring topics that people can’t easily report

Review of Program Documents

A review of program documents is good at…

  • Using existing data
  • Not requiring recruitment of research participants
  • Using qualitative data (e.g., narrative reports), quantitative data (e.g., test scores), or both

Randomized Experiments

Randomized experiments (also called randomized field trials or randomized controlled trials) are good at…

  • Testing the effects of a single, isolated intervention on a single, isolated outcome
  • Showing how much impact an intervention had on its expected outcomes
  • Showing that what’s being tested (and not something else) caused that impact
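
To make this concrete, here is a minimal Python sketch of a randomized experiment’s core logic, using entirely made-up participant names and outcome data: randomly assign participants to a treatment or control group, then compare the groups’ average outcomes.

```python
# A minimal sketch of a randomized experiment's core logic.
# All names and outcome data here are made up for illustration.
import random

participants = [f"person_{i}" for i in range(200)]
random.shuffle(participants)  # random assignment removes selection bias

treatment = set(participants[:100])  # receives the program
control = set(participants[100:])    # does not

# In a real evaluation you would measure outcomes after the program;
# here we simulate an outcome score with a built-in 10-point effect.
outcome = {p: random.gauss(60 + (10 if p in treatment else 0), 15)
           for p in participants}

avg_treatment = sum(outcome[p] for p in treatment) / len(treatment)
avg_control = sum(outcome[p] for p in control) / len(control)
print(f"Estimated program effect: {avg_treatment - avg_control:.1f} points")
```

Because participants are assigned to groups at random, a sizable difference between the group averages can be attributed to the program itself rather than to pre-existing differences between the groups.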

Three Places to Find Methods and Data

Here are three places to find methods and data that can help you get good results, along with free online resources for your continued exploration.

Methods Using Your Existing Data

I’ve found that many organizations have data they’ve already collected but haven’t used, often because they lack the time or technical expertise to conduct in-depth analyses of the data.

Before embarking on new research, a useful preliminary step is to inventory your existing data. Make a list of all the data you’ve collected and what questions they address. These data sources may include, for example, responses to past surveys your organization may have conducted, feedback forms, program datasets, or staff narrative reports.

You may be able to conduct statistical or qualitative analyses of your own data to help answer some of your questions. Using existing data is often faster and less expensive than collecting new data.
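
As one illustration, a quick first pass over a stack of feedback forms might look like the minimal Python sketch below; the file name and column names are hypothetical stand-ins for your own data:

```python
# A minimal first pass over existing feedback-form data.
# The file name and column names are hypothetical.
import pandas as pd

feedback = pd.read_csv("feedback_forms.csv")

# Quantitative: average satisfaction rating for each program year
print(feedback.groupby("year")["satisfaction_rating"].mean())

# Qualitative: pull open-ended comments that mention a topic of interest
safety_comments = feedback[
    feedback["comments"].str.contains("safety", case=False, na=False)
]
print(safety_comments["comments"].head(10))
```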

Related Research

Another place to find evaluation methods to consider is your review of related research. Past studies that explored related questions can show what methods others used, the benefits and challenges they encountered in using those methods, and researchers’ recommendations for future studies.

Evaluation Resources

Below are a few websites that provide free resources on evaluation and research methods. These are another good place to look for methods that might work for you, including both conventional methods and lesser-known but often valuable ones.

  • The BetterEvaluation website is an international collaboration to share information about evaluation methods. It provides information on evaluation approaches, options, themes, and resources.
  • The Free Resources for Program Evaluation and Social Research Methods website provides links to many freely available resources on protecting the public, research and evaluation methods, and related topics. It is an ICAAP (International Consortium for the Advancement of Academic Publication) supported site.
  • A group of American Evaluation Association (AEA) members are developing a publicly available clearinghouse of resources on evaluation capacity building. Tom Archibald’s October 4, 2016, blog post on the AEA website provides links to some evaluation capacity building resources they’ve collected.

Whatever method(s) you use, make sure that each of your evaluation activities is useful for fulfilling your purpose. We get better results by looking at the data through different methodological lenses! If you need help, an external consultant with expertise in evaluation can help you shape an evaluation that works for you.

About the Contributor: Bernadette Wright

Bernadette is Director of Research & Evaluation at Meaningful Evidence, LLC, where she helps nonprofit organizations with program evaluation and measurement so they can use that information to increase the success of their programs and the communities they serve.

For two decades, Bernadette has managed and conducted research for nonprofit, government, and business organizations in health care, aging, education, and other fields.

She is the author of over 50 publicly available client reports and peer-reviewed papers. She also writes guest posts for blogs such as the Foundation Center of Washington, DC, blog and the American Evaluation Association’s AEA365 blog.

Bernadette also frequently presents at national and local workshops and meetings, such as a Center for Nonprofit Success workshop on Program Evaluation in Washington, DC.

She was recognized for conducting an “Exemplar Evaluation” at the 2015 American Evaluation Association Conference in Chicago and received a “Best Paper” award at the 2015 Association for Business Simulation and Experiential Learning Conference in Las Vegas.

Bernadette is an active member of the American Evaluation Association and its local affiliate, Washington Evaluators. She earned her PhD in Public Policy/Program Evaluation from the University of Maryland in 2002.
