What Does it Take to Turn a Good Idea into Good Outcomes?
Every call for proposals or request for proposals asks for good ideas, innovative ways to create desired outcomes. Every proposal that is submitted to a funder outlines what the applicant thinks is a good idea and describes how the organization plans to implement it. So, what percentage of good ideas actually become good outcomes? Would you be surprised to find out that research has found that only 10-30% of good ideas become good outcomes? Are you interested in finding out more about what your organization can do to increase the odds?
When I first heard the following statistics, I was up in the middle of the night thinking about them.
- In business, change initiatives that are heavily dependent on people (reengineering, TQM, culture change) fail 80-90% of the time.
- Up to 70% of the failures in business are not due to poor strategy or a lack of good ideas, but to flawed execution.
- About 10% of what is taught in training gets transferred to the job (R. W. Rogers, 2002). With coaching, this percentage rises to 95% (Joyce and Showers, 2002).
While those statistics aligned with my experience, I hadn't realized the problem was so universal. If things are so ineffective, why don't we do things differently? We tend to ask the wrong questions, such as "How can we improve training?" when a better question is "Why aren't we providing coaching?"
The National Implementation Research Network (NIRN) is helping people think about how we can do things differently to achieve higher success rates. This group researched implementation across sectors and has published its findings in a monograph called Implementation Research: A Synthesis of the Literature. (http://www.fpg.unc.edu/~nirn)
This report outlines what is known about implementation, and suggests a framework for successful implementation that is based on seven “drivers” of implementation. The seven drivers are pre-service training, consulting and coaching, staff performance evaluation, decision support data systems, administrative supports, systems interventions, and focus on recruitment and selection of staff and participants. These are displayed in the following graphic:
A key focus of the document is to apply what is known about successful implementation to the design and implementation of new programs or system changes. There is also a focus on applying this knowledge to the adoption of evidence-based practices in service delivery systems. When you read the descriptions of what does not work when implementing a change or new program, it sounds familiar, because it describes most implementation plans. What this meant to me is that when our reliance on things that don't work is so universal, it is no wonder that there aren't better outcomes when implementing something new. Here are a couple of important bullets on what does not work when done alone:
- Dissemination of information by itself does not lead to successful implementation (research literature, mailings, promulgation of practice guidelines)
- Training alone, no matter how well done, does not lead to successful implementation
It is not enough to have a good idea or plan to adopt a program or best practice. You must have a successful implementation or there will not be good results. Effective intervention practices + Effective implementation practices = Good outcomes for consumers. While this might sound obvious, it is not generally reflected in the practices of the social sector when designing and implementing programs. In fact, someone told me recently that when they asked their evaluator to measure outcomes, they were told, "We know the program works." Well, they knew that it had worked in other places, but to leap to the conclusion that it was therefore working at their site was to completely ignore the fact that implementation matters.
Another piece of information that explains why traditional implementation plans are not so successful is the data from the learning pyramid. According to this research, the average retention rates for different methods of teaching are as follows: lecture 5%, reading 10%, audio-visual 20%, demonstration 30%, discussion group 50%, practice by doing 75%, and teaching others/immediate use of learning 90%. This also explains why disseminating information and telling people things does not result in good outcomes. The research on coaching makes the critical link from retaining information to actually using it.
Extensive research has found coaching to be effective. See the table below.
Significant finding: Of teachers who attended the same training, 95% of those who also received coaching used what they learned on the job while they were teaching, compared to 0-5% of those who did not receive coaching. Extensive research on coaching in a variety of fields shows similar success in changing behavior and applying training on the job as a result of coaching.
Some factors that contribute to the inability of people to apply what they’ve learned in training to their jobs WITHOUT coaching are as follows:
- Doing something new is difficult
- Asking someone to stop doing something they are proficient in (even if the results aren't good) and to start doing something they are just learning is also difficult
- Newly-learned behavior is:
1. crude compared to performance by a master practitioner,
2. fragile and needs to be supported in the face of reactions from consumers and others in the service setting, and
3. incomplete and will need to be shaped to be most functional in a service setting
So how can we apply this information to our efforts to make change or develop and implement new programs or practices successfully?
- Recognize that traditional efforts are not going to be effective the majority of the time, and
- Explore the inclusion of elements that have been proven to increase effectiveness.
If you’d like to know more about using the seven drivers of implementation you can read the Monograph.
In addition, you can explore adapting a set of questions that are based on these drivers when you plan, design, and implement changes. Attached is the link to this tool that was developed by the Kentucky Division of Mental Health and Substance Abuse. Feel free to post your comments on this tool.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation Research: A Synthesis of the Literature. University of South Florida.
Copyright © 1992-2019 CharityChannel LLC.