Let’s Use Florence Nightingale’s Secret Weapons
Why Nonprofits Should Visualize Their Data
As nonprofit professionals, you and I would do well to borrow some strategies from Florence Nightingale’s playbook. She skillfully deployed two “secret weapons” that nonprofit managers and fundraisers today also can use to great effect.
Florence Nightingale Deploys Secret Weapons
Florence Nightingale became a nurse to serve others yet realized she could provide care more effectively with the help of data. Working with a statistician named William Farr, Nightingale analyzed mortality rates during the Crimean War. They discovered that most of the soldiers had died not in combat but from “preventable diseases” caused by poor hygiene.
She decided to make her case for more sanitary conditions with pictures “to affect thro’ the Eyes what we fail to convey to the public through their word-proof ears.” To do so, she invented the polar area chart, a variant of the pie chart. Each slice of the pie showed deaths for one month of the war, growing larger if the deaths increased, and color-coded to show the causes of death (blue: preventable, red: wounds, black: other). Clearly seeing the importance of hygiene, the queen and Parliament quickly set up a sanitary commission, and death rates fell.
Secret Weapon #1: Data
Nightingale’s first “secret weapon” was data. Nonprofits have it in abundance, but it is often packed away in databases and spreadsheets, collecting virtual dust.
Most nonprofits are awash in data: participant data, program data, financial data, sales data, fundraising data. Nonprofits are drinking from a fire hose, and the water pressure is building. We are scrambling just to find enough bandwidth to store our data. But once stored, we often ignore it, making the data a secret even to ourselves.
We may pay lip service to “evidence-based practices” or “data-driven strategies” or even borrow acronyms like ROI (return on investment) and KPI (key performance indicator) from the for-profit world. But, if pressed, many nonprofit managers admit that they are not data people. They care about the people and the programs and glaze over at the sight of a spreadsheet.
Let’s face it. Most of us become glassy-eyed when asked to make sense of a spreadsheet. And there should be no shame in that. We are wired that way. (More on our wiring in a minute.) Right now, let’s consider some other reasons nonprofits do not make good use of their data:
- Data Naiveté or Aversion. Nonprofit staff members, even those with many years of experience and track records of success, often have a data aversion. Many candidly admit—or sometimes proudly proclaim—that they are not “numbers people.”
- Time Crunch. Staff members do not have time for data. They are struggling to stay afloat, to submit the next proposal, to maintain programming, to address the huge and varied needs of their clientele, to cultivate their donors. Digging through data is on the back burner.
- Fear. Some understandably worry what their data will show. They fear that they won’t be able to control the story, that the data will be taken out of context, that funders will withdraw support based on data.
- Dirty Data. Many nonprofits have low-level staff entering data into management information systems or spreadsheets. Others have multiple staff members entering data. The result can be dirty data, data that is inaccurate because it has not been entered consistently. For example, if a participant is entered twice into a database, once as Michael Smith and again as Michael B. Smith, then tracking this participant’s progress is going to be difficult.
- Wrong Data. While many nonprofits have data on their participants and financials, they often lack data to show their impact. A tutoring program may not have its participants’ school grades or test scores. An employment program may not have data on former participants’ wages over time.
- Disconnected Data. Rather than having a central management information system, small nonprofits may store their data on separate Excel spreadsheets. Michael Smith’s demographics might be on one sheet and his attendance in various programs on other sheets, making analysis of the relationship between, say, age and program participation impossible.
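The dirty-data problem above can often be surfaced with just a few lines of code. Here is a minimal sketch in Python (the records and the normalization rule are hypothetical, not a recommendation of any particular tool) of catching the duplicate-entry problem by normalizing names before matching:

```python
def normalize(name):
    # Lowercase, drop middle initials (single letters, with or without a
    # period), and collapse extra spaces so variant entries match.
    parts = [p for p in name.lower().split() if len(p.rstrip(".")) > 1]
    return " ".join(parts)

# Hypothetical database entries for the same participant.
records = ["Michael Smith", "Michael B. Smith", "Ana Lopez"]

seen = {}
for name in records:
    seen.setdefault(normalize(name), []).append(name)

# Any normalized name entered more than once is a likely duplicate.
duplicates = {k: v for k, v in seen.items() if len(v) > 1}
print(duplicates)  # {'michael smith': ['Michael Smith', 'Michael B. Smith']}
```

Even a rough pass like this can tell you whether your participant records are clean enough to trust before you start charting them.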
These hurdles are understandable. But they stand in the way of significant progress. We ignore our data at our own peril. Our data is a valuable weapon in our arsenal (or, if you prefer, tool in our toolbox) because it provides the only way of knowing, really, if what we do every day is worthwhile.
Sure, we have our gut feelings, which should not be discounted. But your gut might say it’s all worthwhile and mine might say otherwise. Data brings us to a clearer shared understanding to which we can add our own knowledge, experience, and wisdom before making decisions.
Secret Weapon #2: Our Visual Superpowers
Nightingale’s second weapon was the visual superpowers of the queen, Parliament, and the public in general.
Our visual system has evolved, over millions of years, to process images essentially in parallel. We don’t read the Mona Lisa from top to bottom and from left to right. We take it all in together and understand, almost instantly, that this is a picture of a woman in front of a landscape, sporting a dark dress and an inscrutable smile.
Words and numbers, which only appeared within the last few thousand years, require us to scan individual characters one at a time, recognize them, and piece them together into words or values and then sentences or equations.
Data is encoded in words and numbers, making it difficult for us to extract the stories it can tell. However, if we use visual elements (like bars, pie slices, and sloping lines) to encode the data, the story comes into focus much more quickly.
Further, if we apply what is known about how humans process visual cues to our data visualizations, they are even easier to digest. For example, humans can more accurately discern positions along a common scale than angles, which is why it is much easier to compare the lengths of several bars on a bar graph than to compare the size of slices in a pie chart.
Data Viz Capitalizes on Our Secret Weapons
Data visualization, or data viz for short, capitalizes on both of our secret weapons and helps us overcome at least some of the obstacles to data use. Because it plays to our visual superpowers, it eases data naiveté and aversion.
Data viz also greatly speeds up our processing of data and so addresses, in part, the time crunch.
Moreover, data viz is a quick way to assess how dirty, disconnected, or irrelevant our data is. If the picture doesn’t look right or complete, then we need to do something to improve our data. Maybe that means collecting fewer data elements but doing it more accurately. Maybe we need to survey a small group of participants, visitors, or audience members to understand our impact better.
Fear of the consequences of using data is trickier to address. But we must assess this cost against the cost of NOT stopping to consider our data.
Every organization has its orthodoxies, but not all of them are true. Some organizations have beautifully depicted logic models showing how they assume their inputs lead to their outputs. These models might even be based on evidence, gathered from other organizations, about what works.
At the other end of the spectrum are organizations that run more on instinct. The bottom line is that we don’t know if our assumptions (regardless of how they are articulated or depicted) are correct in our particular organization or community until we look at the data. In other words, we can use the scientific method and put our assumptions to the test. If we cannot find evidence (aka data) that sufficiently refutes our assumptions, we can feel encouraged that we MIGHT be right — as long as new data doesn’t come along and undermine our beliefs.
Progress — in organizations and, indeed, in human history — often starts with the concession that we might be wrong. As Yuval Noah Harari suggests in Sapiens: A Brief History of Humankind, the scientific revolution was the point in history when “humankind admits its ignorance and (as a result) begins to acquire unprecedented power.” On a more modest scale, we can start asking questions like: What would we expect to see in the short and long runs if our programs work how we expect them to? And then we can look for data that either supports or refutes our expectations.
Admittedly, asking questions and collecting data to answer them can be time-consuming. But visualizing the data, as noted, speeds up the process. And if what we see is not the story we wish to tell, we can be assured that many funders just want to see us using data for continuous improvement rather than looking for ways to use data against us. Moreover, we can use data about the problem to boost funding as Florence Nightingale did more than 150 years ago.
From Florence Nightingale to Today’s Inner City Schools in Chicago
You may have heard of City Year. It places young adults serving as AmeriCorps members in national service to help students and schools in high-poverty communities.
As recently as 2014, City Year realized it wasn’t making good use of its data. This national organization had plenty of data but was using it mainly to report to funders and for high-level decisions.
However, the corps members, on the ground in the schools, had no easy way to get their hands on the data and thus no way to track their progress with students and make small but timely shifts in strategy with individual students.
Their solution? To get access to school data, to integrate it with City Year’s data, and then to make it visible to corps members when they needed it.
One type of data that City Year collects is called Time on Task (ToT), which is the amount of time that members spend working with students. City Year Chicago has corps members working with students at 21 schools in the city.
These members regularly review charts which clearly show how much time is being spent with each student and when. With this information visible to them, they can identify trends and connect them with possible causes.
For example, at one school, they noticed that ToT dipped every Monday. Corps members connected those dips to school attendance dips on Mondays and found new ways to secure more ToT during different parts of the week.
Tools for Making Data Visible
There are plenty of software programs out there to help you visualize your data. Excel is perhaps the simplest to use, and you may already own it.
Other programs, such as Tableau and Qlik Sense, allow you to create interactive visuals. This means you can drill down into and explore your data. If, for example, you see an overall downward trend in participation in your program, you might want to see if this trend holds for subgroups of participants, such as those in certain age groups or from certain geographic areas. Tableau and Qlik Sense have free versions of their programs that you can use as long as you store your data and visuals on their servers. (Note, however, that you can make your data and charts invisible to anyone who doesn't have the URL.)
Start with What You Have
A simple line graph showing progress over time toward a goal will make your data perceptible and will prompt you and your colleagues to ask questions, both about your program and your data. Is the data accurate? What more data do you need to better understand the trends you see? What is going on in your program or your community or your field that might be affecting the trend? Such questions can strengthen your resolve to get new or better data or to make changes to your program.
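You don’t even need charting software to take a first look. Here is a minimal sketch, using only Python’s standard library and hypothetical numbers, of charting cumulative progress toward an annual goal as a simple text bar chart:

```python
goal = 120  # hypothetical: 120 participants to serve this year
monthly = [8, 12, 15, 9, 14, 18, 11]  # hypothetical participants served each month

total = 0
for month, n in enumerate(monthly, start=1):
    total += n
    # Scale the cumulative total to a 40-character bar.
    bar = "#" * (total * 40 // goal)
    print(f"Month {month:2}: {bar} {total}/{goal}")
```

Even a crude chart like this makes a stalled month, or a surge, jump out in a way a column of numbers never will.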
Test Your Assumptions
Tracking progress toward established goals is a good first step to understanding trends and whether they hold for subgroups. But it will not tell you if your goals are the right ones. As noted, your goals may be based on evidence or assumptions.
For example, you may aim to keep kids in a tutoring program for thirty days based on evidence from a study, which showed that kids who stay in tutoring for at least thirty days are more likely to see their grades increase. Or you may assume that thirty days is the bare minimum for a program to have an impact. Either way, you won’t know if it’s all playing out as planned in your program until you test it.
We all know that correlation does not equal causation. Just because one thing occurs alongside another doesn’t mean that one caused the other. If you do a dance and then it rains, that’s not enough evidence to conclude that the dance caused it to rain. Even if it rains almost every time you dance, it could be that something else is causing both the rain and your dancing. Perhaps a drop in barometric pressure causes your joints to hurt and you dance to loosen them up, while the same drop in pressure causes rain. It’s a silly example. But you get the point.
Nonprofits (and everyone else) often make erroneous claims based on correlation. We might maintain that participation in our employment training leads to higher wages over time. Well, maybe. But perhaps employment in our city is on the rise and affecting everyone, not just participants in our program. Or maybe our program tends to attract participants who are quite motivated to find jobs and would do just as well without the program.
Correlation is necessary but not sufficient to prove causation. Indeed, causation is a very high bar. Even carefully designed studies can rarely produce incontrovertible evidence of causation. You must have three conditions: 1) correlation (two factors co-occur), 2) precedence (the supposed causal factor comes before the supposed effect factor in time), and 3) no plausible alternatives. This third condition is the trickiest. It involves ruling out other causes for the observed effect.
Short of hiring researchers to design and conduct rigorous (and usually expensive) studies of your work, you can at least consider plausible alternatives. When you observe something good or bad happening in your organization consider possible causes both within and outside of your organization’s control. If possible, try altering just one factor and collect data over time, chart it, and see if a trend changes. Don’t assume. Rather: explore.
Your data is never going to be 100 percent accurate and complete. Although you can improve the visibility of your data, you are always going to be navigating in a bit of fog. Some of the problems with nonprofit data discussed above will persist and no amount of visualizing will make things completely clear.
So, consider applying the Marines’ 70 percent rule. According to the Harvard Business Review, the 70 percent rule advises us “to get enough data so that you are 70 percent confident in your decision, and then trust your instincts. If you have less data, you are making a close to random decision. If you wait until the data is perfect, the opportunity to make a decision that has impact probably passed you by.” In nonprofits, the data is never going to be perfect. Most of us know this and thus are not waiting for data to make decisions.
The challenge is to recognize and then deploy our secret weapons—the data we already have and our visual superpowers for understanding data—so that we are not operating completely on instinct.
Thank you, Florence Nightingale. You still inspire!