We often use numbers to communicate about health and death: How many people died in a flood. What percentage of a population is up to date on their COVID-19 vaccines. How long it takes for patients in emergency departments to be admitted to a hospital.
I am interested in our uses of numbers and evidence in ethics: how we choose what to measure and how to communicate these measurements, what our choices and numbers reveal or obscure, what alternative ways there are to interpret those numbers, and what alternatives to numbers we might want to consider.
As a quick illustration, the AIDS quilt is a great example of alternative ways to communicate and humanize data. Consider browsing the interactive online version, scrolling around, zooming in and out, reading the images and texts (click to open in a new tab). What is revealed or obscured by zooming in or out? What can we learn from the heterogeneity of patches, and the themes that unite them? How would you go about analyzing the quilt as a source of evidence? How might this frustrate the ease with which we appeal to numbers?
Below, I narrate two examples of this research, though simplifying quite a lot. The first concerns data around Medical Assistance in Dying (MAID). The second concerns measuring success in suicide prevention.
If MAID and suicide are tough topics for you to read about, I also recommend this blog post by John Worrall (click to open in a new tab), which is less death-heavy. I think it does a good job of showing how our choices in communicating numbers can express very different things, and how they might affect the choices we subsequently make.
First: what do percentages in MAID data communicate about available supports?
In Canada, Medical Assistance in Dying (MAID) has been legal for several years. In the news, you might have seen stories about people who chose MAID because they could not get accessible housing for their disabilities, or because disability supports are otherwise inadequate. In response to these stories, many philosophers and bioethicists point to available data, suggesting that these stories are rare outliers.
At the time of my writing, the most recent data come from the Canadian government’s annual reports, from which we have data up to 2021. You can click here to see the recent annual MAID reports (opens in a new tab). Note: new reports have been published since I originally wrote this page; the same arguments apply to them.
In the latest report, we’re told that 43% of people who died from MAID in 2021 reported needing disability support services; and of that 43%, only 4.2% did not receive disability support. 4.2% of 43% is about 1.8% total. Let’s make the math even easier and round that down to 1%, assuming that there was even more support available than they suggest. The available data suggests that only 1% of people who received MAID both needed and did not receive disability support. Perhaps it sounds like those stories might reflect rare cases.
A similar story is told for palliative care. The report says that only 16.8% of those who died by MAID did not receive palliative care, and of that 16.8%, "88.0% had access to these services." Put the other way around, 12.0% of that group did not have access to these services. 12% of 16.8% yields about 2% of all those who died from MAID. Perhaps 2% again seems small. Indeed, the report concludes that this result "supports other findings that palliative care continues to remain both available and accessible to individuals who have received MAID."
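Percentage-of-a-percentage arithmetic like this is easy to misread, so here is a quick sketch of how the headline figures compose. The percentages are the ones quoted from the report above; the point is just the multiplication:

```python
# Composing the 2021 MAID report's nested percentages (as quoted above).
needed_disability_support = 0.43   # share of MAID deaths reporting a need for disability support
did_not_receive_support = 0.042    # share of *that* group who did not receive it

no_support_overall = needed_disability_support * did_not_receive_support
print(f"{no_support_overall:.1%}")  # about 1.8% of all MAID deaths

no_palliative_care = 0.168         # share who did not receive palliative care
no_access = 0.120                  # share of *that* group without access

no_access_overall = no_palliative_care * no_access
print(f"{no_access_overall:.1%}")  # about 2.0% of all MAID deaths
```

The nested percentages in the report only become comparable once they are re-expressed against the same base population, which is what the multiplication does.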
There are several important questions to ask here. Let's look at two:
1. What do 1% or 2% represent?
Percentages can be a useful way to make sense of big data. But they can also make it harder to see the fuller picture. 1% is a small portion of a given population, but we need to know how big that population was in the first place.
In this case, we’ve been told that 10,064 people died of MAID in 2021, and the percentages are based on 9,950 of those. Let’s crunch some numbers quickly.
1% of 9,950 rounds up to 100 people (if we use the original 1.8%, we're looking at about 180 people). That's 100 people who needed disability support, did not receive it, and died from MAID in 2021. Take a moment to think about what 100 people looks like, to think of 100 people you know, to think about how big a room needs to be to accommodate 100 people. If 100 people who died from MAID in 2021 needed disability support but did not receive it, does that feel as small or negligible as 1% did? If it still feels small, how many people would need to die without access to the support they need for that number to feel "big"?
For palliative care, our number was 2%. By the same math, we end up with about two hundred people who did not have access to palliative care, whether or not they needed or desired it. Does that still sound like palliative care is "available and accessible"? If you still think so, again, what numbers would we need for this to feel more significant? I offer this as a genuinely open question, but one we need to reflect on seriously.
Consider one last example, in case the above don't quite land. The data for 2021 tells us the "nature of suffering" for 9,950 of those who received MAID. This list includes things like "loss of ability to engage in meaningful activities" (86.3%) and experiences or concern about "inadequate control of pain" (57.6%). But it also includes "perceived burden on family, friends or caregivers" (35.7%), and "isolation or loneliness" (17.3%).
35.7% is only a bit more than a third of the respondents. 17.3% is even smaller: less than one in five people. But we're calculating over thousands of people. This means that more than 3,500 people who died from MAID felt that they were a burden on their family, friends, or caregivers, and more than 1,700 considered themselves to be suffering from isolation or loneliness. These too are things that would benefit from forms of social support. To be clear, these are very unlikely to be the only types of suffering for a given person. But hopefully we can see how "more than a thousand people" starts to feel quite different from "17.3% of people," and "more than three thousand people" probably feels different from "35.7%" too.
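The conversion from percentages back into head counts can be sketched directly. The base population of 9,950 and the percentages are the report figures discussed above; the rounding is mine:

```python
# Turning the 2021 report's percentages back into people,
# using the base population of 9,950 given in the data.
population = 9_950

figures = {
    "needed but did not receive disability support": 0.018,
    "did not have access to palliative care": 0.020,
    "perceived burden on family, friends or caregivers": 0.357,
    "isolation or loneliness": 0.173,
}

for label, share in figures.items():
    count = round(share * population)
    print(f"{label}: about {count:,} people")
```

Nothing in the arithmetic changes between the two presentations; only the frame does, which is exactly the point about how percentages can feel small while head counts feel large.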
The above is not meant to convince you about MAID one way or another. We would need far more evidence than this. My point is that the ways we communicate percentages and populations can influence how big these gaps in available supports feel.
2. What was the quality of support for those who could access it?
Above, I quoted the report as saying "This result supports other findings that palliative care continues to remain both available and accessible to individuals who have received MAID." The next sentence is really important: "However, this result does not offer insight into the adequacy or quality of the palliative care services that were available or provided." The same problem applies to disability supports (even though the report doesn't say so): even if 87.4% of people who needed disability support did in fact receive it, we can't conclude anything about the quality of that support from access numbers alone.
Importantly, some stories we've heard about people being forced to choose MAID are not just about lack of access to supports, but about the quality of the supports available. The death of Archie Rolland is one case concerning palliative care that received a lot of attention. Even if you do think these are statistical outliers, it's worth remembering that these numbers represent actual people, actual deaths. We're forced to appraise this data in context, as people, as deaths, as failures in access to available supports. Again, my current point is just that we need to be critical and cautious about what these numbers denote or represent.
Ultimately, none of the above tells us how worried we should be, if at all. That would require that we first settle other questions, like "how many is too many," as well as how much we trust any of the data, stories, or evidence we're given. Nonetheless, I think the above examples help us to see how different numbers, while describing the same things, can suggest quite different stories. In my research, I'm interested in how philosophers and bioethicists use and communicate with numbers, what these choices might communicate or obscure, what assumptions we leave unaddressed, and what numbers or alternatives to numbers we should be looking for.
Second: How do we measure “success” in suicide prevention?
Where I live, there is a bridge called the Prince Edward Viaduct, or more often just called the Bloor-Danforth bridge. Until the early 2000s, the bridge was an international hotspot for suicide deaths by jumping, second only to the Golden Gate Bridge. In response, the City of Toronto approved the installation of a bridge barrier called the Luminous Veil. The Veil was designed to physically prevent people from jumping or falling over the side of the bridge, and thus to prevent suicide deaths. Here’s a picture I took a few years ago, showing some of the infrastructure:

Was the Veil successful? This depends on what we think our goals are meant to be, and thus what we measure or count as “success.”
The initial goal of the Veil seemed to be: stop people from dying by suicide at this specific bridge. If so, then we can measure success as a decrease in suicide deaths at this bridge. If that’s our main measurement, then yes, the Veil was absolutely successful. As of my latest knowledge, only one death has been recorded since the installation of the Veil.
But perhaps those people who would have died at this specific bridge just went to a different bridge, or found some other way to die. If so, reducing deaths only at this bridge would not seem proof of success overall. I know of at least one confirmed story where this happened: someone simply drove to the next bridge and died there. People in the neighbourhood have recounted others to me too. (This worry about deaths still occurring even appears in the Barenaked Ladies song "War on Drugs." They discuss the Veil in the later stanzas, though they call it a "net" for creative purposes.)
So, we might set a different goal: stop people from dying by suicide at this specific bridge, and without increasing suicide deaths at neighbouring bridges or other means. While this is a bit harder to measure, researchers suggest that the Veil both decreased local suicide deaths and did not (in the long term) increase deaths at other bridges. You can read an open access study about the Veil by clicking here (opens in a new tab). So, it still seems like the Veil is successful.
Here’s where my worry comes in. It is possible to decrease the number of deaths, without actually decreasing the presence and prevalence of suicide generally. That is, if our goal is to tackle suicide as a general phenomenon (and this is a big “if” that itself needs interrogating!), I do not think that death should be our only measurement of “success.”
Here’s an illustration, using unrealistic numbers just to get the point across. Imagine that 100 people jump or fall from a bridge that is 100 metres tall. All 100 of them die. It seems that jumping from that bridge is very lethal, so we decide to block it off with a barrier. The next 100 people who really want to die by jumping must go to the second tallest bridge, which we can imagine is only 99 metres high. 99 of them die. That’s still quite lethal, so let’s block it off too. When we get to 50 metres, 50 people die, and so on all the way down. Again, this is an unrealistic case, and these numbers are just for illustration.
We can keep restricting the most lethal means until people are left with far less lethal ones, and thus continuously reduce the death rate. But 99 metres is still a long way to fall, and it is very unlikely that the one person who survived walked away without consequence. Falls from even relatively low heights can lead to permanent brain injuries, organ failure, paralysis, and chronic pain. And while the death rate continuously declines in our example, the number of injuries likely increases, while the number of attempts to die by suicide has not changed at all. We have not addressed the underlying problems, nor have we measured other possibly undesirable outcomes.
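The thought experiment can be sketched as a toy model. The heights, the cohort of 100 people per bridge, and the assumption that lethality tracks height one-for-one are all invented for illustration, exactly as in the text:

```python
# Toy model of the bridge illustration above. Each time we barrier the
# tallest remaining bridge, the next cohort of 100 people moves to a
# slightly shorter one. Lethality is (unrealistically) assumed to equal
# the height in metres, so 99 of 100 die at 99 m, and so on.
attempts_per_bridge = 100
total_attempts = total_deaths = total_injuries = 0

for height in range(100, 49, -1):  # bridges from 100 m down to 50 m
    deaths = height
    injuries = attempts_per_bridge - deaths  # survivors, likely injured
    total_attempts += attempts_per_bridge
    total_deaths += deaths
    total_injuries += injuries

print(total_attempts, total_deaths, total_injuries)
```

Running the toy model, deaths fall bridge by bridge and injuries rise, while the number of attempts never changes. That is the gap between measuring mortality and measuring the phenomenon.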
We should not just look at death data when measuring the “success” of suicide interventions. At a minimum, we must also attend to morbidity data: to the other nonfatal outcomes associated with suicide.
We can look at this one more way. Another way people sometimes measure success or efficacy is in terms of money: how much it will cost to build and maintain a barrier, and how much money we save by averting deaths. Call this a "cost-effectiveness" approach. You can see an open access article estimating the cost-effectiveness of a barrier on the Golden Gate Bridge by clicking here (opens in a new tab).
To simplify a bit, we’re measuring how many years of life we potentially save, and how much those years are worth to people in terms of dollars. The study above concludes that we would save enough lives to offset the costs of a barrier. But the study does not calculate the costs associated with nonfatal outcomes, like long term hospitalization, rehabilitation and therapy, pain medication, loss of employment, etc. So while the study concludes that it would be cost-effective, this is based on an assumption that the lives saved will be just like all the other lives.
(For keen readers, the above economic article does briefly mention “disability adjusted life years,” but seems to measure them only in terms of years of life lost: note the arithmetic).
To be clear, my point is not that these types of barriers are necessarily ineffective, expensive, or otherwise inappropriate. Nor am I suggesting the conclusions of the studies are ultimately wrong. They could be right for all we know. My point is that the ways we communicate our goals and priorities will shape how we measure “success,” whether we find our interventions to be successful, and ultimately what we know or obscure.
Click here to return to the Research page.
This page is subject to regular revisions. Please contact me before citing.