Many doctors believe the reason most of their patients fail to get better is that they simply don’t do as they are told. They complain that their patient population just won’t listen to reason. The problem may not be your patient population; it may be the way you are talking to them.
Let’s say you have a patient who is at high risk for a multitude of disastrous consequences from their years of smoking, eating fast food, and not exercising. They like cheeseburgers the way that LeBron James likes LeBron. You recommend lifestyle changes and perhaps a medication to help reduce their risk.
“Yeah, I’ve heard of that drug,” the patient says. “It made Jimmy go crazy and kill himself last year, so there is no way I’m going to take that.”
You explain what the latest research shows about reducing their risk of heart disease, diabetes, stroke, and so on, but your patient wants to focus on the infinitesimal risk of a side effect that a neighbor may or may not have experienced when taking the recommended medication.
What are the chances that you will convince this patient to look at the data from the literature and agree to take this drug? I would hazard a guess that my dog has a better chance of winning the gold medal in figure skating at the 2022 Olympics, and she doesn’t even like to wear sequins.
Faulty Reasoning: Media Coverage of New Medications Can Be Risky Business
How many times have we heard a news report about a “breakthrough” medication that promises to save us all by reducing the risk of some terrible disease? The media reports it, and most doctors take it with a grain of salt. However, our patients often take these reports at face value and come into the office demanding a prescription for this new miracle drug.
Because we have all studied statistics at some point in our medical careers, most of us are at least somewhat familiar with the terms relative risk and absolute risk. We may not pay as much attention to them now as we did before we took our Board exams, but they are critical when we discuss drug studies with our patients. The absolute risk tells us the overall odds of developing a disease; the relative risk compares the risk in one group with the risk in another.
The relative risk is much less helpful in making decisions for most patients, but it is the one that is almost always reported, because a ratio between two groups tends to sound like a much bigger number than the underlying change in absolute risk.
Let’s say the news reports on a drug that cuts in half the risk of being stricken with the malady of unicorn flu. Your patient saw the news, looked up the signs and symptoms of unicorn flu online, and decided she doesn’t want to take the risk of getting hit by this bug.
She wants you to write the prescription for her and her entire family so that they won’t have to worry about it.
Before you grab that pen (more likely mouse or electronic tablet) to write that prescription, you should be asking yourself the crucial question, “What is the risk for this patient of actually getting this disease I’ve never heard of?”
If the risk in the general population is only 1 percent (absolute risk = 1 percent), and the drug decreases that risk by 50 percent (relative risk reduction), then the new risk is 0.5 percent. We’ve effectively decreased the overall risk by a whopping half a percentage point, not 50 percent.
Most news outlets are not going to bother reporting a reduction of risk of less than 1 percent, and most people are not going to get too excited by such paltry risk reductions. So what ends up happening is that the news almost always reports the higher numbers, and patients (and many doctors) have a hard time asking the right questions. So your patient is unlikely to get unicorn flu, but very likely to get confused.
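The unicorn-flu arithmetic above can be sketched in a few lines. The numbers here are the made-up figures from the example, not data from any real trial:

```python
# Illustrative figures from the unicorn-flu example -- not real trial data.
baseline_risk = 0.01             # absolute risk in the untreated population (1%)
relative_risk_reduction = 0.50   # the headline: "cuts the risk in half"

treated_risk = baseline_risk * (1 - relative_risk_reduction)
absolute_risk_reduction = baseline_risk - treated_risk

print(f"Risk without the drug: {baseline_risk:.1%}")            # 1.0%
print(f"Risk with the drug:    {treated_risk:.1%}")             # 0.5%
print(f"Absolute reduction:    {absolute_risk_reduction:.1%}")  # 0.5%, not 50%
```

The same 50 percent relative reduction would mean something very different if the baseline risk were 40 percent instead of 1 percent, which is exactly why the baseline number is the first thing to ask about.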
Survival Bias — The Reason Patients Make Bad Health Decisions
Let’s talk about diet for the next example of why our patients get confused. Imagine you are at a party with dozens of people. You see a couple that you haven’t seen in years, and they look great. They have lost weight and appear to be in good health.
You are dying to know their secret, and so you ask them how they did it. They proceed to tell you about this great new diet plan called the Frozen Leprechaun Diet where they eat only Lucky Charms cereal, and they work out daily by playing freeze tag throughout their house.
Surprisingly, they each have lost about 40 pounds with this diet, and they have kept the weight off for over a year.
After hearing about their success, you may be tempted to try this diet too, but before you stock up on Lucky Charms, you would do well to remember the concept of survival bias. You see, plenty of other people at the party tried the same diet, but instead of losing weight, they actually gained weight.
The only reason you didn’t hear about their failure is that you were too busy talking to the only two people at the party for whom the diet actually worked, and you ignored the overweight people who were not so excited to talk to you about green clovers, blue diamonds …
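The party scenario can be made concrete with a toy calculation. All of the weight-change numbers below are invented for illustration; the point is only that averaging over the people who talk to you gives a very different answer than averaging over everyone who tried the diet:

```python
# Made-up weight changes (in lbs) for everyone at the party who tried the
# Frozen Leprechaun Diet. Negative = lost weight.
results = [-40, -42, 8, 10, 6, 12, 9, 7, 11, 13, 5, 9]

# Only the successes are eager to tell you their story.
talked_to = [r for r in results if r < 0]

avg_observed = sum(talked_to) / len(talked_to)  # what you hear at the party
avg_actual = sum(results) / len(results)        # what actually happened

print(f"Average change among people you talked to: {avg_observed:+.0f} lbs")  # -41 lbs
print(f"Average change among everyone who tried it: {avg_actual:+.0f} lbs")   # +1 lbs
```

The sample you see is selected by the outcome itself, so no amount of enthusiasm from the two success stories tells you anything about the diet’s overall track record.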
Finally, a Reason to Love Anecdotal Evidence
Most doctors inherently know that making decisions on anecdotal evidence is generally bad medicine, yet we do it more than we would care to admit.
Let’s say you have been seeing a ton of patients recently with infections that require antibiotics. You are aware that there are potential side effects (like turning their skin blue) associated with prescribing this medication, but the benefits seem to outweigh the risks.
If you have three patients in a row who develop this supposedly rare side effect, you might think twice about converting a fourth patient into a character from Avatar.
I ran into a similar situation the other day — not the whole Avatar thing, but just a case of faulty reasoning on my part. I was visiting with a pharmaceutical rep about a new asthma medication, and he asked if I had tried their drug on any patients yet.
I replied no, but I knew that one of my partners had just started two of her patients on the medication and that I was waiting to see how her patients did before I started any of my own.
Logically, I realize that a study with an n of 2 does not allow for a fair assessment of efficacy one way or the other, but it is a human flaw that is hard to get past even for a physician who should know better.
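A quick calculation shows just how little an n of 2 can tell us. The 60 percent baseline response rate below is an assumed figure for illustration, not a measured one:

```python
# Assume (purely for illustration) that 60% of asthma patients would report
# improvement on any reasonable regimen within a few weeks, new drug or not.
p_improve_anyway = 0.60

# Probability that BOTH of my partner's two patients improve even if the
# new drug adds nothing over the old one:
p_both_by_chance = p_improve_anyway ** 2

print(f"Chance both patients improve with no real drug effect: "
      f"{p_both_by_chance:.0%}")  # 36% -- hardly proof of efficacy
```

If two-for-two happens by chance more than a third of the time, watching my partner’s two patients tells me almost nothing about the drug, which is precisely why the anecdote feels persuasive anyway.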
Anecdotes are a very difficult thing to overcome in medicine. They are terrible predictors of future outcomes, but the reality is that if you are a human, you are susceptible to falling for their undeniable charm.
Because it is such an inherent human characteristic, why not use it to our advantage? Instead of spending so much time talking to our patients about data and research trials, why not speak to them in a language that they will understand?
Let’s see if anecdotes can work for us, rather than against us, for a change. Tell them all about how other patients did with the recommendations you are making. Tell them all about how you were successful in getting into shape, sleeping better, or … whatever else you need to tell them about in order to get your message across.
The point is that patients are much more interested in hearing success stories than they are in hearing about journal articles. That doesn’t mean we shouldn’t read, analyze, and discuss those journal articles. It just means that they may not always translate well at the bedside when you are trying to convince your patient that the planned course is the right one.
Whether it’s discussing unicorn flu, Lucky Charms, or blue skin, it’s obviously best to make recommendations based on the statistical data, but it may be more effective to talk to our patients using the language they understand — anecdotes.