This is a guest post from independent medical investigative journalist Jeanne Lenzer. She is a former Knight Science Journalism Fellow and a frequent contributor to BMJ, and has published works in The Atlantic, The New York Times Magazine, Discover, The New Republic, and other outlets.
When doctors recommend tests, drugs, or surgeries to prevent bad outcomes (think cholesterol-lowering agents to prevent strokes, or cardiac stents to prevent heart attacks), they tap into our deepest sense of what constitutes common sense: An ounce of prevention. Catch it early. A stitch in time.
It can’t be a bad thing to catch problems early, can it?
Unfortunately, one of the toughest things to explain is why detecting some illnesses at their earliest stages can cause more harm than good. Take this example: Since elevated cholesterol is associated with a higher risk of cardiovascular disease, doctors often prescribe drugs known as statins to people with elevated cholesterol levels in the hopes of reducing their risk of a heart attack or stroke.
Here comes the part that’s tough to explain – because it is so counterintuitive: Statins only help individuals who already have had a heart attack or stroke (with a few exceptions, and more on that later).
Of course, this makes no sense to most people. Isn’t the whole point of taking cholesterol-lowering agents to prevent a heart attack? Why should anyone wait until after a heart attack or stroke to begin taking a drug designed to prevent a heart attack or stroke?
The answer rests with disease creep and the simple statistical quirks that come with it. In the past, doctors treated diseases that caused symptoms. But now we have tests and imaging machines that can detect risk factors and illnesses in their earliest stages. Like cholesterol. Elevated cholesterol is not a disease. It doesn’t cause symptoms. It is a risk factor. People with high cholesterol levels are somewhat more likely to develop a heart attack or stroke, but they are at far less risk than individuals who already have cardiovascular disease. This is the definition of disease creep: when pre-conditions or risk factors are treated as if they are the same as the actual disease state.
Here’s a thought experiment (with purposefully exaggerated numbers) to help understand this puzzle: Imagine a group of people who have the rare but awful Disease A, which is so terrible that all of its victims will die. Now imagine the discovery of Wonder Drug X, which cures half of the patients with Disease A. Unfortunately, Wonder Drug X does have a pretty bad side effect profile – it’s a very powerful drug, after all – and 10 percent of people who take it will die from liver failure. Despite this worrisome side effect, Wonder Drug X is truly an advance for patients with Disease A: For every 200 patients with the disease who are treated, 100 will now survive the disease, and only 10 of those 100 survivors will die of the drug’s side effects (the 100 whom the drug fails to cure die of Disease A either way, so liver failure doesn’t change their outcome). That means 90 more people out of every 200 will survive thanks to Wonder Drug X.
But now imagine a different group of 200 people, who don’t actually have Disease A, but instead carry a genetic marker which “is associated with” Disease A. In this scenario, only 1 in a million people in the general population will get Disease A. If you have the genetic marker, the risk is much higher, such that 2 of these 200 people will develop the disease at some time in the future. The genetic test gets highly promoted – “find out your risk early, because we now have a treatment that works, and the sooner you’re treated, the better!” There is a tiny grain of truth to this – of the 2 people identified by the genetic test, 1 (50 percent) will now be saved by Wonder Drug X. This might sound just as good as before; here’s a group of people with 10,000 times (!) the risk of the general population of developing a uniformly fatal disease. Surely it’s worth taking a drug that can cure that disease in half the cases, isn’t it?
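For readers who want to check the thought experiment’s arithmetic, here is a short Python sketch (not from the original article; the function name and its bookkeeping convention are my own). It uses the numbers stated above – a uniformly fatal Disease A, a drug that cures half of those who have it, and a 10 percent liver-failure death rate – and assumes side effects only change the tally for people who would otherwise survive, since patients the drug fails to cure die of the disease either way.

```python
from fractions import Fraction

def net_lives_saved(n_takers, n_diseased, cure_rate, side_effect_rate):
    """Net lives saved among n_takers people who take Wonder Drug X.

    Assumes Disease A is uniformly fatal if untreated, and that
    liver failure only changes the tally for people who would
    otherwise survive (those the drug fails to cure die anyway).
    """
    cured = n_diseased * cure_rate
    would_otherwise_survive = (n_takers - n_diseased) + cured
    side_effect_deaths = would_otherwise_survive * side_effect_rate
    return cured - side_effect_deaths

CURE = Fraction(1, 2)    # drug cures half of Disease A cases
LIVER = Fraction(1, 10)  # 10 percent die of liver failure

# Group 1: all 200 takers actually have Disease A
print(float(net_lives_saved(200, 200, CURE, LIVER)))  # 90.0 net lives saved

# Group 2: 200 marker carriers, only 2 of whom will ever get Disease A
print(float(net_lives_saved(200, 2, CURE, LIVER)))    # -18.9 (net lives lost)
```

The second result is simply what the stated rates imply: 1 life saved among the 2 future Disease A cases, against roughly 20 liver-failure deaths among the 200 healthy people taking the drug – the statistical quirk behind disease creep.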
Read the complete article here