
Saturday, May 13, 2017

"...medicine is quick to adopt practices based on shaky evidence but slow to drop them once they’ve been blown up by solid proof."

When Evidence Says No, but Doctors Say Yes - The Atlantic: "When you visit a doctor, you probably assume the treatment you receive is backed by evidence from medical research. Surely, the drug you’re prescribed or the surgery you’ll undergo wouldn’t be so common if it didn’t work, right? For all the truly wondrous developments of modern medicine—imaging technologies that enable precision surgery, routine organ transplants, care that transforms premature infants into perfectly healthy kids, and remarkable chemotherapy treatments, to name a few—it is distressingly ordinary for patients to get treatments that research has shown are ineffective or even dangerous. 

Sometimes doctors simply haven’t kept up with the science. Other times doctors know the state of play perfectly well but continue to deliver these treatments because it’s profitable—or even because they’re popular and patients demand them. Some procedures are implemented based on studies that did not prove whether they really worked in the first place. Others were initially supported by evidence but then were contradicted by better evidence, and yet these procedures have remained the standards of care for years, or decades. 

 Even if a drug you take was studied in thousands of people and shown truly to save lives, chances are it won’t do that for you. The good news is, it probably won’t harm you, either. Some of the most widely prescribed medications do little of anything meaningful, good or bad, for most people who take them. 

 In a 2013 study, a dozen doctors from around the country examined all 363 articles published in The New England Journal of Medicine over a decade—2001 through 2010—that tested a current clinical practice, from the use of antibiotics to treat people with persistent Lyme disease symptoms (didn’t help) to the use of specialized sponges for preventing infections in patients having colorectal surgery (caused more infections). Their results, published in the Mayo Clinic Proceedings, found 146 studies that proved or strongly suggested that a current standard practice either had no benefit at all or was inferior to the practice it replaced; 138 articles supported the efficacy of an existing practice, and the remaining 79 were deemed inconclusive. (There was, naturally, plenty of disagreement with the authors’ conclusions.) 

Some of the contradicted practices possibly affect millions of people daily: Intensive medication to keep blood pressure very low in diabetic patients caused more side effects and was no better at preventing heart attacks or death than more mild treatments that allowed for a somewhat higher blood pressure. Other practices challenged by the study are less common—like the use of a genetic test to determine if a popular blood thinner is right for a particular patient—but gaining in popularity despite mounting contrary evidence. Some examples defy intuition: CPR is no more effective with rescue breathing than if chest compressions are used alone; and breast-cancer survivors who are told not to lift weights with swollen limbs actually should lift weights, because it improves their symptoms. A separate but similarly themed study in 2012 funded by the Australian Department of Health and Ageing, which sought to reduce spending on needless procedures, looked across the same decade and identified 156 active medical practices that are probably unsafe or ineffective. The list goes on: A brand new review of 48 separate studies—comprising more than 13,000 clinicians—looked at how doctors perceive disease-screening tests and found that they tend to underestimate the potential harms of screening and overestimate the potential benefits; an editorial in American Family Physician, co-written by one of the journal’s editors, noted that a “striking feature” of recent research is how much of it contradicts traditional medical opinion...

A 2007 Journal of the American Medical Association paper coauthored by John Ioannidis—a Stanford University medical researcher and statistician who rose to prominence exposing poor-quality medical science—found that it took 10 years for large swaths of the medical community to stop referencing popular practices after their efficacy was unequivocally vanquished by science. According to Vinay Prasad, an oncologist and one of the authors of the Mayo Clinic Proceedings paper, medicine is quick to adopt practices based on shaky evidence but slow to drop them once they’ve been blown up by solid proof...

In 2007, after a seminal study, the COURAGE trial, showed that stents did not prevent heart attacks or death in stable patients, a trio of doctors at the University of California, San Francisco, conducted 90-minute focus groups with cardiologists to find out why the procedure remained so common. They presented the cardiologists with fictional scenarios of patients who had at least one narrowed artery but no symptoms and asked them if they would recommend a stent. Almost to a person, the cardiologists, including those whose incomes were not tied to tests and procedures, gave the same answers: They said that they were aware of the data but would still send the patient for a stent. 

The rationalizations in each focus group followed four themes: (1) Cardiologists recalled stories of people dying suddenly—including the highly publicized case of jogging guru Jim Fixx—and feared they would regret it if a patient did not get a stent and then dropped dead. The study authors concluded that cardiologists were being influenced by the “availability heuristic,” a term coined by Nobel laureate psychologists Amos Tversky and Daniel Kahneman for the human instinct to base an important decision on an easily recalled, dramatic example, even if that example is irrelevant or incredibly rare. (2) Cardiologists believed that a stent would relieve patient anxiety. (3) Cardiologists felt they could better defend themselves in a lawsuit if a patient did get a stent and then died, rather than if they didn’t get a stent and died. “In California,” one said, “if this person had an event within two years, the doctor who didn’t [intervene] would be successfully sued.” And there was one more powerful and ubiquitous reason: (4) Despite the data, cardiologists couldn’t believe that stents did not help: Stenting just made so much sense. A patient has chest pain, a doctor sees a blockage, how can opening the blockage not make a difference?"

...At the same time, patients and even doctors themselves are sometimes unsure of just how effective common treatments are, or how to appropriately measure and express such things. Graham Walker, an emergency physician in San Francisco, co-runs a website staffed by doctor volunteers called the NNT that helps doctors and patients understand how impactful drugs are—and often are not. “NNT” is an abbreviation for “number needed to treat,” as in: How many patients need to be treated with a drug or procedure for one patient to get the hoped-for benefit? In almost all popular media, the effects of a drug are reported by relative risk reduction. To use a fictional illness, for example, say you hear on the radio that a drug reduces your risk of dying from Hogwart’s disease by 20 percent, which sounds pretty good. Except, that means if 10 in 1,000 people who get Hogwart’s disease normally die from it, and every single patient goes on the drug, eight in 1,000 will die from Hogwart’s disease. So, for every 500 patients who get the drug, one will be spared death by Hogwart’s disease. Hence, the NNT is 500. 
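To make the arithmetic concrete: the NNT is just the reciprocal of the absolute risk reduction. Here is a minimal sketch in Python using the article's invented Hogwart's disease figures; the function name and numbers are illustrative only, not anything from the NNT site.

```python
# NNT = 1 / absolute risk reduction. All figures below are the article's
# made-up Hogwart's disease example, not real clinical data.

def nnt(baseline_risk: float, relative_risk_reduction: float) -> float:
    """Number of patients who must be treated for one to benefit."""
    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    return 1.0 / absolute_risk_reduction

baseline = 10 / 1000             # 10 in 1,000 untreated patients die
rrr = 0.20                       # the advertised "20 percent" relative risk reduction

print(baseline * (1 - rrr))      # 0.008 -> 8 in 1,000 still die on the drug
print(nnt(baseline, rrr))        # 500.0 -> treat 500 patients to spare one death
```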

That might sound fine, but if the drug’s “NNH”—“number needed to harm”—is, say, 20 and the unwanted side effect is severe, then 25 patients suffer serious harm for each one who is saved. Suddenly, the trade-off looks grim. Now, consider a real and familiar drug: aspirin. For elderly women who take it daily for a year to prevent a first heart attack, aspirin has an estimated NNT of 872 and an NNH of 436. That means if 1,000 elderly women take aspirin daily for a decade, 11 of them will avoid a heart attack; meanwhile, twice that many will suffer a major gastrointestinal bleeding event that would not have occurred if they hadn’t been taking aspirin. As with most drugs, though, aspirin will not cause anything particularly good or bad for the vast majority of people who take it. 
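Assuming the per-year NNT and NNH scale roughly linearly over a decade, as the article's aspirin numbers imply, the same kind of back-of-the-envelope arithmetic reproduces the trade-off; this is a sketch of that scaling, not a clinical model.

```python
# Expected events = person-years / NNT (or NNH).
# NNT ~872 and NNH ~436 are the article's per-year estimates for elderly women.

def expected_events(people: int, years: int, number_needed: float) -> float:
    """Roughly how many people are helped (or harmed) over the period."""
    return people * years / number_needed

women, years = 1000, 10
heart_attacks_avoided = expected_events(women, years, 872)    # ~11.5
major_gi_bleeds_caused = expected_events(women, years, 436)   # ~22.9

print(round(heart_attacks_avoided), round(major_gi_bleeds_caused))  # 11 23
```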

That is the theme of the medicine in your cabinet: It likely isn’t significantly harming or helping you. 

“Most people struggle with the idea that medicine is all about probability,” says Aron Sousa, an internist and senior associate dean at Michigan State University’s medical school. As to the more common metric, relative risk, “it’s horrible,” Sousa says. “It’s not just drug companies that use it; physicians use it, too. They want their work to look more useful, and they genuinely think patients need to take this [drug], and relative risk is more compelling than NNT. Relative risk is just another way of lying.”"

