Sunday, May 14, 2017

Same stuff.


Andre wore it better.


Wait for it...


Conundrum.


"...the scientific reality is that it’s futile to treat children as blank slates with no predetermined characteristics. Biology matters."

The futility of gender-neutral parenting - LA Times: "Offering kids the opportunity to pursue what they’d like, freed from societal expectations, is an undeniably positive thing — whether it has to do with toys, clothing, or their future aspirations. But the scientific reality is that it’s futile to treat children as blank slates with no predetermined characteristics. Biology matters. A large and long-standing body of research literature shows that toy preferences, for example, are innate, not socially constructed or shaped by parental feedback. Most girls will gravitate toward socially interesting toys, like dolls, that help social and verbal abilities develop. Most boys will gravitate toward toys that are mechanically interesting, like cars and trucks, fostering visuo-spatial skills...

One recent study, published in Infant and Child Development, showed that these preferences emerge as early as nine months of age — before children are developmentally aware that gender differences exist, at around 18 months. Another piece of evidence comes from studying girls who were exposed to high levels of testosterone prenatally, in the case of a genetic condition called congenital adrenal hyperplasia, or CAH. Girls with CAH tend to be gender nonconforming, and will prefer toys that are typical to boys, even when their parents offer more praise for playing with female-typical ones. This speaks to the vital role of hormones in developing gender preferences and sex differences in behavior, more broadly. 

We also see the same trend in our primate cousins, including rhesus and vervet monkeys. Young female monkeys gravitate toward dolls while male monkeys prefer wheeled toys, despite the fact they aren’t encouraged by other monkeys or their caregivers in their choices.

In the face of scientific data, the gender-neutral movement nevertheless continues to gain momentum. Indeed, its adherents took heart in a study published last year in the Proceedings of the National Academy of Sciences, which touted the idea that the brains of women and men are identical. If so, that would offer support to the theory that gender is an artificially created, outdated concept. However, an immense body of neuroimaging research has shown brain differences between the sexes. One meta-analysis of 126 studies found that men have larger total brain volumes than women. Men also show greater white matter connectivity running from the front to the back of the brain, while women have more of these connections running between the two hemispheres. Additionally, when researchers reanalyzed the same brain data from the “no sex differences” study, they found that it was possible to correctly identify whether a given brain was male or female 73% of the time. But this discovery did not receive much attention from the media, and as a result, the initial study’s misinformation continues to spread...

I hear from many well-meaning parents who raised their children in gender-neutral homes and were surprised to find that they nevertheless gravitated toward stereotypical interests and toys. Little boys who were given pots and pans to play with turned them into makeshift toy cars, complete with self-generated engine sounds. Little girls turned to one another and started playing house. The gender-neutral trend capitalizes on fears that parents have of inadvertently limiting their child’s potential. We want the best for our children; for daughters to grow up to be as competitive for STEM jobs as their male counterparts, and for sons to possess strong social and communication skills. But whether your child leans toward gender-atypical traits will likely have more to do with the prenatal environment —testosterone levels in utero — than a perfectly balanced upbringing. Besides, so long as children are given the option to take part in activities they find interesting, there’s nothing wrong with being gender-typical. Acknowledging inherent sex differences isn’t harmful or sexist; differences don’t necessitate one sex being better than the other."


Saturday, May 13, 2017

What media bias?


Gail Simone's John Wick theory is the greatest thing.

Delusions of Grandeur.


So close.


"...medicine is quick to adopt practices based on shaky evidence but slow to drop them once they’ve been blown up by solid proof."

When Evidence Says No, but Doctors Say Yes - The Atlantic: "When you visit a doctor, you probably assume the treatment you receive is backed by evidence from medical research. Surely, the drug you’re prescribed or the surgery you’ll undergo wouldn’t be so common if it didn’t work, right? For all the truly wondrous developments of modern medicine—imaging technologies that enable precision surgery, routine organ transplants, care that transforms premature infants into perfectly healthy kids, and remarkable chemotherapy treatments, to name a few—it is distressingly ordinary for patients to get treatments that research has shown are ineffective or even dangerous. 

Sometimes doctors simply haven’t kept up with the science. Other times doctors know the state of play perfectly well but continue to deliver these treatments because it’s profitable—or even because they’re popular and patients demand them. Some procedures are implemented based on studies that did not prove whether they really worked in the first place. Others were initially supported by evidence but then were contradicted by better evidence, and yet these procedures have remained the standards of care for years, or decades. 

 Even if a drug you take was studied in thousands of people and shown truly to save lives, chances are it won’t do that for you. The good news is, it probably won’t harm you, either. Some of the most widely prescribed medications do little of anything meaningful, good or bad, for most people who take them. 

 In a 2013 study, a dozen doctors from around the country examined all 363 articles published in The New England Journal of Medicine over a decade—2001 through 2010—that tested a current clinical practice, from the use of antibiotics to treat people with persistent Lyme disease symptoms (didn’t help) to the use of specialized sponges for preventing infections in patients having colorectal surgery (caused more infections). Their results, published in the Mayo Clinic Proceedings, found 146 studies that proved or strongly suggested that a current standard practice either had no benefit at all or was inferior to the practice it replaced; 138 articles supported the efficacy of an existing practice, and the remaining 79 were deemed inconclusive. (There was, naturally, plenty of disagreement with the authors’ conclusions.) 

Some of the contradicted practices possibly affect millions of people daily: Intensive medication to keep blood pressure very low in diabetic patients caused more side effects and was no better at preventing heart attacks or death than more mild treatments that allowed for a somewhat higher blood pressure. Other practices challenged by the study are less common—like the use of a genetic test to determine if a popular blood thinner is right for a particular patient—but gaining in popularity despite mounting contrary evidence. Some examples defy intuition: CPR is no more effective with rescue breathing than if chest compressions are used alone; and breast-cancer survivors who are told not to lift weights with swollen limbs actually should lift weights, because it improves their symptoms.

A separate but similarly themed study in 2012 funded by the Australian Department of Health and Ageing, which sought to reduce spending on needless procedures, looked across the same decade and identified 156 active medical practices that are probably unsafe or ineffective. The list goes on: A brand new review of 48 separate studies—comprising more than 13,000 clinicians—looked at how doctors perceive disease-screening tests and found that they tend to underestimate the potential harms of screening and overestimate the potential benefits; an editorial in American Family Physician, co-written by one of the journal’s editors, noted that a “striking feature” of recent research is how much of it contradicts traditional medical opinion...

A 2007 Journal of the American Medical Association paper coauthored by John Ioannidis—a Stanford University medical researcher and statistician who rose to prominence exposing poor-quality medical science—found that it took 10 years for large swaths of the medical community to stop referencing popular practices after their efficacy was unequivocally vanquished by science. According to Vinay Prasad, an oncologist and one of the authors of the Mayo Clinic Proceedings paper, medicine is quick to adopt practices based on shaky evidence but slow to drop them once they’ve been blown up by solid proof...

In 2007, after a seminal study, the COURAGE trial, showed that stents did not prevent heart attacks or death in stable patients, a trio of doctors at the University of California, San Francisco, conducted 90-minute focus groups with cardiologists to answer that question [of why doctors keep recommending stents]. They presented the cardiologists with fictional scenarios of patients who had at least one narrowed artery but no symptoms and asked them if they would recommend a stent. Almost to a person, the cardiologists, including those whose incomes were not tied to tests and procedures, gave the same answers: They said that they were aware of the data but would still send the patient for a stent. 

The rationalizations in each focus group followed four themes: (1) Cardiologists recalled stories of people dying suddenly—including the highly publicized case of jogging guru Jim Fixx—and feared they would regret it if a patient did not get a stent and then dropped dead. The study authors concluded that cardiologists were being influenced by the “availability heuristic,” a term coined by Nobel laureate psychologists Amos Tversky and Daniel Kahneman for the human instinct to base an important decision on an easily recalled, dramatic example, even if that example is irrelevant or incredibly rare. (2) Cardiologists believed that a stent would relieve patient anxiety. (3) Cardiologists felt they could better defend themselves in a lawsuit if a patient did get a stent and then died, rather than if they didn’t get a stent and died. “In California,” one said, “if this person had an event within two years, the doctor who didn’t [intervene] would be successfully sued.” And there was one more powerful and ubiquitous reason: (4) Despite the data, cardiologists couldn’t believe that stents did not help: Stenting just made so much sense. A patient has chest pain, a doctor sees a blockage, how can opening the blockage not make a difference?"

...At the same time, patients and even doctors themselves are sometimes unsure of just how effective common treatments are, or how to appropriately measure and express such things. Graham Walker, an emergency physician in San Francisco, co-runs a website staffed by doctor volunteers called the NNT that helps doctors and patients understand how impactful drugs are—and often are not. “NNT” is an abbreviation for “number needed to treat,” as in: How many patients need to be treated with a drug or procedure for one patient to get the hoped-for benefit? In almost all popular media, the effects of a drug are reported by relative risk reduction. To use a fictional illness, for example, say you hear on the radio that a drug reduces your risk of dying from Hogwart’s disease by 20 percent, which sounds pretty good. Except, that means if 10 in 1,000 people who get Hogwart’s disease normally die from it, and every single patient goes on the drug, eight in 1,000 will die from Hogwart’s disease. So, for every 500 patients who get the drug, one will be spared death by Hogwart’s disease. Hence, the NNT is 500. 

That might sound fine, but if the drug’s “NNH”—“number needed to harm”—is, say, 20 and the unwanted side effect is severe, then 25 patients suffer serious harm for each one who is saved. Suddenly, the trade-off looks grim. Now, consider a real and familiar drug: aspirin. For elderly women who take it daily for a year to prevent a first heart attack, aspirin has an estimated NNT of 872 and an NNH of 436. That means if 1,000 elderly women take aspirin daily for a decade, 11 of them will avoid a heart attack; meanwhile, twice that many will suffer a major gastrointestinal bleeding event that would not have occurred if they hadn’t been taking aspirin. As with most drugs, though, aspirin will not cause anything particularly good or bad for the vast majority of people who take it. 

That is the theme of the medicine in your cabinet: It likely isn’t significantly harming or helping you. 

“Most people struggle with the idea that medicine is all about probability,” says Aron Sousa, an internist and senior associate dean at Michigan State University’s medical school. As to the more common metric, relative risk, “it’s horrible,” Sousa says. “It’s not just drug companies that use it; physicians use it, too. They want their work to look more useful, and they genuinely think patients need to take this [drug], and relative risk is more compelling than NNT. Relative risk is just another way of lying.”"
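
The NNT arithmetic in that excerpt is worth running once with actual numbers. Here is a minimal sketch in Python using only the figures quoted above (the 20 percent relative risk reduction for the fictional Hogwart's disease, and aspirin's estimated NNT of 872 and NNH of 436 for a year of daily use); the function names and the per-year reading of the aspirin figures are my own.

# A quick sketch of the NNT/NNH arithmetic described in the excerpt above.
# The numbers come from the article; the helper functions are mine.

def nnt_from_relative_risk(baseline_risk, relative_risk_reduction):
    """Number needed to treat = 1 / absolute risk reduction."""
    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    return 1 / absolute_risk_reduction

# Fictional Hogwart's disease example: 10 in 1,000 die untreated,
# and the drug cuts that risk by 20 percent.
nnt = nnt_from_relative_risk(baseline_risk=10 / 1000, relative_risk_reduction=0.20)
print(f"NNT for the Hogwart's disease drug: {nnt:.0f}")  # 500

def people_affected(population, years_each, nn):
    """Rough count of people who see the effect, given an NNT or NNH per person-year."""
    return population * years_each / nn

# Aspirin for elderly women, per the excerpt: NNT of 872 and NNH of 436
# for a year of daily use, scaled to 1,000 women over a decade.
helped = people_affected(1000, 10, 872)  # heart attacks avoided, about 11
harmed = people_affected(1000, 10, 436)  # major GI bleeds caused, about 23
print(f"1,000 women, 10 years: ~{helped:.0f} helped vs ~{harmed:.0f} harmed")

The same baseline numbers show the gap Sousa is pointing at: a 20 percent relative risk reduction is an absolute risk reduction of only 0.2 percentage points, which is why relative risk sounds so much more impressive than the NNT.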