The wisdom of crowds of doctors

House’s success rate might not be as high if he listened to his team.

Preventable medical error leads to an estimated 200,000 deaths per year in the US, and many of these deaths are caused by mistaken diagnoses. Clearly, making it easier for doctors to avoid errors should be a priority.

One promising avenue could be collective decision-making: pooling the diagnoses of various doctors and using their joint wisdom to hit on the most likely answer. According to a paper in this week’s PNAS, though, this method is only likely to work if all the doctors in the group have the same level of skill.

Obviously, ethics committees are unlikely to allow a team of researchers to toy with patients’ potentially life-or-death diagnoses. So in order to figure out whether collective decision-making would help with the problem, the team combined real-world data with a computer simulation.
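To make the pooling idea concrete, here is a minimal sketch. It is not the paper's actual model, and the skill levels are made-up assumptions; it simply illustrates why majority voting can help a group of equally skilled doctors but backfire when one doctor is far better than the rest.

```python
# A minimal sketch, NOT the paper's model: the accuracy values below are
# made-up assumptions, used only to illustrate how pooling diagnoses by
# majority vote behaves in equal-skill vs. mixed-skill groups.
import random

def majority_vote_accuracy(accuracies, trials=100_000):
    """Fraction of simulated cases in which a simple majority of doctors,
    each independently correct with her own probability, picks the right
    diagnosis (treated here as a binary right/wrong call)."""
    correct = 0
    for _ in range(trials):
        right_votes = sum(random.random() < p for p in accuracies)
        if right_votes > len(accuracies) / 2:
            correct += 1
    return correct / trials

equal_skill = [0.75] * 5                      # five equally skilled doctors
mixed_skill = [0.95, 0.60, 0.60, 0.60, 0.60]  # one expert, four weaker doctors

print("equal skill, majority vote:", majority_vote_accuracy(equal_skill))  # ~0.90
print("mixed skill, majority vote:", majority_vote_accuracy(mixed_skill))  # ~0.80
print("mixed skill, best doctor alone:", 0.95)
```

With these made-up numbers, the equal-skill group's vote beats any individual member, while the mixed-skill group's vote comes out worse than simply deferring to its best doctor, which is the intuition behind the paper's caveat.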

Can we trust peer review? New study highlights some problems

Competitive peer review increases innovation, but it has a dark side.

What peer review shouldn't look like (perhaps minus the cheering fan). (credit: flickr user: Chris Lott)

Peer review is intended to act as a gatekeeper in science. If working researchers deem a paper fit to be published, it should mean that the research is sound, rigorous, and accurate. But an experimental analysis of peer review suggests that it might also end up rejecting high-quality material. The analysis points to high levels of competition as the source of the problem.

Because peer review is a vastly complex system that can function quite differently in various disciplines, researchers Stefano Balietti, Robert L. Goldstone, and Dirk Helbing constructed an experimental game designed to mimic some of the primary features of peer review. Participants were divided into 16 groups of nine people each and tasked with creating a piece of “art” on a computer interface. The pieces could then be submitted to one of three “art exhibitions.”

Each participant was then given three pieces of other people's art to review; pieces that averaged a score higher than five out of ten were accepted into the exhibition. Each group played 30 rounds of the game.
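The acceptance rule amounts to something like the sketch below. The "higher than five" average comes from the description above; the 0–10 integer scores shown are our assumption about a detail the summary doesn't spell out.

```python
# A minimal sketch of the exhibition's acceptance rule as described above.
# The integer 0-10 scores are an assumption; the "average higher than five"
# cutoff is taken from the text.
from statistics import mean

def is_accepted(reviewer_scores):
    """A submission reviewed by three participants is accepted into the
    exhibition if its average score is higher than five out of ten."""
    return mean(reviewer_scores) > 5

print(is_accepted([6, 7, 3]))  # mean ~5.33 -> True (accepted)
print(is_accepted([5, 5, 5]))  # mean  5.0  -> False (rejected)
```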

Dissonant tones sound fine to people not raised on Western music

Musical perception is, surprisingly, not shared by all humans.

Bobby McFerrin demonstrates how Western music lives in our brains.

The notes used in Western music—or, more accurately, the relationships between the notes used in Western music—have a strange power. Bobby McFerrin demonstrated this dramatically by showing that an audience somehow knows what notes to sing when he jumps around the stage. He remarked that “what’s interesting to me about that is, regardless of where I am, anywhere, every audience gets that.”

He’s suggesting that something about the relationships between pitches is culturally universal. All people seem to experience them the same way, regardless of where they're from or whether they have musical training. The question of universals in music perception is important because it can help us determine how much of our perception is shaped by culture and how much by biology. A paper in this week’s Nature reports on the surprising finding that a form of musical perception long thought to be common across all humans might not be so universal after all.

In music, relationships between notes can be used in two different ways. If pitches are played in sequence, the relationships between them are melodic, like the difference between each successive note in "Mary had a Little Lamb." When notes are played simultaneously, like a single strum of all the strings on a guitar or a choir singing, the relationships are harmonic. Different musical traditions have different rules about which melodic and harmonic relationships are permissible.

If you’re worried that stupid people have more kids, don’t be (yet)

A tiny selection against education, but it’s overwhelmed by cultural changes.

It’s a common perception that less-educated people have more children. The idea causes much hand-wringing and gnashing of teeth over the possibility that human populations might become stupider over the course of generations. But it’s actually pretty difficult to confirm whether there really is a reproductive trend that would change the genetic makeup of the human population overall.

Jonathan Beauchamp, a “genoeconomist” at Harvard, is interested in questions at the intersection of genetics and economics. He published a paper in PNAS this week that provides some of the first evidence of evolution at the genetic level in a reasonably contemporary human population. One of his main findings is a slight evolutionary selection for lower education, but it really is slight: just 1.5 months less education per generation. Given that the last century has seen vastly increased education across the globe, with around two years extra per generation over the same period as Beauchamp’s study, this genetic selection is easily outweighed by cultural factors.
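To put those two figures side by side (numbers taken from the article; this is just the arithmetic spelled out):

```python
# Back-of-the-envelope comparison of the two effects quoted above.
genetic_selection = 1.5     # months less education per generation (Beauchamp's estimate)
cultural_increase = 2 * 12  # roughly two years more education per generation, in months

print(cultural_increase / genetic_selection)  # -> 16.0: the cultural gain is ~16x larger
```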

There are other important caveats to the finding, most notably that Beauchamp only looks at a very small segment of the global population: US citizens of European descent, born between 1931 and 1953. This means that we can’t generalize the results to, say, China or Ghana, or even US citizens of non-European descent.

True grit may be a false concept

A meta-analysis of 88 studies on grit raises some questions about the concept.

John Wayne in True Grit. (credit: Paramount Pictures)

The concept of “grit” has risen to prominence recently on a wave of publicity for Angela Duckworth’s book, Grit: The Power of Passion and Perseverance. The idea of grit is that success is about more than just natural talent—finding something you’re passionate about and persevering in it is more important than how talented you are to start out with. This can help to explain why people who are highly talented aren’t always successful.

That grit matters as much as talent is, up to a point, an inspirational message. One common criticism is that it encourages painful self-blame among, and prejudice against, people who fail at something. The concept has also snowballed into a simplistic self-help wrecking ball, and even Duckworth is concerned about how far the idea is being taken.

But is the concept valid to start with? A study due to be published in the Journal of Personality and Social Psychology, an early version of which has been made available by lead author Marcus Credé, takes a close look at the results of multiple studies on grit and points out some important problems with the idea. Grit doesn’t seem to make as big a difference to success as the hype claims, and it doesn’t seem to be all that different from a concept we’ve known about for a long time: conscientiousness.

The “hobbit” was already tiny by 700,000 years ago

Very old and very small Homo floresiensis-like remains found in Indonesia.

The Liang Bua cave, with excavations in progress. (credit: Liang Bua Team)

The diminutive “hobbit” species, Homo floresiensis, was recently in the news because of a new analysis that suggested the species predated the arrival of modern humans to the region. But the discovery left a big unanswered question: how did the hobbit fit into the human family tree? A discovery described in today’s issue of Nature helps piece together more about the species’ history, shedding light on its ancestry and suggesting that it was present in Indonesia as early as 700,000 years ago.

When first found, the tiny bones discovered on the Indonesian island of Flores were dated to around 20 kya (20 thousand years ago). That date was revised earlier this year, placing them between 100 kya and 60 kya. Since modern humans probably moved through the region around 50 kya—and since other species of humans have tended not to last long once our own species moves into the neighborhood—the older dates helped to resolve the mystery of how the hobbits had lived alongside us for so long. Basically, they hadn’t.

Still, this left a lot of other questions open. How long did hobbits live on the island? If hobbits and modern humans coexisted for even a short period of time, is it possible that they shared some of their genes with us, like Neanderthals did? The latter question depends partly on their ancestry: if hobbits descended from Homo erectus, their evolutionary distance from modern humans would make interbreeding unlikely. The new finding suggests that's the case.

Children with weak future planning are more likely to be involved in crime

Surveillance might be a more effective deterrent than prison, researchers suggest.

If you were asked whether you’d prefer to be given $140 today or $1400 in five years' time, the smart answer is obvious. But immediate cash can be really tempting: perhaps you have expensive car repairs looming or you want to buy a gift for someone—that $140 would do the trick. It can be easy to justify cheating your future self out of $1260 in the face of instant gratification.

The tendency to ignore or discount the value of a future benefit is called "time discounting." Plenty of research suggests that time discounting can be linked to criminal behavior, which is the ultimate example of choosing an immediate reward despite an abstract risk of losses. Although the link is intuitive, it still needs confirmation through empirical research, because so many things that seem intuitive turn out to be wrong.
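As a rough illustration, assuming simple exponential discounting (an assumption on our part; the researchers' actual elicitation of discount rates may differ), the choice in the opening example implies a startlingly steep discount rate:

```python
# What annual discount rate makes $140 today equal in value to $1400 in
# five years? Assumes simple exponential discounting, purely for illustration.
implied_rate = (1400 / 140) ** (1 / 5) - 1
print(f"implied annual discount rate: {implied_rate:.1%}")  # ~58.5% per year
```

Anyone who takes the $140 is, in effect, discounting the future at more than roughly 58 percent per year, far steeper than any plausible interest rate they could earn on the money.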

Investigating this question is a tough gig, though. You could test people’s time discounting behaviors and look at their criminal records, but even if you found a link, you wouldn't know whether the discounting behavior led to the crime, or vice versa. Ideally, you need to find a way to measure children’s time discount rates and then wait to see whether they get involved in crime as adults (and whether any such tendency lasts past the adolescent crime peak).

Simple badge incentive could help eliminate bad science

Researchers share their data more often when they can earn a badge for transparency.

Some of the problems within science have been getting much more public attention recently. Psychology’s replication crisis has gained deserved notoriety, but the social sciences are far from alone, with pharmaceutical science being plagued by the same problems.

These issues are the product of a number of underlying problems. A study published recently in the journal PLOS Biology presents evidence showing that a very cheap solution could help to patch up one of them: the sharing of data that underlies published research. It's a solution that isn't out of place in a video game—all it takes is a digital badge or two to encourage researchers to be more transparent.

It’s clear that although the scientific method is still the best tool we have at our disposal, there are some situations where it desperately needs sharpening. One thing that has dulled the tool is the damaging incentives often faced by scientists. They’re under pressure to publish exciting, positive results in order to keep their jobs and be promoted, which can push some people to do awful things like fake data. But on a more mundane level, the pressure often means that only the most exciting, whizzbang studies see the light of day.

Stress may push us towards putting on a tinfoil hat

But the association between stress and conspiracy theories is pretty limited.

There's no one single cause that drives us to strap on a tinfoil hat—instead, a variety of factors interact with each other to push us in that direction. A recent paper in the Journal of Personality and Individual Differences reports a correlation between stress and conspiracy theory belief, suggesting that a very common experience could be one of these factors.

It’s possible that believing in conspiracy theories could make people more stressed, says Pascal Wagner-Egger, a conspiracy belief researcher who wasn’t involved in this paper. “Conspiracy theories are not very reassuring beliefs,” he points out. But the authors of the study think it’s likely that causality runs in the other direction here—that stress makes people predisposed to believing in conspiracy theories.

Untangling the knot

It’s pretty difficult to do an experiment on the causality of conspiracy theories; it’s not like you're likely to get useful information by asking people what they think about the Moon landing, stressing them out, and then asking again. It’s possible that a long-term study could track the development of beliefs alongside stressful life changes like job loss, but longitudinal studies like these are difficult and expensive. That means correlational studies, despite their limitations, are the low-hanging fruit for early exploration.

How cognitive biases contribute to people refusing the flu vaccine

Framing people’s choices differently could boost uptake, economists suggest.

If someone receives the flu vaccine, there’s a better chance they’ll get through flu season without getting sick. But because the flu vaccine isn’t 100 percent effective, they might still end up infected despite the vaccine. To most observers, these two possible outcomes are “not equally salient,” write Frederick Chen and Ryan Stevens, two economists with an interest in vaccine refusal.

When someone gets sick, it’s an adverse event. People take notice of this and use it to predict the likelihood of similar adverse events. When someone doesn’t get sick, that’s, well, nothing. It’s the absence of an event, and that's hard to recognize. “We see when the vaccine fails to protect us," write Chen and Stevens, "but when the vaccine does work, we do not see anything different from our normal state of being.”

The duo thinks that cognitive biases like these are probably playing a role in the incredibly poor uptake of flu vaccines in the US. By tailoring public health messages around known cognitive biases, the economists believe it's possible to improve vaccine uptake. At this point, we don't know whether they're right in their assumptions about the links between these particular cognitive biases and vaccine myths or whether their recommendations would work. Nonetheless, the ideas are interesting and could provide some new avenues for public health research. And given the high national costs of flu, their proposal could turn out to be particularly useful.

Most people will likely be able to recall flu vaccine failures, whether from their own experience or from annoyed stories told by friends and family. They’ll be less likely to recall cases where the vaccine worked, because those are pretty much impossible to detect. So it becomes easy to overestimate how likely the vaccine is to fail and to consider it just a waste of time or money.

This way of thinking is an example of what's called an availability heuristic. It leads people to overuse recent or salient events when they’re estimating the risk of something (think about how you might involuntarily get nervous about flying straight after a huge airplane crash, even if your brain overrides your gut).

The availability heuristic also underlies more damaging myths, Chen and Stevens think. Some people believe that the flu vaccine actually causes flu, a belief that could arise from people seeing visible cases of sickness following the vaccine and constructing a narrative that joins those dots in a particular way (“oh, the vaccine causes the flu!”). That in turn leads to people thinking that those who are pregnant or who have suppressed immune systems should avoid the flu vaccine.

People often don’t get the flu vaccine because they think they’re at low risk for flu. This, Chen and Stevens suggest, could be due to people’s “unrealistic optimism about themselves”—people believe themselves to be above average and think they’re great drivers, for example. They also consider themselves immune to pesky cognitive biases, as the comment thread on any article about cognitive biases will demonstrate. So, the authors write, people may “vastly underestimate their susceptibility by constructing a mental narrative that wholly attributes their influenza-free experiences thus far to their having superior health or genetics.”

Understanding how these biases drive people’s (often unconscious) decision-making processes could steer public health efforts to improve vaccine uptake. For example, campaigns could try to tell stories about people who got the vaccine and then didn’t get sick to balance out the salience of the vaccine failure stories with something more memorable than statistics. Advertisements that ask the audience to consider the vaccine choice made by a relatable person could also lead them to take a less overly optimistic, more objective stance on their own risk of flu infection.

It could also be possible to use cognitive biases to public health advantage by leveraging loss aversion. This is people’s tendency to get far more upset about things they lose than they get happy about things they gain. Because of this, saying “vaccination reduces your risk of flu by up to 80 percent” might be less effective than saying “your risk of getting the flu increases by up to 400 percent if you’re not vaccinated.”
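Those two framings describe the same underlying numbers. The sketch below spells out the arithmetic; the 80 percent figure comes from the text, while the 10 percent baseline risk is a hypothetical placeholder (the relative increase works out the same whatever baseline you pick).

```python
# Same numbers, two framings. 80% risk reduction is from the article;
# the 10% unvaccinated baseline risk is a hypothetical placeholder.
unvaccinated_risk = 0.10
vaccinated_risk = unvaccinated_risk * (1 - 0.80)  # "reduces your risk by up to 80 percent"

relative_increase = (unvaccinated_risk - vaccinated_risk) / vaccinated_risk
print(f"risk increase if not vaccinated: {relative_increase:.0%}")  # -> 400%
```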

One really important thing that Chen and Stevens don’t discuss is that people differ in how resistant they are to getting the flu vaccine in the first place. Your average Joe might not have given flu vaccines much thought; he may have a vague sense that they don't work too well and that he doesn't really get sick anyway. These techniques might work on him, but they likely won't work on a hardcore anti-vaxxer whose position is rooted as much in identity as anything else.

Overall, these ideas seem sensible, but the next step now is to study whether they actually work.

Health Promotion International, 2016. DOI: 10.1093/heapro/daw031