Ways to counter misinformation and correct ‘fake news’

It’s no use simply telling people they have their facts wrong. To be more effective at correcting misinformation in news accounts and intentionally misleading “fake news,” you need to provide a detailed counter-message with new information — and get your audience to help develop a new narrative.

Those are some takeaways from an extensive new meta-analysis of laboratory debunking studies published in the journal Psychological Science. The analysis, the first conducted with this collection of debunking data, finds that a detailed counter-message is better at persuading people to change their minds than merely labeling misinformation as wrong. But even after a detailed debunking, misinformation still can be hard to eliminate, the study finds.

“Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation” was conducted by researchers at the Social Action Lab at the University of Illinois at Urbana-Champaign and at the Annenberg Public Policy Center of the University of Pennsylvania. The teams sought “to understand the factors underlying effective messages to counter attitudes and beliefs based on misinformation.” To do that, they examined 20 experiments in eight research reports involving 6,878 participants and 52 independent samples.

The analyzed studies, published from 1994 to 2015, focused on false social and political news accounts, including misinformation in reports of robberies; investigations of a warehouse fire and traffic accident; the supposed existence of “death panels” in the 2010 Affordable Care Act; positions of political candidates on Medicaid; and a report on whether a candidate had received donations from a convicted felon.

The researchers coded and analyzed the results of the experiments across the different studies and measured the effect of presenting misinformation, the effect of debunking, and the persistence of misinformation.
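
To make the meta-analytic logic concrete, here is a minimal sketch of how a single experiment's debunking effect might be standardized and how effects are pooled across studies. This is not the authors' code or data; the group statistics are invented placeholders, and the formulas (Hedges' g and a DerSimonian-Laird random-effects pool) are standard choices assumed here purely for illustration.

```python
# Illustrative sketch: standardize one experiment's debunking effect and
# pool effects across studies. All numbers are invented placeholders.
import numpy as np

def hedges_g(mean_debunk, mean_control, sd_debunk, sd_control, n_debunk, n_control):
    """Standardized mean difference (Hedges' g) with small-sample correction."""
    pooled_sd = np.sqrt(((n_debunk - 1) * sd_debunk**2 + (n_control - 1) * sd_control**2)
                        / (n_debunk + n_control - 2))
    d = (mean_debunk - mean_control) / pooled_sd
    j = 1 - 3 / (4 * (n_debunk + n_control) - 9)        # small-sample correction
    g = j * d
    # approximate sampling variance of g
    var_g = (n_debunk + n_control) / (n_debunk * n_control) + g**2 / (2 * (n_debunk + n_control))
    return g, var_g

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and its standard error."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed)**2)                 # heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # between-study variance
    w_star = 1 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    return pooled, np.sqrt(1 / np.sum(w_star))

# Hypothetical belief-in-misinformation scores for debunked vs. control groups;
# a negative pooled value would mean debunking reduced belief.
g1, v1 = hedges_g(3.1, 4.0, 1.2, 1.1, 80, 80)
g2, v2 = hedges_g(2.8, 3.9, 1.4, 1.3, 60, 60)
pooled, se = random_effects_pool([g1, g2], [v1, v2])
print(f"pooled debunking effect: {pooled:.2f} (SE {se:.2f})")
```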

The researchers made three recommendations for debunking misinformation:

– Reduce arguments that support misinformation: News accounts about misinformation should not inadvertently repeat or belabor “detailed thoughts in support of the misinformation.”
– Engage audiences in scrutiny and counterarguing of information: Educational institutions should promote a state of healthy skepticism. When trying to correct misinformation, it is beneficial to have the audience involved in generating counterarguments.
– Introduce new information as part of the debunking message: People are less likely to accept debunking when the initial message is just labeled as wrong rather than countered with new evidence.

The authors encouraged the continued development of “alerting systems” for debunking misinformation such as Snopes.com (fake news), RetractionWatch.com (scientific retractions), and FactCheck.org (political claims). “Such an ongoing monitoring system creates desirable conditions of scrutiny and counterarguing of misinformation,” the researchers wrote.

The music of speech

Researchers at UC San Francisco have identified neurons in the human brain that respond to pitch changes in spoken language, which are essential to clearly conveying both meaning and emotion.

The study was published online August 24, 2017, in Science by the lab of Edward Chang, MD, a professor of neurological surgery at the UCSF Weill Institute for Neurosciences, and led by Claire Tang, a fourth-year graduate student in the Chang lab.

Changes in vocal pitch during speech – part of what linguists call speech prosody – are a fundamental part of human communication, nearly as fundamental as melody to music. In tonal languages such as Mandarin Chinese, pitch changes can completely alter the meaning of a word, but even in a non-tonal language like English, differences in pitch can significantly change the meaning of a spoken sentence.

For instance, “Sarah plays soccer,” in which “Sarah” is spoken with a descending pitch, can be used by a speaker to communicate that Sarah, rather than some other person, plays soccer; in contrast, stressing “soccer” indicates that she plays soccer, rather than some other game. And adding a rising tone at the end of the sentence (“Sarah plays soccer?”) indicates that it is a question.

The brain’s ability to interpret these changes in tone on the fly is particularly remarkable, given that each speaker also has their own typical vocal pitch and style (that is, some people have low voices, others have high voices, and others seem to end even statements as if they were questions). Moreover, the brain must track and interpret these pitch changes while simultaneously parsing which consonants and vowels are being uttered, what words they form, and how those words are being combined into phrases and sentences – with all of this happening on a millisecond scale.
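
One way to make "speaker-relative pitch" concrete is to express an intonation contour relative to the talker's own baseline, for example in semitones above or below that talker's median pitch. The short sketch below is purely illustrative: the f0 values are invented, and this normalization is an assumption for demonstration, not the analysis reported in the study.

```python
# Illustrative only: express a pitch (f0) contour relative to the speaker's own
# baseline, so "high" and "low" are defined per talker. Values are invented.
import numpy as np

def relative_pitch_semitones(f0_hz):
    """Convert an f0 contour (Hz) to semitones relative to the talker's median f0."""
    f0_hz = np.asarray(f0_hz, dtype=float)
    voiced = f0_hz > 0                                   # treat 0 Hz as unvoiced frames
    baseline = np.median(f0_hz[voiced])
    rel = np.full_like(f0_hz, np.nan)
    rel[voiced] = 12 * np.log2(f0_hz[voiced] / baseline)
    return rel

# The same rising "question" contour from a low-pitched and a high-pitched talker
low_voice = [95, 100, 105, 115, 130]                     # Hz
high_voice = [190, 200, 210, 230, 260]                   # Hz
print(relative_pitch_semitones(low_voice))
print(relative_pitch_semitones(high_voice))              # nearly identical relative contours
```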

These findings reveal how the brain begins to take apart the complex stream of sounds that make up speech and identify important cues about the meaning of what we’re hearing.

How the brain builds panoramic memory

When asked to visualize your childhood home, you can probably picture not only the house you lived in, but also the buildings next door and across the street. Neuroscientists at MIT’s McGovern Institute for Brain Research have now identified two brain regions that are involved in creating these panoramic memories. The study’s lead author, Caroline Robertson, is a junior fellow of the Harvard Society of Fellows.

These brain regions help us to merge fleeting views of our surroundings into a seamless, 360-degree panorama, the researchers say.

As we look at a scene, visual information flows from our retinas into the brain, which has regions that are responsible for processing different elements of what we see, such as faces or objects. The MIT team suspected that areas involved in processing scenes — the occipital place area (OPA), the retrosplenial complex (RSC), and the parahippocampal place area (PPA) — might also be involved in generating panoramic memories of a place such as a street corner.

If this were true, when you saw two images of houses that you knew were across the street from each other, they would evoke similar patterns of activity in these specialized brain regions. Two houses from different streets would not induce similar patterns.

“Our hypothesis was that as we begin to build memory of the environment around us, there would be certain regions of the brain where the representation of a single image would start to overlap with representations of other views from the same scene,” Robertson says.

To test this idea, the researchers used immersive virtual reality headsets, which allowed them to show people many different panoramic scenes.

Brain scans revealed that when participants saw two images that they knew were linked, the response patterns in the RSC and OPA regions were similar. However, this was not the case for image pairs that the participants had not seen as linked. This suggests that the RSC and OPA, but not the PPA, are involved in building panoramic memories of our surroundings, the researchers say.
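
The logic of that comparison can be sketched with a toy pattern-similarity computation: correlate a brain region's voxel response patterns for pairs of images and check whether linked pairs are more similar than unlinked ones. The "voxel" data below are random placeholders, and the sketch is an assumed simplification, not the authors' analysis pipeline.

```python
# Toy sketch of pattern-similarity logic with simulated "voxel" patterns.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Two linked views share a common "panorama" component plus independent noise;
# a view from a different street corner has its own component.
panorama_a = rng.normal(size=n_voxels)
view_a1 = panorama_a + 0.8 * rng.normal(size=n_voxels)                 # one half of corner A
view_a2 = panorama_a + 0.8 * rng.normal(size=n_voxels)                 # the other half of corner A
view_b = rng.normal(size=n_voxels) + 0.8 * rng.normal(size=n_voxels)   # a different corner

def pattern_similarity(pattern1, pattern2):
    """Pearson correlation between two voxel activity patterns."""
    return np.corrcoef(pattern1, pattern2)[0, 1]

print("linked pair:  ", pattern_similarity(view_a1, view_a2))   # relatively high
print("unlinked pair:", pattern_similarity(view_a1, view_b))    # near zero
```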

In another experiment, the researchers tested whether one image could “prime” the brain to recall an image from the same panoramic scene. To do this, they first showed participants either another image from the same street corner or an unrelated image, then showed them a test scene and asked whether it had been on their left or right when they first saw it. Participants performed much better when primed with the related image.