Ways to counter misinformation and correct ‘fake news’

It’s no use simply telling people they have their facts wrong. To be more effective at correcting misinformation in news accounts and intentionally misleading “fake news,” you need to provide a detailed counter-message with new information — and get your audience to help develop a new narrative.

Those are some takeaways from an extensive new meta-analysis of laboratory debunking studies published in the journal Psychological Science. The analysis, the first conducted with this collection of debunking data, finds that a detailed counter-message is better at persuading people to change their minds than merely labeling misinformation as wrong. But even after a detailed debunking, misinformation still can be hard to eliminate, the study finds.

“Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation” was conducted by researchers at the Social Action Lab at the University of Illinois at Urbana-Champaign and at the Annenberg Public Policy Center of the University of Pennsylvania. The teams sought “to understand the factors underlying effective messages to counter attitudes and beliefs based on misinformation.” To do that, they examined 20 experiments in eight research reports involving 6,878 participants and 52 independent samples.

The analyzed studies, published from 1994 to 2015, focused on false social and political news accounts, including misinformation in reports of robberies; investigations of a warehouse fire and traffic accident; the supposed existence of “death panels” in the 2010 Affordable Care Act; positions of political candidates on Medicaid; and a report on whether a candidate had received donations from a convicted felon.

The researchers coded and analyzed the results of the experiments across the different studies and measured the effect of presenting misinformation, the effect of debunking, and the persistence of misinformation.
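
For readers unfamiliar with how such results are combined, a meta-analysis typically converts each experiment’s outcome to a common effect-size metric and pools the estimates, giving more weight to more precise studies. The sketch below shows generic inverse-variance pooling with made-up numbers; it illustrates the general technique only and is not the authors’ actual analysis.

```python
# Generic inverse-variance pooling of effect sizes (illustration only, with made-up numbers).
effects = [0.45, 0.60, 0.30]     # hypothetical per-study effect sizes (e.g., standardized mean differences)
variances = [0.02, 0.05, 0.01]   # hypothetical sampling variances of those effects

weights = [1.0 / v for v in variances]                         # more precise studies get larger weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5                        # standard error of the pooled estimate

print(round(pooled, 3), round(pooled_se, 3))
```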

The researchers made three recommendations for debunking misinformation:

– Reduce arguments that support misinformation: News accounts about misinformation should not inadvertently repeat or belabor “detailed thoughts in support of the misinformation.”
– Engage audiences in scrutiny and counterarguing of information: Educational institutions should promote a state of healthy skepticism. When trying to correct misinformation, it is beneficial to have the audience involved in generating counterarguments.
– Introduce new information as part of the debunking message: People are less likely to accept debunking when the initial message is just labeled as wrong rather than countered with new evidence.

The authors encouraged the continued development of “alerting systems” for debunking misinformation such as Snopes.com (fake news), RetractionWatch.com (scientific retractions), and FactCheck.org (political claims). “Such an ongoing monitoring system creates desirable conditions of scrutiny and counterarguing of misinformation,” the researchers wrote.

Music makes beer taste better

Music can influence how much you like the taste of beer, according to a study published in Frontiers in Psychology.

The findings suggest that a range of multisensory information, such as sound, sensation, shape and color, can influence the way we perceive taste.

The Brussels Beer Project collaborated with UK band The Editors to produce a porter-style beer that took inspiration from the musical and visual identity of the band.

The ale had a medium body and used an Earl Grey infusion that produced citrus notes, contrasting with the malty, chocolate flavors from the mix of grains used in production. This taste profile was designed to broadly correspond to The Editors’ latest album, In Dreams.

Then, a team of researchers led by Dr. Felipe Reinoso Cavalho, from the Vrije Universiteit Brussel and KU Leuven, designed an experiment to see whether music and packaging design would result in a more positive tasting experience.

They invited 231 drinkers to experience the beer in three different conditions.

The first group served as a control and drank the beer from a bottle without a label. In this case, they didn’t listen to any specific song.

The second group, testing the influence of packaging, tasted the beer after seeing the bottle with the label.

The third group drank the beer presented with the label while listening to “Oceans of Light”, one of the songs on the band’s latest album, which the beer was created to reflect.

Before the test, the participants rated how tasty they thought the beer might be. Then, after tasting it, they rated how much they had actually enjoyed the drink.

The results showed that those presented with both the label and the track reported greater enjoyment than those presented with the beer and label alone.

Research into the interaction of different sensory information on taste has opened up the way for food and beverage retailers to create a range of novel eating and drinking experiences.

Why we like the music we do

In Western styles of music, from classical to pop, some combinations of notes are generally considered more pleasant than others. To most of our ears, a chord of C and G, for example, sounds much more agreeable than the grating combination of C and F# (which has historically been known as the “devil in music”).

For decades, neuroscientists have pondered whether this preference is somehow hardwired into our brains. A new study from MIT and Brandeis University suggests that the answer is no.

In a study of more than 100 people belonging to a remote Amazonian tribe with little or no exposure to Western music, the researchers found that dissonant chords such as the combination of C and F# were rated just as likeable as “consonant” chords, which feature simple integer ratios between the acoustical frequencies of the two notes.

This study suggests that preferences for consonance over dissonance depend on exposure to Western musical culture, and that the preference is not innate.

For centuries, some scientists have hypothesized that the brain is wired to respond favorably to consonant chords such as the fifth (so-called because one of the notes is five notes higher than the other). Musicians in societies dating at least as far back as the ancient Greeks noticed that in the fifth and other consonant chords, the ratio of frequencies of the two notes is usually based on integers — in the case of the fifth, a ratio of 3:2. The combination of C and G is often called “the perfect fifth.”
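
As a rough numeric illustration (the tuning figures below are standard equal-temperament values, an assumption added here rather than numbers from the study): middle C is about 261.63 Hz and the G above it about 392 Hz, a ratio of roughly 1.498, very close to 3:2, whereas C and F# sit near the irrational ratio of the square root of 2. A minimal check in Python:

```python
# Illustrative only: approximate equal-tempered frequencies in Hz (not taken from the study).
C4 = 261.63
G4 = 392.00
F_SHARP4 = 369.99

print(round(G4 / C4, 3))        # ~1.498, close to the 3:2 "perfect fifth" ratio
print(round(F_SHARP4 / C4, 3))  # ~1.414, close to sqrt(2): no simple integer ratio (the tritone)
```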

Others believe that these preferences are culturally determined, as a result of exposure to music featuring consonant chords. This debate has been difficult to resolve, in large part because nowadays there are very few people in the world who are not familiar with Western music and its consonant chords.

In 2010, Ricardo Godoy, an anthropologist who has been studying an Amazonian tribe known as the Tsimane for many years, asked Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT, to collaborate on a study of how the Tsimane respond to music. Most of the Tsimane, a farming and foraging society of about 12 000 people, have very limited exposure to Western music. The Tsimane’s own music features both singing and instrumental performance, but usually by only one person at a time.

When asked to rate nonmusical sounds such as laughter and gasps, the Tsimane showed responses similar to those of the comparison groups in the study. They also showed the same dislike for a musical quality known as acoustic roughness.

Envy key motivator behind many Facebook posts, but site hurts mental well-being

A new study by Sauder School of Business Professor Izak Benbasat and his collaborators shows that envy is a key motivator behind Facebook posts and that it contributes to a decrease in mental well-being among users.

The researchers say Facebook creates a vicious cycle of jealousy and self-importance: it leads users to feel their lives are unfulfilling by comparison, and they react by creating posts that portray their best selves.

“Social media participation has been linked to depression, anxiety and narcissistic behaviour, but the reasons haven’t been well-explained,” said Benbasat, Sauder’s Distinguished Professor of Information Systems. “We found envy to be the missing link.”

According to Benbasat, travel photos are a leading contributor to Facebook envy, pushing friends to post their most perfect pictures. He says the unrealistic portrayal of life is not motivated by the desire to make others jealous, but rather by a need to compete and keep up appearances.

Benbasat says the functionality of social networks encourages envy-inducing behaviour, and that’s unlikely to change.

“Sharing pictures and stories about the highlights of your life – that’s so much of what Facebook is for, so you can’t take that away,” he said. “But I think it’s important for people to know what impact it can have on their well-being. Parents and teachers should take note as young people can be particularly vulnerable to the dark side of social media.”

The study, “Why Following Friends Can Hurt You: Empirical Investigation of the Effects of Envy on Social Networking Sites”, was published in the latest issue of Information Systems Research.

We want to be happy!

In 1969, two psychologists at the University of Illinois proposed what they called the Pollyanna Hypothesis – the idea that there is a universal human tendency to use positive words more frequently than negative ones. “Put even more simply,” they wrote, “humans tend to look on (and talk about) the bright side of life.” It was a speculation that has provoked debate ever since.

Now a team of scientists at the University of Vermont and The MITRE Corporation has applied a Big Data approach – using a massive data set of many billions of words, based on actual usage, rather than “expert” opinion – to confirm the 1960s guess.

Movie subtitles in Arabic, Twitter feeds in Korean, the famously dark literature of Russia, websites in Chinese, music lyrics in English, and even the war-torn pages of the New York Times – the researchers found that these, and probably all human language, skew toward the use of happy words.

But doesn’t our global torrent of cursing on Twitter, horror movies, and endless media stories on the disaster du jour mean this can’t be true? No. This huge study indicates that language itself – perhaps humanity’s greatest technology – has a positive outlook.

The new study, “Human Language Reveals a Universal Positivity Bias,” appeared in the February 9 online edition of the Proceedings of the National Academy of Sciences.

The new study also describes a larger project that the team of fourteen scientists has developed to create “physical-like instruments” for both real-time and offline measurement of happiness in large-scale texts – “basically, huge bags of words.”

They call this instrument a “hedonometer” – a happiness meter. It can now trace the global happiness signal from English-language Twitter posts on a near-real-time basis and show differing happiness signals between days. For example, the signal dropped sharply on the day of the terrorist attack on Charlie Hebdo in Paris, but rebounded over the following three days. The hedonometer can also discern different happiness signals in US states and cities: Vermont currently has the happiest signal, while Louisiana has the saddest. And the latest data puts Boulder, CO, in the number one spot for happiness, while Racine, WI, is at the bottom.
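
In outline, the measurement behind the hedonometer is a frequency-weighted average of per-word happiness ratings over the “bag of words” in a text. The sketch below illustrates that kind of averaging with a handful of invented word scores; the instrument’s real word lists and crowd-sourced ratings are published separately and are not reproduced here.

```python
# Bag-of-words happiness averaging (sketch with invented scores, not the real word list).
from collections import Counter

# Hypothetical happiness ratings on a 1-9 scale.
HAPPINESS = {"love": 8.4, "happy": 8.3, "the": 5.0, "war": 1.8, "terror": 1.6}

def text_happiness(text):
    """Average happiness of the scored words in a text, weighted by how often they occur."""
    counts = Counter(word for word in text.lower().split() if word in HAPPINESS)
    total = sum(counts.values())
    if total == 0:
        return float("nan")  # no scored words found
    return sum(HAPPINESS[word] * n for word, n in counts.items()) / total

print(text_happiness("the war on terror"))     # dragged down by negative words
print(text_happiness("love the happy crowd"))  # lifted by positive words
```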

But, as the new paper describes, the team is working to apply the hedonometer to explore happiness signals in many other languages and from many sources beyond Twitter. For example, the team has applied their technique to over ten thousand books, inspired by Kurt Vonnegut’s “shapes of stories” idea. Visualizations of the emotional ups and downs of these books can be seen on the hedonometer website; they rise and fall like a stock-market ticker. The new study shows that Moby Dick, at 170 914 words, has four or five major valleys that correspond to low points in the story, and the hedonometer signal drops off dramatically at the end, revealing this classic novel’s darkly enigmatic conclusion. In contrast, Dumas’s Count of Monte Cristo – 100 081 words in French – ends on a jubilant note, shown by a strong upward spike on the meter.

The new research “in no way asserts that all natural texts will skew positive,” the researchers write, as these various books reveal. But at a more elemental level, the study brings evidence from Big Data to a long-standing debate about human evolution: our social nature appears to be encoded in the building blocks of language.

New skills keep the aging mind sharp

Older adults are often encouraged to stay active and engaged to keep their minds sharp, told that they have to “use it or lose it.” But new research indicates that only certain activities — learning a mentally demanding skill like photography, for instance — are likely to improve cognitive functioning.

These findings, forthcoming in Psychological Science, a journal of the Association for Psychological Science, reveal that less demanding activities, such as listening to classical music or completing word puzzles, probably won’t bring noticeable benefits to an aging mind.

The new findings provide much-needed insight into the components of everyday activities that contribute to cognitive vitality as we age.

Never forget a face

Do you have a forgettable face? Many of us go to great lengths to make our faces more memorable, using makeup and hairstyles to give ourselves a more distinctive look.

Now your face could be instantly transformed into a more memorable one without the need for an expensive makeover, thanks to an algorithm developed by researchers in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

The algorithm, which makes subtle changes to various points on the face to make it more memorable without changing a person’s overall appearance, was unveiled in January at the International Conference on Computer Vision in Sydney.

The system could ultimately be used in a smartphone app to allow people to modify a digital image of their face before uploading it to their social networking pages. It could also be used for job applications, to create a digital version of an applicant’s face that will more readily stick in the minds of potential employers.

Conversely, it could also be used to make faces appear less memorable, so that actors in the background of a television program or film do not distract viewers’ attention from the main actors, for example.

To develop the memorability algorithm, the team first fed the software a database of more than 2 000 images. Each of these images had been awarded a “memorability score,” based on the ability of human volunteers to remember the pictures. In this way the software was able to analyze the information to detect subtle trends in the features of these faces that made them more or less memorable to people.

The researchers then programmed the algorithm with a set of objectives — to make the face as memorable as possible, but without changing the identity of the person or altering their facial attributes, such as their age, gender, or overall attractiveness. Changing the width of a nose may make a face look much more distinctive, for example, but it could also completely alter how attractive the person is, and so would fail to meet the algorithm’s objectives.

When the system has a new face to modify, it first takes the image and generates thousands of copies, known as samples. Each of these samples contains tiny modifications to different parts of the face. The algorithm then analyzes how well each of these samples meets its objectives.

Once the algorithm finds a sample that succeeds in making the face look more memorable without significantly altering the person’s appearance, it makes yet more copies of this new image, with each containing further alterations. It then keeps repeating this process until it finds a version that best meets its objectives.
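
The loop described above is, in outline, a constrained hill-climbing search: perturb, score, keep the best candidate that still passes the identity and attribute checks, and repeat. The toy sketch below illustrates that general idea only; the face representation, the scoring function and the constraint check are invented stand-ins, not CSAIL’s actual models.

```python
# Toy hill-climbing sketch of the sample-score-keep loop (all functions are invented stand-ins).
import random

def memorability_score(face):
    # Hypothetical stand-in for a learned memorability predictor.
    return sum(abs(x) for x in face)

def within_constraints(candidate, original, limit=0.05):
    # Hypothetical identity/attribute check: no point may drift too far from the original.
    return all(abs(c - o) <= limit for c, o in zip(candidate, original))

def perturb(face, scale=0.01):
    # Tiny random modification to each point of the face representation.
    return [x + random.gauss(0, scale) for x in face]

def optimize(face, n_samples=500, n_rounds=50):
    best = list(face)
    for _ in range(n_rounds):
        candidates = [perturb(best) for _ in range(n_samples)]
        valid = [c for c in candidates if within_constraints(c, face)]
        if not valid:
            break
        top = max(valid, key=memorability_score)
        if memorability_score(top) <= memorability_score(best):
            break  # no candidate improved on the current best
        best = top
    return best

original = [0.2, -0.1, 0.05, 0.3]
print(memorability_score(original), memorability_score(optimize(original)))
```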

The team then selected photographs of 500 people and modified them to produce both a memorable and forgettable version of each. When they tested these images on a group of volunteers, they found that the algorithm succeeded in making the faces more or less memorable, as required, in around 75 percent of cases.

The team is now investigating the possibility of adding other attributes to their model, so that it could modify faces to be both more memorable and to appear more intelligent or trustworthy, for example.

Using mobile phones to measure happiness

Researchers at Princeton University are developing ways to use mobile phones to explore how one’s environment influences one’s sense of well-being.

In a study involving volunteers who agreed to provide information about their feelings and locations, the researchers found that mobile phones can efficiently capture information that is otherwise difficult to record, given today’s on-the-go lifestyle. This is important, according to the researchers, because feelings recorded “in the moment” are likely to be more accurate than feelings jotted down after the fact.

To conduct the study, the team created an application for the Android operating system that documented each person’s location and periodically sent the question, “How happy are you?” The application also recorded participants’ locations at various intervals based on either GPS satellites or cellular tower signals.
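
Each prompt in a design like this typically yields one small record pairing the self-report with a timestamp and a location fix. The sketch below shows one hypothetical way such a record might look; the field names, rating scale and example values are assumptions, not the study’s actual data format.

```python
# Hypothetical experience-sampling record; field names and values are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class HappinessPing:
    timestamp: datetime    # when the "How happy are you?" prompt was answered
    latitude: float        # location fix from GPS or cell-tower signals
    longitude: float
    happiness: int         # self-reported rating, e.g. on a 1-5 scale
    location_source: str   # "gps" or "cell"

ping = HappinessPing(datetime.now(timezone.utc), 40.3573, -74.6672, 4, "gps")
print(ping)
```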

The mobile phone method could help overcome some of the limitations that come with surveys conducted at people’s homes, according to the researchers. Census measurements tie people to specific areas — the census tracts in which they live — that are usually not the only areas that people actually frequent.

The team did obtain some preliminary results regarding happiness: for example, male subjects tended to describe themselves as less happy when they were farther from their homes, whereas female subjects did not demonstrate a particular trend with regard to emotions and distance.

The study was published in the June issue of Demography.

Lost your keys? The brain can rapidly mobilize a search party

A contact lens on the bathroom floor, an escaped hamster in the backyard, a car key in a bed of gravel: How are we able to focus so sharply to find that proverbial needle in a haystack? Scientists at the University of California, Berkeley, have discovered that when we embark on a targeted search, various visual and non-visual regions of the brain mobilize to track down a person, animal or thing.

That means that if we’re looking for a youngster lost in a crowd, the brain areas usually dedicated to recognizing other objects, or even the areas attuned to abstract thought, shift their focus and join the search party. Thus, the brain rapidly switches into a highly focused child-finder, redirecting resources it would otherwise use for other mental tasks.

The findings, published in the journal Nature Neuroscience, help explain why we find it difficult to concentrate on more than one task at a time. The results also shed light on how people are able to shift their attention to challenging tasks, and may provide greater insight into neurobehavioral and attention deficit disorders such as ADHD.

These results were obtained in studies that used functional Magnetic Resonance Imaging (fMRI) to record the brain activity of study participants as they searched for people or vehicles in movie clips. In one experiment, participants held down a button whenever a person appeared in the movie. In another, they did the same with vehicles.

The brain scans simultaneously measured neural activity via blood flow in thousands of locations across the brain. Researchers used regularized linear regression analysis, which finds correlations in data, to build models showing how each of the roughly 50 000 locations near the cortex responded to each of the 935 categories of objects and actions seen in the movie clips. Next, they compared how much of the cortex was devoted to detecting humans or vehicles depending on whether or not each of those categories was the search target.
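
In outline, that modeling step is a large multi-output regularized regression: for each modeled brain location, fit weights that map the presence of each labeled category in the movie to that location’s measured response. The sketch below shows the general approach with ridge regression on synthetic data; the array sizes, the synthetic data and the use of scikit-learn are illustrative assumptions, not the authors’ pipeline.

```python
# Ridge-regression sketch on synthetic data (illustration of the general approach only).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_categories, n_locations = 600, 935, 2000   # toy sizes; the study modeled ~50 000 locations

X = rng.integers(0, 2, size=(n_timepoints, n_categories)).astype(float)  # which categories appear at each timepoint
true_weights = rng.normal(size=(n_categories, n_locations))
Y = X @ true_weights + rng.normal(scale=5.0, size=(n_timepoints, n_locations))  # simulated responses

model = Ridge(alpha=10.0)   # regularization keeps the fit stable with many correlated predictors
model.fit(X, Y)             # one set of category weights per modeled location
print(model.coef_.shape)    # (n_locations, n_categories): each row describes one location's category tuning
```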

They found that when participants searched for humans, relatively more of the cortex was devoted to humans, and when they searched for vehicles, more of the cortex was devoted to vehicles. For example, areas that were normally involved in recognizing specific visual categories such as plants or buildings switched to become tuned to humans or vehicles, vastly expanding the area of the brain engaged in the search.

The findings build on an earlier UC Berkeley brain imaging study that showed how the brain organizes thousands of animate and inanimate objects into what researchers call a “continuous semantic space.” Those findings challenged previous assumptions that every visual category is represented in a separate region of visual cortex. Instead, researchers found that categories are actually represented in highly organized, continuous maps.

The latest study goes further to show how the brain’s semantic space is warped during visual search, depending on the search target. Researchers have posted their results in an interactive, online brain viewer. Other co-authors of the study are UC Berkeley neuroscientists Jack Gallant, Alexander Huth and Shinji Nishimoto.

We know when we’re being lazy thinkers

Are we intellectually lazy? Yes we are, but we do know when we take the easy way out, according to a new study by Wim De Neys and colleagues, from the CNRS in France. Contrary to what psychologists believe, we are aware that we occasionally answer easier questions rather than the more complex ones we were asked, and we are also less confident about our answers when we do. The work is published online in Springer’s journal Psychonomic Bulletin & Review.

Research to date on human thinking suggests that our judgment is often biased because we are intellectually lazy, or so-called cognitive misers. We intuitively substitute easier questions for the harder ones we are asked. What is less clear is whether or not we realize that we are doing this and notice our mistake.

Using an adaptation of the standard ‘bat-and-ball’ problem, the researchers explored this phenomenon. The typical ‘bat-and-ball’ problem is as follows: a bat and ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost? The intuitive answer that immediately springs to mind is 10 cents. However, the correct response is 5 cents.
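
To see why 5 cents is correct: if the ball costs x, the bat costs x + $1.00, so x + (x + $1.00) = $1.10, which gives 2x = $0.10 and x = $0.05. A trivial check in Python:

```python
# Quick arithmetic check of the bat-and-ball answer (illustration only).
ball = 0.05
bat = ball + 1.00                        # the bat costs exactly $1 more than the ball
assert abs((ball + bat) - 1.10) < 1e-9   # together they cost $1.10

# The intuitive answer of 10 cents fails the "costs $1 more" condition:
wrong_ball, wrong_bat = 0.10, 1.00
print(wrong_bat - wrong_ball)            # only $0.90 more, not $1.00
```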

The authors developed a control version of this problem, without the relative statement that triggers the substitution of an easier question for the harder one: A magazine and a banana together cost $2.90. The magazine costs $2. How much does the banana cost?

A total of 248 French university students were asked to solve each version of the problem. Once they had written down their answers, they were asked to indicate how confident they were that their answer was correct.

Only 21 percent of the participants managed to solve the standard problem (bat/ball) correctly. In contrast, the control version (magazine/banana) was solved correctly by 98 percent of the participants. In addition, those who gave the wrong answer to the standard problem were much less confident of that answer than they were of their answer to the control version. In other words, they were not completely oblivious to the questionable nature of their wrong answer. The key reason for the errors seems to be that reasoners tend to minimize cognitive effort and stick to intuitive processing.

The authors comment: “Although we might be cognitive misers, we are not happy fools who blindly answer erroneous questions without realizing it.”

Indeed, although people appear to substitute easier questions for harder ones without deliberately intending to, they are less foolish than psychologists might have believed, because at some level they know they are doing it.