Languages do not share a single history

Languages do not share a single history; rather, different components evolve along different trajectories and at different rates. A large-scale study of Pacific languages reveals that the forces driving grammatical change are different from those driving lexical change. Grammar changes more rapidly and is especially influenced by contact with unrelated languages, while words are more resistant to change.

An international team of researchers, led by scientists at the Max Planck Institute for the Science of Human History, has discovered that a language’s grammatical structures change more quickly over time than its vocabulary, overturning a long-held assumption in the field. The study, published October 2, 2017 in PNAS, analyzed 81 Austronesian languages using a detailed database of grammatical structures and lexicon. Because the languages all come from a single family and geographic region, the researchers could apply sophisticated modelling to determine how quickly different aspects of the languages had changed. Strikingly different processes seemed to be shaping the lexicon and the grammar – the lexicon changed more when new languages were created, while the grammatical structures were more affected by contact with other languages.

One fascinating question for linguists is whether a language evolves as an integrated system, with all its components (grammar, morphology, phonology, lexicon) sharing the same history over time, or whether different components show different histories. Does each word have its own history? The present study, by an international team including researchers from the Max Planck Institute for the Science of Human History, the Max Planck Institute for Psycholinguistics, the Australian National University, the University of Oxford, and Uppsala University, addressed this question by comparing both grammatical structures and lexicon for over 80 Austronesian languages. The study applied cutting-edge computational methods to analyze not only a large number of words but also a large number of grammatical elements, all from languages that were geographically grouped, which allowed for unusually deep comparisons. Interestingly, the study found that grammatical structures on average actually changed faster than vocabulary.

The researchers found that there were specific elements of both vocabulary and grammar that change at a slow rate, as well as elements that change more quickly. One interesting finding was that the slowly evolving grammatical structures tended to be those that speakers are less aware of. Why would this be? When two languages come together, or when one language splits into two, speakers of the languages emphasize or adopt certain elements in order to identify or distinguish themselves from others. We’re all familiar with how easily we can distinguish groups among speakers of our own language by accent or dialect, and we often make associations based on those distinctions. Humans in the past did this too and, the researchers hypothesize, this was one of the main drivers of language change. However, if a speaker is unaware of a certain grammatical structure because it is so subtle, they will not try to change it or use it as a marker of group identity. Thus those features of a language often remain stable. The researchers note that the precise features that remain more stable over time are specific to each language group.

The researchers suggest that, although grammar as a whole might not be a better tool for examining language change, a more nuanced approach that combines computational methods with large-scale databases of both grammar and lexicon could allow for a look into the deeper past.

(photo: Thomas Quine)

Ways to counter misinformation and correct ‘fake news’

It’s no use simply telling people they have their facts wrong. To be more effective at correcting misinformation in news accounts and intentionally misleading “fake news,” you need to provide a detailed counter-message with new information — and get your audience to help develop a new narrative.

Those are some takeaways from an extensive new meta-analysis of laboratory debunking studies published in the journal Psychological Science. The analysis, the first conducted with this collection of debunking data, finds that a detailed counter-message is better at persuading people to change their minds than merely labeling misinformation as wrong. But even after a detailed debunking, misinformation still can be hard to eliminate, the study finds.

“Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation” was conducted by researchers at the Social Action Lab at the University of Illinois at Urbana-Champaign and at the Annenberg Public Policy Center of the University of Pennsylvania. The teams sought “to understand the factors underlying effective messages to counter attitudes and beliefs based on misinformation.” To do that, they examined 20 experiments in eight research reports involving 6,878 participants and 52 independent samples.

The analyzed studies, published from 1994 to 2015, focused on false social and political news accounts, including misinformation in reports of robberies; investigations of a warehouse fire and traffic accident; the supposed existence of “death panels” in the 2010 Affordable Care Act; positions of political candidates on Medicaid; and a report on whether a candidate had received donations from a convicted felon.

The researchers coded and analyzed the results of the experiments across the different studies and measured the effect of presenting misinformation, the effect of debunking, and the persistence of misinformation.

The researchers made three recommendations for debunking misinformation:

– Reduce arguments that support misinformation: News accounts about misinformation should not inadvertently repeat or belabor “detailed thoughts in support of the misinformation.”
– Engage audiences in scrutiny and counterarguing of information: Educational institutions should promote a state of healthy skepticism. When trying to correct misinformation, it is beneficial to have the audience involved in generating counterarguments.
– Introduce new information as part of the debunking message: People are less likely to accept debunking when the initial message is just labeled as wrong rather than countered with new evidence.

The authors encouraged the continued development of “alerting systems” for debunking misinformation such as Snopes.com (fake news), RetractionWatch.com (scientific retractions), and FactCheck.org (political claims). “Such an ongoing monitoring system creates desirable conditions of scrutiny and counterarguing of misinformation,” the researchers wrote.

The music of speech

Researchers at UC San Francisco have identified neurons in the human brain that respond to pitch changes in spoken language, which are essential to clearly conveying both meaning and emotion.

The study was published online August 24, 2017 in Science by the lab of Edward Chang, MD, a professor of neurological surgery at the UCSF Weill Institute for Neurosciences, and led by Claire Tang, a fourth-year graduate student in the Chang lab.

Changes in vocal pitch during speech – part of what linguists call speech prosody – are a fundamental part of human communication, nearly as fundamental as melody to music. In tonal languages such as Mandarin Chinese, pitch changes can completely alter the meaning of a word, but even in a non-tonal language like English, differences in pitch can significantly change the meaning of a spoken sentence.

For instance, “Sarah plays soccer,” in which “Sarah” is spoken with a descending pitch, can be used by a speaker to communicate that Sarah, rather than some other person, plays soccer; in contrast, the same sentence with the pitch emphasis on “soccer” indicates that Sarah plays soccer, rather than some other game. And adding a rising tone at the end of a sentence (“Sarah plays soccer?”) indicates that the sentence is a question.

The brain’s ability to interpret these changes in tone on the fly is particularly remarkable, given that each speaker also has their own typical vocal pitch and style (that is, some people have low voices, others have high voices, and others seem to end even statements as if they were questions). Moreover, the brain must track and interpret these pitch changes while simultaneously parsing which consonants and vowels are being uttered, what words they form, and how those words are being combined into phrases and sentences – with all of this happening on a millisecond scale.
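To make the idea of speaker-relative pitch concrete: one simple way to factor out a talker’s typical pitch is to normalize the pitch contour against that speaker’s own mean and spread. The sketch below is purely illustrative, with made-up frequency values; it is not the analysis used in the study.

```python
# Illustrative sketch (not from the study): normalizing pitch contours by
# speaker so that relative pitch changes, not absolute pitch, stand out.
import numpy as np

def relative_contour(f0_hz):
    """Z-score a fundamental-frequency contour against the speaker's own
    mean and spread, yielding a speaker-independent shape."""
    f0 = np.asarray(f0_hz, dtype=float)
    return (f0 - f0.mean()) / f0.std()

# Made-up contours for "Sarah plays soccer?" with a final rise,
# spoken by a low-pitched voice and a high-pitched voice.
low_voice = [110, 108, 105, 112, 140]    # Hz
high_voice = [220, 216, 210, 224, 280]   # Hz

print(np.round(relative_contour(low_voice), 2))
print(np.round(relative_contour(high_voice), 2))
# The two printed shapes are identical: the final rise is the same
# "question" cue despite very different absolute pitches.
```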

These findings reveal how the brain begins to take apart the complex stream of sounds that make up speech and identify important cues about the meaning of what we’re hearing.

Type, edit and format with your voice in Docs—no keyboard needed!

To get started, select ‘Voice typing’ in the ‘Tools’ menu when you’re using Docs in Chrome. Say what comes to mind—then start editing and formatting with commands like “copy,” “insert table,” and “highlight.”

https://support.google.com/docs/answer/4492226

GPS tracking down to the centimeter

Researchers at the University of California, Riverside have developed a new, more computationally efficient way to process data from the Global Positioning System (GPS) to enhance location accuracy from the meter level down to a few centimeters.

(photo: UC Riverside)

The optimization will be used in the development of autonomous vehicles, improved aviation and naval navigation systems, and precision technologies. It will also enable users to access centimeter-level accuracy location data through their mobile phones and wearable technologies, without increasing the demand for processing power.

The research, led by Jay Farrell, professor and chair of electrical and computer engineering in UCR’s Bourns College of Engineering, was published recently in IEEE’s Transactions on Control Systems Technology. The approach reformulates the series of equations used to determine a GPS receiver’s position, reducing the computational effort required to attain centimeter accuracy.

First conceptualized in the early 1960s, GPS is a space-based navigation system that allows a receiver to compute its location and velocity by measuring the time it takes to receive radio signals from four or more overhead satellites. Due to various error sources, standard GPS yields position measurements accurate to approximately 10 meters.
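To make the basic computation concrete, here is a minimal sketch of conventional meter-level pseudorange positioning. It is a generic textbook formulation with illustrative numbers, not the UCR reformulation: each satellite contributes one equation linking the unknown receiver position and clock bias to a measured pseudorange, and Gauss-Newton least squares solves the system.

```python
# Minimal sketch of standard GPS positioning: solve for the receiver
# position (x, y, z) and its clock bias from pseudoranges to four
# satellites by Gauss-Newton least squares. All numbers are illustrative,
# textbook-style values, not real ephemeris data; the clock bias is kept
# in meters (true bias in seconds times the speed of light).
import numpy as np

# Satellite positions in an Earth-centered frame (m) and measured pseudoranges (m).
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
rho = np.array([21_110e3, 21_205e3, 21_498e3, 21_116e3])

x = np.zeros(4)  # unknowns: [x, y, z, clock_bias_in_meters]
for _ in range(10):
    ranges = np.linalg.norm(sats - x[:3], axis=1)
    residual = rho - (ranges + x[3])
    # Jacobian rows: negative unit line-of-sight vectors, plus a 1 for the bias.
    H = np.hstack([-(sats - x[:3]) / ranges[:, None], np.ones((4, 1))])
    dx = np.linalg.lstsq(H, residual, rcond=None)[0]
    x += dx
    if np.linalg.norm(dx) < 1e-4:  # stop once updates are sub-millimeter
        break

print("position (m):", x[:3].round(1), " clock bias (m):", round(x[3], 1))
```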

Differential GPS (DGPS), which enhances the system through a network of fixed, ground-based reference stations, has improved accuracy to about one meter. But meter-level accuracy isn’t sufficient to support emerging technologies like autonomous vehicles, precision farming, and related applications.

“To fulfill both the automation and safety needs of driverless cars, some applications need to know not only which lane a car is in, but also where it is in that lane–and need to know it continuously at high rates and high bandwidth for the duration of the trip,” said Farrell, whose research focuses on developing advanced navigation and control methods for autonomous vehicles.

Farrell said these requirements can be achieved by combining GPS measurements with data from an inertial measurement unit (IMU) through an inertial navigation system (INS). In the combined system, the GPS provides data to achieve high accuracy, while the IMU provides data to achieve high sample rates and high bandwidth continuously.
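A toy one-dimensional simulation can show why this division of labor works. The sketch below is a generic fixed-gain observer with made-up noise figures, not the UCR algorithm: the IMU is integrated at 100 Hz to track motion between fixes, while a 1 Hz GPS measurement repeatedly pulls the drifting inertial estimate back.

```python
# Toy 1-D illustration of GPS/INS fusion (a generic fixed-gain observer,
# not the UCR algorithm): a biased, noisy IMU is integrated at 100 Hz,
# and a 1 Hz GPS fix repeatedly corrects the accumulated drift.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                            # IMU sample period (100 Hz)
true_pos = true_vel = 0.0
est_pos = est_vel = 0.0              # fused estimate
dr_pos = dr_vel = 0.0                # IMU-only dead reckoning, for contrast
alpha, beta = 0.5, 0.2               # fixed gains (a tuned Kalman filter in practice)

for k in range(1000):                # simulate 10 seconds
    accel = np.sin(0.5 * k * dt)     # true acceleration profile
    true_vel += accel * dt
    true_pos += true_vel * dt

    imu = accel + 0.01 + rng.normal(0.0, 0.02)   # IMU reading: small bias + noise
    est_vel += imu * dt              # INS prediction step
    est_pos += est_vel * dt
    dr_vel += imu * dt               # uncorrected integration, for comparison
    dr_pos += dr_vel * dt

    if k % 100 == 0:                             # a GPS fix arrives at 1 Hz
        gps = true_pos + rng.normal(0.0, 0.02)   # ~2 cm measurement noise
        innovation = gps - est_pos
        est_pos += alpha * innovation            # pull position toward the fix
        est_vel += beta * innovation             # bleed the error out of velocity

print(f"IMU-only drift after 10 s: {abs(dr_pos - true_pos):.3f} m")
print(f"fused error after 10 s:    {abs(est_pos - true_pos):.3f} m")
```

Left on its own, the dead-reckoned estimate drifts without bound as the bias integrates; the occasional absolute fix keeps the fused error bounded at roughly the level of the GPS measurement noise.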

Achieving centimeter accuracy requires “GPS carrier phase integer ambiguity resolution.” Until now, combining GPS and IMU data to solve for the integers has been computationally expensive, limiting its use in real-world applications. The UCR team has changed that, developing a new approach that results in highly accurate positioning information with several orders of magnitude fewer computations.
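The ambiguity problem itself can be stated compactly: the receiver measures the carrier’s phase to millimeter precision, but not the whole number of carrier cycles between satellite and receiver. The deliberately naive sketch below, with illustrative numbers, assumes the float estimate is already within half a wavelength so that rounding recovers the integer; real systems solve an integer least-squares problem (for example with the LAMBDA method), and it is the cost of that solve that the UCR work reduces, which is not shown here.

```python
# Deliberately naive sketch of carrier-phase integer ambiguity resolution.
# Real receivers solve an integer least-squares problem (e.g. the LAMBDA
# method); here the float estimate is assumed good enough that rounding
# recovers the integer. All numbers are illustrative.
import numpy as np

lam = 0.1903                           # GPS L1 carrier wavelength (m)
rng = np.random.default_rng(1)

true_range = 21_110_003.27             # satellite-to-receiver distance (m)
N_true = 110_930_127                   # unknown whole cycles on that path

# Measured fractional carrier phase, in cycles (millimeter-level noise).
phase = true_range / lam - N_true + rng.normal(0.0, 0.01)

# A "float" range estimate (e.g. from code pseudoranges plus filtering),
# assumed here to already be good to a few centimeters.
float_range = true_range + 0.05

N_float = float_range / lam - phase    # float ambiguity estimate
N_fixed = round(N_float)               # "fix" it to the nearest integer

fixed_range = (phase + N_fixed) * lam
print(f"float-range error: {float_range - true_range:+.3f} m")
print(f"fixed-range error: {fixed_range - true_range:+.4f} m")   # millimeters
```

Once the integer is fixed correctly, the remaining error is set by the phase noise times the wavelength, which is why carrier-phase solutions reach centimeter accuracy and below.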

“Achieving this level of accuracy with computational loads that are suitable for real-time applications on low-power processors will not only advance the capabilities of highly specialized navigation systems, like those used in driverless cars and precision agriculture, but it will also improve location services accessed through mobile phones and other personal devices, without increasing their cost,” Farrell said.

Are you Facebook dependent?

What drives you to Facebook? News? Games? Feedback on your posts? The chance to meet new friends?

If any of these hit home, you might have a Facebook dependency. But that’s not necessarily a bad thing, says Amber Ferris, an assistant professor of communication at The University of Akron’s Wayne College.

Ferris, who studies Facebook user trends, says the more people use Facebook to fulfill their goals, the more dependent on it they become. She is quick to explain that this dependency is not equivalent to an addiction. Rather, the reasons people use Facebook determine the level of dependency they have on the social network. The study found that those who use Facebook to meet new people were the most dependent on Facebook overall.

To identify dependency factors, Ferris and Erin Hollenbaugh, an associate professor of communication studies at Kent State University at Stark, studied 301 Facebook users between the ages of 18 and 68 who post on the site at least once a month. They found that people who perceive Facebook as helpful in gaining a better understanding of themselves go to the site to meet new people and to get attention from others. Also, people who use Facebook to gain a deeper understanding of themselves tend to have agreeable personalities, but lower self-esteem than others.

Ferris explains that some users observe how others cope with problems and situations similar to their own “and get ideas on how to approach others in important and difficult situations.”

Ferris and Hollenbaugh presented “A Uses and Gratifications Approach to Exploring Antecedents to Facebook Dependency” at the National Communication Association conference in Las Vegas in November. They say other Facebook dependency signs point to users’ needs for information or entertainment. In other words, a user knows about the local festival scheduled for this weekend thanks to Facebook.

In their previous studies, “Facebook Self-disclosure: Examining the Role of Traits, Social Cohesion, and Motives” (2014) and “Predictors of Honesty, Intent, and Valence of Facebook Self-disclosure” (2015) published in the journal Computers in Human Behavior, Ferris and Hollenbaugh also uncovered personality traits common among specific types of Facebook users.

For example, people who use Facebook to establish new relationships tend to be extroverted. Extroverts are more open to sharing their personal information online, but are not always honest with their disclosures, Ferris says.

The most positive posts online come from those who have high self-esteem, according to Ferris.

“Those who post the most and are the most positive in posts do so to stay connected with people they already know and to gain others’ attention,” Ferris says. “This makes a lot of sense – if you are happy with your life, you are more likely to want to share that happiness with others on social media.”

Complex humor is no laughing matter

Since the earliest times, laughter and humor have performed important functions in human interaction. They help to expedite courtship, improve conversational flow, synchronize emotional states and enhance social bonding. Jokes, a structured form of humor, give us control over laughter and are therefore a way to elicit these positive effects intentionally. In order to comprehend why some jokes are perceived as funny and others are not, Robert Dunbar and colleagues at Oxford University investigated the cognitive mechanism underlying laughter and humor. The research is published in Springer’s journal Human Nature.

The ability to fully understand other people’s often unspoken intentions is called mentalizing, and involves different levels of so-called intentionality. For example, an adult can comprehend up to five such levels of intentionality before losing the plot of a too-complex story. Conversations that share facts normally involve only three such levels. Greater brain power is needed when people chat about the social behavior of others, because it requires them to think and rethink themselves into the shoes of others.

The best jokes are thought to build on a set of expectations and have a punchline to update the knowledge of the listener in an unexpected way. Expectations that involve the thoughts or intentions of people other than the joke-teller or the audience, for example the characters in the joke, are harder to pin down. Our natural ability to handle only a limited number of mindstates comes into play.

In order to shed light on how our mental ability limits what we find funny, the researchers analyzed the reaction of 55 undergraduates from the London School of Economics to 65 jokes from an online compilation of the 101 funniest jokes of all time. The collection mostly consisted of jokes from successful stand-up comedians. Some jokes in the compilation were mere one-liners, while others were longer and more complex. A third of the jokes were factual and contained reasonably undemanding observations of idiosyncrasies in the world. The rest involved the mindstates of third parties. The jokes were rated on a scale from one to four (not at all funny to very funny).

The research team found that the funniest jokes are those that involve two characters and up to five back-and-forth levels of intentionality between the comedian and the audience. People easily lose the plot when jokes are more complex than that. The findings do not suggest that humor is defined by how cleverly a joke is constructed, but rather that there is a limit to how complex its content can be and still be considered funny. According to Dunbar, increasing the mentalizing complexity of a joke improves its perceived quality, but only up to a certain point: stand-up comedians cannot afford to tell intricate jokes that leave their audience feeling as if they’ve missed the punchline.

“The task of professional comics is to elicit laughs as directly and as fast as possible. They generally do this most effectively when ensuring that they keep within the mental competence of the typical audience member,” says Dunbar. “If they exceed these limits, the joke will not be perceived as funny.”

It is likely that everyday conversational jokes do not involve as many intentional levels as those that have been carefully constructed by professional comedians. Further research needs to be conducted in this area. However, Dunbar’s findings shed some light on the mechanics of language-based humor and therefore on the workings of our mind.