
Seen not heard

Gesturing with the hands while speaking is common human behaviour, but no one knows why we do it. Now, a group of researchers reports in the Proceedings of the National Academy of Sciences that gesturing adds emphasis to speech – but not in the way researchers had thought.

Gesturing while speaking, or “talking with your hands,” is common around the world. Many communications researchers believe that gesturing is either done to emphasize important points or to elucidate specific ideas (think of this as the “drawing in the air” hypothesis). But there are other possibilities. For example, it could be that gesturing, by altering the size and shape of the chest, lungs and vocal muscles, affects the sound of a person’s speech.

Because of the way the human body is constructed, hand movements exert forces on the torso and throat muscles, so gestures are tightly tied to the amplitude of speech. Rather than just using your chest muscles to produce an airflow for speech, moving your arms while you speak can add acoustic emphasis. And you can hear someone’s motions in their voice, even when they’re trying to conceal them. “Some language researchers don’t like this idea, because they want the language to be all about communicating the contents of your mind, rather than the state of your body. But we think that gestures are allowing the acoustic signal to carry additional information about bodily tension and motion. It’s information of another kind,” says James Dixon, a University of Connecticut psychologist, director of the Center for the Ecological Study of Perception and Action, and one of the authors of the paper.
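
A minimal way to hear this in data is to look at the amplitude envelope of a recording: a burst of gestural force should show up as a bump in loudness. The sketch below is a generic illustration using a synthetic signal and NumPy, not the study’s actual analysis:

```python
import numpy as np

def rms_envelope(signal, sample_rate, window_ms=25.0):
    """Short-time RMS amplitude envelope of a mono audio signal.

    A peak in this envelope coinciding with a hand movement would be
    the kind of gesture-driven acoustic emphasis described above.
    """
    window = int(sample_rate * window_ms / 1000)
    padded = np.pad(signal, (0, -len(signal) % window))  # pad last frame
    frames = padded.reshape(-1, window)
    return np.sqrt(np.mean(frames ** 2, axis=1))

# Toy usage: a 1 kHz tone whose loudness swells twice per second,
# standing in for speech with rhythmic gestural emphasis.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))
envelope = rms_envelope(tone, sr)
print(envelope.max(), envelope.min())  # the swells are clearly visible
```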

(photo: San Francisco State University)

Digital addiction increases loneliness, anxiety and depression

Smartphones are an integral part of most people’s lives, allowing us to stay connected and in-the-know at all times. The downside of that convenience is that many of us are also addicted to the constant pings, chimes, vibrations and other alerts from our devices, unable to ignore new emails, texts and images. In a new study published in NeuroRegulation, San Francisco State University Professor of Health Education Erik Peper and Associate Professor of Health Education Richard Harvey argue that overuse of smartphones is just like any other type of substance abuse.

On top of that, addiction to social media technology may actually have a negative effect on social connection. In a survey of 135 San Francisco State students, Peper and Harvey found that students who used their phones the most reported higher levels of feeling isolated, lonely, depressed and anxious. They believe the loneliness is partly a consequence of replacing face-to-face interaction with a form of communication where body language and other signals cannot be interpreted. They also found that those same students almost constantly multitasked while studying, watching other media, eating or attending class. This constant activity allows little time for bodies and minds to relax and regenerate, says Peper, and also results in “semi-tasking,” where people do two or more tasks at the same time — but half as well as they would have if focused on one task at a time.

Peper and Harvey note that digital addiction is not our fault but a result of the tech industry’s desire to increase corporate profits. “More eyeballs, more clicks, more money,” said Peper. Push notifications, vibrations and other alerts on our phones and computers make us feel compelled to look at them by triggering the same neural pathways in our brains that once alerted us to imminent danger, such as an attack by a tiger or other large predator. “But now we are hijacked by those same mechanisms that once protected us and allowed us to survive — for the most trivial pieces of information,” he said.

But just as we can train ourselves to eat less sugar, for example, we can take charge and train ourselves to be less addicted to our phones and computers. The first step is recognizing that tech companies are manipulating our innate biological responses to danger. Peper suggests turning off push notifications, only responding to email and social media at specific times and scheduling periods with no interruptions to focus on important tasks.

Languages do not share a single history

Languages do not share a single history: different components evolve along different trajectories and at different rates. A large-scale study of Pacific languages reveals that the forces driving grammatical change are different from those driving lexical change. Grammar changes more rapidly and is especially influenced by contact with unrelated languages, while words are more resistant to change.

An international team of researchers, led by scientists at the Max Planck Institute for the Science of Human History, has discovered that a language’s grammatical structures change more quickly over time than its vocabulary, overturning a long-held assumption in the field. The study, published October 2, 2017 in PNAS, analyzed 81 Austronesian languages using a detailed database of grammatical structures and lexicon. Because the languages all come from a single family and geographic region, the researchers could apply sophisticated modelling to determine how quickly different aspects of the languages had changed. Strikingly different processes seemed to be shaping the lexicon and the grammar – the lexicon changed more when new languages were created, while the grammatical structures were more affected by contact with other languages.

One fascinating question for linguists is whether a language evolves as an integrated system, with all of its aspects (grammar, morphology, phonology, lexicon) sharing the same history over time, or whether different aspects show different histories. Does each word have its own history? The present study, by an international team including researchers from the Max Planck Institute for the Science of Human History, the Max Planck Institute for Psycholinguistics, the Australian National University, the University of Oxford, and Uppsala University, addressed this question by comparing both grammatical structures and lexicon for over 80 Austronesian languages. It applied cutting-edge computational methods to analyze not only a large number of words but also a large number of grammatical elements, all from languages that were geographically grouped, allowing unusually deep and controlled comparisons. Interestingly, the study found that grammatical structures on average actually changed faster than vocabulary.
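
As a toy illustration of the kind of comparison involved – not the authors’ actual phylogenetic models – one can code grammatical features and lexical cognates as binary vectors for each language and ask how often two related languages disagree on each:

```python
# Toy comparison of grammatical vs. lexical divergence between two
# related languages coded as binary trait vectors. The data here are
# invented; the study used a large database and phylogenetic models.

def divergence(traits_a, traits_b):
    """Fraction of traits on which two languages disagree."""
    diffs = sum(a != b for a, b in zip(traits_a, traits_b))
    return diffs / len(traits_a)

# Hypothetical codings: 1 = feature present / cognate shared, 0 = not.
grammar_a = [1, 0, 1, 1, 0, 1, 0, 0]
grammar_b = [1, 1, 0, 1, 1, 1, 0, 1]
lexicon_a = [1, 1, 1, 0, 1, 1, 1, 0]
lexicon_b = [1, 1, 1, 0, 1, 0, 1, 0]

print("grammar divergence:", divergence(grammar_a, grammar_b))  # 0.5
print("lexical divergence:", divergence(lexicon_a, lexicon_b))  # 0.125
```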

The researchers found that there were specific elements of both vocabulary and grammar that change at a slow rate, as well as elements that change more quickly. One interesting finding was that the slowly evolving grammatical structures tended to be those that speakers are less aware of. Why would this be? When two languages come together, or when one language splits into two, speakers of the languages emphasize or adopt certain elements in order to identify or distinguish themselves from others. We’re all familiar with how easily we can distinguish groups among speakers of our own language by accent or dialect, and we often make associations based on those distinctions. Humans in the past did this too and, the researchers hypothesize, this was one of the main drivers of language change. However, if a speaker is unaware of a certain grammatical structure because it is so subtle, they will not try to change it or use it as a marker of group identity. Thus those features of a language often remain stable. The researchers note that the precise features that remain more stable over time are specific to each language group.

The researchers suggest that, although grammar as a whole might not be a better tool for examining language change, a more nuanced approach combining computational methods with large-scale databases of both grammar and lexicon could allow for a look into the deeper past.

(photo: Thomas Quine)

Ways to counter misinformation and correct ‘fake news’

It’s no use simply telling people they have their facts wrong. To be more effective at correcting misinformation in news accounts and intentionally misleading “fake news,” you need to provide a detailed counter-message with new information — and get your audience to help develop a new narrative.

Those are some takeaways from an extensive new meta-analysis of laboratory debunking studies published in the journal Psychological Science. The analysis, the first conducted with this collection of debunking data, finds that a detailed counter-message is better at persuading people to change their minds than merely labeling misinformation as wrong. But even after a detailed debunking, misinformation still can be hard to eliminate, the study finds.

“Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation” was conducted by researchers at the Social Action Lab at the University of Illinois at Urbana-Champaign and at the Annenberg Public Policy Center of the University of Pennsylvania. The teams sought “to understand the factors underlying effective messages to counter attitudes and beliefs based on misinformation.” To do that, they examined 20 experiments in eight research reports involving 6,878 participants and 52 independent samples.

The analyzed studies, published from 1994 to 2015, focused on false social and political news accounts, including misinformation in reports of robberies; investigations of a warehouse fire and traffic accident; the supposed existence of “death panels” in the 2010 Affordable Care Act; positions of political candidates on Medicaid; and a report on whether a candidate had received donations from a convicted felon.

The researchers coded and analyzed the results of the experiments across the different studies and measured the effect of presenting misinformation, the effect of debunking, and the persistence of misinformation.
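
A meta-analysis of this kind pools effect sizes across studies, weighting each estimate by its precision. The inverse-variance sketch below uses invented numbers purely to show the mechanics; it is not the paper’s data or its exact model:

```python
import math

# Generic inverse-variance (fixed-effect) meta-analysis sketch.
# Effect sizes and standard errors are invented for illustration.
effects = [0.42, 0.31, 0.55, 0.18]      # per-study debunking effects
std_errors = [0.10, 0.15, 0.12, 0.20]

weights = [1 / se ** 2 for se in std_errors]  # precision weights
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```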

The researchers made three recommendations for debunking misinformation:

– Reduce arguments that support misinformation: News accounts about misinformation should not inadvertently repeat or belabor “detailed thoughts in support of the misinformation.”
– Engage audiences in scrutiny and counterarguing of information: Educational institutions should promote a state of healthy skepticism. When trying to correct misinformation, it is beneficial to have the audience involved in generating counterarguments.
– Introduce new information as part of the debunking message: People are less likely to accept debunking when the initial message is just labeled as wrong rather than countered with new evidence.

The authors encouraged the continued development of “alerting systems” for debunking misinformation such as Snopes.com (fake news), RetractionWatch.com (scientific retractions), and FactCheck.org (political claims). “Such an ongoing monitoring system creates desirable conditions of scrutiny and counterarguing of misinformation,” the researchers wrote.

The music of speech

Researchers at UC San Francisco have identified neurons in the human brain that respond to pitch changes in spoken language, which are essential to clearly conveying both meaning and emotion.

The study was published online August 24, 2017 in Science by the lab of Edward Chang, MD, a professor of neurological surgery at the UCSF Weill Institute for Neurosciences, and led by Claire Tang, a fourth-year graduate student in the Chang lab.

Changes in vocal pitch during speech – part of what linguists call speech prosody – are a fundamental part of human communication, nearly as fundamental as melody to music. In tonal languages such as Mandarin Chinese, pitch changes can completely alter the meaning of a word, but even in a non-tonal language like English, differences in pitch can significantly change the meaning of a spoken sentence.

For instance, “Sarah plays soccer,” in which “Sarah” is spoken with a descending pitch, can be used by a speaker to communicate that Sarah, rather than some other person, plays soccer; in contrast, stressing the final word (“Sarah plays soccer”) indicates that Sarah plays soccer, rather than some other game. And adding a rising tone at the end of a sentence (“Sarah plays soccer?”) indicates that the sentence is a question.

The brain’s ability to interpret these changes in tone on the fly is particularly remarkable, given that each speaker also has their own typical vocal pitch and style (that is, some people have low voices, others have high voices, and others seem to end even statements as if they were questions). Moreover, the brain must track and interpret these pitch changes while simultaneously parsing which consonants and vowels are being uttered, what words they form, and how those words are being combined into phrases and sentences – with all of this happening on a millisecond scale.

These findings reveal how the brain begins to take apart the complex stream of sounds that make up speech and identify important cues about the meaning of what we’re hearing.
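
To get a feel for what tracking vocal pitch involves computationally, here is a toy fundamental-frequency estimator based on autocorrelation – a textbook signal-processing method, not the Chang lab’s analysis pipeline:

```python
import numpy as np

def estimate_f0(frame, sample_rate, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency of a voiced frame by finding
    the strongest autocorrelation peak in the plausible pitch range."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)  # shortest plausible pitch period
    hi = int(sample_rate / fmin)  # longest plausible pitch period
    lag = lo + np.argmax(corr[lo:hi])
    return sample_rate / lag

# Toy usage: a synthetic 220 Hz "voice" frame with one overtone.
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
frame = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
print(estimate_f0(frame, sr))  # ~220 Hz
```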

Type, edit and format with your voice in Docs—no keyboard needed!

To get started, select ‘Voice typing’ in the ‘Tools’ menu when you’re using Docs in Chrome. Say what comes to mind—then start editing and formatting with commands like “copy,” “insert table,” and “highlight.”

https://support.google.com/docs/answer/4492226
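
Under the hood, a feature like this has to decide whether each recognized phrase is dictation or a command. The sketch below is purely hypothetical dispatch logic, not Google’s implementation:

```python
# Hypothetical voice-command dispatcher: known phrases trigger editor
# actions; anything else is typed as text. Not Google's implementation.

def copy_selection(doc):
    doc["clipboard"] = doc["selection"]

def highlight(doc):
    doc["body"].append(f"<highlight>{doc['selection']}</highlight>")

def insert_table(doc):
    doc["body"].append("<table>")

COMMANDS = {"copy": copy_selection, "highlight": highlight,
            "insert table": insert_table}

def handle_phrase(doc, phrase):
    action = COMMANDS.get(phrase.strip().lower())
    if action:
        action(doc)                  # recognized command: do the edit
    else:
        doc["body"].append(phrase)   # otherwise, type the words

doc = {"body": [], "selection": "Sarah plays soccer", "clipboard": ""}
for phrase in ["Hello world", "highlight", "copy"]:
    handle_phrase(doc, phrase)
print(doc["body"], doc["clipboard"])
```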

GPS tracking down to the centimeter

Researchers at the University of California, Riverside have developed a new, more computationally efficient way to process data from the Global Positioning System (GPS), enhancing location accuracy from the meter level down to a few centimeters.

(photo: UC Riverside)


The optimization will be used in the development of autonomous vehicles, improved aviation and naval navigation systems, and precision technologies. It will also enable users to access centimeter-level accuracy location data through their mobile phones and wearable technologies, without increasing the demand for processing power.

The research, led by Jay Farrell, professor and chair of electrical and computer engineering in UCR’s Bourns College of Engineering, was published recently in IEEE’s Transactions on Control Systems Technology. The approach involves reformulating a series of equations that are used to determine a GPS receiver’s position, resulting in reduced computational effort being required to attain centimeter accuracy.

First conceptualized in the early 1960s, GPS is a space-based navigation system that allows a receiver to compute its location and velocity by measuring the time it takes to receive radio signals from four or more overhead satellites. Due to various error sources, standard GPS yields position measurements accurate to approximately 10 meters.
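
At its core, a GPS fix solves for the receiver position (and receiver clock bias) that best explains those measured travel times. The Gauss-Newton sketch below solves the idealized problem with invented satellite geometry and no error sources – an illustration of the principle, not the UCR algorithm:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_positions, pseudoranges, iters=10):
    """Gauss-Newton solve for receiver position and clock bias from
    pseudoranges to four or more satellites (idealized, noise-free)."""
    x = np.zeros(4)  # [x, y, z, clock_bias * C], start at Earth's center
    for _ in range(iters):
        deltas = sat_positions - x[:3]
        ranges = np.linalg.norm(deltas, axis=1)
        residuals = pseudoranges - (ranges + x[3])
        # Jacobian: line-of-sight unit vectors plus the clock-bias column.
        J = np.hstack([-deltas / ranges[:, None], np.ones((len(ranges), 1))])
        x += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x[:3], x[3] / C

# Invented scenario: four satellites at roughly GPS altitude, receiver
# on the Earth's surface with a 1-microsecond clock error.
sats = np.array([[20.2e6, 0, 6.4e6], [0, 20.2e6, 6.4e6],
                 [-20.2e6, 0, 6.4e6], [0, -20.2e6, 20.2e6]], dtype=float)
truth = np.array([1000.0, 2000.0, 6.371e6])
rho = np.linalg.norm(sats - truth, axis=1) + 1e-6 * C
pos, clock_bias = solve_position(sats, rho)
print(np.round(pos - truth, 3), clock_bias)  # ~[0 0 0], ~1e-06
```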

Differential GPS (DGPS), which enhances the system through a network of fixed, ground-based reference stations, has improved accuracy to about one meter. But meter-level accuracy isn’t sufficient to support emerging technologies like autonomous vehicles, precision farming, and related applications.

“To fulfill both the automation and safety needs of driverless cars, some applications need to know not only which lane a car is in, but also where it is in that lane–and need to know it continuously at high rates and high bandwidth for the duration of the trip,” said Farrell, whose research focuses on developing advanced navigation and control methods for autonomous vehicles.

Farrell said these requirements can be achieved by combining GPS measurements with data from an inertial measurement unit (IMU) through an inertial navigation system (INS). In the combined system, the GPS provides data to achieve high accuracy, while the IMU provides data to achieve high sample rates and high bandwidth continuously.
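
That division of labor – slow but accurate GPS corrections reining in a fast but drifting inertial estimate – can be sketched in one dimension with a simple complementary filter, a toy stand-in for a full GPS/INS Kalman filter:

```python
# Toy 1-D GPS/IMU fusion: the IMU integrates acceleration at a high
# rate (and drifts due to sensor bias), while occasional GPS fixes
# pull the position estimate back. A real GPS/INS filter would also
# correct velocity and estimate the bias itself.

def fuse(accels, gps_fixes, dt=0.01, gain=0.2):
    """accels: per-tick accelerometer readings (biased).
    gps_fixes: dict mapping tick index -> GPS position measurement."""
    pos, vel = 0.0, 0.0
    for k, a in enumerate(accels):
        vel += a * dt                    # high-rate inertial propagation
        pos += vel * dt
        if k in gps_fixes:               # low-rate GPS correction
            pos += gain * (gps_fixes[k] - pos)
    return pos

# Truth: constant 1 m/s^2 acceleration; the IMU reads 1.05 (biased).
n, dt = 1000, 0.01
accels = [1.05] * n
gps = {k: 0.5 * (k * dt) ** 2 for k in range(0, n, 100)}  # 1 Hz fixes
print("truth:", 0.5 * (n * dt) ** 2, "fused:", round(fuse(accels, gps, dt), 2))
```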

Achieving centimeter accuracy requires “GPS carrier phase integer ambiguity resolution.” Until now, combining GPS and IMU data to solve for the integers has been computationally expensive, limiting its use in real-world applications. The UCR team has changed that, developing a new approach that results in highly accurate positioning information with several orders of magnitude fewer computations.
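
The “integer ambiguity” is the unknown whole number of carrier wavelengths between satellite and receiver: the carrier phase pins down range very precisely, but only modulo one wavelength. The sketch below uses invented numbers and naive rounding; real receivers resolve the integers with far more robust searches, such as the LAMBDA method:

```python
# Naive carrier-phase integer-ambiguity sketch with invented numbers.
# The carrier phase measures range in cycles, modulo an unknown integer:
#   phase = range / wavelength - N  (plus a little noise)

L1_WAVELENGTH = 0.1903  # meters, GPS L1 carrier

true_range = 21_234_567.8901                     # meters (invented)
true_N = int(true_range / L1_WAVELENGTH)
phase = true_range / L1_WAVELENGTH - true_N + 0.002  # fractional cycles

# A coarse "float" range estimate, e.g. from filtered code measurements
# (getting it this tight is itself hard in practice):
coarse_range = true_range + 0.05                 # 5 cm error
float_N = coarse_range / L1_WAVELENGTH - phase
fixed_N = round(float_N)                         # snap to an integer

recovered = (fixed_N + phase) * L1_WAVELENGTH
print(abs(recovered - true_range))               # sub-millimeter
```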

“Achieving this level of accuracy with computational loads that are suitable for real-time applications on low-power processors will not only advance the capabilities of highly specialized navigation systems, like those used in driverless cars and precision agriculture, but it will also improve location services accessed through mobile phones and other personal devices, without increasing their cost,” Farrell said.