The music of speech

Researchers at UC San Francisco have identified neurons in the human brain that respond to pitch changes in spoken language, which are essential to clearly conveying both meaning and emotion.

The study was published online August 24, 2017 in Science by the lab of Edward Chang, MD, a professor of neurological surgery at the UCSF Weill Institute for Neurosciences, and led by Claire Tang, a fourth-year graduate student in the Chang lab.

Changes in vocal pitch during speech – part of what linguists call speech prosody – are a fundamental part of human communication, nearly as fundamental as melody to music. In tonal languages such as Mandarin Chinese, pitch changes can completely alter the meaning of a word, but even in a non-tonal language like English, differences in pitch can significantly change the meaning of a spoken sentence.

For instance, “Sarah plays soccer,” in which “Sarah” is spoken with a descending pitch, can be used by a speaker to communicate that Sarah, rather than some other person, plays soccer; in contrast, placing the pitch emphasis on “soccer” indicates that Sarah plays soccer, rather than some other game. And adding a rising tone at the end of the sentence (“Sarah plays soccer?”) signals that it is a question.

The brain’s ability to interpret these changes in tone on the fly is particularly remarkable, given that each speaker also has their own typical vocal pitch and style (that is, some people have low voices, others have high voices, and others seem to end even statements as if they were questions). Moreover, the brain must track and interpret these pitch changes while simultaneously parsing which consonants and vowels are being uttered, what words they form, and how those words are being combined into phrases and sentences – with all of this happening on a millisecond scale.

These findings reveal how the brain begins to take apart the complex stream of sounds that make up speech and identify important cues about the meaning of what we’re hearing.

Type, edit and format with your voice in Docs—no keyboard needed!

To get started, select ‘Voice typing’ in the ‘Tools’ menu when you’re using Docs in Chrome. Say what comes to mind—then start editing and formatting with commands like “copy,” “insert table,” and “highlight.”

https://support.google.com/docs/answer/4492226

GPS tracking down to the centimeter

Researchers at the University of California, Riverside have developed a new, more computationally efficient way to process Global Positioning System (GPS) data, enhancing location accuracy from the meter level down to a few centimeters.

(photo: UC Riverside)

The optimization will be used in the development of autonomous vehicles, improved aviation and naval navigation systems, and precision technologies. It will also enable users to access centimeter-level accuracy location data through their mobile phones and wearable technologies, without increasing the demand for processing power.

The research, led by Jay Farrell, professor and chair of electrical and computer engineering in UCR’s Bourns College of Engineering, was published recently in IEEE Transactions on Control Systems Technology. The approach involves reformulating a series of equations that are used to determine a GPS receiver’s position, resulting in reduced computational effort being required to attain centimeter accuracy.

First conceptualized in the early 1960s, GPS is a space-based navigation system that allows a receiver to compute its location and velocity by measuring the time it takes to receive radio signals from four or more overhead satellites. Due to various error sources, standard GPS yields position measurements accurate to approximately 10 meters.
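To make the "measure travel times, solve for position" idea concrete, here is a minimal sketch in Python. It is an illustration only, not the UCR method: it works in 2-D, assumes exact range measurements, and omits the receiver clock bias that a real GPS solution must also estimate; the satellite coordinates are invented for the example. The core step, an iterative least-squares (Gauss-Newton) fit, is the same idea a real receiver uses.

```python
import math

def trilaterate(sats, ranges, guess=(0.0, 0.0), iters=20):
    """Gauss-Newton fit of a 2-D receiver position from measured
    distances to transmitters at known positions. Real GPS works in
    3-D and also solves for the receiver clock bias, but the
    iteration is the same idea."""
    x, y = guess
    for _ in range(iters):
        # Build the 2x2 normal equations J^T J * delta = J^T r by hand,
        # where r_i = measured_range_i - predicted_range_i.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (sx, sy), rho in zip(sats, ranges):
            d = math.hypot(x - sx, y - sy)
            # Unit vector from satellite to receiver = one Jacobian row.
            ux, uy = (x - sx) / d, (y - sy) / d
            r = rho - d
            a11 += ux * ux; a12 += ux * uy; a22 += uy * uy
            b1 += ux * r;  b2 += uy * r
        det = a11 * a22 - a12 * a12
        dx = (a22 * b1 - a12 * b2) / det
        dy = (a11 * b2 - a12 * b1) / det
        x, y = x + dx, y + dy
    return x, y

# Receiver truly at (3, 4); three "satellites" with exact ranges.
sats = [(0.0, 20.0), (15.0, 18.0), (-12.0, 16.0)]
truth = (3.0, 4.0)
ranges = [math.hypot(truth[0] - sx, truth[1] - sy) for sx, sy in sats]
x, y = trilaterate(sats, ranges, guess=(1.0, 1.0))
```

With exact ranges the iteration recovers the true position; in practice, errors in the measured travel times are what limit standard GPS to roughly 10-meter accuracy.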

Differential GPS (DGPS), which enhances the system through a network of fixed, ground-based reference stations, has improved accuracy to about one meter. But meter-level accuracy isn’t sufficient to support emerging technologies like autonomous vehicles, precision farming, and related applications.

“To fulfill both the automation and safety needs of driverless cars, some applications need to know not only which lane a car is in, but also where it is in that lane–and need to know it continuously at high rates and high bandwidth for the duration of the trip,” said Farrell, whose research focuses on developing advanced navigation and control methods for autonomous vehicles.

Farrell said these requirements can be achieved by combining GPS measurements with data from an inertial measurement unit (IMU) through an inertial navigation system (INS). In the combined system, the GPS provides data to achieve high accuracy, while the IMU provides data to achieve high sample rates and high bandwidth continuously.
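The division of labor described above - fast but drift-prone IMU data corrected by slower, accurate GPS fixes - is commonly implemented with a Kalman filter. The sketch below is a deliberately simplified 1-D illustration of that pattern, not the UCR algorithm: accelerometer readings drive a high-rate predict step, and each GPS position measurement reins in the accumulated drift. All parameters (noise levels, rates) are invented for the example.

```python
def fuse(accels, gps_fixes, dt=0.01, gps_every=100,
         accel_var=0.05, gps_var=1.0):
    """Kalman filter over the state [position, velocity].
    `gps_fixes` supplies one position measurement per `gps_every`
    IMU samples; returns the filtered position track."""
    x = [0.0, 0.0]                     # state estimate [pos, vel]
    P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
    fixes = iter(gps_fixes)
    track = []
    for k, a in enumerate(accels):
        # Predict: constant-acceleration kinematics at the IMU rate.
        x = [x[0] + x[1] * dt + 0.5 * a * dt * dt, x[1] + a * dt]
        q = accel_var * dt * dt        # crude process noise
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update: whenever a GPS position arrives (H = [1, 0]).
        if (k + 1) % gps_every == 0:
            z = next(fixes)
            s = P[0][0] + gps_var      # innovation variance
            k0, k1 = P[0][0] / s, P[1][0] / s
            r = z - x[0]               # innovation
            x = [x[0] + k0 * r, x[1] + k1 * r]
            P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                 [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        track.append(x[0])
    return track

# Perfect IMU and GPS: the filter reproduces the true trajectory.
dt, n = 0.01, 1000
truth = [0.5 * 1.0 * ((k + 1) * dt) ** 2 for k in range(n)]
fixes = [truth[k] for k in range(n) if (k + 1) % 100 == 0]
track = fuse([1.0] * n, fixes, dt=dt)

# Biased IMU (reads 1.2 instead of 1.0): dead reckoning alone would
# end 10 units off; the GPS updates keep the drift bounded.
drifty = fuse([1.2] * n, fixes, dt=dt)
```

The expensive part that the UCR work addresses - resolving the carrier phase integer ambiguities inside this kind of fusion - is not shown here; the sketch only conveys why the two sensors complement each other.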

Achieving centimeter accuracy requires “GPS carrier phase integer ambiguity resolution.” Until now, combining GPS and IMU data to solve for the integers has been computationally expensive, limiting its use in real-world applications. The UCR team has changed that, developing a new approach that results in highly accurate positioning information with several orders of magnitude fewer computations.

“Achieving this level of accuracy with computational loads that are suitable for real-time applications on low-power processors will not only advance the capabilities of highly specialized navigation systems, like those used in driverless cars and precision agriculture, but it will also improve location services accessed through mobile phones and other personal devices, without increasing their cost,” Farrell said.

Are you Facebook dependent?

What drives you to Facebook? News? Games? Feedback on your posts? The chance to meet new friends?

If any of these hit home, you might have a Facebook dependency. But that’s not necessarily a bad thing, says Amber Ferris, an assistant professor of communication at The University of Akron’s Wayne College.

Ferris, who studies Facebook user trends, says the more people use Facebook to fulfill their goals, the more dependent on it they become. She is quick to explain that this dependency is not equivalent to an addiction. Rather, the reason why people use Facebook determines the level of dependency they have on the social network. The study found those who use Facebook to meet new people were the most dependent on Facebook overall.

To identify dependency factors, Ferris and Erin Hollenbaugh, an associate professor of communication studies at Kent State University at Stark, studied 301 Facebook users between the ages of 18 and 68 who post on the site at least once a month. They found that people who perceive Facebook as helpful in gaining a better understanding of themselves go to the site to meet new people and to get attention from others. Also, people who use Facebook to gain a deeper understanding of themselves tend to have agreeable personalities, but lower self-esteem than others.

Ferris explains that some users observe how others cope with problems and situations similar to their own “and get ideas on how to approach others in important and difficult situations.”

Ferris and Hollenbaugh presented “A Uses and Gratifications Approach to Exploring Antecedents to Facebook Dependency” at the National Communication Association conference in Las Vegas in November. They say other Facebook dependency signs point to users’ needs for information or entertainment. In other words, a user knows about the local festival scheduled for this weekend thanks to Facebook.

In their previous studies, “Facebook Self-disclosure: Examining the Role of Traits, Social Cohesion, and Motives” (2014) and “Predictors of Honesty, Intent, and Valence of Facebook Self-disclosure” (2015) published in the journal Computers in Human Behavior, Ferris and Hollenbaugh also uncovered personality traits common among specific types of Facebook users.

For example, people who use Facebook to establish new relationships tend to be extroverted. Extroverts are more open to sharing their personal information online, but are not always honest with their disclosures, Ferris says.

The most positive posts online come from those who have high self-esteem, according to Ferris.

“Those who post the most and are the most positive in posts do so to stay connected with people they already know and to gain others’ attention,” Ferris says. “This makes a lot of sense – if you are happy with your life, you are more likely to want to share that happiness with others on social media.”

Complex humor is no laughing matter

Since the earliest times, laughter and humor have performed important functions in human interaction. They help to expedite courtship, improve conversational flow, synchronize emotional states and enhance social bonding. Jokes, a structured form of humor, give us control over laughter and are therefore a way to elicit these positive effects intentionally. In order to comprehend why some jokes are perceived as funny and others are not, Robin Dunbar and colleagues at Oxford University investigated the cognitive mechanisms underlying laughter and humor. The research is published in Springer’s journal Human Nature.

The ability to fully understand other people’s often unspoken intentions is called mentalizing, and involves different levels of so-called intentionality. For example, an adult can comprehend up to five such levels of intentionality before losing the plot of a too-complex story. Conversations that share facts normally involve only three such levels. Greater brain power is needed when people chat about the social behavior of others, because it requires them to think and rethink themselves into the shoes of others.

The best jokes are thought to build on a set of expectations and have a punchline to update the knowledge of the listener in an unexpected way. Expectations that involve the thoughts or intentions of people other than the joke-teller or the audience, for example the characters in the joke, are harder to pin down. Our natural ability to handle only a limited number of mindstates comes into play.

In order to shed light on how our mental ability limits what we find funny, the researchers analyzed the reaction of 55 undergraduates from the London School of Economics to 65 jokes from an online compilation of the 101 funniest jokes of all time. The collection mostly consisted of jokes from successful stand-up comedians. Some jokes in the compilation were mere one-liners, while others were longer and more complex. A third of the jokes were factual and contained reasonably undemanding observations of idiosyncrasies in the world. The rest involved the mindstates of third parties. The jokes were rated on a scale from one to four (not at all funny to very funny).

The research team found that the funniest jokes are those that involve two characters and up to five back-and-forth levels of intentionality between the comedian and the audience. People easily lose the plot when jokes are more complex than that. The findings do not suggest that humor is defined by how cleverly a joke is constructed, but rather that there is a limit to how complex its contents can be to still be considered funny. According to Dunbar, increasing the mentalizing complexity of the joke improves the perceived quality, but only up to a certain point: stand-up comedians cannot afford to tell intricate jokes that leave their audience feeling as if they’ve missed the punchline.

“The task of professional comics is to elicit laughs as directly and as fast as possible. They generally do this most effectively when ensuring that they keep within the mental competence of the typical audience member,” says Dunbar. “If they exceed these limits, the joke will not be perceived as funny.”

It is likely that everyday conversational jokes do not involve as many intentional levels as those that have been carefully constructed by professional comedians. Further research needs to be conducted in this area. However, Dunbar’s findings shed some light on the mechanics of language-based humor and therefore on the workings of our mind.

Words can deceive, but tone of voice cannot

A new computer algorithm can predict whether you and your spouse will have an improved or worsened relationship based on the tone of voice that you use when speaking to each other with nearly 79 percent accuracy.

In fact, the algorithm did a better job of predicting marital success of couples with serious marital issues than descriptions of the therapy sessions provided by relationship experts. The research was published in Proceedings of Interspeech on September 6, 2015.

Researchers recorded hundreds of conversations from over one hundred couples taken during marriage therapy sessions over two years, and then tracked their marital status for five years.

An interdisciplinary team then developed an algorithm that broke the recordings into acoustic features using speech-processing techniques. These included pitch, intensity, “jitter” and “shimmer,” among many others – measures like tracking warbles in the voice that can indicate moments of high emotion.
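To give a feel for two of the features named above: "jitter" and "shimmer" are standard voice measures - the average cycle-to-cycle variation in pitch period and in peak amplitude, respectively, usually expressed relative to the mean. A steady voice scores low; a wavering one scores high. The snippet below is an illustrative computation under that textbook definition, not the study's actual pipeline, and the numbers are invented.

```python
def jitter(periods):
    """Mean absolute difference between consecutive pitch periods,
    relative to the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amplitudes):
    """Same relative-variation measure, applied to the per-cycle
    peak amplitudes instead of the periods."""
    return jitter(amplitudes)

# A steady ~5 ms pitch period vs. one that warbles by ~10 percent.
steady = [5.00, 5.01, 4.99, 5.00, 5.01]
warble = [5.0, 5.6, 4.5, 5.7, 4.4]
```

A classifier fed such features can pick up emotional arousal that the literal words, or even a human observer's notes, might miss - which is the intuition behind the study's result.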

Taken together, the vocal acoustic features offered the team’s program a proxy for the subject’s communicative state, and the changes to that state over the course of a single therapy session and across sessions.

These features weren’t analyzed in isolation – rather, the impact of one partner upon the other over multiple therapy sessions was studied.

Once it was fine-tuned, the program was then tested against behavioral analyses made by human experts, who had coded the sessions for positive qualities like “acceptance” or negative qualities like “blame.” The team found that studying voice directly – rather than the expert-created behavioral codes – offered a more accurate glimpse at a couple’s future.

Next, using behavioral signal processing, the team plans to use language (e.g., spoken words) and nonverbal information (e.g., body language) to improve the prediction of how effective treatments will be.