Your first language affects learning a new language

Language research has documented that people tend to draw inferences about speakers based on how they talk. These often implicit inferences can occur in the blink of an eye and can affect how smart we think someone is, how much we like them, and more. This is also the case for the non-native accents typically present in speakers who learn a new language later in life.

Linguistic research suggests that these accents are strongly shaped by the first language a speaker learned growing up. New research from an international collaboration between the University of Rochester and universities in Germany and the Netherlands sheds light on just how strong these effects can be. This work, conducted by lead author Job Schepens as part of a Fulbright fellowship to the University of Rochester, is the first to evaluate these effects on a large scale and may lead to novel methods of instruction for adults learning to speak foreign languages. The results of the study were published in the journal Cognition.

This strong effect of a learner’s first language seems to be almost entirely driven by how similar the learner’s first language is to the language they are trying to learn.

The study suggests that a large proportion of the perceived non-nativeness of a learner is simply due to the language they grew up with, and this factor is entirely out of their control.

AI can detect language problems tied to liver failure

Natural language processing, the technology that lets computers read, decipher, understand and make sense of human language, is the driving force behind internet search engines, email filters, digital assistants such as Amazon’s Alexa and Apple’s Siri, and language-to-language translation apps. Now, Johns Hopkins Medicine researchers say they have given this technology a new job as a clinical detective, diagnosing the early and subtle signs of language-associated cognitive impairments in patients with failing livers.

They also report finding evidence that cognitive functioning is likely to be restored following a liver transplant.

In a new paper in the journal npj Digital Medicine, the researchers describe how they used natural language processing, or NLP, to evaluate electronic message samples from patients with end-stage liver disease (ESLD), also known as chronic liver failure. ESLD has been associated with transient cognitive abnormalities such as diminished attention span, loss of memory and reduced psychomotor speed, an individual’s ability to detect and respond to the world around them. This troublesome confusion occurs when a failing liver cannot properly remove toxins from the blood and they cross the blood-brain barrier.

Approximately 80% of patients with ESLD have neurocognitive changes associated with poorer quality of life, including deteriorating sleep and work performance. As many as 20% of adults with ESLD can develop the most severe form of cognitive impairment, overt hepatic encephalopathy, which has a mortality rate of 43% after one year.

Overall, the researchers found that NLP detected distinct, yet subtle, pre-transplant and post-transplant differences in sentence length, word length and other language characteristics, suggesting that the technology could be developed into a valuable diagnostic tool.

Following transplant surgery, the majority of patients returned to normal or near-normal language patterns — more concise sentences with longer words, including a greater number with six letters or more.
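
To make the idea concrete, here is a minimal sketch of how such surface-level language features might be computed from a message. The feature names, example messages, and the naive sentence splitting are illustrative assumptions, not the study’s actual pipeline.

```python
import re

def message_features(text):
    """Compute simple surface features of the kind described above:
    sentence length, average word length, and the share of longer words."""
    # Naive sentence split on terminal punctuation (a simplification).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return None
    return {
        "words_per_sentence": len(words) / max(len(sentences), 1),
        "mean_word_length": sum(len(w) for w in words) / len(words),
        "share_six_plus_letters": sum(len(w) >= 6 for w in words) / len(words),
    }

# Hypothetical pre- and post-transplant messages.
print(message_features("I am not feeling so good today and I forget things a lot."))
print(message_features("Recovery continues steadily; my concentration has improved."))
```

Longer words packed into more concise sentences would push a message’s profile toward the post-transplant pattern the study reports.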

The researchers say that there were no significant differences in the other NLP measures, either between the pre- and post-transplant messages written by the ESLD group members, or when those messages were compared with ones written by the control group participants.

Following this proof-of-concept trial, the researchers say they plan to conduct studies that follow the post-transplant cognitive progress of larger patient populations over longer periods of time, such as one to two years.

Despite its prevalence in today’s society, NLP technology has only recently begun making its mark in health care. Early medical applications have included using a patient’s word choice to signal bleeding in the brain, mathematically ranking the levels of risk for cirrhosis of the liver and developing a computer model for predicting mortality in intensive care units.

Dravidian language family is approximately 4 500 years old

The origin of the Dravidian language family, consisting of about 80 varieties spoken by 220 million people across southern and central India and surrounding countries, can be dated to about 4 500 years ago. This estimate is based on new linguistic analyses by an international team, including researchers from the Max Planck Institute for the Science of Human History, that used data collected first-hand from native speakers representing all previously reported Dravidian subgroups. These findings, published in Royal Society Open Science, match well with earlier linguistic and archaeological studies.

South Asia, stretching from Afghanistan in the west to Bangladesh in the east, is home to at least six hundred languages belonging to six large language families, including Dravidian, Indo-European, and Sino-Tibetan. The Dravidian language family, consisting of about 80 language varieties (both languages and dialects), is today spoken by about 220 million people, mostly in southern and central India but also in surrounding countries. Its four largest languages, Kannada, Malayalam, Tamil and Telugu, have literary traditions spanning centuries, of which Tamil reaches back the furthest. Along with Sanskrit, Tamil is one of the world’s classical languages, but unlike Sanskrit, there is continuity between its classical and modern forms, documented in inscriptions, poems, and secular and religious texts and songs.

The study of the Dravidian languages is crucial for understanding prehistory in Eurasia, as they played a significant role in influencing other language groups. The consensus of the research community is that the Dravidians are natives of the Indian subcontinent and were present prior to the arrival of the Indo-Aryans (Indo-European speakers) in India around 3 500 years ago. It is likely that the Dravidian languages were much more widespread to the west in the past than they are today.

In order to examine questions about when and where the Dravidian languages developed, the researchers made a detailed investigation of the historical relationships of 20 Dravidian varieties. Study author Vishnupriya Kolipakam of the Wildlife Institute of India collected contemporary first-hand data from native speakers of a diverse sample of Dravidian languages, representing all the previously reported subgroups of Dravidian.

The researchers used advanced statistical methods to infer the subgrouping of the Dravidian language family and to estimate its age at about 4 000 – 4 500 years. This estimate, while in line with suggestions from previous linguistic studies, is more robust because it was found consistently across the majority of the different statistical models of evolution tested in the study. The age also matches well with inferences from archaeology, which have previously placed the diversification of Dravidian into North, Central, and South branches at exactly this time, coinciding with the beginnings of cultural developments evident in the archaeological record.
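
The study itself relied on model-based statistical inference, but the basic intuition of dating a family from shared vocabulary can be illustrated with the classic, much cruder glottochronology formula. The numbers below are hypothetical, chosen only to show the arithmetic.

```python
import math

def divergence_time_millennia(shared, retention=0.805):
    """Classic glottochronology (NOT the study's method): if each language
    independently retains a fraction `retention` of its core vocabulary per
    millennium, two sister languages sharing a fraction `shared` of cognates
    split roughly t = ln(shared) / (2 * ln(retention)) millennia ago."""
    return math.log(shared) / (2 * math.log(retention))

# Hypothetical: two varieties from different primary branches sharing
# about 14% of their core vocabulary.
print(f"{divergence_time_millennia(0.14):.1f} millennia")  # ~4.5
```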

Future research would be needed to clarify the relationships between these branches and to examine the geographical history of the language family.

Language is learned in brain circuits that predate humans

It has often been claimed that humans learn language using brain components that are specifically dedicated to this purpose. Now, new evidence strongly suggests that language is in fact learned in brain systems that are also used for many other purposes and even pre-existed humans.

New research from the Georgetown University School of Medicine shows that children learn their native language and adults learn foreign languages in evolutionarily ancient brain circuits that also are used for tasks as diverse as remembering a shopping list and learning to drive.

The study has important implications not only for understanding the biology and evolution of language and how it is learned, but also for how language learning can be improved, both for people learning a foreign language and for those with language disorders such as autism, dyslexia, or aphasia (language problems caused by brain damage such as stroke).

The results showed that how good we are at remembering the words of a language correlates with how good we are at learning in declarative memory, which we use to memorize shopping lists or to remember the bus driver’s face or what we ate for dinner last night.

Grammar abilities, which allow us to combine words into sentences according to the rules of a language, showed a different pattern. The grammar abilities of children acquiring their native language correlated most strongly with learning in procedural memory, which we use to learn tasks such as driving, riding a bicycle, or playing a musical instrument. In adults learning a foreign language, however, grammar correlated with declarative memory at earlier stages of language learning, but with procedural memory at later stages.

The correlations were large, and were found consistently across languages (e.g., English, French, Finnish, and Japanese) and tasks (e.g., reading, listening, and speaking tasks), suggesting that the links between language and the brain systems are robust and reliable.

Because these memory systems are also found in other animals, their biology, including genes that affect how they function, has been studied extensively. The findings from this new study suggest that such genes may also play similar roles in language. Along the same lines, the evolution of these brain systems, and how they came to underlie language, should shed light on the evolution of language.

Additionally, the findings may lead to approaches that could improve foreign language learning and language problems in disorders. For example, various pharmacological agents (e.g., the drug memantine) and behavioral strategies (e.g., spacing out the presentation of information) have been shown to enhance learning or retention of information in these brain systems, the researchers say. Such approaches may thus also be used to facilitate language learning, including in disorders such as aphasia, dyslexia, and autism.

Dutch courage – alcohol improves foreign language skills

A new study published in the Journal of Psychopharmacology, conducted by researchers from the University of Liverpool, Maastricht University and King’s College London, shows that bilingual speakers’ ability to speak a second language is improved after they have consumed a low dose of alcohol.

It is well-established that alcohol impairs cognitive and motor functions. ‘Executive functions’, which include the ability to remember, pay attention, and inhibit inappropriate behaviours, are particularly sensitive to the acute effects of alcohol.

Given that executive functions are important when speaking a second (non-native) language, one might expect that alcohol would impair the ability to speak a second language. On the other hand, alcohol increases self-confidence and reduces social anxiety, both of which might be expected to improve language ability when interacting with another person.

Furthermore, many bilingual speakers believe that alcohol can improve their ability to speak a second language. The aim of this experimental study was to test these competing predictions for the first time.

The researchers tested the effects of a low dose of alcohol on participants’ self-rated and observer-rated ability to converse in Dutch. Participants were 50 native German speakers studying at Maastricht University in the Netherlands who had recently learned to speak, read and write in Dutch.

The researchers found that participants who had consumed alcohol had significantly better observer-ratings for their Dutch language, specifically better pronunciation, compared to those who had not consumed alcohol. However, alcohol had no effect on self-ratings of Dutch language skills.

Languages do not share a single history

Languages do not share a single history but different components evolve along different trajectories and at different rates. A large-scale study of Pacific languages reveals that forces driving grammatical change are different to those driving lexical change. Grammar changes more rapidly and is especially influenced by contact with unrelated languages, while words are more resistant to change.

An international team of researchers, led by scientists at the Max Planck Institute for the Science of Human History, has discovered that a language’s grammatical structures change more quickly over time than its vocabulary, overturning a long-held assumption in the field. The study, published October 2, 2017 in PNAS, analyzed 81 Austronesian languages using a detailed database of grammatical structures and lexicon. By analyzing these languages, all from a single family and geographic region, with sophisticated modelling, the researchers were able to determine how quickly different aspects of the languages had changed. Strikingly different processes seemed to be shaping the lexicon and the grammar: the lexicon changed more when new languages were created, while the grammatical structures were more affected by contact with other languages.

One fascinating question for linguists is whether a language evolves as an integrated system, with all of its aspects (grammar, morphology, phonology, lexicon) sharing the same history over time, or whether different aspects show different histories. Does each word have its own history? The present study, by an international team including researchers from the Max Planck Institute for the Science of Human History, the Max Planck Institute for Psycholinguistics, the Australian National University, the University of Oxford, and Uppsala University, addressed this question by comparing both grammatical structures and lexicon for over 80 Austronesian languages. The study applied cutting-edge computational methods to analyze not only a large number of words, but also a large number of grammatical elements, all from languages that were geographically grouped, allowing detailed and controlled comparisons. Interestingly, the study found that grammatical structures on average actually changed faster than vocabulary.
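
As a toy illustration of the kind of comparison involved (not the study’s actual phylogenetic analysis), languages can be coded as binary vectors of cognates and grammatical features and their overlap compared. The codings below are invented.

```python
def shared_fraction(a, b):
    """Fraction of binary-coded features two languages share."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Invented binary codings (1 = cognate/feature present, 0 = absent).
lexicon_lang_a = [1, 1, 0, 1, 1, 1, 0, 1]
lexicon_lang_b = [1, 1, 0, 1, 0, 1, 0, 1]
grammar_lang_a = [1, 0, 1, 1, 0, 1, 1, 0]
grammar_lang_b = [0, 0, 1, 0, 0, 1, 1, 1]

print("lexical similarity:    ", shared_fraction(lexicon_lang_a, lexicon_lang_b))
print("grammatical similarity:", shared_fraction(grammar_lang_a, grammar_lang_b))
# Lower grammatical than lexical similarity between two related languages
# is the kind of signal consistent with grammar changing faster.
```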

The researchers found that there were specific elements of both vocabulary and grammar that change at a slow rate, as well as elements that change more quickly. One interesting finding was that the slowly evolving grammatical structures tended to be those that speakers are less aware of. Why would this be? When two languages come together, or when one language splits into two, speakers of the languages emphasize or adopt certain elements in order to identify or distinguish themselves from others. We’re all familiar with how easily we can distinguish groups among speakers of our own language by accent or dialect, and we often make associations based on those distinctions. Humans in the past did this too and, the researchers hypothesize, this was one of the main drivers of language change. However, if a speaker is unaware of a certain grammatical structure because it is so subtle, they will not try to change it or use it as a marker of group identity. Thus those features of a language often remain stable. The researchers note that the precise features that remain more stable over time are specific to each language group.

The researchers suggest that, although grammar as a whole might not be a better tool for examining language change, a more nuanced approach that combines computational methods with large-scale databases of both grammar and lexicon could allow for a look into the deeper past.

Tell me what languages you know and I’ll tell you how you read

The way bilingual people read is conditioned by the languages they speak. This is the main conclusion reached by researchers at the Basque Center on Cognition, Brain and Language (BCBL) after reviewing the existing scientific literature and comparing this information to the findings of studies at their own centre.

The scientists found that the languages spoken by bilingual people (when they learned to read in two languages at the same time) affect their reading strategies and even the cognitive foundations of the capacity to read. This discovery could have implications for clinical and educational practice.

Monolingual speakers of transparent languages – where letters are pronounced the same regardless of the word they appear in, such as Basque or Spanish – have a greater tendency to use analytical reading strategies, in which they read words in parts.

On the other hand, speakers of opaque languages, where the sounds of letters differ depending on the word (for example English or French) are more likely to use a global reading strategy. In other words, they tend to read whole words to understand their meaning.

Nonetheless, the BCBL researchers have observed that bilingual people who learn to read two languages at the same time do not read the same way as monolingual speakers; rather, they follow a different pattern which had not previously been described.

According to the literature review, recently published in Psychonomic Bulletin & Review, a contamination effect takes place between the two reading strategies in speakers of two languages. A person learning to read in Spanish and in English will therefore have a greater tendency towards a global strategy, even when reading in Spanish, than a monolingual Spanish speaker. This effect is caused by the influence of the second language.

When reading in English, on the contrary, they will tend towards a more analytical strategy (reading by parts) than monolingual English speakers, due to “contagion” from Spanish.

The scientists believe that learning to read in a second language with characteristics different from those of the mother tongue also causes a change in the cognitive processes that form the basis of reading acquisition, such as visual attention or auditory phonological processing.

In other words, learning to read in an opaque language (such as English or French) reinforces our capacity to rapidly process many visual elements, because whole words must be deciphered to achieve fluent reading in these languages.

As transparent languages have a much greater focus on letter-sound correspondence, learning to read in these languages is thought to improve our sensitivity in perceiving the sounds of the language.

The paper’s authors consider their findings to have implications at different levels. From an educational viewpoint, they allow better understanding of how bilingual populations learn to read and what type of strategies are most advisable to help pupils learn based on the languages they know.

The new discovery could also help in the diagnosis and assessment of dyslexia and other problems with reading.

The languages a child knows are therefore an important factor in identifying potential disorders, as this essential information can explain certain mistakes made when reading.

Switching languages rewires brain

Bilinguals use and learn language in ways that change their minds and brains, which has consequences — many positive, according to Judith F. Kroll, a Penn State cognitive scientist.

“Recent studies reveal the remarkable ways in which bilingualism changes the brain networks that enable skilled cognition, support fluent language performance and facilitate new learning,” said Kroll, Distinguished Professor, psychology, linguistics and women’s studies.

Researchers have shown that the brain structures and networks of bilinguals are different from those of monolinguals. Among other things, the changes help bilinguals to speak in the intended language — not to mistakenly speak in the “wrong” language.

And just as humans are not all the same, bilinguals are not all the same and the changes in the mind and brain differ depending on how the individual learned the language, what the two languages are and the context the languages are used in.

“What we know from recent research is that at every level of language processing — from words to grammar to speech — we see the presence of cross-language interaction and competition,” said Kroll. “Sometimes we see these cross-language interactions in behavior, but sometimes we only see them in brain data.”

Kroll presented recent findings about how bilinguals learn and use language in ways that change their minds and brains today at the annual meeting of the American Association for the Advancement of Science.

Both languages are active at all times in bilinguals, meaning the individuals cannot easily turn off either language and the languages are in competition with one another. In turn this causes bilinguals to juggle the two languages, reshaping the network in the brain that supports each.

“The consequences of bilingualism are not limited to language but reflect a reorganization of brain networks that hold implications for the ways in which bilinguals negotiate cognitive competition more generally,” said Kroll.

Kroll was instrumental in establishing the first U.S. chapter of Bilingualism Matters at Penn State, within the University’s Center for Language Science. Bilingualism Matters is an international organization that aims to bring practically applicable findings from current bilingual research to the public.

Words can deceive, but tone of voice cannot

A new computer algorithm can predict whether you and your spouse will have an improved or worsened relationship based on the tone of voice that you use when speaking to each other with nearly 79 percent accuracy.

In fact, the algorithm did a better job of predicting marital success of couples with serious marital issues than descriptions of the therapy sessions provided by relationship experts. The research was published in Proceedings of Interspeech on September 6, 2015.

Researchers recorded hundreds of conversations from over one hundred couples during marriage therapy sessions spanning two years, and then tracked the couples’ marital status for five years.

An interdisciplinary team then developed an algorithm that broke the recordings down into acoustic features using speech-processing techniques. These included pitch, intensity, “jitter” and “shimmer”, among many others – features that track warbles in the voice that can indicate moments of high emotion.

Taken together, the vocal acoustic features offered the team’s program a proxy for each subject’s communicative state, and for the changes to that state over the course of a single therapy session and across sessions.

These features weren’t analyzed in isolation – rather, the impact of one partner upon the other over multiple therapy sessions was studied.
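
As a rough sketch of what extracting such acoustic features can look like, the snippet below uses the open-source librosa library to pull pitch and intensity from a recording, plus a crude jitter-like measure. The file name is hypothetical, and this is an illustration rather than the team’s actual toolchain.

```python
import numpy as np
import librosa  # open-source audio analysis library

def acoustic_features(path):
    y, sr = librosa.load(path, sr=None)
    # Pitch (fundamental frequency) track via probabilistic YIN.
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
    f0 = f0[~np.isnan(f0)]  # keep voiced frames only
    # Intensity proxy: per-frame root-mean-square energy.
    rms = librosa.feature.rms(y=y)[0]
    # Crude jitter-like measure: mean relative change between successive
    # pitch estimates (true jitter is measured cycle to cycle).
    jitter_proxy = float(np.mean(np.abs(np.diff(f0)) / f0[:-1])) if f0.size > 1 else 0.0
    return {
        "pitch_mean_hz": float(np.mean(f0)) if f0.size else 0.0,
        "pitch_variability": float(np.std(f0)) if f0.size else 0.0,
        "intensity_mean": float(np.mean(rms)),
        "jitter_proxy": jitter_proxy,
    }

print(acoustic_features("spouse_turn_001.wav"))  # hypothetical recording
```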

Once it was fine-tuned, the program was tested against behavioral analyses made by human experts who had coded the conversations for positive qualities like “acceptance” or negative qualities like “blame”. The team found that studying voice directly – rather than the expert-created behavioral codes – offered a more accurate glimpse of a couple’s future.

Next, using behavioral signal processing, the team plans to use language (e.g., spoken words) and nonverbal information (e.g., body language) to improve the prediction of how effective treatments will be.

We want to be happy!

In 1969, two psychologists at the University of Illinois proposed what they called the Pollyanna Hypothesis – the idea that there is a universal human tendency to use positive words more frequently than negative ones. “Put even more simply,” they wrote, “humans tend to look on (and talk about) the bright side of life.” It was a speculation that has provoked debate ever since.

Now a team of scientists at the University of Vermont and The MITRE Corporation has applied a Big Data approach – using a massive data set of many billions of words, based on actual usage rather than “expert” opinion – to confirm the 1960s guess.

Movie subtitles in Arabic, Twitter feeds in Korean, the famously dark literature of Russia, websites in Chinese, music lyrics in English, and even the war-torn pages of the New York Times – the researchers found that these, and probably all human language, skew toward the use of happy words.

But doesn’t our global torrent of cursing on Twitter, horror movies, and endless media stories on the disaster du jour mean this can’t be true? No. This huge study indicates that language itself – perhaps humanity’s greatest technology – has a positive outlook.

The new study, “Human Language Reveals a Universal Positivity Bias,” appeared in the February 9 online edition of the Proceedings of the National Academy of Sciences.

The new paper also describes a larger project in which the team of fourteen scientists has developed “physical-like instruments” for both real-time and offline measurements of happiness in large-scale texts – “basically, huge bags of words.”

They call this instrument a “hedonometer” – a happiness meter. It can now trace the global happiness signal from English-language Twitter posts on a near-real-time basis and show how the signal differs between days. For example, a big drop was noted on the day of the terrorist attack on Charlie Hebdo in Paris, but the signal rebounded over the following three days. The hedonometer can also discern different happiness signals in US states and cities: Vermont currently has the happiest signal, while Louisiana has the saddest. And the latest data puts Boulder, CO, in the number one spot for happiness, while Racine, WI, is at the bottom.

But, as the new paper describes, the team is working to apply the hedonometer to explore happiness signals in many other languages and from many sources beyond Twitter. For example, the team has applied its technique to over ten thousand books, inspired by Kurt Vonnegut’s “shapes of stories” idea. Visualizations of the emotional ups and downs of these books can be seen on the hedonometer website; they rise and fall like a stock-market ticker. The new study shows that the 170 914 words of Moby Dick trace four or five major valleys that correspond to low points in the story, and the hedonometer signal drops off dramatically at the end, revealing this classic novel’s darkly enigmatic conclusion. In contrast, Dumas’s Count of Monte Cristo – 100 081 words in French – ends on a jubilant note, shown by a strong upward spike on the meter.
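
A minimal sketch of this “bag of words” measurement might look like the following. The happiness scores are invented stand-ins for the crowd-rated 1-9 word ratings the hedonometer relies on, and the tiny “book” is made up; the sliding window mimics how an emotional arc is traced through a text.

```python
# Invented scores standing in for crowd-rated 1-9 word happiness ratings.
happiness = {
    "home": 7.0, "love": 8.4, "laughter": 8.5, "sea": 6.2,
    "storm": 3.9, "grief": 2.2, "death": 1.5,
}

def window_happiness(words):
    """Average happiness of the scored words in a window of text."""
    scored = [happiness[w] for w in words if w in happiness]
    return round(sum(scored) / len(scored), 2) if scored else None

# Slide a window through a tiny, invented "book" to trace its arc.
book = "home love laughter sea storm grief death death grief".split()
window = 3
arc = [window_happiness(book[i:i + window]) for i in range(len(book) - window + 1)]
print(arc)  # steadily falling values mirror a story that ends darkly
```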

The new research “in no way asserts that all natural texts will skew positive,” the researchers write, as these various books reveal. But at a more elemental level, the study brings evidence from Big Data to a long-standing debate about human evolution: our social nature appears to be encoded in the building blocks of language.