Babies may understand primate calls

Human infants’ responses to the vocalizations of non-human primates shed light on the developmental origin of a crucial link between human language and core cognitive capacities, a new study reports.

Previous studies have shown that even in infants too young to speak, listening to human speech supports core cognitive processes, including the formation of object categories.

Alissa Ferry, lead author and currently a postdoctoral fellow in the Language, Cognition and Development Lab at the Scuola Internazionale Superiore di Studi Avanzati (SISSA) in Trieste, Italy, together with colleagues at Northwestern University, documented that this link is initially broad enough to include the vocalizations of non-human primates.

In humans, language is the primary conduit for conveying our thoughts. The new findings document that for young infants, listening to the vocalizations of humans and non-human primates supports the fundamental cognitive process of categorization. From this broad beginning, the infant mind identifies which signals are part of their language and begins to systematically link these signals to meaning.

Furthermore, the researchers found that infants’ response to non-human primate vocalizations at three and four months was not just due to the sounds’ acoustic complexity, as infants who heard backward human speech segments failed to form object categories at any age.

The importance of language structure

Think of a frequently used noun or verb in your language and try to count how many times you have uttered it in the last two hours. Now do the same with the article “the”. The language we speak is made up not only of content words (nouns, verbs, and adjectives, for instance) but also of many words that support them (articles, prepositions, and so on), known as function words or functors, which occur far more frequently than content words. Despite the huge variability among known languages, linguists can divide them roughly into two main categories: languages in which the functor precedes the content word – which tend to use Verb-Object (VO) order – and languages with the reverse arrangement (OV). Experimental observations show that word frequency is a cue that helps listeners identify which category a language belongs to and, as a consequence, “tune in” to it.
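The frequency cue described above can be illustrated with a toy computation. This is only a sketch: the two-word phrases below are invented mini-corpora (an article-plus-noun pattern standing in for a VO language, a noun-plus-case-particle pattern standing in for an OV language, loosely modeled on Japanese “ga”/“o”), not real data from the study.

```python
from collections import Counter

# Hypothetical two-word phrases, not real corpora: the VO-like language puts a
# frequent article before each noun; the OV-like language puts a frequent case
# particle after each noun.
vo_phrases = [("the", "girl"), ("the", "ball"), ("the", "dog"), ("the", "boy")]
ov_phrases = [("girl", "ga"), ("boy", "ga"), ("ball", "o"), ("dog", "o")]

def frequent_word_first(phrases):
    """Fraction of phrases whose more frequent word comes first.
    High values suggest functor-initial (VO-type) order; low values
    suggest functor-final (OV-type) order."""
    freq = Counter(w for phrase in phrases for w in phrase)
    decisive = [p for p in phrases if freq[p[0]] != freq[p[1]]]  # skip ties
    return sum(freq[a] > freq[b] for a, b in decisive) / len(decisive)

print(frequent_word_first(vo_phrases))  # 1.0 — frequent words lead
print(frequent_word_first(ov_phrases))  # 0.0 — frequent words trail
```

A learner with no dictionary can compute something like this from raw input, which is why frequency alone can serve as an early cue to a language’s word-order type.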

Knowing a language’s structure enables listeners to segment speech (to divide the continuous stream into individual words) and affects how easily the language can be learned. This effect has been observed even in very young children for some languages, such as Italian and Japanese. A group of neuroscientists, including Jacques Mehler and Marina Nespor of the International School for Advanced Studies (SISSA) in Trieste, has now extended the experiment to adults, employing a wider range of languages. The research has been published in the journal Frontiers in Psychology.

The experiment was carried out on native speakers of Italian and French (representing the VO language group) and of Japanese and Basque (representing OV languages), who performed learning tasks centered on sentences in an artificial (invented) language that could follow either of the two order structures.

Technology reflected in our words

New research shows that as culture has evolved over the last two centuries – with increasing urbanization, greater reliance on technology, and widespread availability of formal education – so has human psychology. The findings are forthcoming in Psychological Science, a journal of the Association for Psychological Science.

Psychological scientist Patricia Greenfield measured culture-wide psychological change over two centuries using the Google Ngram Viewer, examining the frequencies of specific words in a corpus of over 1,160,000 English-language books published in the United States between 1800 and 2000.

Drawing on her Theory of Social Change and Human Development, Greenfield hypothesized that the usage of specific words would wax and wane as a reflection of psychological adaptation to sociocultural change. Data from the corpus of books supported her hypothesis.

For example, the words “choose” and “get” — markers of the individualism and materialistic values that are adaptive in wealthier urban settings — rose in frequency between 1800 and 2000. Words that reflect the social responsibilities that are adaptive in rural settings – such as “obliged” and “give” — declined over the same period.
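The measure behind these comparisons is relative frequency: a word’s occurrence count divided by the total number of words in the corpus for that year. A minimal sketch of the computation follows; the counts and totals here are invented placeholders, not figures from the Ngram corpus.

```python
# Hypothetical (year -> counts) table standing in for real Ngram data.
counts = {
    1800: {"get": 120, "obliged": 90, "total": 100_000},
    1900: {"get": 260, "obliged": 60, "total": 150_000},
    2000: {"get": 800, "obliged": 20, "total": 200_000},
}

def rel_freq(word):
    """Relative frequency of `word` per year: occurrences / total words."""
    return {yr: row[word] / row["total"] for yr, row in counts.items()}

get_trend = rel_freq("get")
obliged_trend = rel_freq("obliged")
# In this toy table, "get" rises while "obliged" falls across the centuries:
assert get_trend[1800] < get_trend[1900] < get_trend[2000]
assert obliged_trend[1800] > obliged_trend[1900] > obliged_trend[2000]
```

Normalizing by the yearly total matters because the number of published books grows enormously over two centuries; raw counts alone would make almost every word appear to rise.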

Interestingly, there was a short-term deviation from this overall trend: Usage of “get” declined between 1940 and the 1960s before rising again in the 1970s. Greenfield believes this deviation may reflect a decline in self-interest motivation during World War II and the Civil Rights movement.

Greenfield also observed a gradual rise in the use of “feel” and concomitant decline in the use of “act,” suggesting a turn toward inner mental life and away from outward behavior.

The corpus also revealed a growing focus on the self, with the use of “child,” “unique,” “individual,” and “self” increasing from 1800 to 2000. Over the same period, the importance of obedience to authority, social relationships, and religion in everyday life seems to have waned, as reflected in the decline of “obedience,” “authority,” “belong,” and “pray.”

The same patterns in word usage also emerged in the corpus of books published in the United Kingdom over the last 200 years. Furthermore, Greenfield was able to replicate the findings using synonyms for each target word in both the U.S. corpus and the U.K. corpus.

How bilinguals switch between languages

Individuals who learn two languages at an early age seem to switch back and forth between separate “sound systems” for each language, according to new research conducted at the University of Arizona.

The research, to be published in a forthcoming issue of Psychological Science, a journal of the Association for Psychological Science, addresses enduring questions in bilingual studies about how bilingual speakers hear and process sound in two different languages.

Kalim Gonzales’s research supports the view that bilinguals who learn two languages early in life learn two separate processing modes, or “sound systems.”

The study looked at 32 Spanish-English early bilinguals, who had learned their second language before age 8. Participants were presented with a series of pseudo-words beginning with a ‘pa’ or a ‘ba’ sound and asked to identify which of the two sounds they heard.

While ‘pa’ and ‘ba’ sounds exist in both English and Spanish, how those sounds are produced and perceived in the two languages varies subtly. In the case of ‘ba,’ for example, English speakers typically begin to vibrate their vocal cords the moment they open their lips, while Spanish speakers begin vocal cord vibration slightly before they open their lips, and produce ‘pa’ in a manner similar to English ‘ba.’ As a result of those subtle differences, English-only speakers might, in some cases, confuse the ‘ba’ and ‘pa’ sounds they hear in Spanish.
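The timing difference described above is known in phonetics as voice onset time (VOT): negative values mean voicing starts before the lips open. A toy model of two language-specific category boundaries can make the “two sound systems” idea concrete; the boundary values here are rough illustrative assumptions, not measurements from the study.

```python
# Toy model of two "sound systems": the same acoustic token is labeled
# differently depending on which language-specific VOT boundary is active.
# Boundary values are illustrative assumptions (ms): English splits /b/ from
# /p/ at a longer voicing lag than Spanish does.
BOUNDARIES_MS = {"english": 25, "spanish": 0}

def label_consonant(vot_ms, mode):
    """Classify a ba/pa token under a language-specific VOT boundary."""
    return "ba" if vot_ms < BOUNDARIES_MS[mode] else "pa"

token = 10  # a short-lag VOT (ms) that falls between the two boundaries
print(label_consonant(token, "english"))  # "ba" under the English boundary
print(label_consonant(token, "spanish"))  # "pa" under the Spanish boundary
```

A bilingual who switches `mode` based on which language they believe they are hearing would show exactly the perceptual shift the study reports, while a monolingual with a single fixed boundary would not.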

For the study, the bilingual participants were divided into two groups. One group was told they would be hearing rare words in Spanish, while the other was told they would be hearing rare words in English. Both groups heard audio recordings of variations of the same two words — bafri and pafri — which are not real words in either language.

Participants were then asked to identify whether the words they heard began with a ‘ba’ or a ‘pa’ sound.

Each group heard the same series of words, but for the group told they were hearing Spanish, the ends of the words were pronounced slightly differently, with the ‘r’ getting a Spanish pronunciation.

The findings: Participants perceived ‘ba’ and ‘pa’ sounds differently depending on whether they were told they were hearing Spanish words, with the Spanish pronunciation of ‘r,’ or whether they were told they were hearing English words, with the English pronunciation of ‘r.’

When the study was repeated with 32 English monolinguals, participants did not show the same shift in perception; they labeled ‘ba’ and ‘pa’ sounds the same way regardless of which language they were told they were hearing. It was that lack of an effect for monolinguals that provided the strongest evidence for two sound systems in bilinguals.

These new findings challenge the idea that bilinguals always have one dominant language.

Linguistics and information theory

The majority of languages — roughly 85 percent of them — can be sorted into two categories: those, like English, in which the basic sentence form is subject-verb-object (“the girl kicks the ball”), and those, like Japanese, in which the basic sentence form is subject-object-verb (“the girl the ball kicks”).

The reason for the difference has remained somewhat mysterious, but researchers from MIT’s Department of Brain and Cognitive Sciences now believe that they can account for it using concepts borrowed from information theory. The researchers will present their hypothesis in an upcoming issue of the journal Psychological Science.

In their paper, the MIT researchers argue that languages develop the word order rules they do in order to minimize the risk of miscommunication across a noisy channel.
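One way the noisy-channel argument can be made concrete is with a toy deletion experiment: drop one word from a three-word sentence and ask whether a listener can still recover who did what to whom from word order alone. This sketch is an invented illustration of the general idea, not the MIT researchers’ actual model.

```python
# Toy noisy-channel sketch (invented setup): delete one word and check
# whether subject/object roles remain recoverable from position alone.

def recoverable(order, dropped):
    """After deleting the word at index `dropped`, can a listener still
    tell subject from object by position relative to the verb?"""
    words = [w for i, w in enumerate(order) if i != dropped]
    if "V" not in words:
        return False  # without the verb there is no positional anchor
    if order == ["S", "V", "O"]:
        # SVO: pre-verbal position means subject, post-verbal means object,
        # so any single surviving noun still signals its role.
        return True
    # SOV: both nouns precede the verb, so a lone pre-verbal noun is
    # ambiguous; roles are clear only if both nouns survived.
    return words.index("V") == 2

svo, sov = ["S", "V", "O"], ["S", "O", "V"]
svo_rate = sum(recoverable(svo, i) for i in range(3)) / 3  # 2/3 recoverable
sov_rate = sum(recoverable(sov, i) for i in range(3)) / 3  # 0/3 recoverable
print(svo_rate, sov_rate)
```

In this toy setting, putting the verb between the nouns buys robustness against single-word loss, which is the flavor of trade-off the noisy-channel hypothesis appeals to.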

The tendency even of speakers of a subject-verb-object (SVO) language like English to gesture subject-object-verb (SOV) may be an example of an innate human preference for linguistically recapitulating old information before introducing new information. The “old before new” theory – which, according to the University of Pennsylvania linguist Ellen Prince, is also known as the given-new, known-new, and presupposition-focus theory – has a rich history in the linguistic literature, dating back at least to the work of the German philosopher Hermann Paul in 1880.

Practice makes word recognition perfect

Word recognition behavior can be fine-tuned by experience and practice, according to a new study by Ian Hargreaves and colleagues from the University of Calgary in Canada. Their work shows, for the first time, that it is possible to develop visual word recognition ability in adulthood, beyond what researchers thought was achievable. Competitive Scrabble players provide the proof. The study is published online in Springer’s journal Memory & Cognition.

Competitive Scrabble involves extraordinary word recognition experience. Expert players typically dedicate large amounts of time to studying the 180,000 words listed in The Official Tournament and Club Word List. Hargreaves and colleagues wanted to establish the effects of experience on visual word recognition. They compared the visual word recognition behaviors of competitive Scrabble players and non-expert participants using a version of the classic word recognition paradigm – the lexical decision task – in which subjects must quickly decide whether a letter string shown to them is a real word.
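The core of a lexical decision trial is simple: present a letter string, time the yes/no response, and score it against a lexicon. A bare-bones sketch follows; the two-word lexicon, the non-word “blick,” and the `respond` callback are hypothetical stand-ins (a real experiment would use calibrated presentation software and key-press collection).

```python
import time

WORDS = {"truck", "truth"}  # tiny stand-in lexicon for the real-word list

def lexical_decision(stimulus, respond):
    """Run one trial: call `respond(stimulus)` (standing in for a key press),
    time it, and score the yes/no answer against the lexicon."""
    t0 = time.perf_counter()
    says_word = respond(stimulus)          # True = "real word", False = "not"
    rt_ms = (time.perf_counter() - t0) * 1000
    return rt_ms, says_word == (stimulus in WORDS)

# A stand-in responder that always answers correctly:
rt, correct = lexical_decision("blick", respond=lambda s: s in WORDS)
print(correct)  # True: "blick" is correctly rejected as a non-word
```

Comparing mean reaction times between groups on trials like these – across horizontal versus vertical presentation, and concrete versus abstract words – is what lets the researchers quantify the Scrabble experts’ advantage.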

In two experiments, the authors showed participants words presented both vertically and horizontally, as well as common concrete (e.g. truck) and abstract (e.g. truth) words, and measured how quickly, and how, they made judgments about those words.

Competitive Scrabble players’ visual word recognition behavior differed significantly from non-experts’ on letter-prompted verbal fluency (coming up with words beginning with a specific letter) and anagramming accuracy, two Scrabble-specific skills. Competitive players were faster to judge whether or not a word was real. They also judged vertically presented words faster than non-experts and were quicker to accept abstract words as real than non-competitive players. These findings indicate that Scrabble players are less reliant on the meaning of words to judge whether or not they are real, and more flexible at recognizing words from orthographic information.

The authors conclude: “Our results suggest that visual word recognition is shaped by experience and that, with experience, there are efficiencies to be had even in the adult word recognition system. Competitive Scrabble players are visual word recognition experts and their skill pushes the bounds of what we previously considered the end-point of development of the word recognition system.”