Information Theory may define the individual

It’s almost impossible to imagine biology without individuals — individual organisms, individual cells, and individual genes, for example. But what about a worker ant that never reproduces, and could never survive apart from the colony? Are the trillions of microorganisms in our microbiomes, which vastly outnumber our human cells, part of our individuality?

“Despite the near-universal assumption of individuality in biology, there is little agreement about what individuals are and few rigorous quantitative methods for their identification,” write the authors of new work published in the journal Theory in Biosciences. The problem, they note in the paper, is analogous to identifying a figure from its background in a Gestalt psychology drawing. Like the famous image of two faces outlining a vase, an individual life form and its environment create a whole that is greater than the sum of its parts.

One way to solve the puzzle comes from information theory. Instead of focusing on anatomical traits, like cell walls, authors David Krakauer, Nils Bertschinger, Eckehard Olbrich, Jessica Flack, and Nihat Ay look to structured information flows between a system and its environment. “Individuals,” they argue, “are best thought of in terms of dynamical processes and not as stationary objects.”

The individual as a verb: what processes produce distinct identity? Flack stresses that this lens allows for individuality to be “continuous rather than binary, nested, and possible at any level.” An information theory of individuality (or ITI) describes emergent agency at different scales and with distributed communication networks (e.g., both ants and colonies at once).

The authors use a model that suggests three kinds of individuals, each corresponding to a different blend of self-regulation and environmental influence. Decomposing information like this gives a gradient: it ranges from environmentally-scaffolded forms like whirlpools to partially-scaffolded colonial forms like coral reefs and spider webs, to organismal individuals that are sculpted by their environment but strongly self-organizing.

Each is a strategy for propagating information forward through time–meaning, Flack adds, “individuality is about temporal uncertainty reduction.” Replication here emerges as just one of many strategies for individuals to order information in their future. To Flack, this “leaves us free to ask what role replication plays in temporal uncertainty reduction through the creation of individuals,” a question close to asking why we find life in the first place.
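Flack's framing can be made concrete. An individual that reduces its own future uncertainty is a process whose past carries information about its future. Nothing below comes from the paper itself; it is only a toy sketch of that core measure, the mutual information between past and future states, for two made-up processes.

```python
# Toy sketch (not the paper's full ITI decomposition): predictive
# information I(past; future) as "temporal uncertainty reduction",
# estimated from (past, future) state samples of a binary process.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# A persistent "individual": the next state copies the current state,
# so the past fully predicts the future.
persistent = [(s, s) for s in [0, 1] * 50]

# A whirlpool-like process driven entirely by an (unmodeled) environment:
# past and future states are unrelated.
driven = [(s, t) for s in [0, 1] * 25 for t in (0, 1)]

print(mutual_information(persistent))  # 1.0 bit: fully self-predicting
print(mutual_information(driven))      # 0.0 bits: no temporal structure
```

On this view, the organismal individuals of the paper are processes for which most of this predictive information lives in the system itself rather than in its environment.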

Perhaps the biggest implication of this work is in how it places the observer at the center of evolutionary theory. “Just as in quantum mechanics,” Krakauer says, “where the state of a system depends on measurement, the measurements we call natural selection dictate the preferred form of the individual. Who does the measuring? What we find depends on what the observer is capable of seeing.”

Information – drugs for your brain

Can’t stop checking your phone, even when you’re not expecting any important messages? Blame your brain.

A new study by researchers at UC Berkeley’s Haas School of Business has found that information acts on the brain’s dopamine-producing reward system in the same way as money or food.

The paper, “Common neural code for reward and information value,” was published in the Proceedings of the National Academy of Sciences. Authored by Haas professor Ming Hsu and graduate student Kenji Kobayashi, now a post-doctoral researcher at the University of Pennsylvania, it demonstrates that the brain converts information into the same common scale it uses for money. It also lays the groundwork for unraveling the neuroscience behind how we consume information – and perhaps even digital addiction.

The paper is rooted in the study of curiosity and what it looks like inside the brain. While economists have tended to view curiosity as a means to an end, valuable when it can help us get information to gain an edge in making decisions, psychologists have long seen curiosity as an innate motivation that can spur actions by itself. For example, sports fans might check the odds on a game even if they have no intention of ever betting.

Sometimes, we want to know something, just to know.

Our map of the world

In a recent learning study, researchers from the Max Planck Institute were able to show that new conceptual information is stored along spatial dimensions in form of a mental map located in the hippocampus. Together with colleagues from the Donders Institute at Radboud University in Nijmegen, they observed brain activity patterns that support the idea that the neural mechanisms that support navigation in physical space might also be involved in conceptual learning.

Mind reading equipment almost a reality

It may sound like sci-fi, but mind-reading equipment is much closer to becoming a reality than most people imagine. A new study carried out at the D’Or Institute for Research and Education used a magnetic resonance (MR) machine to read participants’ minds and find out what song they were listening to. The study, published today in Scientific Reports, contributes to the improvement of the technique and paves the way for new research on reconstructing auditory imagination and inner speech. In the clinical domain, it could enhance brain-computer interfaces used to establish communication with patients with locked-in syndrome.

In the experiment, six volunteers heard 40 pieces of classical music, rock, pop, jazz, and other genres. The MR machine captured the neural fingerprint of each song in participants’ brains while a computer learned to identify the brain patterns elicited by each musical piece. Musical features such as tonality, dynamics, rhythm, and timbre were taken into account by the computer.

The researchers then tested whether the computer could work in the opposite direction: identifying which song participants were listening to based on their brain activity alone – a technique known as brain decoding. When confronted with two options, the computer identified the correct song with up to 85% accuracy, a strong result compared with previous studies.

Researchers then pushed the test even harder by giving the computer not two but ten options (one correct and nine wrong). In this scenario, the computer correctly identified the song in 74% of the decisions.
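The study’s actual fMRI pipeline is not described here, so the following is only an illustrative sketch of the evaluation scheme: a hypothetical nearest-neighbour decoder that, given a noisy “scan,” must pick the correct song from a candidate set, with the task getting harder as the set grows from two options to ten.

```python
# Hedged sketch of "confront the computer with N candidate songs".
# Fingerprints, noise levels, and the decoder are all made up.
import random

random.seed(0)
N_SONGS, DIM = 40, 20

# Hypothetical "neural fingerprint" per song.
fingerprints = {s: [random.gauss(0, 1) for _ in range(DIM)]
                for s in range(N_SONGS)}

def noisy_scan(song, noise=2.0):
    """A measurement of a song's fingerprint corrupted by scanner noise."""
    return [v + random.gauss(0, noise) for v in fingerprints[song]]

def decode(scan, candidates):
    """Pick the candidate whose fingerprint is closest to the scan."""
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(scan, fingerprints[s]))
    return min(candidates, key=dist)

def accuracy(n_options, trials=1000):
    correct = 0
    for _ in range(trials):
        true = random.randrange(N_SONGS)
        lures = random.sample(
            [s for s in range(N_SONGS) if s != true], n_options - 1)
        correct += decode(noisy_scan(true), [true] + lures) == true
    return correct / trials

print(accuracy(2))   # two-alternative identification
print(accuracy(10))  # ten-alternative identification
```

As in the study, accuracy with ten candidates is lower than with two, since there are more ways for noise to favour a wrong song.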

In the future, studies on brain decoding and machine learning may open possibilities for communication that do not depend on any form of written or spoken language.

Ways to counter misinformation and correct ‘fake news’

It’s no use simply telling people they have their facts wrong. To be more effective at correcting misinformation in news accounts and intentionally misleading “fake news,” you need to provide a detailed counter-message with new information — and get your audience to help develop a new narrative.

Those are some takeaways from an extensive new meta-analysis of laboratory debunking studies published in the journal Psychological Science. The analysis, the first conducted with this collection of debunking data, finds that a detailed counter-message is better at persuading people to change their minds than merely labeling misinformation as wrong. But even after a detailed debunking, misinformation still can be hard to eliminate, the study finds.

“Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation” was conducted by researchers at the Social Action Lab at the University of Illinois at Urbana-Champaign and at the Annenberg Public Policy Center of the University of Pennsylvania. The teams sought “to understand the factors underlying effective messages to counter attitudes and beliefs based on misinformation.” To do that, they examined 20 experiments in eight research reports involving 6,878 participants and 52 independent samples.

The analyzed studies, published from 1994 to 2015, focused on false social and political news accounts, including misinformation in reports of robberies; investigations of a warehouse fire and traffic accident; the supposed existence of “death panels” in the 2010 Affordable Care Act; positions of political candidates on Medicaid; and a report on whether a candidate had received donations from a convicted felon.

The researchers coded and analyzed the results of the experiments across the different studies and measured the effect of presenting misinformation, the effect of debunking, and the persistence of misinformation.
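The paper’s own statistical model is not reproduced here, but the basic operation of a meta-analysis – pooling effect sizes across studies, weighted by their precision – can be sketched with made-up numbers:

```python
# Hedged sketch: a plain inverse-variance (fixed-effect) pool, one
# standard way to combine effects across experiments. The (effect
# size, variance) pairs below are invented, not the paper's data.
from math import sqrt

studies = [(0.45, 0.02), (0.30, 0.05), (0.60, 0.01), (0.25, 0.04)]

# Each study is weighted by the inverse of its variance, so precise
# studies count for more in the pooled estimate.
weights = [1 / v for _, v in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = sqrt(1 / sum(weights))

print(f"pooled d = {pooled:.3f} ± {1.96 * se:.3f} (95% CI)")
```

The actual analysis also has to model between-study heterogeneity, but the weighting idea is the same.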

The researchers made three recommendations for debunking misinformation:

– Reduce arguments that support misinformation: News accounts about misinformation should not inadvertently repeat or belabor “detailed thoughts in support of the misinformation.”
– Engage audiences in scrutiny and counterarguing of information: Educational institutions should promote a state of healthy skepticism. When trying to correct misinformation, it is beneficial to have the audience involved in generating counterarguments.
– Introduce new information as part of the debunking message: People are less likely to accept debunking when the initial message is just labeled as wrong rather than countered with new evidence.

The authors encouraged the continued development of “alerting systems” for debunking misinformation such as Snopes.com (fake news), RetractionWatch.com (scientific retractions), and FactCheck.org (political claims). “Such an ongoing monitoring system creates desirable conditions of scrutiny and counterarguing of misinformation,” the researchers wrote.

The music of speech

Researchers at UC San Francisco have identified neurons in the human brain that respond to pitch changes in spoken language, which are essential to clearly conveying both meaning and emotion.

The study was published online August 24, 2017 in Science by the lab of Edward Chang, MD, a professor of neurological surgery at the UCSF Weill Institute for Neurosciences, and led by Claire Tang, a fourth-year graduate student in the Chang lab.

Changes in vocal pitch during speech – part of what linguists call speech prosody – are a fundamental part of human communication, nearly as fundamental as melody to music. In tonal languages such as Mandarin Chinese, pitch changes can completely alter the meaning of a word, but even in a non-tonal language like English, differences in pitch can significantly change the meaning of a spoken sentence.

For instance, “Sarah plays soccer,” in which “Sarah” is spoken with a descending pitch, can be used by a speaker to communicate that Sarah, rather than some other person, plays soccer; in contrast, stressing the final word – “Sarah plays soccer” – indicates that Sarah plays soccer, rather than some other game. And adding a rising tone at the end of a sentence (“Sarah plays soccer?”) indicates that the sentence is a question.

The brain’s ability to interpret these changes in tone on the fly is particularly remarkable, given that each speaker also has their own typical vocal pitch and style (that is, some people have low voices, others have high voices, and others seem to end even statements as if they were questions). Moreover, the brain must track and interpret these pitch changes while simultaneously parsing which consonants and vowels are being uttered, what words they form, and how those words are being combined into phrases and sentences – with all of this happening on a millisecond scale.
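None of the following comes from the UCSF study itself; it only illustrates the acoustic quantity at stake. Vocal pitch corresponds to the voice’s fundamental frequency (f0), and a minimal way to estimate it is to find the time lag at which a sound wave best correlates with itself:

```python
# Hedged sketch: a basic autocorrelation pitch (f0) estimator applied
# to synthetic pure tones standing in for a low voice and a rising one.
import math

RATE = 16000  # samples per second

def estimate_f0(signal, fmin=75, fmax=300):
    """Return the pitch (Hz) whose lag maximizes the autocorrelation."""
    best_lag, best_score = None, float("-inf")
    for lag in range(RATE // fmax, RATE // fmin + 1):
        score = sum(signal[i] * signal[i - lag]
                    for i in range(lag, len(signal)))
        if score > best_score:
            best_lag, best_score = lag, score
    return RATE / best_lag

def tone(freq, n=1600):
    """A 0.1-second pure tone at the given frequency."""
    return [math.sin(2 * math.pi * freq * t / RATE) for t in range(n)]

print(round(estimate_f0(tone(200))))  # 200: a lower voice
print(round(estimate_f0(tone(250))))  # 250: pitch rising, as in a question
```

Real speech makes this much harder – f0 changes within a syllable and rides on top of consonants and vowels – which is part of what makes the brain’s millisecond-scale tracking remarkable.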

These findings reveal how the brain begins to take apart the complex stream of sounds that make up speech and identify important cues about the meaning of what we’re hearing.

How the brain builds panoramic memory

When asked to visualize your childhood home, you can probably picture not only the house you lived in, but also the buildings next door and across the street. Neuroscientists at MIT’s McGovern Institute for Brain Research, led by Caroline Robertson, a junior fellow of the Harvard Society of Fellows, have now identified two brain regions that are involved in creating these panoramic memories.

These brain regions help us to merge fleeting views of our surroundings into a seamless, 360-degree panorama, the researchers say.

As we look at a scene, visual information flows from our retinas into the brain, which has regions that are responsible for processing different elements of what we see, such as faces or objects. The MIT team suspected that areas involved in processing scenes — the occipital place area (OPA), the retrosplenial complex (RSC), and parahippocampal place area (PPA) — might also be involved in generating panoramic memories of a place such as a street corner.

If this were true, when you saw two images of houses that you knew were across the street from each other, they would evoke similar patterns of activity in these specialized brain regions. Two houses from different streets would not induce similar patterns.

The researchers explored the hypothesis that as we begin to build memory of the environment around us, there would be certain regions of the brain where the representation of a single image would start to overlap with representations of other views from the same scene. They used immersive virtual reality headsets, which allowed them to show people many different panoramic scenes.

Brain scans revealed that when participants saw two images that they knew were linked, the response patterns in the RSC and OPA regions were similar. However, this was not the case for image pairs that the participants had not seen as linked. This suggests that the RSC and OPA, but not the PPA, are involved in building panoramic memories of our surroundings, the researchers say.
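The logic of that analysis – comparing the similarity of multivoxel response patterns for linked versus unlinked views – can be sketched with simulated data. Everything below, including the shared “place code” as implemented, is made up for illustration:

```python
# Hedged sketch: two views of the same street corner share a noisy
# copy of one underlying voxel pattern; views of different places do
# not. Linked pairs should therefore correlate more strongly.
import random
from math import sqrt

random.seed(1)
VOXELS = 200

def correlate(a, b):
    """Pearson correlation between two voxel patterns."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / sqrt(sum((x - ma) ** 2 for x in a) *
                      sum((y - mb) ** 2 for y in b))

def view(place_code, noise=1.0):
    """One glimpse of a place: its shared code plus view-specific noise."""
    return [p + random.gauss(0, noise) for p in place_code]

corner = [random.gauss(0, 1) for _ in range(VOXELS)]
other = [random.gauss(0, 1) for _ in range(VOXELS)]

linked = correlate(view(corner), view(corner))   # same corner, two views
unlinked = correlate(view(corner), view(other))  # different places

print(linked > unlinked)
```

In the study, this linked-versus-unlinked difference appeared in the RSC and OPA but not the PPA.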

In another experiment, the researchers tested whether one image could “prime” the brain to recall an image from the same panoramic scene. To do this, they showed participants a scene and asked them whether it had been on their left or right when they first saw it. Before that, they showed them either another image from the same street corner or an unrelated image. Participants performed much better when primed with the related image.