Seen not heard

Gesturing with the hands while speaking is common human behaviour, but no one knows why we do it. Now, a group of researchers reports in an issue of Proceedings of the National Academy of Sciences that gesturing adds emphasis to speech – but not in the way researchers had thought.

Gesturing while speaking, or “talking with your hands,” is common around the world. Many communications researchers believe that gesturing is either done to emphasize important points or to elucidate specific ideas (think of this as the “drawing in the air” hypothesis). But there are other possibilities. For example, it could be that gesturing, by altering the size and shape of the chest, lungs and vocal muscles, affects the sound of a person’s speech.

Because of the way the human body is constructed, hand movements influence the torso and throat muscles. Gestures are tightly tied to the amplitude of the voice. Rather than just using your chest muscles to produce an airflow for speech, moving your arms while you speak can add acoustic emphasis. And you can hear someone’s motions in their voice, even when they’re trying not to let you. “Some language researchers don’t like this idea, because they want the language to be all about communicating the contents of your mind, rather than the state of your body. But we think that gestures are allowing the acoustic signal to carry additional information about bodily tension and motion. It’s information of another kind,” says James Dixon, a University of Connecticut psychologist, director of the Center for the Ecological Study of Perception and Action and one of the authors of the paper.

Information Theory may define the individual

It’s almost impossible to imagine biology without individuals — individual organisms, individual cells, and individual genes, for example. But what about a worker ant that never reproduces, and could never survive apart from the colony? Are the trillions of microorganisms in our microbiomes, which vastly outnumber our human cells, part of our individuality?

“Despite the near-universal assumption of individuality in biology, there is little agreement about what individuals are and few rigorous quantitative methods for their identification,” write the authors of new work published in the journal Theory in Biosciences. The problem, they note in the paper, is analogous to identifying a figure from its background in a Gestalt psychology drawing. Like the famous image of two faces outlining a vase, an individual life form and its environment create a whole that is greater than the sum of its parts.

One way to solve the puzzle comes from information theory. Instead of focusing on anatomical traits, like cell walls, authors David Krakauer, Nils Bertschinger, Eckehard Olbrich, Jessica Flack, and Nihat Ay look to structured information flows between a system and its environment. “Individuals,” they argue, “are best thought of in terms of dynamical processes and not as stationary objects.”

The individual as a verb: what processes produce distinct identity? Flack stresses that this lens allows for individuality to be “continuous rather than binary, nested, and possible at any level.” An information theory of individuality (or ITI) describes emergent agency at different scales and with distributed communication networks (e.g., both ants and colonies at once).

The authors use a model that suggests three kinds of individuals, each corresponding to a different blend of self-regulation and environmental influence. Decomposing information like this gives a gradient: it ranges from environmentally-scaffolded forms like whirlpools to partially-scaffolded colonial forms like coral reefs and spider webs, to organismal individuals that are sculpted by their environment but strongly self-organizing.
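
To make the idea concrete, here is a minimal sketch (not the authors’ actual formalism, and with toy data invented for illustration) of the kind of quantity such a decomposition rests on: how much of a system’s next state is predicted by its own past versus by its environment, measured as mutual information.

```python
# A minimal sketch (toy data, not the paper's formalism): estimate how much a
# system's next state is predicted by its own past versus by its environment,
# using mutual information computed from empirical frequencies.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X; Y) in bits, estimated from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy time series: the system S mostly copies its own previous state,
# while the environment E simply alternates, independently of S.
S = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
E = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]

self_prediction = mutual_information(S[1:], S[:-1])  # I(S_t+1 ; S_t)
env_prediction  = mutual_information(S[1:], E[:-1])  # I(S_t+1 ; E_t)
print(f"predicted by own past:    {self_prediction:.2f} bits")
print(f"predicted by environment: {env_prediction:.2f} bits")
```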

Each is a strategy for propagating information forward through time – meaning, Flack adds, “individuality is about temporal uncertainty reduction.” Replication here emerges as just one of many strategies for individuals to order information in their future. To Flack, this “leaves us free to ask what role replication plays in temporal uncertainty reduction through the creation of individuals,” a question close to asking why we find life in the first place.

Perhaps the biggest implication of this work is in how it places the observer at the center of evolutionary theory. “Just as in quantum mechanics,” Krakauer says, “where the state of a system depends on measurement, the measurements we call natural selection dictate the preferred form of the individual. Who does the measuring? What we find depends on what the observer is capable of seeing.”

Your motivation biases how you gather information

A study by Filip Gesiarz, Donal Cahill and Tali Sharot of University College London suggests that people stop gathering evidence earlier when the data support their desired conclusion than when the data support the conclusion they wish were false.

Previous studies had already provided clues that people gather less information before reaching desirable beliefs. For example, people are more likely to seek a second medical opinion when the first diagnosis is grave. However, certain design limitations of those studies prevented a definitive conclusion, and the reasons behind this bias remained unknown. By fitting people’s behaviour to a mathematical model, Gesiarz and colleagues were able to identify the reasons for the bias.

People start with the assumption that their favored conclusion is more likely to be true and weigh each piece of evidence supporting it more heavily than evidence opposing it. As a result, they see no need to gather additional information that could have revealed their conclusion to be false.
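
A rough sketch of how such a bias plays out in a sequential-sampling model (an illustrative toy, not the fitted model from the paper; all weights and thresholds below are invented): an agent that starts with a prior tilted toward its desired conclusion and weighs confirming evidence more heavily will, on average, cross its decision bound after fewer samples when the evidence happens to support that conclusion.

```python
# Toy biased evidence-accumulation agent: accumulate log-odds until a bound
# is crossed, starting from a prior favoring the desired conclusion and
# weighting confirming evidence more heavily than disconfirming evidence.
import random

def samples_until_decision(prior_logodds, confirm_weight, disconfirm_weight,
                           p_confirm, rng, bound=2.0, max_samples=200):
    """Draw evidence until the (biased) log-odds cross the decision bound."""
    logodds, n = prior_logodds, 0
    while abs(logodds) < bound and n < max_samples:
        n += 1
        if rng.random() < p_confirm:      # evidence for the desired conclusion
            logodds += confirm_weight     # confirming evidence weighted more...
        else:
            logodds -= disconfirm_weight  # ...than disconfirming evidence
    return n

rng = random.Random(1)
trials = 2000
favorable = sum(samples_until_decision(0.5, 0.4, 0.3, 0.6, rng)
                for _ in range(trials)) / trials
unfavorable = sum(samples_until_decision(0.5, 0.4, 0.3, 0.4, rng)
                  for _ in range(trials)) / trials
print(f"evidence mostly supports the desired conclusion: {favorable:.1f} samples on average")
print(f"evidence mostly opposes the desired conclusion:  {unfavorable:.1f} samples on average")
```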

The study, “Evidence accumulation is biased by motivation: A computational account”, was published in PLOS Computational Biology.

Next-generation platform for ultradense data recording

A group of scientists from ITMO University in Saint Petersburg has put forward a new approach to effective manipulation of light at the nanoscale based on hybrid metal-dielectric nanoantennas. The new technology promises to bring about a new platform for ultradense optical data recording and pave the way to high throughput fabrication of a wide range of optical nanodevices capable of localizing, enhancing and manipulating light at the nanoscale. The results of the study were published in Advanced Materials.

A nanoantenna is a device that converts freely propagating light into localized light – compressed into a region several tens of nanometers across. This localization enables scientists to control light effectively at the nanoscale. It is one of the reasons why nanoantennas may become fundamental building blocks of future optical computers, which would rely on photons instead of electrons to process and transmit information. This replacement of the information carrier is seen as inevitable because photons surpass electrons by several orders of magnitude in information capacity, require less energy, rule out circuit heating and allow high-speed data exchange.

Until recently, the production of planar arrays of hybrid nanoantennas for light manipulation was considered an extremely painstaking process. A solution was found by researchers from ITMO University in collaboration with colleagues from Saint Petersburg Academic University and the Joint Institute for High Temperatures in Moscow. The research group has, for the first time, developed a technique for creating such arrays of hybrid nanoantennas and for high-accuracy adjustment of individual nanoantennas within an array. The achievement was made possible by combining two production stages in sequence: lithography, followed by precise exposure of the nanoantenna to a femtosecond (ultrashort-pulse) laser.

The practical application of hybrid nanoantennas lies, in particular, in ultradense data recording. Modern optical drives record information at densities of around 10 Gbit/inch², which corresponds to a single pixel a few hundred nanometers across. Although these dimensions are comparable to the size of the nanoantennas, the scientists propose to additionally control each antenna’s color in the visible spectrum. This adds another ‘dimension’ for data recording, which immediately increases the overall storage capacity of the system.
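
As a back-of-the-envelope illustration (the numbers below are assumptions, not figures from the study): if each recording site can be switched among N optically distinguishable color states rather than just two, it carries log2(N) bits instead of one, multiplying the areal density accordingly.

```python
# Back-of-the-envelope illustration (assumed numbers, not from the paper):
# a site switchable among N distinguishable color states stores log2(N) bits.
from math import log2

sites_per_inch2 = 10e9   # ~10 Gbit/inch^2 at one bit per site (baseline)
for n_colors in (2, 4, 8):
    bits_per_site = log2(n_colors)
    print(f"{n_colors} colors -> {sites_per_inch2 * bits_per_site / 1e9:.0f} Gbit/inch^2")
```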

Apart from ultradense data recording, the selective modification of hybrid nanoantennas can help create new designs of hybrid metasurfaces, waveguides and compact sensors for environmental monitoring. In the near future, the research group plans to focus on developing such specific applications of their hybrid nanoantennas.

The nanoantennas consist of two components: a truncated silicon cone with a thin gold disk on top. The researchers demonstrated that nanoscale laser reshaping makes it possible to precisely modify the shape of the gold particle without affecting the silicon cone. Changing the shape of the gold particle changes the optical properties of the nanoantenna as a whole, owing to different degrees of resonance overlap between the silicon and gold nanoparticles.

Unlike conventional heat-induced fabrication of nanoantennas, the new method makes it possible to adjust individual nanoantennas within an array and to exert precise control over the overall optical properties of the hybrid nanostructures.

The use of GIS in policing

Police agencies are using Geographic Information Systems (GIS) for mapping crime, identifying crime “hot spots,” assigning officers, and profiling offenders, but little research has been done about the effectiveness of the technology in curbing crime, according to a study at Sam Houston State University.

According to a 2001 survey by the National Institute of Justice, 62 percent of police departments with over 100 officers use GIS systems. Collectively, the technology has been credited with reducing crime, decreasing residential burglaries, tracking parolees and serious habitual offenders, and identifying “hot spots” with high concentrations of crime.

There are four major uses for GIS in policing, the study found. Crime mapping identifies the geographical distribution of crime to deploy officers to “hot spots” of activity and to develop other intervention plans.
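
In its simplest form, hot-spot identification is spatial binning. The sketch below (with hypothetical coordinates and an arbitrary threshold) counts incidents per grid cell and flags the densest cells.

```python
# Minimal grid-based hot-spot mapping (hypothetical coordinates and threshold):
# bin incident locations into cells and flag cells with many incidents.
from collections import Counter

incidents = [(3.2, 7.1), (3.4, 7.0), (3.3, 6.9), (8.8, 1.2), (3.1, 7.2), (5.5, 5.5)]
cell_size = 1.0   # grid resolution in map units

counts = Counter((int(x // cell_size), int(y // cell_size)) for x, y in incidents)
threshold = 3     # minimum incidents for a cell to be flagged as a hot spot
hot_spots = [cell for cell, n in counts.items() if n >= threshold]
print("hot-spot cells:", hot_spots)   # the cluster around (3.x, 7.x) is flagged
```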

Many departments have developed computerized statistics or CompStat systems to help manage the decision-making process. The system assists with instant crime analysis, deployment techniques, active enforcement of trivial crimes, monitoring of emerging patterns, and accountability programs for law enforcement managers.

GIS is also used for geographic profiling of offenders, an investigative method that maps the locations of connected crimes to help determine where an offender may live, particularly in serial cases.
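
Real geographic-profiling software relies on distance-decay models, but the crudest version of the idea can be sketched as the centroid of the linked crime locations (the coordinates below are hypothetical).

```python
# Crude geographic-profiling sketch (hypothetical coordinates; real systems
# use distance-decay models rather than a plain centroid): take the mean
# location of linked crime sites as a first guess at the offender's base.
linked_crimes = [(12.4, 3.1), (13.0, 2.8), (12.1, 3.6), (12.7, 3.3)]

cx = sum(x for x, _ in linked_crimes) / len(linked_crimes)
cy = sum(y for _, y in linked_crimes) / len(linked_crimes)
print(f"search anchor near ({cx:.2f}, {cy:.2f})")
```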

While these practices are widespread, especially in larger departments, little research is available to measure their effectiveness in policing. Current studies indicate that GIS is used mainly to aid the design of policing strategies and/or to evaluate decision-making processes at law enforcement agencies.

Big data, challenges and opportunities for databases

Advances at the technology frontier have resulted in major disruptions and transformations of massive data processing infrastructures.

For the past three decades, classical database management systems, data warehousing and data analysis technologies have been well recognized as effective tools for data management and analysis. More recently, data from different sources and in different formats have been collected at unprecedented scale. This gives rise to the so-called 3V characteristics of big data: volume, velocity and variety. Classical approaches to data warehousing and data analysis are no longer viable for handling both the scale of the data and the sophistication of the analysis. This challenge has been labeled the ‘big data’ problem. In principle, while earlier DBMSs focused on modeling the operational characteristics of enterprises, big data systems are now expected to model vast amounts of heterogeneous and complex data.

Although massive data pose many challenges and invalidate earlier designs, they also offer great opportunities: above all, instead of making decisions based on small samples or on calibration, decisions can now be made based on the data itself. Various big data applications have emerged, such as social networking, enterprise data management, scientific applications, mobile computing, scalable and elastic data management, and scalable data analytics. Meanwhile, many distributed data processing frameworks and systems have been proposed to deal with the big data problem. MapReduce is the most successful distributed computing platform; its fundamental idea is to simplify parallel processing, and it has been widely applied. MapReduce systems are good at complex analytics and extract-transform-load tasks at large scale, but they suffer from reduced functionality. Many other distributed data processing systems go beyond the MapReduce framework and have been designed to address problems not well handled by it, e.g., Dremel for interactive analysis, GraphLab for graph analysis, Storm for stream processing and Spark for in-memory computing.
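
For readers unfamiliar with the model, the fundamental idea of MapReduce can be sketched in a few lines of plain Python (no cluster involved; this is only the programming pattern, not a distributed implementation): a map step emits key-value pairs, a shuffle groups them by key, and a reduce step aggregates each group, with word counting as the canonical example.

```python
# Minimal sketch of the MapReduce pattern in plain Python: map emits
# key-value pairs, a shuffle groups them by key, reduce aggregates each group.
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word, 1

def reduce_phase(word, counts):
    return word, sum(counts)

documents = ["big data systems", "big data problems", "data everywhere"]

# Shuffle: group intermediate pairs by key.
groups = defaultdict(list)
for doc in documents:
    for word, count in map_phase(doc):
        groups[word].append(count)

results = dict(reduce_phase(w, c) for w, c in groups.items())
print(results)   # {'big': 2, 'data': 3, 'systems': 1, 'problems': 1, 'everywhere': 1}
```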

Big data presents challenges and opportunities in designing new systems for managing and processing massive data. Potential research topics in this field lie in all phases of the data management pipeline, including data acquisition, data integration, data modeling, query processing and data analysis. Big data also brings great challenges and opportunities to other computer science disciplines such as system architecture, storage systems, system software and software engineering.

Internet search engines drove librarians to redefine themselves

Although librarians adopted Internet technology quickly, they initially dismissed search engines, which duplicated tasks they considered integral to their field. Their eventual embrace of the technology required a reinvention of their occupational identity, according to a study by University of Oregon researchers.

The story of the successful transition — of accommodating a new technology — into a new identity is a good example for professionals in other fields who have faced or currently face such challenges, says Andrew J. Nelson, a professor of management and the Bramsen Faculty Fellow in Innovation, Entrepreneurship and Sustainability in the UO’s Lundquist College of Business.

Librarians, the researchers found, have gone from thinking of themselves as the knowledgeable person with the best answer to a patron’s question to being an interpreter and connector who points patrons to helpful materials for their consideration.

Early on, the researchers wrote, “Librarians initially described Internet search technology as a niche and emphasized their own unique (and superior) value.” The emerging technology was dismissed, Nelson said, “as something that wasn’t going to spread and be widely used.” But that idea began to fade as more than 70 online search engines emerged between 1989 and 2011.

Nelson and Irwin defined occupational identity as an overlap between “who we are” and “what we do” as they explored the “paradox of expertise” in which librarians failed to grow their informational prowess with an emerging technology. “What made us curious about what happened was that librarians had technical skills — many had been building online databases of their collections with search capabilities very similar to what search engines aimed to develop,” Irwin said. Yet librarians, the researcher said, had misinterpreted the possibilities of Internet searching for such information.

For her doctoral dissertation, Irwin had focused on technological change in American libraries over about 150 years. This project was a side road for her as part of the management department’s philosophy of pairing graduate students with non-supervising faculty for an outside project to broaden their education.

The research documented a four-stage transition, beginning with librarians “dismissing the technology as something that wasn’t going to spread and be widely used,” Nelson said. Next, librarians began to differentiate themselves, conceding that Internet searches could provide simple answers while emphasizing their own role in interpreting web-based information for patrons.

Eventually, Nelson said, librarians decided to capture the technology and offer their expertise in collaboration with companies that were generating search engines, but the companies chose to go their own way.

Finally, librarians “evolved their approach” by working to develop scholarly-based search engines, such as Google Scholar, and others tied specifically to library holdings.

Medical information may be leaked

Patients who search on free health-related websites for information related to a medical condition may have the health information they provide leaked to third party tracking entities through code on those websites, according to a research letter by Marco D. Huesch, M.B.B.S., Ph.D., of the University of Southern California, Los Angeles.

Between December 2012 and January 2013, using a sample of 20 popular health-related websites, Huesch used freely available privacy tools to detect third parties. Commercial interception software also was used to intercept hidden traffic from the researcher’s computer to the websites of third parties.

Huesch found that all 20 sites had at least one third-party element, with an average of six or seven per site. Thirteen of the 20 websites had one or more tracking elements. No tracking elements were found on physician-oriented sites closely tied to professional groups. Five of the 13 sites with tracking elements had also enabled social media button tracking. Using the interception tool, Huesch found that searches were leaked to third-party tracking entities by seven websites. Search terms were not leaked to third-party tracking sites when searches were done on U.S. government sites or on four of the five physician-oriented sites, according to the study results.
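
As a rough illustration of what detecting third-party elements involves (a toy sketch with made-up HTML and domains, not the privacy tools the study actually used): scan a page for script, image and iframe sources whose host differs from the page’s own.

```python
# Toy third-party detector: list script/img/iframe sources hosted on a domain
# other than the page's own. HTML and domains below are invented examples.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyFinder(HTMLParser):
    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.third_parties = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src", "")
            host = urlparse(src).netloc
            if host and not host.endswith(self.first_party):
                self.third_parties.add(host)

page_html = """
<html><body>
  <img src="https://examplehealthsite.org/logo.png">
  <script src="https://tracker.example-analytics.com/collect.js"></script>
  <iframe src="https://social-buttons.example.net/like"></iframe>
</body></html>
"""
finder = ThirdPartyFinder("examplehealthsite.org")
finder.feed(page_html)
print(finder.third_parties)   # the two external hosts are reported (order may vary)
```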

(JAMA Intern Med. Published online July 8, 2013. doi:10.1001/jamainternmed.2013.7795)