Next-generation platform for ultradense data recording

Credit: ITMO University

A group of scientists from ITMO University in Saint Petersburg has put forward a new approach to effective manipulation of light at the nanoscale based on hybrid metal-dielectric nanoantennas. The new technology promises to bring about a new platform for ultradense optical data recording and pave the way to high throughput fabrication of a wide range of optical nanodevices capable of localizing, enhancing and manipulating light at the nanoscale. The results of the study were published in Advanced Materials.

A nanoantenna is a device that converts freely propagating light into localized light, compressed into a region of several tens of nanometers. This localization enables scientists to effectively control light at the nanoscale. It is one of the reasons why nanoantennas may become the fundamental building blocks of future optical computers that rely on photons instead of electrons to process and transmit information. The expected switch of information carrier rests on the fact that photons surpass electrons by several orders of magnitude in information capacity, require less energy, rule out circuit heating and allow high-speed data exchange.

Until recently, the production of planar arrays of hybrid nanoantennas for light manipulation was considered an extremely painstaking process. A solution to this problem was found by researchers from ITMO University in collaboration with colleagues from Saint Petersburg Academic University and the Joint Institute for High Temperatures in Moscow. The research group has, for the first time, developed a technique for creating such arrays of hybrid nanoantennas and for high-accuracy adjustment of individual nanoantennas within an array. The achievement was made possible by combining two production stages in sequence: lithography and precise exposure of the nanoantennas to a femtosecond (ultrashort-pulse) laser.

The practical application of hybrid nanoantennas lies, in particular, in ultradense data recording. Modern optical drives record information at a density of around 10 Gbit/inch², which corresponds to a single pixel a few hundred nanometers across. Such dimensions are already comparable to the size of the nanoantennas, but the scientists propose to additionally control the color of each nanoantenna in the visible spectrum. This adds yet another ‘dimension’ for data recording, which immediately multiplies the storage capacity of the system.
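
A quick back-of-the-envelope check of those figures (a sketch, not a calculation from the paper) shows how a density of 10 Gbit/inch² translates into a pixel pitch of a few hundred nanometers, and how an assumed number of distinguishable color states would multiply capacity:

```python
# Back-of-the-envelope check of the quoted figures (illustrative only).
import math

density_bits_per_inch2 = 10e9          # ~10 Gbit per square inch
inch_in_nm = 25.4e6                    # 1 inch = 25.4 mm = 25.4e6 nm

area_per_bit_nm2 = inch_in_nm**2 / density_bits_per_inch2
pixel_pitch_nm = math.sqrt(area_per_bit_nm2)
print(f"area per bit: {area_per_bit_nm2:.0f} nm^2")
print(f"pixel pitch:  {pixel_pitch_nm:.0f} nm")   # ~250 nm, i.e. "a few hundred nanometers"

# If each nanoantenna could be switched between N distinguishable colors,
# it would store log2(N) bits instead of 1. N here is an assumed value.
n_colors = 8
bits_per_antenna = math.log2(n_colors)
print(f"capacity gain from {n_colors} color states: x{bits_per_antenna:.1f}")
```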

Apart from ultradense data recording, the selective modification of hybrid nanoantennas can help create new designs of hybrid metasurfaces, waveguides and compact sensors for environmental monitoring. In the near future, the research group plans to focus on developing these specific applications of its hybrid nanoantennas.

The nanoantennas consist of two components: a truncated silicon cone topped with a thin gold disk. The researchers demonstrated that nanoscale laser reshaping makes it possible to precisely modify the shape of the gold particle without affecting the silicon cone. Changing the shape of the gold particle alters the optical properties of the nanoantenna as a whole, because the degree of resonance overlap between the silicon and gold nanoparticles changes.

Unlike conventional heat-induced fabrication of nanoantennas, the new method makes it possible to adjust individual nanoantennas within an array and to exert precise control over the overall optical properties of the hybrid nanostructures.

The use of GIS in policing

Police agencies are using Geographic Information Systems (GIS) for mapping crime, identifying crime “hot spots,” assigning officers, and profiling offenders, but little research has been done about the effectiveness of the technology in curbing crime, according to a study at Sam Houston State University.

According to a 2001 survey by the National Institute of Justice, 62 percent of police departments with more than 100 officers use GIS. Collectively, the technology has been credited with reducing crime, decreasing residential burglaries, tracking parolees and serious habitual offenders, and identifying “hot spots” with high concentrations of crime.

There are four major uses for GIS in policing, the study found. Crime mapping identifies the geographical distribution of crime to deploy officers to “hot spots” of activity and to develop other intervention plans.

Many departments have developed computerized statistics or CompStat systems to help manage the decision-making process. The system assists with instant crime analysis, deployment techniques, active enforcement of trivial crimes, monitoring of emerging patterns, and accountability programs for law enforcement managers.

GIS is also used for geographic profiling of offenders, an investigative method that uses the locations of connected crimes to help determine where an offender may live, particularly in serial cases.
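
As a rough illustration of the idea behind geographic profiling (a toy sketch, not the method used by any police agency or commercial package), the snippet below estimates a likely anchor point from a handful of invented crime coordinates using the centroid and the center of minimum distance:

```python
# Toy geographic-profiling sketch: estimate a likely anchor point from the
# coordinates of linked crimes. All coordinates below are invented.
from statistics import mean

crime_sites = [(3.2, 7.1), (4.0, 6.5), (2.8, 8.0), (5.1, 7.4), (3.9, 9.2)]

# Centroid: simple average of the crime coordinates.
centroid = (mean(x for x, _ in crime_sites), mean(y for _, y in crime_sites))

# Center of minimum distance, approximated with Weiszfeld's iteration.
def center_of_minimum_distance(points, iterations=100):
    cx, cy = centroid
    for _ in range(iterations):
        weights = [1.0 / max(((x - cx)**2 + (y - cy)**2) ** 0.5, 1e-9)
                   for x, y in points]
        total = sum(weights)
        cx = sum(w * x for w, (x, _) in zip(weights, points)) / total
        cy = sum(w * y for w, (_, y) in zip(weights, points)) / total
    return cx, cy

print("centroid:", centroid)
print("center of minimum distance:", center_of_minimum_distance(crime_sites))
```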

While these practices are widespread, especially in larger departments, little research is available to measure their effectiveness in policing. Current studies indicate that GIS is used mainly to aid in the design of policing strategies and/or to evaluate the decision-making processes at law enforcement agencies.

Big data, challenges and opportunities for databases

Advances at the technology frontier have resulted in major disruptions and transformations of massive data processing infrastructures.

For the past three decades, classical database management systems, data warehousing and data analysis technologies have been well recognized as effective tools for data management and analysis. More recently, data from different sources and in different formats are being collected at unprecedented scale. This gives rise to the so-called 3V characteristics of big data: volume, velocity and variety. Classical approaches to data warehousing and data analysis can no longer cope with either the scale of the data or the sophistication of the analysis required. This challenge has been labeled the ‘big data’ problem. In principle, while earlier DBMSs focused on modeling the operational characteristics of enterprises, big data systems are now expected to model vast amounts of heterogeneous and complex data.

Although massive data pose many challenges and invalidate earlier designs, they also provide great opportunities; most of all, instead of making decisions based on small data sets or calibration, decisions can now be made based on the data itself. Various big data applications have emerged, such as social networking, enterprise data management, scientific applications, mobile computing, scalable and elastic data management, and scalable data analytics. Meanwhile, many distributed data processing frameworks and systems have been proposed to deal with the big data problem. MapReduce, whose fundamental idea is to simplify parallel processing, is the most successful distributed computing platform and has been widely applied. MapReduce systems handle complex analytics and extract-transform-load tasks well at large scale, but they also suffer from reduced functionality. Many other distributed data processing systems go beyond the MapReduce framework; these systems have been designed to address problems not well handled by MapReduce, e.g., Dremel for interactive analysis, GraphLab for graph analysis, Storm for stream processing, and Spark for in-memory computing.
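
To make the MapReduce idea concrete, here is a minimal single-machine word-count sketch: a map function emits key-value pairs, the pairs are grouped by key, and a reduce function aggregates each group. Real frameworks distribute these phases across many machines; this toy version only illustrates the programming model.

```python
# Minimal single-machine illustration of the MapReduce programming model:
# map emits (key, value) pairs, the framework groups them by key,
# and reduce aggregates the values for each key.
from collections import defaultdict

def map_phase(document):
    """Emit (word, 1) for every word in the document."""
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(word, counts):
    """Sum the partial counts for one word."""
    return word, sum(counts)

documents = ["big data needs new systems",
             "mapreduce systems simplify parallel data processing"]

# Shuffle/group step: collect all values emitted for the same key.
grouped = defaultdict(list)
for doc in documents:
    for key, value in map_phase(doc):
        grouped[key].append(value)

word_counts = dict(reduce_phase(w, c) for w, c in grouped.items())
print(word_counts)   # e.g. {'big': 1, 'data': 2, 'systems': 2, ...}
```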

Big data presents both challenges and opportunities in designing new systems for managing and processing massive data. Potential research topics in this field span all phases of the data management pipeline, including data acquisition, data integration, data modeling, query processing and data analysis. Big data also brings great challenges and opportunities to other computer science disciplines, such as system architecture, storage systems, system software and software engineering.

Internet search engines drove librarians to redefine themselves

Although librarians adopted Internet technology quickly, they initially dismissed search engines, which duplicated tasks they considered integral to their field. Their eventual embrace of the technology required a reinvention of their occupational identity, according to a study by University of Oregon researchers.

The story of this successful transition to a new identity, accommodating a new technology, is a good example for professionals in other fields who have faced or currently face such challenges, says Andrew J. Nelson, a professor of management and the Bramsen Faculty Fellow in Innovation, Entrepreneurship and Sustainability in the UO’s Lundquist College of Business.

Librarians, the researchers found, have gone from thinking of themselves as the knowledgeable person with the best answer to a patron’s question to being an interpreter and connector who points patrons to helpful materials for their consideration.

Early on, the researchers wrote, “Librarians initially described Internet search technology as a niche and emphasized their own unique (and superior) value.” The emerging technology was dismissed, Nelson said, “as something that wasn’t going to spread and be widely used.” But that idea began to fade as more than 70 online search engines emerged between 1989 and 2011.

Nelson and Irwin defined occupational identity as an overlap between “who we are” and “what we do” as they explored the “paradox of expertise” in which librarians failed to grow their informational prowess with an emerging technology. “What made us curious about what happened was that librarians had technical skills — many had been building online databases of their collections with search capabilities very similar to what search engines aimed to develop,” Irwin said. Yet librarians, the researcher said, had misinterpreted the possibilities of Internet searching for such information.

For her doctoral dissertation, Irwin had focused on technological change in American libraries over about 150 years. This project was a side road for her as part of the management department’s philosophy of pairing graduate students with non-supervising faculty for an outside project to broaden their education.

The research documented a four-step transition, beginning with librarians “dismissing the technology as something that wasn’t going to spread and be widely used,” Nelson said. Next, librarians began to differentiate themselves, conceding that Internet searches could provide simple answers while emphasizing their own role in interpreting web-based search information for patrons.

Eventually, Nelson said, librarians decided to capture the technology and offer their expertise in collaboration with companies that were generating search engines, but the companies chose to go their own way.

Finally, librarians “evolved their approach” by working to develop scholarly-based search engines, such as Google Scholar, and others tied specifically to library holdings.

Medical information may be leaked

Patients who search on free health-related websites for information related to a medical condition may have the health information they provide leaked to third party tracking entities through code on those websites, according to a research letter by Marco D. Huesch, M.B.B.S., Ph.D., of the University of Southern California, Los Angeles.

Between December 2012 and January 2013, using a sample of 20 popular health-related websites, Huesch used freely available privacy tools to detect third parties. Commercial interception software was also used to intercept hidden traffic from the researcher’s computer to the websites of third parties.
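
The specific privacy and interception tools used in the study are not detailed here; as a rough illustration of the general idea, the sketch below fetches a page (the URL is a placeholder to be replaced with a real address) and lists script, image and iframe sources served from domains other than the page’s own, a simple proxy for third-party elements:

```python
# Rough illustration only: list external ("third-party") script, image and
# iframe sources on a page. The URL is a placeholder; real tracking detection,
# as in the study, relies on dedicated privacy and traffic-interception tools.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class SourceCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            for name, value in attrs:
                if name == "src" and value:
                    self.sources.append(value)

page_url = "https://example.org/health-article"   # placeholder URL
page_host = urlparse(page_url).netloc

html = urlopen(page_url).read().decode("utf-8", errors="replace")
parser = SourceCollector()
parser.feed(html)

third_party_hosts = {urlparse(src).netloc for src in parser.sources
                     if urlparse(src).netloc and urlparse(src).netloc != page_host}
print("third-party hosts referenced:", sorted(third_party_hosts))
```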

Huesch found that all 20 sites had at least one third-party element, with the average being six or seven. Thirteen of the 20 websites had one or more tracking elements. No tracking elements were found on physician-oriented sites closely tied to professional groups. Five of the 13 sites that had tracking elements had also enabled social media button tracking. The interception tool showed that seven websites leaked searches to third-party tracking entities. Search terms were not leaked to third-party tracking sites when searches were done on U.S. government sites or on four of the five physician-oriented sites, according to the study results.

(JAMA Intern Med. Published online July 8, 2013. doi:10.1001/jamainternmed.2013.7795)

Lost your keys? The brain can rapidly mobilize a search party

A contact lens on the bathroom floor, an escaped hamster in the backyard, a car key in a bed of gravel: How are we able to focus so sharply to find that proverbial needle in a haystack? Scientists at the University of California, Berkeley, have discovered that when we embark on a targeted search, various visual and non-visual regions of the brain mobilize to track down a person, animal or thing.

That means that if we’re looking for a youngster lost in a crowd, the brain areas usually dedicated to recognizing other objects, or even the areas attuned to abstract thought, shift their focus and join the search party. Thus, the brain rapidly switches into a highly focused child-finder, and redirects resources it uses for other mental tasks.

The findings, published in the journal Nature Neuroscience, help explain why we find it difficult to concentrate on more than one task at a time. The results also shed light on how people are able to shift their attention to challenging tasks, and may provide greater insight into neurobehavioral and attention deficit disorders such as ADHD.

These results were obtained in studies that used functional Magnetic Resonance Imaging (fMRI) to record the brain activity of study participants as they searched for people or vehicles in movie clips. In one experiment, participants held down a button whenever a person appeared in the movie. In another, they did the same with vehicles.

The brain scans simultaneously measured neural activity via blood flow in thousands of locations across the brain. Researchers used regularized linear regression analysis, which finds correlations in data, to build models showing how each of the roughly 50 000 locations near the cortex responded to each of the 935 categories of objects and actions seen in the movie clips. Next, they compared how much of the cortex was devoted to detecting humans or vehicles depending on whether or not each of those categories was the search target.
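
As an illustration of the kind of regularized (ridge) regression modeling described above, the toy sketch below fits weights mapping category-indicator features to a single simulated voxel response; the sizes and data are invented and far smaller than the roughly 50 000 locations and 935 categories in the study:

```python
# Toy ridge-regression fit of one simulated voxel's response to category features.
# Everything here (sizes, data) is invented and much smaller than in the study.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_categories = 200, 20

X = rng.integers(0, 2, size=(n_timepoints, n_categories)).astype(float)  # category present/absent
true_weights = rng.normal(size=n_categories)
y = X @ true_weights + rng.normal(scale=0.5, size=n_timepoints)          # simulated voxel response

lam = 1.0   # regularization strength
# Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(n_categories), X.T @ y)

print("correlation between true and estimated weights:",
      np.corrcoef(true_weights, w)[0, 1].round(3))
```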

They found that when participants searched for humans, relatively more of the cortex was devoted to humans, and when they searched for vehicles, more of the cortex was devoted to vehicles. For example, areas that were normally involved in recognizing specific visual categories such as plants or buildings switched to become tuned to humans or vehicles, vastly expanding the area of the brain engaged in the search.

The findings build on an earlier UC Berkeley brain imaging study that showed how the brain organizes thousands of animate and inanimate objects into what researchers call a “continuous semantic space.” Those findings challenged previous assumptions that every visual category is represented in a separate region of visual cortex. Instead, researchers found that categories are actually represented in highly organized, continuous maps.

The latest study goes further to show how the brain’s semantic space is warped during visual search, depending on the search target. Researchers have posted their results in an interactive, online brain viewer. Other co-authors of the study are UC Berkeley neuroscientists Jack Gallant, Alexander Huth and Shinji Nishimoto.

New algorithm ranks scientific literature

Keeping up with current scientific literature is a daunting task, considering that hundreds to thousands of papers are published each day. Now researchers from North Carolina State University have developed a computer program to help them evaluate and rank scientific articles in their field.

The researchers use a text-mining algorithm to prioritize research papers to read and include in their Comparative Toxicogenomics Database (CTD), a public database that manually curates and codes data from the scientific literature describing how environmental chemicals interact with genes to affect human health.

To help select the most relevant papers for inclusion in the CTD, Thomas Wiegers, a research bioinformatician at NC State and a co-lead author of the report, developed a sophisticated algorithm as part of a text-mining process. The application evaluates the text of thousands of papers and assigns a relevancy score to each document.
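
The CTD’s actual scoring algorithm is not reproduced here; as a rough sketch of how text mining can assign relevancy scores, the example below ranks a few invented abstracts by TF-IDF cosine similarity to a query describing chemical-gene-disease interactions:

```python
# Rough sketch of text-mining relevancy scoring (not the CTD's actual algorithm):
# rank candidate abstracts by TF-IDF similarity to a query describing
# chemical-gene-disease interactions. Texts and query are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "chemical exposure alters gene expression and disease risk"
abstracts = [
    "Arsenic exposure alters expression of DNA repair genes in liver cells.",
    "A survey of hiking trails in national parks and visitor satisfaction.",
    "Benzene and leukemia risk: gene-environment interactions in exposed workers.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([query] + abstracts)

scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```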

But how good is the algorithm at determining the best papers? To test that, the researchers text-mined 15 000 articles and sent a representative sample to their team of biocurators to manually read and evaluate on their own, blind to the computer’s score. The biocurators concurred with the algorithm 85 percent of the time with respect to the highest-scored papers.

Using the algorithm to rank papers allowed biocurators to focus on the most relevant papers, increasing productivity by 27 percent and novel data content by 100 percent.

There are always outliers in these types of experiments: occasions where the algorithm assigns a very high score to an article that a human biocurator quickly dismisses as irrelevant. The team that looked at those outliers was often able to see a pattern as to why the algorithm mistakenly identified a paper as important.

(The paper, “Text mining effectively scores and ranks the literature for improving chemical-gene-disease curation at the Comparative Toxicogenomics Database,” was published online April 17 in PLOS ONE. Co-authors are Dr. Cindy Murphy, a biocurator scientist at NC State; Dr. Carolyn Mattingly, associate professor of biology at NC State; and Drs. Robin Johnson, Jean Lay, Kelley Lennon-Hopkins, Cindy Saraceni-Richards and Daniela Sciaky from The Mount Desert Island Biological Laboratory.)

3D Microchip

Scientists from the University of Cambridge have created, for the first time, a new type of microchip which allows information to travel in three dimensions. Currently, microchips can only pass digital information in a very limited way – from either left to right or front to back. The research was published in Nature.

Researchers believe that in the future a 3D microchip would enable additional storage capacity on chips by allowing information to be spread across several layers instead of being compacted into one layer, as is currently the case.

For the research, the Cambridge scientists used a special type of microchip called a spintronic chip which exploits the electron’s tiny magnetic moment or ‘spin’ (unlike the majority of today’s chips which use charge-based electronic technology). Spintronic chips are increasingly being used in computers, and it is widely believed that within the next few years they will become the standard memory chip.

To create the microchip, the researchers used an experimental technique called ‘sputtering’. They effectively made a club sandwich of cobalt, platinum and ruthenium atoms on a silicon chip. The cobalt and platinum atoms store the digital information in a similar way to how a hard disk drive stores data. The ruthenium atoms act as messengers, communicating that information between neighbouring layers of cobalt and platinum. Each of the layers is only a few atoms thick.

They then used a laser technique called MOKE (the magneto-optical Kerr effect) to probe the data content of the different layers. As they switched a magnetic field on and off, they saw in the MOKE signal the data climbing layer by layer from the bottom of the chip to the top. They then confirmed the results using a different measurement method.