Lost your keys? The brain can rapidly mobilize a search party

A contact lens on the bathroom floor, an escaped hamster in the backyard, a car key in a bed of gravel: How are we able to focus so sharply to find that proverbial needle in a haystack? Scientists at the University of California, Berkeley, have discovered that when we embark on a targeted search, various visual and non-visual regions of the brain mobilize to track down a person, animal or thing.

That means that if we’re looking for a youngster lost in a crowd, the brain areas usually dedicated to recognizing other objects, or even the areas attuned to abstract thought, shift their focus and join the search. The brain rapidly turns into a highly focused child-finder, redirecting resources it normally uses for other mental tasks.

The findings, published in the journal Nature Neuroscience, help explain why we find it difficult to concentrate on more than one task at a time. The results also shed light on how people are able to shift their attention to challenging tasks, and may provide greater insight into neurobehavioral and attention deficit disorders such as ADHD.

These results were obtained in studies that used functional Magnetic Resonance Imaging (fMRI) to record the brain activity of study participants as they searched for people or vehicles in movie clips. In one experiment, participants held down a button whenever a person appeared in the movie. In another, they did the same with vehicles.

The brain scans simultaneously measured neural activity via blood flow in thousands of locations across the brain. Researchers used regularized linear regression analysis, which finds correlations in data, to build models showing how each of the roughly 50,000 locations near the cortex responded to each of the 935 categories of objects and actions seen in the movie clips. Next, they compared how much of the cortex was devoted to detecting humans or vehicles depending on whether or not each of those categories was the search target.
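
For readers curious what a voxel-wise regularized regression model looks like in practice, here is a minimal sketch in Python using scikit-learn's Ridge estimator. The dimensions, random data and regularization strength are purely illustrative and are not the study's actual pipeline, which involved careful feature construction and validation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative sizes only; the study modelled roughly 50,000 cortical
# locations and 935 stimulus categories.
n_timepoints = 300    # fMRI volumes recorded while participants watched clips
n_categories = 935    # object/action categories labelled in each clip
n_voxels = 2000       # cortical locations modelled in this toy example

rng = np.random.default_rng(0)
X = rng.integers(0, 2, (n_timepoints, n_categories)).astype(float)  # which categories appear in each volume
Y = rng.standard_normal((n_timepoints, n_voxels))                   # BOLD response at each location

# One regularized linear model per voxel; Ridge fits all voxels at once.
model = Ridge(alpha=10.0)
model.fit(X, Y)

# model.coef_ has shape (n_voxels, n_categories): each row describes how strongly
# a location responds to each category, i.e. its tuning across the semantic space.
tuning = model.coef_
print(tuning.shape)  # (2000, 935)
```

Conceptually, comparing the tuning estimated when participants searched for humans with the tuning estimated when they searched for vehicles is how one would see the shifts the study describes.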

They found that when participants searched for humans, relatively more of the cortex was devoted to humans, and when they searched for vehicles, more of the cortex was devoted to vehicles. For example, areas that were normally involved in recognizing specific visual categories such as plants or buildings switched to become tuned to humans or vehicles, vastly expanding the area of the brain engaged in the search.

The findings build on an earlier UC Berkeley brain imaging study that showed how the brain organizes thousands of animate and inanimate objects into what researchers call a “continuous semantic space.” Those findings challenged previous assumptions that every visual category is represented in a separate region of visual cortex. Instead, researchers found that categories are actually represented in highly organized, continuous maps.

The latest study goes further to show how the brain’s semantic space is warped during visual search, depending on the search target. Researchers have posted their results in an interactive, online brain viewer. Other co-authors of the study are UC Berkeley neuroscientists Jack Gallant, Alexander Huth and Shinji Nishimoto.

New algorithm ranks scientific literature

Keeping up with current scientific literature is a daunting task, considering that hundreds to thousands of papers are published each day. Now researchers from North Carolina State University have developed a computer program to help them evaluate and rank scientific articles in their field.

The researchers use a text-mining algorithm to prioritize research papers to read and include in their Comparative Toxicogenomics Database (CTD), a public database that manually curates and codes data from the scientific literature describing how environmental chemicals interact with genes to affect human health.

To help select the most relevant papers for inclusion in the CTD, Thomas Wiegers, a research bioinformatician at NC State and a co-lead author of the report, developed a sophisticated algorithm as part of a text-mining process. The application evaluates the text from thousands of papers and assigns a relevancy score to each document.
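
The CTD's actual algorithm is not described here, but a minimal sketch of the general idea, scoring candidate abstracts by their textual similarity to abstracts already curated as relevant, might look like the following Python snippet. The example texts, and the choice of TF-IDF with cosine similarity, are assumptions made for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical seed set of abstracts already curated as relevant to
# chemical-gene interactions; the real CTD pipeline is more sophisticated.
curated = [
    "arsenic exposure alters expression of tumor suppressor genes",
    "benzene metabolites interact with DNA repair genes in liver cells",
]

# Candidate abstracts pulled from a literature search.
candidates = [
    "cadmium modulates gene expression in kidney tissue",
    "a survey of birdwatching habits in coastal regions",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(curated + candidates)

# Score each candidate by its maximum similarity to any curated abstract.
scores = cosine_similarity(matrix[len(curated):], matrix[:len(curated)]).max(axis=1)
for score, text in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.2f}  {text}")
```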

But how good is the algorithm at determining the best papers? To test that, the researchers text-mined 15,000 articles and sent a representative sample to their team of biocurators to manually read and evaluate on their own, blind to the computer’s score. The biocurators concurred with the algorithm 85 percent of the time with respect to the highest-scored papers.

Using the algorithm to rank papers allowed biocurators to focus on the most relevant papers, increasing productivity by 27 percent and novel data content by 100 percent.

There are always outliers in these types of experiments: occasions where the algorithm assigns a very high score to an article that a human biocurator quickly dismisses as irrelevant. When the team looked at those outliers, they were often able to see a pattern explaining why the algorithm had mistakenly identified a paper as important.

(The paper, “Text mining effectively scores and ranks the literature for improving chemical-gene-disease curation at the Comparative Toxicogenomics Database,” was published online April 17 in PLOS ONE. Co-authors are Dr. Cindy Murphy, a biocurator scientist at NC State; Dr. Carolyn Mattingly, associate professor of biology at NC State; and Drs. Robin Johnson, Jean Lay, Kelley Lennon-Hopkins, Cindy Saraceni-Richards and Daniela Sciaky from The Mount Desert Island Biological Laboratory.)

3D Microchip

Scientists from the University of Cambridge have created, for the first time, a new type of microchip that allows information to travel in three dimensions. Currently, microchips can only pass digital information in a very limited way, either from left to right or from front to back. The research was published in Nature.

Researchers believe that in the future a 3D microchip would enable additional storage capacity on chips by allowing information to be spread across several layers instead of being compacted into one layer, as is currently the case.

For the research, the Cambridge scientists used a special type of microchip called a spintronic chip which exploits the electron’s tiny magnetic moment or ‘spin’ (unlike the majority of today’s chips which use charge-based electronic technology). Spintronic chips are increasingly being used in computers, and it is widely believed that within the next few years they will become the standard memory chip.

To create the microchip, the researchers used an experimental technique called ‘sputtering’. They effectively made a club sandwich of cobalt, platinum and ruthenium atoms on a silicon chip. The cobalt and platinum atoms store the digital information in a similar way to how a hard disk drive stores data. The ruthenium atoms act as messengers, communicating that information between neighbouring layers of cobalt and platinum. Each of the layers is only a few atoms thick.

They then used a laser technique called MOKE (the magneto-optic Kerr effect) to probe the data content of the different layers. As they switched a magnetic field on and off, they saw in the MOKE signal the data climbing layer by layer from the bottom of the chip to the top. They then confirmed the results using a different measurement method.

Cape Town’s heritage mapped

Cape Town, South Africa, is one of the world’s most popular tourist destinations. It has a great climate, a fantastic natural setting and a good infrastructure.

Cape-Town-Heritage.co.za aims to map all the heritage sites and famous landmarks in the Cape Peninsula. As the Victoria & Alfred Waterfront in Cape Town is the most popular tourist spot in South Africa, the site launched with all the heritage sites and famous landmarks in that area.

Work on the next phase, covering all the heritage sites on Robben Island, a World Heritage Site where Nelson Mandela was a prisoner for many years, has already started.

The third phase will start with the most famous of the city’s many natural wonders, Table Mountain. The city centre at the foot of the mountain will be mapped, followed by the suburbs of the Atlantic seaboard in the following phases.

Personalized experiences coming to smartphones

The proliferation of technology focused on finding the best tickets, the hottest restaurants or the next flight out of town may mean it’s time to bid adieu to the concierge and other traditional service information gatekeepers, according to new research.

Face-to-face interactions with front desk clerks and concierges are not essential for personalized service, and increasingly these encounters are being substituted with smartphone apps and other automated service systems, according to a study in the current edition of the Journal of Service Research.

Business travelers who frequent the same hotels time and again may develop personal relationships with certain concierges over time, but a “smart digital assistant” app can provide consistent personalized recommendations for every customer, every time, no matter where they are, the researchers report.

Service providers can collect information about customer preferences through methods such as customer satisfaction forms and tracking “likes” on Facebook or other social media. But automated service systems and applications can capture both explicit and implicit feedback, and do so far more completely and efficiently. Already, service systems can record customers’ choices, track navigation or record web browsing behavior. Almost instantaneously, these systems can exploit that information to personalize future recommendations.
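
As a rough illustration of the kind of implicit-feedback loop the study describes, the following Python sketch records repeated customer choices and uses them to personalize a recommendation. The customer names, categories and scoring rule are invented for this example and are not taken from the study.

```python
from collections import Counter, defaultdict

# Toy event log of implicit feedback: (customer, category) pairs captured from
# clicks, bookings and browsing. All names and categories are invented.
events = [
    ("alice", "sushi"), ("alice", "sushi"), ("alice", "rooftop bars"),
    ("bob", "steakhouse"), ("bob", "jazz clubs"), ("bob", "steakhouse"),
]

profiles: dict[str, Counter] = defaultdict(Counter)
for customer, category in events:
    profiles[customer][category] += 1   # implicit signal: repeated choices weigh more

def recommend(customer: str, catalogue: list[str]) -> str:
    """Return the catalogue entry whose category the customer has chosen most often."""
    prefs = profiles[customer]
    return max(catalogue, key=lambda category: prefs[category])

print(recommend("alice", ["steakhouse", "sushi", "jazz clubs"]))  # -> "sushi"
```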

Co-authors Professor Robert J. Glushko and Karen Joy Nomorosa envision an app that personalizes customer experiences by integrating information across all service platforms, from hotels, to dining, to booking flights.

Of course, there are still customers who enjoy the “lazy chatty conversation with a bank teller or hotel front desk clerk,” Nomorosa points out, but “for every customer who enjoys a lazy chat, there is surely someone who wants a minimalist information-driven experience.”

How useful is SEC-mandated XBRL data?

In 2009, the US Securities and Exchange Commission mandated that public companies submit portions of their annual and quarterly reports in a digitized format known as eXtensible Business Reporting Language (XBRL). The goal was to provide more relevant, timely, and reliable “interactive” data to investors and analysts. The XBRL-formatted data is meant to allow users to manipulate and organize the financial information for their own purposes faster, cheaper, and more easily than current alternatives.
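
To make the idea of “interactive” tagged data concrete, here is a minimal Python sketch that pulls tagged facts out of a drastically simplified XBRL-like fragment using the standard library’s XML parser. The element names, namespace and numbers are invented for illustration; real filings involve many more namespaces, contexts and units, and analysts would typically rely on dedicated XBRL tooling.

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace and concepts; a drastically simplified stand-in
# for a real XBRL instance document.
NS = "http://example.com/fake-taxonomy"
xbrl = f"""
<xbrl xmlns:co="{NS}">
  <co:Revenues contextRef="FY2012" unitRef="USD">1200000</co:Revenues>
  <co:NetIncome contextRef="FY2012" unitRef="USD">150000</co:NetIncome>
</xbrl>
"""

root = ET.fromstring(xbrl)

# Collect every tagged fact into an ordinary dictionary keyed by concept name,
# which can then be sorted, compared or fed into a spreadsheet.
facts = {
    elem.tag.split("}")[-1]: float(elem.text)
    for elem in root
    if elem.tag.startswith("{" + NS + "}")
}
print(facts)  # {'Revenues': 1200000.0, 'NetIncome': 150000.0}
```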

But how useful and usable is the new data to analysts and investors? Early proponents of interactive data from Columbia Business School’s Center for Excellence in Accounting and Security Analysis (CEASA) recently completed a review of the state of XBRL, with a focus on its usefulness and usability for security analysis. The study questions the reliability of the data and the simplicity and stability of the underlying taxonomy and architecture, and points to the lack of user tools that add value and integrate easily into an investor’s or analyst’s existing workflow. As a result, the researchers conclude that XBRL has promised more than it has delivered to date and is at risk of becoming obsolete for use by analysts and investors.

However, the researchers recommend specific changes that could make the formatted data more useful to investors and analysts.

First, the entire XBRL stakeholder community must significantly reduce the error rate and limit unnecessary “extensions” (company-specific data identifiers or “tags”). Steps that might achieve this include greater regulatory oversight and enforcement, mandatory audits of the data and tags, or requirements to meet the XBRL US organization’s error and quality checks.

Second, the entities that file the XBRL-formatted financial reports should focus their energy on improving the quality of their data, rather than on trying to overturn the SEC’s XBRL regulation.

Third, the ongoing development of XBRL technology should be taken over and run by technologists, rather than accountants and regulators. One approach might be to partner with major business information system vendors (like IBM, Oracle, and SAP), the key web-based financial information suppliers (like Google and Yahoo), and possibly even the major data aggregators (Bloomberg, CapitalIQ, Factset, and Thomson Reuters), not only to ensure that the SEC’s regulatory data can be used effectively by investors and analysts but, more importantly, to help improve the XBRL technology and its usability overall.

Thoughts on the establishment of a records management programme

Author: Quintus van Rensburg
2012

A records management programme looks after important records, sees to it that they are preserved and makes them available for use.

After obtaining the backing of management, the first step is to draw up a proposal for a records management programme. This is an important step, as it states why such a programme is necessary and what it entails.

The following issues should be addressed in the proposal for a specific organization:

  • How records were handled in the past and the problems this created, the financial benefit of a change, and how it will make the organisation more efficient.
  • How the new programme will address the problems of the current management programme, including an explanation of some of the elements of the new programme.
  • What resources will be required for the new programme.
  • A timetable and sequence of events for implementing the new programme.

If the proposal receives support from management, the next aspect that should receive attention is the preparation of a formal policy for the new programme. This may be a difficult phase, as the policy will need to be approved by different levels of management in an organisation.

To create a new programme or change an existing one, the task of systems analysis needs to be performed. This usually entails looking at how the processes in the organisation work, trying to improve them, and standardising record content and style.

The purpose of systems analysis is to match the records that are created to the workings of the organisation. A filing series should reflect the different functions being performed. There should also be a distinction between substantive and facilitative records: substantive records are primary to the functions of the organisation, while facilitative records are those all offices have in common. Paperwork management, which includes creating files and distributing copies, should reflect this distinction.

Record style is improved by studying the aesthetics of forms and refining elements such as format and layout. Other important aspects are directives management (methods of communicating policy and procedures), mail management (handling internal mail), security surrounding records, and the disposal of records.

The importance of the different aspects will differ from one organisation to another, depending on needs and size. For example, an organisation might decide to focus on improving its filing, expecting to improve overall performance this way. The focus of the programme will differ in every case.

Linguistics and information theory

The majority of languages — roughly 85 percent of them — can be sorted into two categories: those, like English, in which the basic sentence form is subject-verb-object (“the girl kicks the ball”), and those, like Japanese, in which the basic sentence form is subject-object-verb (“the girl the ball kicks”).

The reason for the difference has remained somewhat mysterious, but researchers from MIT’s Department of Brain and Cognitive Sciences now believe that they can account for it using concepts borrowed from information theory. The researchers will present their hypothesis in an upcoming issue of the journal Psychological Science.

In their paper, the MIT researchers argue that languages develop the word order rules they do in order to minimize the risk of miscommunication across a noisy channel.
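
The paper’s actual analysis is statistical and far richer than anything shown here, but a toy simulation can convey the noisy-channel intuition: when both participants in an event are plausible agents, SVO order keeps the verb between them, so a listener can still recover who did what even if one noun is lost in transmission. Everything below, the vocabulary, the noise model and the decoding rule, is an assumption made for illustration, not the researchers’ model.

```python
import random

random.seed(0)
AGENT, PATIENT, VERB = "agent", "patient", "verb"

def make_sentence(order):
    """A simple transitive event with two confusable (both animate) participants."""
    roles = {"SVO": [AGENT, VERB, PATIENT], "SOV": [AGENT, PATIENT, VERB]}[order]
    words = {AGENT: "girl", PATIENT: "boy", VERB: "kicks"}
    return [(words[r], r) for r in roles]

def drop_one_noun(sentence):
    """Noisy channel: one of the two nouns is lost in transmission."""
    nouns = [i for i, (_, role) in enumerate(sentence) if role != VERB]
    lost = random.choice(nouns)
    return [item for i, item in enumerate(sentence) if i != lost]

def decode_correctly(received, order):
    """Can the listener recover the surviving noun's role from word order alone?"""
    verb_pos = next(i for i, (_, role) in enumerate(received) if role == VERB)
    noun_pos, (_, true_role) = next(
        (i, item) for i, item in enumerate(received) if item[1] != VERB
    )
    if order == "SVO":
        guess = AGENT if noun_pos < verb_pos else PATIENT   # the verb splits the roles
    else:
        guess = random.choice([AGENT, PATIENT])             # both roles are preverbal
    return guess == true_role

for order in ("SVO", "SOV"):
    trials = [decode_correctly(drop_one_noun(make_sentence(order)), order)
              for _ in range(10000)]
    print(order, sum(trials) / len(trials))   # SVO recovers roles far more often
```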

The tendency of even speakers of a subject-verb-object (SVO) language like English to gesture subject-object-verb (SOV) may be an example of an innate human preference for linguistically recapitulating old information before introducing new information. The “old before new” theory — which, according to the University of Pennsylvania linguist Ellen Prince, is also known as the given-new, known-new, and presupposition-focus theory — has a rich history in the linguistic literature, dating back at least to the work of the German philosopher Hermann Paul in 1880.