Are you Facebook dependent?

What drives you to Facebook? News? Games? Feedback on your posts? The chance to meet new friends?

If any of these hit home, you might have a Facebook dependency. But that’s not necessarily a bad thing, says Amber Ferris, an assistant professor of communication at The University of Akron’s Wayne College.

Ferris, who studies Facebook user trends, says the more people use Facebook to fulfill their goals, the more dependent on it they become. She is quick to explain that this dependency is not equivalent to an addiction. Rather, the reason people use Facebook determines how dependent they become on the social network. The study found that those who use Facebook to meet new people were the most dependent on it overall.

To identify dependency factors, Ferris and Erin Hollenbaugh, an associate professor of communication studies at Kent State University at Stark, studied 301 Facebook users between the ages of 18 and 68 who post on the site at least once a month. They found that people who perceive Facebook as helpful in gaining a better understanding of themselves go to the site to meet new people and to get attention from others. Also, people who use Facebook to gain a deeper understanding of themselves tend to have agreeable personalities, but lower self-esteem than others.

Ferris explains that some users observe how others cope with problems and situations similar to their own “and get ideas on how to approach others in important and difficult situations.”

Ferris and Hollenbaugh presented “A Uses and Gratifications Approach to Exploring Antecedents to Facebook Dependency” at the National Communication Association conference in Las Vegas in November. They say other Facebook dependency signs point to users’ needs for information or entertainment. In other words, a user knows about the local festival scheduled for this weekend thanks to Facebook.

In their previous studies, “Facebook Self-disclosure: Examining the Role of Traits, Social Cohesion, and Motives” (2014) and “Predictors of Honesty, Intent, and Valence of Facebook Self-disclosure” (2015) published in the journal Computers in Human Behavior, Ferris and Hollenbaugh also uncovered personality traits common among specific types of Facebook users.

For example, people who use Facebook to establish new relationships tend to be extroverted. Extroverts are more open to sharing their personal information online, but are not always honest with their disclosures, Ferris says.

The most positive posts online come from those who have high self-esteem, according to Ferris.

“Those who post the most and are the most positive in posts do so to stay connected with people they already know and to gain others’ attention,” Ferris says. “This makes a lot of sense – if you are happy with your life, you are more likely to want to share that happiness with others on social media.”

Complex humor is no laughing matter

Since the earliest times, laughter and humor have performed important functions in human interaction. They help to expedite courtship, improve conversational flow, synchronize emotional states and enhance social bonding. Jokes, a structured form of humor, give us control over laughter and are therefore a way to elicit these positive effects intentionally. In order to comprehend why some jokes are perceived as funny and others are not, Robin Dunbar and colleagues at Oxford University investigated the cognitive mechanism underlying laughter and humor. The research is published in Springer’s journal Human Nature.

The ability to fully understand other people’s often unspoken intentions is called mentalizing, and it involves different levels of so-called intentionality. For example, an adult can follow up to five such levels of intentionality before losing the thread of a story that is too complex. Conversations that simply share facts normally involve only three such levels. More brain power is needed when people chat about the social behavior of others, because it requires them to put themselves, again and again, in the shoes of others.

The best jokes are thought to build on a set of expectations and have a punchline to update the knowledge of the listener in an unexpected way. Expectations that involve the thoughts or intentions of people other than the joke-teller or the audience, for example the characters in the joke, are harder to pin down. Our natural ability to handle only a limited number of mindstates comes into play.

In order to shed light on how our mental ability limits what we find funny, the researchers analyzed the reaction of 55 undergraduates from the London School of Economics to 65 jokes from an online compilation of the 101 funniest jokes of all time. The collection mostly consisted of jokes from successful stand-up comedians. Some jokes in the compilation were mere one-liners, while others were longer and more complex. A third of the jokes were factual and contained reasonably undemanding observations of idiosyncrasies in the world. The rest involved the mindstates of third parties. The jokes were rated on a scale from one to four (not at all funny to very funny).

The research team found that the funniest jokes are those that involve two characters and up to five back-and-forth levels of intentionality between the comedian and the audience. People easily lose the plot when jokes are more complex than that. The findings do not suggest that humor is defined by how cleverly a joke is constructed, but rather that there is a limit to how complex its contents can be to still be considered funny. According to Dunbar, increasing the mentalizing complexity of the joke improves the perceived quality, but only up to a certain point: stand-up comedians cannot afford to tell intricate jokes that leave their audience feeling as if they’ve missed the punchline.

“The task of professional comics is to elicit laughs as directly and as fast as possible. They generally do this most effectively when ensuring that they keep within the mental competence of the typical audience member,” says Dunbar. “If they exceed these limits, the joke will not be perceived as funny.”

It is likely that everyday conversational jokes do not involve as many intentional levels as those that have been carefully constructed by professional comedians. Further research needs to be conducted in this area. However, Dunbar’s findings shed some light on the mechanics of language-based humor and therefore on the workings of our mind.

Words can deceive, but tone of voice cannot

A new computer algorithm can predict whether you and your spouse will have an improved or worsened relationship based on the tone of voice that you use when speaking to each other with nearly 79 percent accuracy.

In fact, the algorithm did a better job of predicting the marital success of couples with serious marital issues than relationship experts’ descriptions of the therapy sessions did. The research was published in Proceedings of Interspeech on September 6, 2015.

Researchers recorded hundreds of conversations from more than one hundred couples during marriage therapy sessions over two years, and then tracked the couples’ marital status for five years.

An interdisciplinary team then developed an algorithm that broke the recordings into acoustic features using speech-processing techniques. These included pitch, intensity, “jitter” and “shimmer,” among many others: features that capture things like warbles in the voice that can indicate moments of high emotion.
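
As a rough illustration of what such feature extraction can look like, here is a minimal Python sketch using the open-source librosa library. The study’s actual pipeline is not detailed here, so the function name is illustrative and the jitter and shimmer values are only crude frame-level proxies, not clinical definitions.

```python
# Minimal sketch of per-recording acoustic feature extraction.
# Assumes Python with numpy and librosa installed; jitter and shimmer are
# approximated as frame-to-frame instability, not the standard voice metrics.
import numpy as np
import librosa

def acoustic_features(wav_path):
    y, sr = librosa.load(wav_path, sr=None)          # keep the native sample rate

    # Pitch (fundamental frequency) track via probabilistic YIN
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[voiced]                                   # keep voiced frames only

    # Frame-level intensity (RMS energy)
    rms = librosa.feature.rms(y=y)[0]

    # Crude proxies: mean relative frame-to-frame change in pitch ("jitter")
    # and in amplitude ("shimmer")
    jitter = float(np.mean(np.abs(np.diff(f0)) / f0[:-1])) if f0.size > 1 else 0.0
    shimmer = float(np.mean(np.abs(np.diff(rms)) / (rms[:-1] + 1e-9)))

    return {"mean_pitch_hz": float(np.mean(f0)) if f0.size else 0.0,
            "mean_intensity": float(np.mean(rms)),
            "jitter_proxy": jitter,
            "shimmer_proxy": shimmer}
```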

Taken together, the vocal acoustic features offered the team’s program a proxy for each subject’s communicative state, and for changes in that state over the course of a single therapy session and across sessions.

These features weren’t analyzed in isolation; rather, the researchers studied the impact of one partner upon the other across multiple therapy sessions.

Once it was fine-tuned, the program was tested against behavioral analyses made by human experts, who had coded the sessions for positive qualities like “acceptance” or negative qualities like “blame.” The team found that studying the voice directly, rather than the expert-created behavioral codes, offered a more accurate glimpse of a couple’s future.

Next, using behavioral signal processing, the team plans to use language (e.g., spoken words) and nonverbal information (e.g., body language) to improve the prediction of how effective treatments will be.

Researchers develop more accurate Twitter analysis tools

“Trending” topics on the social media platform Twitter show the quantity of tweets associated with a specific event. However, trends only show the highest volume keywords and hashtags, and may not give qualitative information about the tweets themselves. Now, using data associated with the Super Bowl and World Series, researchers at the University of Missouri have developed and validated a software program that analyzes event-based tweets and measures the context of tweets rather than just the quantity. The program will help Twitter analysts gain better insight into human behavior associated with trends and events.

“Trends on Twitter are almost always associated with hashtags, which only gives you part of the story,” said Sean Goggins, assistant professor in the School of Information Science and Learning Technologies at MU. “When analyzing tweets that are connected to an action or event, looking for specific words at the beginning of the tweets gives us a better indication of what is occurring, rather than only looking at hashtags.”

Goggins partnered with Ian Graves, a doctoral student in the Computer Science and IT Department at the College of Engineering at MU. Graves developed software that analyzes tweets based on the words found within them. After the researchers programmed a “bag of words,” or set of terms they expected to be associated with the Super Bowl and World Series, the software analyzed those words and their placement within the 140-character tweets.
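
To make the general idea concrete, here is a small hypothetical Python sketch: match tweets against a predefined bag of words and note whether a match appears near the start of the tweet. The word list and function names are illustrative only, not the MU team’s actual software.

```python
# Hypothetical sketch of bag-of-words tweet tagging with positional awareness.
EVENT_WORDS = {"touchdown", "interception", "fumble", "homerun", "strikeout"}

def tag_tweet(text, bag=EVENT_WORDS, prefix_len=5):
    """Return matched words and whether a match appears near the start of the tweet."""
    tokens = [t.strip("#@.,!?:;").lower() for t in text.split()]
    hits = [t for t in tokens if t in bag]
    early_hit = any(t in bag for t in tokens[:prefix_len])
    return {"hits": hits, "early_hit": early_hit}

# Example: a low-volume, action-describing tweet is flagged even without a hashtag.
print(tag_tweet("Touchdown on the opening drive, unbelievable catch"))
```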

“The software is able to detect more nuanced occurrences within the tweet, like action happening on the baseball field in between batters at the plate or plays in the game,” Graves said. “The program uses a computational approach to seek out not only a spike in hashtags or words, but also what’s really happening on a micro level. By looking for low-volume, localized tweets, we gleaned intelligence that stood apart from the clutter and noise associated with tweets related to the World Series.”

Goggins believes that using this method to analyze tweets on a local level can help officials involved in community safety or disaster relief investigate the causes of major events like the Boston bombing, or predict future events.

“Most of the things that happen on Twitter are not related to specific events in the world,” Goggins said. “If analysts are just looking at the volume of tweets, they’re not getting the insight they need about what’s truly happening or the whole picture. By focusing on the words within the tweet, we have the potential to find a truer signal inside of a very noisy environment.”

The study, “Sifting signal from noise: a new perspective on the meaning of tweets about the ‘big game,’” was published in the journal New Media and Strategy.

Forming consensus in social networks

Social networks have become a dominant force in society. Family, friends, peers, community leaders and media communicators are all part of people’s social networks. Individuals within a network may have different opinions on important issues, but it’s their collective actions that determine the path society takes.

To understand the process through which we operate as a group, and to explain why we do what we do, researchers have developed a novel computational model and the corresponding conditions for reaching consensus in a wide range of situations. The findings are published in the August 2014 issue on Signal Processing for Social Networks of the IEEE Journal of Selected Topics in Signal Processing.

The new model helps us to understand the collective behavior of adaptive agents (people, sensors, databases or abstract entities) by analyzing communication patterns that are characteristic of social networks.

The model addresses some fundamental questions: what is a good way to model opinions, how are those opinions updated, and when is consensus reached?

One key feature of the new model is its capacity to handle the uncertainties associated with soft data (such as opinions of people) in combination with hard data (facts and numbers).

The agents exchange and revise their beliefs through interaction with other agents. The interaction is usually local, in the sense that only neighboring agents in the network exchange information in order to update their beliefs or opinions. The goal is for the group of agents in a network to arrive at a consensus that is ‘similar’ to the ground truth, that is, to what has been confirmed by the gathering of objective data.
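
As a point of reference, the simplest version of this kind of local belief updating is classical neighbor averaging (a DeGroot-style model). The sketch below illustrates that idea only; the published model additionally handles soft and hard data and their uncertainties, which this toy example does not attempt.

```python
# Illustrative DeGroot-style neighbor averaging, not the paper's model.
# Shows how purely local updates can drive a network toward a shared value.
import numpy as np

def consensus(adjacency, opinions, steps=50):
    A = np.array(adjacency, dtype=float)
    A = A + np.eye(len(A))                 # each agent also keeps its own opinion
    W = A / A.sum(axis=1, keepdims=True)   # row-normalise into update weights
    x = np.array(opinions, dtype=float)
    for _ in range(steps):
        x = W @ x                          # every agent averages its neighbourhood
    return x

# Four agents on a line graph, starting from very different opinions.
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(consensus(adj, [0.0, 0.2, 0.8, 1.0]))   # values converge toward ~0.5
```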

In previous models, the consensus the agents reached depended entirely on how they updated their beliefs; in other words, different updating schemes could produce different consensus states. The consensus in the current model is more rational or meaningful.

According to the model, if the consensus opinion is closer to an agent’s opinion, then one can say that this agent is more credible. On the other hand, if the consensus opinion is very different from an agent’s opinion, then it can be inferred that this agent is less credible.
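
A hypothetical follow-on to the averaging sketch above: score each agent by how close its initial opinion is to the consensus value, with closer meaning more credible in the sense described here. This is only an illustration of the idea, not the paper’s actual measure.

```python
# Hypothetical credibility score: closer to the consensus value = more credible.
import numpy as np

def credibility(initial_opinions, consensus_value):
    distance = np.abs(np.array(initial_opinions) - consensus_value)
    return 1.0 / (1.0 + distance)          # in (0, 1]; 1 means exact agreement

print(credibility([0.0, 0.2, 0.8, 1.0], 0.5))
```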

In the future, the researchers would like to expand their model to include the formation of opinion clusters, in which the agents within each cluster share similar opinions. Clustering can be seen in the emergence of extremism, the spread of minority opinions, the appearance of political affiliations, or affinity for a particular product, for example.

Internet not responsible for dying newspapers

We all know that the Internet has killed the traditional newspaper trade, right? After all, until the general population started interacting with the web in the mid-90s, the newspaper business was thriving, offering readers top-notch journalism and pages of ads.

But a recently published study finds that we may all be wrong about the role of the Internet in the decline of newspapers.

According to research by University of Chicago Booth School of Business Professor Matthew Gentzkow, assumptions about journalism are based on three false premises.

In his new paper, “Trading Dollars for Dollars: The Price of Attention Online and Offline,” which was published in the May issue of the American Economic Review, Gentzkow notes that the first fallacy is that online advertising revenues are naturally lower than print revenues, so traditional media must adopt a less profitable business model that cannot support paying real reporters. The second is that the web has made the advertising market more competitive, which has driven down rates and, in turn, revenues. The third misconception is that the Internet is responsible for the demise of the newspaper industry.

“This perception that online ads are cheaper to buy is all about people quoting things in units that are not comparable to each other—doing apples-to-oranges comparisons,” Gentzkow says. Online ad rates are typically discussed in terms of the “number of unique monthly visitors” a site receives, while newspaper rates are based on circulation numbers.

Several studies have already shown that print readers spend an order of magnitude more time with a newspaper than the average monthly visitor spends on a news site, which makes treating those two rates as comparable incorrect.
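
To see why the units matter, here is a toy Python calculation with made-up numbers (not Gentzkow’s data): once both rates are expressed as revenue per hour of audience attention, the comparison can flip.

```python
# Toy illustration with hypothetical figures: converting ad revenue quoted in
# incompatible units into a common price per hour of reader attention.
def price_per_attention_hour(ad_revenue, audience, hours_per_person):
    """Revenue divided by the total hours the audience spends with the medium."""
    return ad_revenue / (audience * hours_per_person)

# Hypothetical month: print readers spend far more time than online visitors.
print_rate = price_per_attention_hour(ad_revenue=1_000_000,
                                      audience=50_000,    # subscribers
                                      hours_per_person=8.0)
online_rate = price_per_attention_hour(ad_revenue=300_000,
                                       audience=500_000,  # unique monthly visitors
                                       hours_per_person=0.1)
print(f"print:  ${print_rate:.2f} per attention-hour")   # $2.50
print(f"online: ${online_rate:.2f} per attention-hour")  # $6.00
# Despite lower total online revenue, the price per hour of attention comes out higher.
```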

By comparing the amount of time people actually see an ad, Gentzkow finds that the price of attention for similar consumers is actually higher online. In 2008, he calculates, newspapers earned $2.78 per hour of attention in print, and $3.79 per hour of attention online. By 2012, the price of attention in print had fallen to $1.57, while the price for attention online had increased to $4.24.

Gentzkow also points out that the popularity of newspapers had already significantly diminished between 1980 and 1995, well before the Internet age, and has dropped at roughly the same rate ever since. “People have not stopped reading newspapers because of the Internet,” Gentzkow notes.

Near error-free wireless detection

The accuracy and range of radio frequency identification (RFID) systems, which are used in everything from passports to luggage tracking, could be vastly improved thanks to a new system developed by researchers at the University of Cambridge.

The increased range and accuracy open up many potential monitoring applications, including support for the sick and elderly, real-time environmental monitoring in areas prone to natural disasters, and paying for goods without the need for conventional checkouts.

The new system improves the accuracy of passive (battery-less) RFID tag detection from roughly 50 per cent to near 100 per cent, and increases the reliable detection range from two to three metres to approximately 20 metres. The results are outlined in the journal IEEE Transactions on Antennas and Propagation.

RFID is a widely used wireless sensing technology that uses radio waves to identify an object by means of a serial number stored on a tag. The technology is used for applications such as baggage handling in airports, access badges, inventory control and document tracking.

RFID systems consist of a reader and a tag. Unlike conventional bar codes, the reader does not need to be in line of sight with the tag in order to detect it, meaning that tags can be embedded inside an object and that many tags can be detected at once. Additionally, the tags require no internal energy source or maintenance, as they draw their power from the radio waves that interrogate them.

Several other methods of improving passive RFID coverage have been developed, but they do not address the issue of dead spots: locations where weak or destructively interfering signals prevent a tag from being read reliably.

However, by using a distributed antenna system (DAS) of the type commonly used to improve wireless communications within a building, Dr Sabesan and Dr Michael Crisp, along with Professors Richard Penty and Ian White, were able to achieve a massive increase in RFID range and accuracy.

By multicasting the RFID signals over a number of transmitting antennas, the researchers were able to dynamically move the dead spots to achieve an effectively error-free system. Using four transmitting and receiving antenna pairs, the team were able to reduce the number of dead spots in the system from nearly 50 per cent to zero per cent over a 20 by 15 metre area.
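
A simplified way to see the effect is to model two transmitting antennas as interfering carriers over a 20 by 15 metre area: with one fixed phase setting, destructive interference leaves narrow dead zones, but a second pass with a different phase offset moves them, so the union of the passes covers essentially every grid point. The Python sketch below uses made-up parameters and a phase-only propagation model; it illustrates the principle rather than the Cambridge system itself.

```python
# Toy interference model: two transmit antennas, a grid of tag positions, and
# coverage defined as "the two carriers do not cancel at that point".
# All parameters are illustrative, not those of the Cambridge DAS system.
import numpy as np

FREQ = 865e6                                   # typical UHF RFID carrier (Hz)
K = 2 * np.pi * FREQ / 3e8                     # wavenumber (rad/m)

antennas = np.array([[0.0, 0.0], [20.0, 0.0]])             # antenna positions (m)
xs, ys = np.meshgrid(np.linspace(0, 20, 200), np.linspace(0, 15, 150))
points = np.stack([xs.ravel(), ys.ravel()], axis=1)         # 20 x 15 m grid

def covered(phase_offsets, threshold=0.3):
    """Boolean mask of grid points where the summed field is not a near-null."""
    field = np.zeros(len(points), dtype=complex)
    for ant, phi in zip(antennas, phase_offsets):
        d = np.linalg.norm(points - ant, axis=1)
        field += np.exp(1j * (K * d + phi))     # phase-only model, path loss ignored
    return np.abs(field) > threshold            # 0 = full cancellation, 2 = in phase

first_pass = covered([0.0, 0.0])
second_pass = covered([0.0, np.pi / 2])         # re-transmit with a phase shift
print("one pass covers:  %.1f%% of points" % (100 * first_pass.mean()))
print("two passes cover: %.1f%% of points" % (100 * (first_pass | second_pass).mean()))
```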

In addition, the new system requires fewer antennas than current technologies. In most of the RFID systems currently in use, the best way to ensure an accurate reading of the tags is to shorten the distance between the antennas and the tags, meaning that many antennas are required to achieve an acceptable accuracy rate. Even so, it is impossible to achieve completely accurate detection. But by using a DAS RFID system to move the location of dead spots away from the tag, an accurate read becomes possible without the need for additional antennas.

The team is currently working to add location functionality to the RFID DAS system which would allow users to see not only which zone a tagged item was located in, but also approximately where it was within that space.

The system, recognised by the award of the 2011 UK RAEng/ERA Innovation Prize, is being commercialised by the Cambridge team. This will allow organisations to inexpensively and effectively monitor RFID tagged items over large areas.