Walk into a modern newsroom and you will likely notice something that was unimaginable just a few years ago: screens with real-time updates on “trending” stories from the outlet’s website. This unprecedented monitoring has helped turn round-the-clock news into very big business. And as more news migrates online, and more of it is delivered to handheld devices, such detailed analysis of the audience has allowed media networks to exert a troubling level of influence over the stories their ‘users’ consume.
Recent allegations of political bias at Facebook show how dangerous the misuse of this knowledge can be. Earlier this week the website Gizmodo reported that former Facebook staffers alleged that the company’s “trending” news section “routinely suppressed news stories of interest to conservative readers”; that they were asked to “inject” stories that didn’t merit inclusion in the news feed; and that they ensured Facebook itself did not become a trending topic. These disclosures have since prompted some of the network’s users, an estimated 160 million of whom are sampling the news feed at any given moment, to pay closer attention to the algorithms, and the humans, that curate their daily feeds.
An internal Facebook memo gives a good sense of what happens when primary news sources are repackaged. It lists seventeen news categories: “Business, politics, science, technology, health, disaster, crime, lifestyle, celebrity, strange, education, war/terrorism, sad/disturbing, other, local, gossip, risqué”, with a lack of alphabetization that may indicate their relative importance. It also offers such advice as “Assume every topic in pending is a real-world event, until proven otherwise”; “Do not copy another outlet’s headline. For legal reasons, all of our descriptions need to be original”; and such technocratic Newspeak as “The unique keyword [for a post] should not exceed six words.”
The apparent ranking of categories in the memo is almost as disturbing as the insistence on rewritten headlines, replacing words which journalists and editors have, presumably, chosen to convey some nuance. One obvious consequence is that news items acquire new meanings as we drift further downstream from their source. Consider, for instance, the seemingly endless updates on ‘scientific’ studies that tell us to consume more, or less, of a particular food. Most turn out to be shoddy science, with small sample groups, poor controls and funders with vested interests, while others say nothing like what the headlines, and secondary headlines, allege. This may mean nothing more than confusing dietary advice in “lifestyle” stories, but it also distorts the public understanding of complex issues like foreign policy, climate change and healthcare reform.
This is not the first time that Facebook has been criticized for manipulating users’ responses. Two years ago the company disclosed that it had tweaked the news feeds of more than half a million users as part of an academic study into how social media affect our emotional responses to news. The Orwellian title of the associated paper, “Experimental evidence of massive-scale emotional contagion through social networks”, should be warning enough. At the time, the company’s chief product officer reassured users that Facebook was “just providing a layer of technology that helps people get what they want … That’s the master we serve at the end of the day.” Gizmodo’s revelations suggest otherwise.
What has allegedly taken place at Facebook ought to remind us that new technology may alter our delivery systems, but it cannot fix our prejudices. If anything, it exacerbates our tendency to seek out opinions that confirm those we already hold. The amplification of partisan differences may stir passions and help ‘trending’ stories generate more ad revenue, but perhaps we should pass up the convenience of mysteriously curated news feeds for more traditional journalism that serves no masters and still pursues old-fashioned ideals like accuracy, balance and the public interest.