
UX, Public Trust, and Bots

Published by Megan Campos

UX designers and researchers are tasked with identifying, analyzing, and homing in on experiences in users’ lives that can be improved on a number of levels. Given how often people use the internet as an information highway, UX is especially important as a means of illuminating and accommodating human needs in an otherwise inhuman space.

Traditional news media—think television, radio, and print news networks—have been both positively and negatively affected by the global turn to social media. As people have shifted to social media as a news source, these traditional outlets have been able to reshape how they interact with their audiences, grow their readership, and create new jobs dedicated specifically to online growth.

The flip side, however, is that other entities—more on who they are later—can use social media to create and disseminate stories that are intentionally inaccurate or that push extreme viewpoints. These entities spread fake stories widely through automated accounts, called “bots,” that use simple algorithms to identify social media trends, adapt them to their own purposes, and re-share these stories directly to an audience primed to believe them. While many bots are harmless, some are built for expressly nefarious purposes.

In a world of bot-spread false stories, UX designers and researchers have an opportunity to reconfigure information sharing interfaces to ensure that users are better able to decrease their exposure to false stories and instead access real, fact-based information.

Bots on the Internet

WHAT ARE BOTS?

A bot is, at its core, a mechanism for automating something. Bots are present in our daily lives, whether we know it or not; from the intentional installation of an Amazon Echo assistant to the automatic tweet that gets generated when the New York Times posts a new article, bots operate in both the forefront and background of human online activity.

Bots do not exist in a vacuum. They are created, oriented, and managed by humans, and those humans have a range of intentions. The most benign are merely expediting processes—like the Twitter bot that reposts news stories for media outlets, or the bot that processes your online food order. Others, however, are created for the sole purpose of disruption, distraction, or even destruction. These bots are the ones that should be curbed, and UX designers and researchers can play a role in ensuring that nefarious bots do not exert undue influence over people’s world view.
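
To make the mechanics concrete, here is a minimal sketch in Python of the benign repost bot described above. The feed URL and the post_update function are placeholder assumptions; a real bot would call a platform’s posting API.

```python
import time

import feedparser  # widely used library for parsing RSS/Atom feeds

# Placeholder feed URL; any outlet's RSS feed would work the same way.
FEED_URL = "https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml"

def post_update(text):
    """Stand-in for a real social-media posting API call."""
    print(f"POSTING: {text}")

def run_repost_bot(poll_seconds=300):
    seen_links = set()
    while True:
        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            # Share each new article exactly once as it appears in the feed.
            if entry.link not in seen_links:
                seen_links.add(entry.link)
                post_update(f"{entry.title} {entry.link}")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_repost_bot()
```

The point is not the specific libraries but the scale: a handful of lines like these, pointed at different content and multiplied across thousands of accounts, is all a disinformation operation needs.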

WHY ARE BOTS A PROBLEM?

Recently, Russian-owned bots have been heavily featured in the news. These bots were intended to disseminate fake news stories (created by humans) en masse, ensuring that a broad audience was exposed to and given the ability to further share false information. That these bots were created in an attempt to influence the outcome of the 2016 presidential election clearly illustrates how bots can be problematic when used for nefarious purposes.

Some bots are problematic for a variety of additional reasons. These include:

  1. They over-share sensationalist news stories, which means that other key news may be obscured or ignored.
  2. The sheer volume of bots across social media platforms means that the stories they promote receive an automatic boost, whether that means appearing in the trending topics on Twitter or the trending news section of Facebook. If false stories are front and center, they are far harder to ignore or disregard.
  3. The stories they share are often polarizing, which means that they exacerbate latent feelings of distrust or hostility and spread them across entire groups by means of social sharing.

WHY DO BOTS SUCCEED?

Bot-generated false news stories are appealing because they are often outsized and fantastical. For example, a story posted by a Rhode Island blog claimed to describe historical accounts of an Irish slave trade in America. Because the story was framed as describing “the slaves that time forgot,” and because its details were dramatic and sensational, it was shared many times over—by social media users and reputable publications alike—before it was debunked by fact checkers. An analysis from BuzzFeed showed that users engaged with fake news stories more than they did with fact-based stories produced by reputable outlets.

Bots largely succeed through inundation of messaging and adherence to incendiary trends on social media—think trending hashtags that tap into extremist views or bring deep-seated fears into the headlines. Their tactics include mass postings to popular Twitter hashtags, repeated dissemination of links to false news articles, and the posting of brief, highly polarized messages. There is also the very human tendency toward confirmation bias, which in this context means that those who hold a strong set of beliefs tend to seek out news stories that supply supporting evidence for those beliefs—no matter how extreme.
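
Those same behaviors, volume and repetition, are also the signals that simple detection heuristics can key into. As a purely illustrative sketch in Python (real platforms draw on far richer signals, and both thresholds below are invented), a first-pass filter might look something like this:

```python
from collections import Counter

def looks_like_bot(posts, max_per_hour=30.0, max_duplicate_ratio=0.5):
    """Crude heuristic bot check, for illustration only.

    `posts` is a chronological list of dicts, each carrying a 'timestamp'
    (Unix seconds) and the post 'text'. Flags accounts that post at an
    inhuman rate or that mostly repeat the same message.
    """
    if len(posts) < 10:
        return False  # too little history to judge fairly
    span_hours = max((posts[-1]["timestamp"] - posts[0]["timestamp"]) / 3600, 1.0)
    posts_per_hour = len(posts) / span_hours
    top_repeat = Counter(p["text"] for p in posts).most_common(1)[0][1]
    duplicate_ratio = top_repeat / len(posts)
    return posts_per_hour > max_per_hour or duplicate_ratio > max_duplicate_ratio
```

The hard part is not the code but the thresholds: set them too aggressively and enthusiastic human posters get flagged, which is exactly the kind of false-positive trade-off that UX research can help calibrate.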

Human Factors

TRUST IN NEWS SOURCES

Understanding how people find their news sources and why they place their trust in them is key to understanding both how bots work and why they are problematic. As was especially clear in the 2016 presidential election, politically partisan news is hugely divisive. Bots play into this divide by disseminating stories that appeal to extreme extrapolations of the core beliefs of either side of the political spectrum. They specifically key into the buzzwords employed by trusted news sources, retweeting or reposting with relevant hashtags to ensure that the desired audience sees them. Once that audience sees the stories, they are likely to retweet or reshare on their own personal accounts, thereby ensuring that the dissemination continues within like-minded communities.

People seek out and trust their news sources in different ways, and their level of trust often rises and falls along with the perceived partisanship of each news organization. It’s understandable that left-leaning and moderate people are more likely to trust media that is perceived as more liberal, and right-leaning people are more likely to trust media that is perceived as more conservative. There is also value attached to name recognition within each of those groups, which means that a small number of news organizations hold the majority of the country’s trust.

SOCIAL MEDIA

An article in The New Yorker argues that several psychological human factors complicate how people evaluate the news disseminated on online platforms:

  1. The tendency toward confirmation bias works directly against people’s acceptance of outright facts, and it follows that users are far more convinced by supporting evidence that lends credibility to their strongly held beliefs.
  2. Humans also struggle with recognizing shortcomings in their own logical reasoning, but they’re quick to notice the weaknesses in others’ arguments.
  3. Our logical reasoning has not evolved in tandem with the speed of technology, and so the biological tools we employ to distinguish fact from fiction are less effective.

We split the cognitive load of assessing valid versus invalid information with those around us, and we give credence to their views in a way that we might not with traditional news sources. All of this is to say that we cannot simply rely on human intuition or self-interest to mitigate the proliferation of bots.

A UX Approach

WHY UX

Reconfiguring the backend engineering of the social media landscape might be complex, but the user experience is something designers and researchers are able to break down and address. UX designers have the ability to craft what users see, how they see it, and how they interact with it—that’s the job. If the information users see is false or misleading, their experience and their ability to rely on the internet as a news source suffer. While it may not be a professional obligation, helping to clarify and streamline the ways in which users experience, ingest, and disseminate their news is a means of improving their interactions with the world as a whole.

ONE SOLUTION

If detection is a key methodology, then from a UX standpoint there are several further steps that social media sites like Facebook could take. Indeed, there are many entities out there trying to come up with solutions that balance sophisticated code with human autonomy. Perhaps, however, the short-term goal should be to create roadblocks to bots that do not curtail the attempts of actual users to share information they are interested in, whether it is true or not.

To think about possible solutions, I worked with two non-UX adults and ran a metaphor brainstorming session (Wilson, 2011). We asked the question: “What detection metaphors might provide ideas or insights about how to improve the detection of bots on social media?” After the brainstorm we focused on CAPTCHA as a detection mechanism and explored how its component parts could be used to detect bots on social media. The process helped break this complex problem into more digestible issues.

There are two pieces of technology—CAPTCHA and Facebook’s disputed-news alert—that could be combined to further stem the dissemination of bot-generated news. Facebook already shows the disputed-news alert when a user is about to share a story suspected to be false; carrying it one step further by adding a CAPTCHA check is a logical means of distinguishing bots from humans. Here’s a sketch of what the process could look like:
 

[Flow diagram from the original post: the disputed-news alert, followed by a CAPTCHA check, before a flagged story can be shared]
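
The same flow can also be expressed as a short runnable sketch in Python. Every name below is a hypothetical stand-in for illustration, not a real platform API:

```python
# All names below are hypothetical stand-ins, not a real platform API.

# Stories flagged as disputed by third-party fact checkers.
DISPUTED_STORIES = {"https://example.com/disputed-story"}

def is_disputed(url):
    return url in DISPUTED_STORIES

def passes_captcha(account):
    # A real implementation would serve a CAPTCHA challenge here;
    # automated accounts would be expected to fail it.
    return account["is_human"]

def handle_share_attempt(account, url):
    if not is_disputed(url):
        return "published"
    # Step 1: the existing disputed-news alert.
    print(f"Warning for {account['name']}: fact checkers dispute this story.")
    # Step 2: the added CAPTCHA check distinguishes bots from humans.
    if not passes_captcha(account):
        return "blocked"
    # A human who sees the warning can still choose to share.
    return "published with warning"

# A bot is stopped; a human can proceed past the warning.
print(handle_share_attempt({"name": "newsbot42", "is_human": False},
                           "https://example.com/disputed-story"))
print(handle_share_attempt({"name": "megan", "is_human": True},
                           "https://example.com/disputed-story"))
```

Because the CAPTCHA step appears only for flagged stories, ordinary sharing is untouched, which is consistent with the short-term goal above of creating roadblocks for bots without curtailing real users.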

Moving Forward

Next steps in addressing the problem of bots would involve researching the connection—or disconnect—between user trust and the news media. This research could supplement bot suppression efforts, so that people not only receive less false information but are also presented with news sources that are more consistently reliable.

The problem of bot-disseminated fake news is multifaceted, but so is user experience. By implementing simple design solutions, UX professionals can chip away at the problem so that the experience of reading and believing the news becomes more consistently reliable. CAPTCHA is one such solution, and if a simple brainstorming exercise can produce that idea, there are surely many others that UX design and research professionals across the industry could create.

 


Megan Campos

Megan is a Research Associate at the User Experience Center. Prior to joining the UXC, Megan was a website strategist at a Boston-area web and print design firm. In this capacity, she served as a project manager, copywriter, and information architect for clients across a wide range of industries. She also has professional experience in college admissions and institutional development.

Megan holds a Bachelor of Arts in Sociology and Spanish from Dickinson College. She is currently pursuing a Master of Science degree in Human Factors in Information Design from Bentley University. Website | LinkedIn
