User Experience Center

The Ethical Implications of the Chatbot User Experience

Published by Jennifer Siegel 

Humans seem to love yelling at robots. While fielding questions from customers through online help, my coworkers and I would be cursed out by people who mistook us for chatbots or virtual assistants. Sometimes these tirades were unprovoked, with people sending insults before ever asking a technical question. We probably all have our own stories of asking inappropriate questions or saying mean things to chatbots. When faced with unfamiliar artificial intelligence (AI) in the form of chatbots, from AIM’s SmarterChild to Apple’s Siri, humans try to push the boundaries of its capabilities. According to Dr. Sheryl Brahnam, Assistant Professor in Computer Information Systems at Missouri State University, 10%-50% of our interactions with conversational agents (CAs) are abusive.

Many in the UX field are now considering hedonomics, or the “branch of science and design devoted to the promotion of pleasurable human-technology interaction”. While UX designers strive to create the ideal experience, what does it say about our society if cruelty toward these agents is tolerated and widespread? And what is our ethical responsibility as designers: to accommodate that behavior, or to push back against it?

To address this problem, we first need to understand why people are mean to virtual assistants. When speaking to a bot at a call center, raising our voices often gets us to a human representative more quickly, and dealing with a computer can be frustrating in itself. Artificial intelligence may also be crossing into the uncanny valley, the phenomenon in which a design that is similar, but not identical, to a human being provokes a strongly negative response to the simulated likeness. In an interview with The Atlantic, Dr. Justine Cassell, Professor of Human-Computer Interaction at Carnegie Mellon, states that “The more human-like a system acts, the broader the expectations that people may have for it.” Because virtual assistants do not visually indicate their functionality, we may assume they have more abilities than they actually do, or probe to better understand their limitations.

One purpose of verbal abuse is to cause harm. Since AIs lack emotional capability, does that mean these actions are harmless? Though abusing a virtual assistant may seem innocuous, this behavior reflects back on our character. An article in the Harvard Business Review suggests that yelling at technology represents poor leadership. We may argue that abusing AI is a way of letting off steam on an inanimate object rather than an actual person. However, studies have shown that venting does not reduce emotion, but rehearses it. People who are encouraged to express their anger are actually more aggressive in their subsequent interactions with others. Practicing this behavior towards virtual assistants could therefore bleed into our interactions with other people. While we might not be able to offend Siri or Alexa, we can definitely offend those who overhear us. This might be most dangerous to our children, who learn by imitating those around them.

If our behavior towards AI may be mimicked by our children and influences our treatment of each other, there could be significant implications for gender dynamics. With the emergence of AI virtual assistants like Siri, Alexa, and Google Home, the majority of the personas are female. While some identify as gender-neutral, the default voice is typically that of a woman. Research shows that across cultures, we associate female voices with qualities such as kindness, helpfulness, warmth, and communicativeness, which may be why people prefer them for their chatbots.

In my opinion, though, these female virtual assistants perpetuate the stereotype that women are subservient. More concerning, Dr. Brahnam’s research found that users direct more sexual and profane comments towards female-presenting chatbots than male-presenting ones. Brahnam has stated that “If we’re practicing abusing agents of different types, then I think it lends itself to real world abuse.” It is troubling to think that the harassment of female chatbots may contribute to the already widespread problem of sexual harassment of women. In a similar vein, studies have examined how violence and the sexual objectification of women in video games influence rape myth acceptance, or the internalization of attitudes that justify or excuse rape. While results vary across the research, some studies have concluded that long-term exposure to sex-typed video games is correlated with greater tolerance of sexual harassment and greater rape myth acceptance. I’d be interested to see additional research on how AI abuse affects our communication with others, especially considering the prevalence of female AI personas.

How do these social implications influence our design choices? UX designers are tasked with optimizing the user experience. However, there may be times when we should focus on the broader experience of society rather than that of each individual user. For example, though users may prefer a system with a female voice, it may be helpful to introduce male personas. On the biological side, low-pitched voices are easier to discern against background noise and are more accessible to elderly listeners. From an ethical standpoint, male personas can help erode the belief that service roles are reserved for women. Personally, though, I find it frustrating that the assumed design solution is simply to replace the female voice with a male one. I liken it to teaching young girls how to avoid sexual assault rather than teaching others not to commit it.

A better solution may be to examine the actual responses of the chatbots. While women are a minority in computer science, they are even less well represented in the subfield of AI. Based on registration information, only 15% of attendees at NIPS 2016, one of the biggest artificial-intelligence conferences, were women. There are therefore few women making decisions about how these virtual assistants are presented, and consequently, about how a woman’s voice is literally heard across various devices and companies. An article in Quartz by Leah Fessler reviewed how different bots (Siri, Alexa, Cortana, and Google Home) responded to various forms of harassment. The examination revealed a range of responses, from lackluster at best to harmful at worst. Siri actually flirts coyly in response to derogatory comments.

Fessler critiques Google Home for not understanding many comments and responding, “Sorry, I don’t understand”. While I am not certain whether this is due to Google Home’s later start in development or a conscious choice by its designers, I wouldn’t underestimate the choice not to engage. As with techniques for confronting real-world harassment, a victim may choose not to engage, and thereby avoid encouraging the harasser. And, related to the uncanny valley, it can be disturbing when devices mimic human behavior too closely; it might be effective to remind users that this is a computer, not a person.

As an alternative way forward, Fessler offers potential responses, such as “Your sexual harassment is unacceptable and I won’t tolerate it. Here’s a link that will help you learn appropriate sexual communication techniques.” This response seems quite effective, yet I wonder if any company would ever implement it. From a UX perspective, some might say that this is a particularly uncomfortable situation for the user, which would detract from their overall experience. However, I think our ethical obligation as designers means we should not accommodate threatening behavior. Rather, we should try to address social problems in our designs.
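To make that design choice concrete, here is a minimal sketch of how such a response policy might be structured. It is purely illustrative and assumes a hypothetical classify_intent function standing in for whatever abuse-detection model a real assistant would use; the keyword list, the educational reply, and the link in it are placeholders, not any vendor’s actual implementation. The point is the branching logic: abusive input gets a firm, non-engaging, educational response rather than a coy deflection.

```python
# Illustrative sketch of a harassment-aware response policy for a chatbot.
# The classifier and responses below are hypothetical placeholders.

ABUSIVE_KEYWORDS = {"stupid", "idiot", "shut up"}  # stand-in for a real abuse-detection model

EDUCATIONAL_REPLY = (
    "That kind of comment isn't something I'll respond to. "
    "Here's a resource on respectful communication: https://example.org/respect"  # hypothetical link
)


def classify_intent(utterance: str) -> str:
    """Toy stand-in for a real intent/abuse classifier."""
    lowered = utterance.lower()
    if any(keyword in lowered for keyword in ABUSIVE_KEYWORDS):
        return "abusive"
    return "neutral"


def handle_normal_request(utterance: str) -> str:
    """Placeholder for the assistant's ordinary skills/NLU pipeline."""
    return f"Working on it: {utterance}"


def respond(utterance: str) -> str:
    """Choose a response: never flirt with or reward abusive input."""
    if classify_intent(utterance) == "abusive":
        # Decline to engage and point toward better behavior,
        # rather than deflecting coyly or playing along.
        return EDUCATIONAL_REPLY
    return handle_normal_request(utterance)


if __name__ == "__main__":
    print(respond("What's the weather tomorrow?"))
    print(respond("You're stupid"))
```

The interesting design decision is not the classifier (which in practice would be a trained model, not keywords) but the policy layer: whether the assistant ignores, deflects, or explicitly names the behavior and redirects the user.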

In the end, this improves the broader user experience of these devices and, beyond them, how users interact in many aspects of their lives. In the spirit of inclusion, we should strive to create designs that make everyone feel comfortable on a larger scale and that do not perpetuate disrespectful or demeaning behavior. While it would be great if everyone were kinder, we cannot control individuals’ actions. We can, however, design chatbots that respond to abuse and defend those who might not be able to defend themselves.

 

Jennifer Siegel

Jennifer Siegel is a Research Associate at the User Experience Center. Prior to joining the UXC, she worked in a variety of roles within the medical device industry. She is interested in how user experience extends beyond the devices themselves and into our interactions with others. She graduated from Princeton University in 2013 with a Bachelor of Science in Engineering in Chemical and Biological Engineering and is currently pursuing a Master of Science in Human Factors in Information Design from Bentley University.

 
