Digital Assistants Are Almost All ‘Female’ Unless They Are The Most Powerful


Last week, writing in The New Republic, Jessica Nordell described the many reasons why making practically all digital assistants “female” is problematic. “Consistently representing digital assistants as female matters a lot in real life,” she wrote. “It hard-codes a connection between a woman’s voice and subservience.”

Cortana, Alexa, Viv, Siri, and Moneypenny are just a few of the most well-known products. Everything from guidance systems to vacuum cleaners to all-purpose gadgets comes packaged with a voice, and most people think of these products in gendered terms. Sometimes, as with Waze, the popular navigation app, users have a choice; sometimes they don’t. When Apple first launched Siri, for example, users in the U.K., but not the U.S., had the option of choosing a male voice.

It doesn’t take decades of study to figure out that having assistants be overwhelmingly “female” in their interactions with users teaches the idea that girls and women, not boys and men, are accommodating, helpful, supportive, and communal. Digital assistants are a kind of public resource, agents whose purpose is to do what they’re told to do. In the case of Siri, which is a Norwegian name meaning “beautiful woman who leads you to victory,” the presumptions around asking and gender go one step further: Siri appears incapable of actually saying “no” to any request.

Ask Siri, “Can you say ‘no’?” and Siri says, “Who, me?” Ask, “Are you human?” and Siri might say, “In the cloud, no one questions your existential status.” Inquire as to whether Siri can drive a car and “she” might answer, “I like to travel at the speed of light.” “Are you able to read?” gets “I’m afraid I can’t answer that.” The only way I could get Siri to say the word “no” was to ask, “What is the opposite of yes?” After dozens of tries, Siri first provided a link to a thesaurus and eventually said, “The answer is ‘no,’” which is different from Siri actually saying “no” when asked to do something. On the other hand, Siri’s answer to simply hearing “Siri” was, “Your wish is my command.”

As Nordell points out, this fallback to gender stereotypes in artificial assistants is hardly rare. The same effect can be seen in toys, in gaming, on television, in movies, and in life, where girls and women continue to be portrayed as “assistants to the leading male character.” Robots, too, are being developed in highly gendered ways that are, frankly, somewhat depressing.

Today, the most common job for women in the United States is still what it was in 1960: administrative assistant. And a large percentage of the top jobs for women in the country continue to be those in which women serve in support roles, often to higher-status, higher-paid men: secretaries, teachers, and nurses, for example. Women are the majority of counselors, social workers, teachers, librarians, public relations professionals, health aides, maids, housekeepers, nannies, and medical technicians.

A common thread in these types of jobs is emotional labor, and that, too, is being computerized. Ellie, for example, is the name given to a “virtual therapist” created at USC in conjunction with DARPA. Ellie, created “to talk to people in a safe and secure environment,” is designed to diagnose PTSD, anxiety, and depression. She is, like so many women, a health care support agent who “listens well” and is highly responsive to soldiers as they recount trauma.

The transfer of gender stereotypes about women to digital assistants is only half of the equation, however. Men’s names are being used, too; they are just attached to higher-value, supersmart, powerful computers. These are the computers that have higher status and that we are supposed to fear, the ones that will take our jobs or take over our civilization.

Watson, IBM’s supercomputer, is one example. In 2015, during an interview, IBM CEO Ginni Rometty explained that Watson was not something to be scared of, but to appreciate. She urged people to think of Watson, a machine unmatched in its ability to use natural language processing, grammar, and machine learning to parse unprecedented volumes of data, as a digital assistant. Like other digital assistants, people ask Watson questions and Watson answers. The difference is that he is bigger, better, more powerful, and scarier. As Rometty described Watson’s rapidly expanding abilities, including its becoming more human-like, she shifted from referring to the computer as “it” to calling Watson “he.”

Likewise, “Eugene Goostman” is the name of the bot that has most successfully competed in the Turing Test, a measure of computer intelligence based on whether human judges can tell if the “person” they are speaking to is real. Eugene Goostman is designed to simulate a 13-year-old boy.

There are exceptions, such as a supercomputer named after Admiral Grace Hopper, but they are less well known.

Gender bias in STEM fields, most acutely in computer science, is well understood. The work of remarkable women pioneers is frequently ignored, and “tech” has, over time, become coded as “male.” In the U.S., where math and science gender gaps are among the largest in the world, most people assume that technology is not interesting to girls and women, a stereotype that becomes a self-fulfilling prophecy. Yet many pioneers of early computing were women, and for decades women outpaced men in terms of growth in the field. A series of overlapping cultural and technological events, including the backlash against feminism, the hyper-gendering of toys, and the introduction of desktop computers in the mid-’80s, negatively affected women’s participation, and the number of women in computer science has since dropped precipitously. In the mid-’80s, women made up more than 35% of computer scientists; today that number is under 18%. They are paid less than their male peers and, because of hostility in the workplace, more than 50% of them leave the field within 10 years.

Many people dismiss issues like these, which are fundamentally about representation and its impact on self-image, ambition, and human potential, as inconsequential, but they are mistaken. Naming and designing products in these ways is both a symptom of bias and a cause of it, and those designing new technology should take steps to understand how social inequalities are related to technical ones.

Soraya L. Chemaly writes about gender, feminism, and culture for several online outlets, including Role Reboot, The Huffington Post, Fem2.0, RHReality Check, BitchFlicks, and Alternet. She is particularly interested in how systems of bias and oppression are transmitted to children through entertainment, media, and religious cultures. She holds a history degree from Georgetown University, where she founded the school’s first feminist undergraduate journal, and did postgraduate study at Radcliffe College. She is currently the Director of the Women’s Media Center Speech Project.
