Gender Bias in Artificial Intelligence (AI) Virtual Assistants

9/16/19 • From The Oxford Student by Allan Haaugh and The Telegraph by Natasha Bernal

This pair of articles from The Oxford Student and The Telegraph focuses on the need for a genderless voice in Artificial Intelligence (AI) virtual assistants as a way to end gender bias in those products and their exclusion of non-binary users.


Professor Gina Neff, a Senior Research Fellow at the Oxford Internet Institute and Associate Professor in the Department of Sociology at Oxford University, believes “there is no reason why we need to reproduce old-fashioned human notions in new technologies.” While genderless voices “may seem frivolous to some, [they] can reach people who might have otherwise felt erased from advances in technology.”

Research done by Vice Media’s Virtue creative agency identified 145 to 175 hertz as a gender-neutral frequency range.
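To make that figure concrete, the short sketch below estimates the mean fundamental frequency (F0) of a voice recording and checks whether it falls inside the 145 to 175 Hz band. The file name and the use of librosa's pYIN pitch tracker are illustrative assumptions, not the method Virtue actually used.

```python
# Illustrative check: does a recording's average pitch sit in the
# 145-175 Hz band cited above as gender neutral?
import librosa
import numpy as np

GENDER_NEUTRAL_BAND = (145.0, 175.0)  # Hz, per the Virtue research cited above

def mean_f0(path: str) -> float:
    """Estimate the mean fundamental frequency of the voiced frames in a file."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),  # ~65 Hz lower search bound
        fmax=librosa.note_to_hz("C7"),  # ~2093 Hz upper search bound
        sr=sr,
    )
    return float(np.nanmean(f0[voiced_flag]))  # ignore unvoiced (NaN) frames

if __name__ == "__main__":
    f0 = mean_f0("voice_sample.wav")  # hypothetical file name
    lo, hi = GENDER_NEUTRAL_BAND
    print(f"Mean F0: {f0:.1f} Hz; within neutral band: {lo <= f0 <= hi}")
```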

Another report recently published by the United Nations, “I’d Blush if I Could”, indicates that AI virtual assistants reflect, reinforce and spread gender bias, and it explores the consequences of the spread of smart speakers and voice assistants, a market expected to grow at least 35% annually until 2023.
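As a rough sense of scale (an illustration, not a figure from the report), a 35% compound annual growth rate more than triples a market over four years:

```python
# Back-of-the-envelope illustration of 35% compound annual growth,
# with the 2019 level normalized to 1.0 (a placeholder, not a reported figure).
rate = 0.35
size = 1.0
for year in range(2019, 2024):
    print(f"{year}: {size:.2f}x the 2019 level")
    size *= 1 + rate  # apply one year of growth
```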

Neff comments that the report “raises important questions for how people of all genders will fare within this new future and what we might do today to address inequalities tomorrow.” A primary concern is the passive nature of responses the voice assistants give to verbal sexual harassment, which can contribute to and normalize such treatment of women. The report also seeks to address how and why the majority of voice assistants have been developed as female.

Designing voice assistants for gender neutrality means “taking on board expertise in anthropology, psychology, the humanities and the social sciences to design technologies that fit the much richer realities of human interaction,” says Neff. “Researchers like me have been calling on companies to do social impact assessments of their technologies to understand the full range of their potential social, cultural, political and ethical effects on society. Technology companies should design better technologies and commit to fixing products and services that have a negative impact on people,” she adds.


Read the Original Stories:

Are artificial intelligence voice assistants reinforcing gender bias?

World's first genderless voice created to stop 'exclusion' of non-binary users by assistants like Alexa
