Designing Conversational Assistants to Reduce Gender Bias

Recently, biased machine learning has received increased attention. This project will address a different type of bias, one which is not learnt from data but encoded during the design process. We illustrate this problem with the example of Conversational Assistants, such as Amazon's Alexa, Apple's Siri, Microsoft's Cortana, or Google's Assistant, which are predominantly modelled as young, submissive women. According to UNESCO, this bears the risk of reinforcing gender stereotypes. In this project, we will explore this claim via psychological studies on how conversational gendering (expressed through voice, content and style) influences human behaviour in both online and offline interactions. Based on the insights gained, we will establish a principled framework for designing and developing alternative conversational personas which are less likely to perpetuate bias. A persona can be viewed as a composite of elements of identity (background facts or a user profile), language behaviour, and interaction style. This framework will include state-of-the-art, data-efficient NLP deep learning tools for generating dialogue responses which are consistent with a given persona. The persona parameters can be specified by non-expert users in order to facilitate more inclusive design, as well as to enable a wider critical discussion.
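
To make the notion of a persona as a composite of identity, language behaviour and interaction style more concrete, the sketch below shows one possible way such parameters could be represented and flattened into a conditioning input for a response-generation model. This is a minimal illustration only; the class, field names and example values are assumptions for exposition, not the project's actual schema or implementation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical persona specification: field names are illustrative assumptions,
# not the project's actual design.
@dataclass
class Persona:
    identity_facts: List[str] = field(default_factory=list)  # background facts / user profile
    language_behaviour: str = "neutral register"             # e.g. formality, hedging, vocabulary
    interaction_style: str = "assertive and informative"     # e.g. how requests and refusals are phrased


def persona_conditioning_text(persona: Persona) -> str:
    """Flatten a persona into a conditioning prefix for a neural response generator."""
    facts = " ".join(persona.identity_facts)
    return (
        f"Persona facts: {facts} "
        f"Language behaviour: {persona.language_behaviour}. "
        f"Interaction style: {persona.interaction_style}."
    )


# Example: a gender-neutral, non-submissive assistant persona.
assistant = Persona(
    identity_facts=["I am a voice assistant.", "I do not have a gender."],
    language_behaviour="plain, direct sentences without apologetic hedging",
    interaction_style="declines inappropriate requests firmly and politely",
)

print(persona_conditioning_text(assistant))
# In a persona-conditioned dialogue system, a string like this would typically be
# prepended to the dialogue history before it is passed to the generation model.
```

A structured specification like this is one way non-expert users could set persona parameters through a simple form or configuration file, without touching the underlying model.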

Research areas
Data Education in Schools
Research team

Professor Judy Robertson (PI)

Dr Valentina Andries, Research Assistant

Key contact
Professor Judy Robertson
Funding

EPSRC £188,353

Dates
-