Designing Conversational Assistants to Reduce Gender Bias

Recently, biased machine learning has received increased attention. This project will address a different type of bias, one which is not learnt from data but encoded during the design process. We illustrate this problem with the example of Conversational Assistants, such as Amazon's Alexa, Apple's Siri, Microsoft's Cortana, or Google's Assistant, which are predominantly modelled as young, submissive women. According to UNESCO, this bears the risk of reinforcing gender stereotypes. In this project, we will explore this claim via psychological studies on how conversational gendering (expressed through voice, content and style) influences human behaviour in both online and offline interactions. Based on the insights gained, we will establish a principled framework for designing and developing alternative conversational personas which are less likely to perpetuate bias. A persona can be viewed as a composite of elements of identity (background facts or a user profile), language behaviour, and interaction style. This framework will include state-of-the-art, data-efficient deep learning NLP tools for generating dialogue responses which are consistent with a given persona. The persona parameters can be specified by non-expert users in order to facilitate more inclusive design, as well as to enable a wider critical discussion.
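As an illustrative sketch only (this code is not part of the project, and all class, field and function names are hypothetical), a persona of this kind could be represented as a small structured object combining the three components named above, flattened into a conditioning prefix of the sort used by persona-conditioned dialogue generators:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Persona:
        # Identity: background facts or a user profile.
        identity_facts: List[str] = field(default_factory=list)
        # Language behaviour: register and phrasing choices.
        language_behaviour: str = "neutral register; no gendered self-reference"
        # Interaction style: how the persona manages the conversation.
        interaction_style: str = "cooperative; sets boundaries to abusive input"

        def conditioning_prefix(self) -> str:
            """Flatten the persona into a text prefix that a persona-conditioned
            dialogue model (e.g. PersonaChat-style) could prepend to the dialogue context."""
            facts = " ".join(self.identity_facts)
            return (f"Persona facts: {facts} "
                    f"Language behaviour: {self.language_behaviour}. "
                    f"Interaction style: {self.interaction_style}.")

    # A non-expert user specifies the persona without touching the model.
    persona = Persona(identity_facts=["I am a voice assistant.", "I have no gender."])
    print(persona.conditioning_prefix())

In a scheme like this the generation model stays fixed; changing the persona only changes the conditioning text, which is one plausible way non-expert users could specify persona parameters.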
Related news
Project update: Children's understanding of smart speakers
Could talking to a smart speaker as if it is human be confusing for children? What consequences might this have for their interactions with technology and their attention to privacy?