Human Centred AI and Its Impact on the LGBTQIA+ Community
Article from Dawn Hunter, Project Manager at the Scottish AI Alliance
The speakers
For our inaugural Pride event, the Scottish AI Alliance was pleased to host a thought-provoking discussion on human centred AI and its impact on the LGBTQIA+ community. The event took place on the 28th of June and featured two fabulous speakers, with the conversation moderated by me.
Our first speaker of the day was writer and researcher from the University of Washington, Os Keyes. Os’ research focuses on human centred AI design, particularly in the areas of gender and healthcare. Os talked about how artificial intelligence design to date has either ignored or been actively detrimental to the queer community, particularly where it intersects with gender. Turning to the phenomenon of gendered voice assistants, Os spoke about how the market preference for feminine voices ultimately came from a place of misogyny, before taking us through some examples of attempts to create a “non-binary” assistant voice. Together, Os and the audience questioned whether a non-binary voice assistant was possible, desirable or even necessary.
Our second speaker of the day was Fiona McAra, Service Design Lead for the National Care Service and Head of Practice for Service Design for the Scottish Government. Fiona spoke about the difficulties of designing human centred services and of integrating truly human centred tech into large scale public services. She discussed the gap between blue-sky thinking in tech and the realities of delivering services at scale with tight time and financial resources. She then moved on to ways in which large scale public services can support the queer community, for example by removing honorifics from applications.
The event was over all too quickly, and many excellent questions from the audience were left by the wayside. So, in order to do those questions justice, we followed up with Os and Fiona to explore some of the other areas of interest to our audience.
Avoiding bias
One of the key areas of interest for further exploration was the ever-present subject of bias in AI, and what the best way to develop an unbiased AI tool would be, if, in fact, such a thing is even possible.
Os kindly expounded on this, although we noted that it is a difficult area in which to avoid becoming philosophical! “I don't think it's possible to develop an unbiased tool”, Os wrote in their email, “nor that it's (always) something to aspire to - but on the frequent occasions where it is, the trick is to treat reduction in biases as an ongoing activity, not a one-off. You need to approach it with the understanding that there will be problems that you've missed; that even fixing these problems might create new ones; that there is always contingency built in. In that respect, it's nothing different from much software engineering: you know there's a non-zero chance your bugfix introduced another, different bug. The task is to build processes that let you spot and address them early, and infrastructure that means you find them out before you wreck all your users' days in production.”
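To make that analogy a little more concrete, here is a loose, hypothetical sketch (our own illustration, not something from Os’ talk) of one narrow kind of check that treats bias like a regression test: a small pre-deployment check that fails a build if a model's positive-prediction rates drift too far apart across groups. The function names, the groups and the 0.1 threshold are all illustrative assumptions rather than a prescribed method.

```python
# A hypothetical pre-deployment check, in the spirit of treating bias reduction
# like regression testing. Names, data and thresholds are illustrative only.

def selection_rates(predictions, groups):
    """Positive-prediction rate per group, e.g. {'A': 0.5, 'B': 0.5}."""
    rates = {}
    for group in set(groups):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(preds) / len(preds)
    return rates

def check_disparity(predictions, groups, max_gap=0.1):
    """Fail the build if the gap in selection rates between groups is too large."""
    rates = selection_rates(predictions, groups)
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        raise AssertionError(f"Selection-rate gap {gap:.2f} exceeds {max_gap}: {rates}")
    return rates

# Run as part of a CI pipeline, like any other regression test, so that problems
# surface before they reach users in production.
predictions = [1, 0, 1, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(check_disparity(predictions, groups))
```

A check like this will never catch every problem, which is rather Os’ point: it is one test in an ongoing process, not a certificate of fairness.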
Treating bias reduction as a process similar to bug identification in software development, or as the kind of continuous iteration and improvement you might find in an Agile project management or development environment, certainly makes it feel like less of an insurmountable obstacle. We wanted to expand on Os’ thoughts, though: in what circumstances might you actually want a biased tool?
In what circumstances might we want a biased tool?
Again, we attempted to avoid becoming armchair philosophers in exploring this question. Os explained that a lot of what we think of as bias is really a mismatch between expectation and design. “A heart attack detection algorithm that focuses on pain in the left arm isn't inherently biased - it's perfectly accurate... if it's applied to people assigned male at birth,” Os explained. “The bias comes in when we try to treat that population as "universal". In other words, pretty much any tool that's focused on a particular population, problem or assumption becomes biased when expanded outside it, but sometimes that's the fault of broader infrastructures (for conceptualising "cis men" as representative of people) as much as it is the tool proper.”
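As a toy illustration of that point (our own sketch, with made-up numbers, and nothing clinical should be read into it), the hypothetical example below builds a “detector” around one symptom presentation. It scores very well on the population it was designed around and noticeably worse on a population where the condition presents differently: the rule is not “wrong”, it just is not universal.

```python
# A toy, hypothetical illustration of population mismatch. All names and
# probabilities are invented for the example; this is not from the talk.

import random
random.seed(0)

def make_patient(group):
    """Simulate a patient; the condition presents differently per group."""
    has_condition = random.random() < 0.5
    if group == "A":   # the population the rule was designed around
        symptom = has_condition                          # condition -> textbook symptom
    else:              # group "B": condition often presents without that symptom
        symptom = has_condition and random.random() < 0.3
    return symptom, has_condition

def detector(symptom):
    """The tool: flag the condition only when the textbook symptom appears."""
    return symptom

for group in ("A", "B"):
    patients = [make_patient(group) for _ in range(10_000)]
    correct = sum(detector(symptom) == truth for symptom, truth in patients)
    print(group, correct / len(patients))

# Typical output: accuracy close to 1.0 for group A and around 0.65 for group B -
# the same tool, applied outside the population it assumed, becomes "biased".
```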
This led us on to another key area of questioning posed by the audience: how best to build a cycle of incremental, continuous improvement into more inclusive AI system development, and into the implementation of technologies such as AI within large scale services? And, beyond that, how can we ensure the experiences and needs of queer people are built into that cycle?
Design principles
Before we look at a cycle of continuous improvement, we need to look at the service design principles of building the right thing, and then building the thing right. As mentioned during the course of the event, there is a lot of focus in the tech space on building the “thing” (and if you’re lucky it’s the right thing) without the follow-up to make sure that the thing is built right. Ensuring participation from the broadest range of people, co-creating where possible, centring the lived experiences of users and challenging the assumptions of yourself and others are all ways in which human centred AI design can go some way towards building the experiences and needs of queer people into its systems.
From this basis, we can then better discuss continuous improvement. Continuous improvement in AI can follow the same cycle as in non-AI systems: gathering user feedback, exploring and improving the system based on that feedback, testing against new data, deploying the new model, and then going back round to feedback again. However, in order for our continuous improvement cycle to be inclusive and effective, we need to seek feedback from as wide a group of our key users as possible, as well as from our employees, developers and other stakeholders. This has to be supported by an atmosphere of psychological safety, where criticisms can be raised without reproach and where mistakes and failures are not just forgiven but encouraged in the pursuit of improving outcomes.
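As a very rough, hypothetical sketch of that feedback, improve, test, deploy loop, the shape of the cycle might look something like the Python below. The stub functions are placeholders standing in for whatever a real service would use to gather feedback, retrain, evaluate and release; none of this describes an actual system.

```python
# A minimal, hypothetical sketch of the improvement cycle described above.
# The stubs are placeholders, not a real service's pipeline.

def collect_feedback(model):
    return ["example feedback"]      # gather from users, staff and other stakeholders

def retrain(model, feedback):
    return model + 1                 # explore and improve the system based on feedback

def evaluate(candidate, feedback):
    return True                      # test against new data, including inclusion checks

def deploy(candidate):
    return candidate                 # release the new model

def improvement_cycle(model, rounds=3):
    for _ in range(rounds):
        feedback = collect_feedback(model)
        candidate = retrain(model, feedback)
        if evaluate(candidate, feedback):
            model = deploy(candidate)   # then back round to feedback again
    return model

improvement_cycle(model=0)
```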
Continuous improvement
Fiona had much more direct advice. “In short – money and good practice. Real continuous improvement takes people and time. Human centred continuous improvement, particularly focusing on easily ignored groups, takes more people and more time – the aim is to meet people where they are and make participation meaningful. The implementation of this needs senior sponsorship (or sometimes advocacy) and for people to put money behind their ideals. In practice, that often means that we need people involved in the set-up of projects who have experience in taking a human centred approach, so that they can factor in these costs to initial budget conversations.
“Taking a human centred approach should always consider the impact of any project/product/service on lesser heard groups at the beginning of the process – these should then influence the definition of key user groups, carrying through all stages of research and design (both initial and continuous improvement). It’s not about artificially including queer people and other minorities, it’s about ensuring that they are appropriately considered at all steps in the process.”
Grappling with such weighty topics could barely be contained within a short online event and a follow-up blog, but our speakers did tremendously. We want to take this opportunity to thank Os Keyes and Fiona McAra for contributing their time and their insights, and to thank the audience for attending and being so engaged.