Equality & AI: Promoting & Protecting Equality in the Public Sector

Graphic for podcast, which reads "Equality & AI: Promoting & Protecting Equality in the Public Sector. Guests include Katharine Weatherhead, EHRC, and David Morrison, NHS24", plus digital art of two people on a seesaw, representing 'equality'.

In this episode of Turing's Triple Helix, we discuss how equality, artificial intelligence and the public sector intersect.

Explore how people with protected characteristics may be impacted by AI systems, how public bodies are embedding equality and the barriers they face.

Guests

  • Katharine Weatherhead, Senior Associate - Scotland Compliance, Equality & Human Rights Commission

  • Davie Morrison, Participation & Equalities Manager, NHS 24

Listen

Links

  • Calum McDonald, SAIA

    Hello and welcome to Turing’s Triple Helix, the podcast channel of the Scottish AI Alliance.

    My name is Calum, Engagement Officer at the AI Alliance and today we’ll be diving into how public bodies are considering equality when using artificial intelligence.

    To explore this we are joined by two special guests today:

    Katharine Weatherhead is a Senior Associate for Scotland Compliance with Equality & Human Rights Commission

    And we are also joined by Davie Morrison, Participation & Equalities Manager at NHS 24.

    Welcome both, and thanks for joining us today to explore how equality, the public sector and AI intersect.

    To start us off, Katharine, Davie, how could AI technologies affect people with protected characteristics, vulnerable people and marginalised communities? Katharine –

    Katharine Weatherhead, EHRC

    Thanks Calum.

    I think a key point to make here is that AI can present both substantial benefits and risks for different groups.

    So vulnerability will vary according to the context, and what the AI system is. For instance, are we talking about algorithms to guide the allocation of social security benefits, or about facial recognition technology used for policing?

    But the Equality Act requires public bodies to really think about how their policies and practices affect groups with certain protected characteristics. Now, there are nine protected characteristics under the legislation, including race, disability, age and so on.

    Now, equality law is really clear about the need to eliminate discrimination linked to protected characteristics. In AI, discrimination against a particular group could result from bias in the data used to train an algorithm, for example. So we need to be alert to these risks of discrimination.

    On the other side of things, equality law also directs public bodies to consider how they can advance equality of opportunity for protected characteristic groups and foster good relations.

    I think this is where AI can really improve experiences for people and reduce inequality. It also helps to use an example, so for instance: a healthcare provider might use software to identify people who are at risk of missing their medical appointments, in order to then provide them with information in an alternative format such as Easy Read or another language. So the lens of equality law helps alert us to both the negative and positive effects of AI on particular groups in a specific context.

    Calum McDonald, SAIA

    Brilliant, thank you. And Davie, do you have any thoughts on this?

    Davie Morrison, NHS24

    The sheer fact that you’re identifying that there are marginalised groups in Scotland identifies that not everyone can access public services. And I think it’s really key, when public services are considering how to use AI, that they also take account of people who, through economic circumstance or other situations, are unable to access digital services. AI is going to be used in a digital way, so it’s really important that public services are accessible to as many people in Scotland as possible. And I think to achieve that there needs to be a realisation that people will access services in different ways, and therefore it’s really important that they can do that in a way which meets their needs and enables them to access the broad range of public services in Scotland.

    Calum McDonald, SAIA

    Brilliant, thank you very much. At the Scottish AI Alliance, our vision is that Scotland becomes a leader in the development and use of AI which is trustworthy, ethical and inclusive.

    Our approach is that embedding equality in the use of AI is key to that vision. Embedded equality means we can trust AI systems more. Embedded equality is a marker of an ethical approach, and it complements and enriches a push for a diverse and inclusive relationship with AI technologies.

    To that end, we have recently launched the Scottish AI Register, which you can access at ScottishAIRegister.com. Here public bodies can share information about AI systems they use, how they work, and what impact they have. While we are at a very early stage of the Register, we hope that promoting transparency and openness around AI algorithms in public service can help push the needle towards embedding equality.

    Katharine, what’s your perspective on how organisations can embed equality when they are considering whether to use AI?

    Katharine Weatherhead, EHRC

    Thanks Calum. I mean, as you’ve just said, that question of how to embed equality in AI is really important.

    Under the Public Sector Equality Duty, public authorities and organisations carrying out a public function are required by law to consider equality in everything that they do, and adopting a similar approach will help private organisations avoid discrimination and advance equality too.

    So, first, I’d say that people should start thinking about equality early. That thought process should start at the earliest possible stage to really give you the best chance of understanding the AI system’s potential equality impacts. And evidence is crucial here. So public bodies should be asking themselves if they have enough information to consider the range of equality impacts, and cast that net wide in terms of looking for sources of relevant information. There may be internal data or external data; there could be relevant local data or national data to look at. There may also be evidence gaps for particular groups, and public bodies should seek to consult and engage protected characteristic groups to better understand their experiences, their needs and any challenges in relation to the issue that the AI is focussed on. And we have consistently called for improved data collection in a variety of areas, not just in relation to AI, to help inform better decision-making.

    As a second point, I’d say: keep thinking about equality when the AI is being used. So, don’t stop considering equality once the AI system has been rolled out, because we can only know its actual impacts once it’s been introduced. So, public bodies should think about whether they have mechanisms in place to monitor the AI’s impact. That could involve internally analysing who’s using the tool or any different outcomes for people subject to the tool, or it could involve ensuring that there are accessible routes for people to share feedback or even complain about their experiences.

    It’s important to note that the duties under the Equality Act are ongoing, and without giving the duties sufficient consideration, organisations risk legal action for breaches and risk reputational damage, but they also miss the opportunity to improve their decision-making on a specific AI project. So, ultimately, in response to your question, I would say: get that equality thinking in early, and keep it going.

    Calum McDonald, SAIA

    Brilliant: start it early, and keep it going.

    So, Davie, with this in mind, can you tell me a bit about the area within which you work and your approach?

    Davie Morrison, NHS24

    One of the areas NHS24 is particularly interested in is – how can people access our services, and how can we reduce the health inequalities that people experience? I think a key point in this is about access. So how can we improve access?

    I think when you look across the public sector in Scotland, one of the questions we need to ask ourselves is – how accessible are public bodies in Scotland? You know? How easy is it for people to access services?

    We work on a ratio of possibly 20% of people in Scotland who would find it difficult to access services. So the design approach has to be: if we can improve access and design out inequality for that 20%, then the rest of the people would be able to access it as well. So it’s having that equality focus.

    When we talk about equality, it’s also about fairness. The way the public services are providing services, are they fair to the public? So, one of the approaches we have taken at NHS24 is we have become a sponsor in the Scottish Government’s CivTech programme – specifically CivTech Challenge 7.6.

    This is a programme that enables the Scottish Government and public bodies to work with companies to design solutions based on the evidence acquired from people with lived experience. I think a key thing here is that if you’re going to design a service, or you think you’ve got a solution, then what you actually need to do is engage with people. You need to find out what people’s access requirements are, you need to find out what people’s expectations are, and you need to find out what the barriers are that people experience. I’ve no doubt at all that AI is going to be a really positive thing in terms of improving access for people. But you have to involve people, and I think key to that is involving the third sector as well, and start by identifying: okay, so what’s the challenge? For our challenge, it was: how can we use artificial intelligence, data and new technologies to improve access for disabled people?

    And using the approach of engaging with people, understanding what the barriers are, and engaging with third-sector organisations, we’re now at a research and development phase of a product known as ‘Connecting You Now’, which is intended to be a concierge service that enables people to ask a question using AI and find them a solution and an answer. Some of the feedback that we’re getting from people who post information on websites is that they host so much information that people find it difficult to access the information itself. So how can we use artificial intelligence to help? For instance, if you’re wanting to ask where your local pharmacy is, or you want to identify some information about services available in your local area, how easily is that information found? That’s where artificial intelligence can be an enabler.

    I think the most important thing about how we apply AI is that it can’t further disable people; it needs to enable people to get access to information, and do so in a way which meets their individual needs. So through the CivTech Challenge 7.6, working with colleagues in the Scottish Government, working with third-sector organisations and working with individuals to learn their individual needs and aspirations, the Connecting You Now programme is intended to enable access. It’s intended to help people find information in an easy way, in a streamlined way, and in a way in which they are then able to take the next step, whether that’s attending a pharmacy or whatever it is.

    Calum McDonald, SAIA

    Great. Yeah, that sounds like a good example of what Katharine was talking about in the first question: where there are potential ways AI could impinge on equality, this sounds like a good way in which AI can enable equality. It sounds like a really interesting project, and I’m glad it’s happening where I live.

    Davie Morrison, NHS24

    Okay, so Calum, just to say that the focus of the CivTech Challenge 7.6 is on sensory loss. That’s just the starting point, clearly; we’re looking at people who are disabled, and sensory loss is a key area of improvement where, if people can access information, that will be of benefit to them. AI can be used in so many different ways. Where our stream may be looking at people who are disabled, or who find themselves disabled from accessing public services, so too that can also be the experience of minority ethnic people, particularly if English isn’t their first language or their chosen language. So again, using AI, what we would hope in the future is that it can be used in a positive way to enable people to gain greater access to services than they currently do.

    Calum McDonald, SAIA

    Great, thank you very much Davie.

    Yeah, it seems to me that increasing accessibility increases equality: a great feedback loop there. When you do things right it benefits everyone, not just people with protected characteristics. Equality and accessibility are beneficial to all of us.

    What would your top tip be for an organisation looking to get things right in this regard?

    Davie Morrison, NHS24

    My top tip, without a doubt, is to engage people early. Don’t come up with a solution; don’t assume you’ve already got the service figured out. As Katharine brought up earlier, an Equality Impact Assessment is a crucial step, as is a Data Protection Impact Assessment. It’s also about engaging with people, talking with people. Find out what people’s experiences are. Find out what people’s frustrations are when they’re accessing services.

    Also, there are circumstances where people are fearful, because they’re at home, they’re unable to access service at a time of need, and that means that people are frightened, and that has an impact on their health and their wellbeing.

    You know, so trying to find solutions to improve access can be a positive step, but to achieve that you need to listen to people. You need to understand the points that they’re trying to raise, the frustrations that they have, and by doing that, by working with them and involving them in the design and the development of a service or a solution, what you’ll get is take-up. Because there’s absolutely no point in designing something, developing something and introducing something if people are unaware of it, if people don’t feel they’ve been involved in it, if people don’t have ownership of it. So it’s really important that as you’re designing and developing, you’re also communicating what you’re doing with the community, so that the community gets an opportunity to contribute as well and to feel part of it. Because if they don’t, the solution won’t be used and therefore the whole thing will have been futile.

    Calum McDonald, SAIA

    Thank you very much for sharing.

    At the Scottish AI Alliance we try to implement a similar approach in the delivery of the AI Strategy in Scotland, and we are currently on the lookout for folk who would like to help feed into the AI Strategy. You can drop us a wee email at engage@ScottishAI.com if you’d like to be involved.

    Katharine, you’ve been doing a lot of research around this recently. Can you share any insights on the main barriers to good equality practice around AI which public bodies are facing just now, please?

    Katharine Weatherhead, EHRC

    Yes, very happy to.

    It’s been really interesting research, and we’ve come across quite a few barriers that public bodies are facing when they’re considering equality in AI, but in the interest of time I’ll stick to three, and I’ll cover them briefly.

    So one of the first challenges is that it can just be difficult to understand the technology. AI involves programming computers to sift through large volumes of data and learn to answer questions or deal with problems, similar to what Davie was talking about. But a key challenge is finding out where AI is actually being used, because public bodies don’t always know what counts as AI, or whether they’re using it; or they might know that they are using AI but not necessarily know the intricacy of a particular programme or how it really works. So getting at the granularity of an AI system’s equality impacts could depend on the public body asking the supplier for that information, and the supplier then actually sharing it.

    The public sector equality duty applies to AI systems that public bodies are already using, or that others may be developing or using on their behalf. It is therefore really important to embed equality thinking into the supplier relationship from the outset.

    So that’s the sort of, first challenge that we come across – just understanding that technology.

    But moving on, there’s also confusion about which frameworks to follow. We know that there are several legal and non-legal frameworks applicable in the AI sphere in Scotland. Davie’s mentioned a couple already, like Data Protection Impact Assessments, and there are a lot of good-practice ethical guidelines, but then there are obligatory frameworks like Equality Impact Assessments. There is a risk that public bodies undertaking AI projects get confused by the number of separate frameworks. If we take ethics as an example, it is really clear that ethics features quite heavily in conversations about AI, more so than equality. But there’s a risk then that the equality legal framework gets lost in these conversations about ethical good practice more generally and broader principles. So that’s why it’s really important for internal policies and procedures to really set out what is required when a public body commissions and uses an AI system.

    Finally, and it is related, there is also a perceived tension between data collection on the one hand and data protection on the other. We’ve found that public bodies may be a bit hesitant to collect data on the impact of AI on protected characteristic groups, due to a sense that data protection law requires the collection of as little personal data as possible. But data protection law doesn’t prevent public authorities from processing personal data for the purposes of the public sector equality duty, as long as it’s in line with the data protection principles. And actually, to have due regard to the duty, public authorities really have to understand how their policies and practices affect people with particular protected characteristics. So collecting that data can help build that understanding. Some equality impacts may even be visible in data via proxies, like postcodes for instance, so it’s important to be alert to that issue of proxies as well, within this data collection area.

    And I’ll maybe just cheekily take the opportunity here to direct listeners to our guidance on the public sector equality duty and data protection that’s available at our website www.equalityhumanrights.com

    Calum McDonald, SAIA

    Brilliant, thank you very much Katharine. Davie, do you have any reflections on this question at all?

    Davie Morrison, NHS24

    I think the use of data should be seen as a positive thing if it drives forward improvement. Organisations need to be responsible in how they use data, but using data appropriately can really drive forward improvements. As Katharine gave the example of postcodes, that in itself can influence how, within NHS24, we are targeting our engagement to try to promote access, as we have been doing recently in areas identified as having multiple deprivation. If we go in and engage with communities within areas of multiple deprivation, that can reduce health inequalities. So I think the use of data is key in terms of identifying ways in which you can improve access; again, it’s about being responsible with it.

    Calum McDonald, SAIA

    Brilliant, so that brings us to the end of our podcast today. I hope our listeners have enjoyed our exploration of how public bodies’ use of AI intersects with equality. I want to say thank you very much firstly to Davie Morrison from NHS24. Thank you Davie for joining us.

    Davie Morrison, NHS24

    Thank you Calum

    Calum McDonald, SAIA

    And Katharine Weatherhead from the Equality & Human Rights Commission. Thank you very much for joining us today Katharine.

    Katharine Weatherhead, EHRC

    Thank you Calum, and I’d also just encourage people to look up our guidance on artificial intelligence in public services, which is also available on our website www.equalityhumanrights.com

    Calum McDonald, SAIA

    Brilliant. I hope our listeners dive into that and find out all about it, and I’m looking forward to seeing how Connecting You Now evolves, Davie.

    Thanks very much and I hope you have a lovely day. See you later.

Other Channels

You can listen to Turing’s Triple Helix on a variety of platforms. Choose the one that suits you best!
