Is the public sector ready for child-centred AI?

Guest blog from Morgan Briggs, Mhairi Aitken, and David Leslie of the Alan Turing Institute

Taking children into account

As artificial intelligence (AI) systems play an ever-growing part in children’s lives, it is vital that children’s views help shape the design, development, and deployment of the AI applications they interact with. In the public sector, where AI is increasingly used to inform the allocation of resources or to categorise potential users of services, there are important questions about how children should be included in decision-making processes: What are the opportunities and challenges of meaningfully engaging children, and how can policymakers ensure that AI technologies are used in ways that protect children’s rights?

UNICEF Policy Guidance

These are some of the questions we explored in the Ethics Theme of the Alan Turing Institute, the United Kingdom’s national institute for data science and AI, in recent research carried out in collaboration with UNICEF.

This project considered how effectively UNICEF’s draft Policy Guidance on AI for Children can guide public sector uses of AI. The guidance, published in 2020 and co-produced with the Government of Finland, opens with an introduction to what is meant by the term AI, followed by descriptions of the key opportunities and risks AI poses in the context of children’s rights. It also sets out nine requirements for child-centred AI, such as supporting children’s development and well-being and prioritising fairness and non-discrimination for children.

As part of this release, UNICEF recruited organisations across the world, including policy institutions, governments, and businesses, to pilot the guidance. Participating organisations were asked to apply the UNICEF guidance and openly share findings about its effectiveness in practice. Our team at the Alan Turing Institute was invited to take part because of our involvement in producing UNICEF’s policy guidance and our ongoing work on AI in the public sector.

Our own guidance, Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector, is a comprehensive document used widely across the UK. It helps organisations design, develop, and deploy AI systems in a reflective and principles-based manner that prioritises values such as human dignity, interpersonal solidarity, human well-being, and biospheric flourishing. We are currently expanding the guidance to give public sector employees more tools for putting it into practice, including a series of practice-based workbooks to accompany it. Through our partnership with UNICEF, one of these workbooks will focus on the ethical implications and potential impacts of using children’s data in public sector AI applications.

Empower, Include, and Prepare

To inform the writing of this workbook, we consulted public sector organisations across the UK on critical topics such as the challenges of implementing child-centred AI in their organisations and the application of existing policies such as the UNICEF guidance and GDPR. We prioritised three of UNICEF’s child-centred requirements during this project: to empower governments and businesses with knowledge of AI and children’s rights, to ensure inclusion of and for children, and to prepare children for present and future developments of AI.

Between June and August 2021, we conducted 14 semi-structured, hour-long interviews spanning organisations across a variety of sectors, including health, education, and regulation. The interviews covered topics including current uses of AI and the policies already being applied within interviewees’ organisations, the perceived impacts of AI technologies in the public sector, the implementation of policies such as GDPR, and how children and their families should be involved in the design, development, and deployment of AI technologies that use their data.

UNICEF has recently published a case study which documents our findings here. Some of the main takeaways from the public sector organisations that we interviewed were as follows:

  • Public sector organisations believe there are low rates of data literacy amongst the public

  • There is an overall lack of understanding and clarity surrounding the implementation of GDPR principles

  • There are many guidance documents being drafted on the topic of children’s rights and AI, and organisations are unsure which to use moving forward; they would like to see synergies formed between existing and upcoming guidance documents

  • There is a desire to make the UNICEF Policy Guidance on AI for Children more actionable, to include more specific recommendations by sector, and to ensure the guidance is delivered in an age-accessible manner

Involving children in decision-making

The findings from these interviews revealed public sector stakeholders’ commitment to protecting children’s rights and their enthusiasm for engaging children in discussions relating to AI, but they also revealed many challenges associated with doing so. It is clear that, to address these challenges, children and young people must be involved in informing decisions about how AI is used in the public sector, now and in the future.

So far, our research has only investigated one side of the story. We are now developing novel approaches to engage children, young people, and their families in discussions around AI in the public sector, so that we can hear the other side. In particular, we plan to engage children and their families to better understand how they would like to be involved in the design, development, and deployment of AI technologies that use their data.

We know that public dialogue about AI and its uses is crucial to underpin ethical and trustworthy practices and to build public trust in organisations using AI systems, so we are excited to be embarking on an ambitious project in 2022 to engage children with this important subject. We will run a series of deliberative engagement activities to understand children’s, young people’s, and families’ current perceptions of AI, how they would like AI to be used in the future, and how they would like to be involved in processes that inform the design, development, and deployment of new AI systems.