Our Response to the UK's AI Opportunities Action Plan
Image by: Elise Racine & The Bigger Picture / Better Images of AI / Web of Influence I / CC-BY 4.0
On the 13th of January 2025, the UK Government published its AI Opportunities Action Plan to much fanfare and critique. The paper outlines the government’s recommendations to enable the UK to “shape the AI revolution” and avoid falling behind the progress being made by the USA and China. You can read the plan in full on the UK Government website: https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan.
While Scotland has its own AI strategy and ecosystem, any decisions made by the UK Government will significantly impact Scotland and its approach to artificial intelligence. Rather than responding straight away, we wanted to take some time to digest the proposal and research its potential implications. We spoke to our governance group to get their takes on the plan from their own personal perspectives. They have given their comments as individuals who are passionate about the ethical, trustworthy, and inclusive development and deployment of AI in Scotland. There are excerpts below, and you can read their full statements at the bottom of this page.
Speaking as the Chair of the Scottish AI Alliance, Catriona Campbell said of the plan:
“The recent announcement by the Prime Minister on leveraging AI to drive economic growth and enhance public sector productivity marks a pivotal moment in solidifying the UK’s long-term vision for AI.
For Scotland, this initiative holds significant promise. Our nation is home to a wealth of AI talent and cutting-edge research, particularly from our renowned universities and startup ecosystem. By ensuring that this talent is retained and supported, Scotland can play a central role in the UK’s AI ambitions. The focus on democratizing access to AI infrastructure (and improving infrastructure in Scotland e.g. data centres) will enable the Scottish public sector, Enterprise and SMEs to compete on a level playing field, fostering innovation and economic growth across the region.
This collaborative approach will not only benefit Scotland but also contribute to the UK’s overall success in the AI domain. I hope this becomes a key focus in the government’s AI strategy, as it would significantly benefit both Scotland and the wider UK.”
You can read further comments from Catriona below. Do you have thoughts on the UK’s AI Opportunities Action Plan? Let us know on LinkedIn, BlueSky or Instagram.
Catriona Campbell - Chair of the Scottish AI Alliance / EY UK & Ireland, Lead Client Technology & Innovation
“For Scotland, this initiative holds significant promise. By ensuring that this talent is retained and supported, Scotland can play a central role in the UK’s AI ambitions”
Angus Allan - Senior Product Manager, CreateFuture
“The UK has a unique opportunity to show how a country can compete aggressively in AI while maintaining high ethical standards—success will require deeper collaboration between government, industry, and academia”
Michael Boniface - Chief Executive Officer, Kythera.AI
“Success, of course, will only be achieved through action, and there are other additional levers that need to be pulled to stimulate entrepreneurship and make the UK competitive in this global landscape”
Yunhyong Kim - Lecturer in the School of Humanities, University of Glasgow
“Robust mechanisms to preserve public trust must be emphasized further, particularly in how government handles sensitive data and individual rights”
Linzi Penman - Partner, DLA Piper
“To ensure that companies do not exploit rightsholder data for commercial gain without obtaining a licence, it will be important to develop new, commercially attractive high-quality curated datasets”
Full Statements
Catriona Campbell
The announcement this week by the Prime Minister on how AI can turbocharge growth and boost public sector productivity is a crucial step towards cementing the UK’s long-term vision for AI.
The focus on compute infrastructure and foundational data elements like the National Data Library is particularly encouraging, as the UK cannot fully reap AI's benefits without these enablers. It will be interesting to see how the government plans to harness this AI ‘horsepower’ and ensure SMEs have access, in order to level the playing field with larger firms. By democratising access to this crucial infrastructure, the UK can unlock AI's full economic value.
The ambition to be a maker, not just a taker, of AI is bold but achievable. The UK is already home to key industry players, a vibrant R&D environment, and a strong tech and cybersecurity base. To capitalise on this, the government must identify and nurture current tech innovators and talent, especially those graduating from our world-leading universities in Scotland, ensuring they remain in the UK.
Coordination between government, educators and the private sector will be critical to achieving this.
Angus Allan
The Government's AI Opportunities Action Plan shows impressive ambition, and it is great to see the breadth of recommendations, including explicitly calling out adoption as a key barrier to overcome. In the private sector, we are already seeing a slight disconnect between the transformative potential new foundation models bring and the ability of organisations to translate that into meaningful products and process improvements. With this in mind, the emphasis on AI Growth Zones, expanded compute capacity, and the creation of UK Sovereign AI demonstrates serious intent to compete globally. Having DSIT, the AI Safety Institute, and other key bodies working in concert gives us a credible path to implementation.
However, while we're positioning ourselves between the US's innovation-first approach and the EU's regulatory-heavy stance, we need to be more explicit about how we'll maintain this balance. Recommendation 24 on text and data mining reform highlights how we're already falling behind the EU in some areas. The plan is light on concrete measures for ethical AI development—this is actually an opportunity. As we accelerate toward more capable AI systems, including those that could pose profound risks to society, the UK could lead in showing how to drive innovation while upholding strong ethical standards, becoming a model for other nations and extending our soft power in AI governance.
Looking at recommendations 47-49 around private sector adoption, we see strong frameworks for implementation but limited guidance on responsible deployment. I'm particularly concerned that while we're funding AI scholars and education through recommendations 15-19, we're not adequately addressing workforce transition. That said, the foundations laid in this plan give us a platform to have these crucial conversations about ethical AI implementation, workforce development, and responsible innovation. The UK has a unique opportunity to show how a country can compete aggressively in AI while maintaining high ethical standards—success will require deeper collaboration between government, industry, and academia to ensure we're not just driving innovation, but shaping it responsibly for the benefit of society. Having an action plan is a great first step in being proactive, and I look forward to seeing the details translate into concrete action.
Michael Boniface
Like most in the business community, I greet the AI Opportunities Action Plan with optimism and see it as a big step in the right direction. The introduction of a National Data Library and the encouragement of public-private partnerships create a wealth of opportunity, especially if the government can succeed in reducing red tape and lengthy procurement times. This will significantly improve the chances of success, especially with the ‘Scan, Pilot, Scale’ approach. This method of approaching innovation will be both familiar and of particular appeal to the start-up community; combined with a commitment to two-way partnerships, it will open the door for fast innovation and deployment in areas where partnerships have historically been slow and difficult to secure.
Success, of course, will only be achieved through action, and there are other additional levers that need to be pulled to stimulate entrepreneurship and make the UK competitive in this global landscape. This announcement is a bold statement on the country's ambition; it sets a framework of opportunity for business, especially start-ups, to accelerate the adoption of AI into everyday life in a way that creates meaningful impact. We await, with great anticipation, the details on how these initiatives will be implemented.
Yunhyong Kim
The AI Opportunities Action Plan outlines an ambitious framework for leveraging AI to drive economic growth, improve public services, and secure the UK’s leadership in this critical field. From a social and cultural perspective, the plan holds immense promise, though several areas warrant deeper reflection and proactive measures.
The plan’s focus on diversity and inclusion in AI workforce development (Section 1.3) is a commendable effort to ensure equitable access to AI-driven opportunities. Initiatives to increase the representation of women and marginalized groups in AI and expand pathways into AI careers reflect a commitment to addressing systemic inequities. Similarly, the emphasis on the “Scan → Pilot → Scale” approach for public sector AI adoption (Section 2.2) not only supports innovation but also helps ensure AI’s tangible benefits reach all corners of society, particularly through improved public services like healthcare and education.
The commitment to unlocking UK-based infrastructure (Section 1.1) and data assets (Section 1.2) promises to accelerate innovation. However, it seems equally vital to recognise that the social implications of data security, privacy, and ethical usage are profound, going beyond performance and evaluation measures. The recent dispute between the BBC and Apple over the latter's AI-generated news alerts provides a cautionary tale, highlighting risks at multiple levels: the immediate reputational impact both on those providing AI tools before they are ready and on the news media companies that have been misrepresented, and the propagation of misinformation that shapes people's perception of current affairs and of the individuals and groups featured in the news. In the longer term, these inaccuracies can exacerbate political and individual conflicts. Robust mechanisms to preserve public trust must be emphasized further, particularly in how government handles sensitive data and individual rights.
The plan’s acknowledgment of AI’s potential to disrupt traditional labor markets is evident in its focus on lifelong skills development and reskilling programmes (Section 1.3). These efforts could be enhanced by addressing cultural shifts in workplaces and by introducing concrete support for communities disproportionately affected by AI. It feels vital that these programmes help fill skills and knowledge gaps in critical thinking, human rights and ethical evaluation, auditing of cultural implications, social aspects of climate change, and practice-based real-world experience. This could be achieved by drawing on the expertise of AI experts in the arts, humanities, and social sciences. Likewise, the government could enhance societal trust by embedding community dialogues and public consultations into the initiatives outlined. This would help ensure regulatory approaches to AI development prioritize ethical considerations and cultural sensitivities.
The AI Opportunities Action Plan is a noteworthy proposal toward UK leadership. By emphasizing community engagement, diversity and inclusion, and a broader definition of AI expertise, I feel the plan can be elevated to the next level, strengthening responsible and innovative AI development. To this end, the plan should include:
- Community engagement and participatory mechanisms in place for each stage of the plan: for example, decision making in relation to building infrastructure, establishing data libraries, allocation of resources, strengthening the UK as a natural home for talent, and regulation, safety and assurance. This could involve partnerships with a people's parliament or mock committees who can bring their lived experience to the table, perhaps in coordination with initiatives such as the Scottish AI Alliance.
- Diversity and inclusion as a principle throughout: not only for the longer-term change in the demographics of the available AI workforce but also to introduce mechanisms that help reflect this during decision-making processes. This could be done in coordination with organisations such as the Ada Lovelace Institute. It could be expanded further with the recognition that diversity and inclusion refer not only to demographic characteristics but also to broader dimensions of diversity (for example, background and experience).
- Re-envisioning of AI expertise to include a broader set of experts at the border of AI and the arts and humanities and/or social sciences, reflecting recent movements such as, but not limited to, RAI-UK and BRAID. Including experts in areas such as data management, archival practice, creative practice, and more as AI experts to lead the development and deployment of the plan would establish the UK as a leader in Responsible AI. It would further send a positive message, emphasising that the approach to evaluating the social and cultural implications of AI is as much an innovation as the AI product itself.
Linzi Penman
I'm aligned with the section proposing reform to remove the current uncertainty around intellectual property. In 2022, the Intellectual Property Office considered a broad copyright exception to allow commercial data mining. This wasn't progressed due to concerns about undermining the creative industries relying on copyright to protect works. The UK must strike a balance between ensuring the UK’s creative industries thrive and fostering innovation. As alluded to in the Action Plan, the UK could adopt a similar approach to the EU in allowing a broad copyright exception, subject to an ability for rightsholders to object (for example, by including the opt-out in the work's metadata and on the rightsholder's website). Where a rightsholder has effectively opted-out, a commercial licence would be required by the AI company in order to use the works to train its model. To ensure that companies do not exploit rightsholder data for commercial gain without obtaining a licence, it will be important to develop new, commercially attractive high-quality curated datasets at the scale required for LLM training; and require developers to maintain records, which can be accessed by rightsholders i.e. ‘transparency-by-design’, so that proper data management is integral from the beginning, thus giving rightsholders an ability to understand what an LLM has used or at least had access to. This could increase the ability to track whether products used in the public sector have been built upon the infringement of UK creators’ rights.
As a data, tech, and AI lawyer, I am particularly interested in Section 1.2, regarding the creation of the National Data Library. It mentions the importance of exploring the use of synthetic data generation techniques to construct privacy-preserving versions of sensitive data sets. I agree this is a fundamental requirement to rapidly facilitate innovation in the UK. But further guidance from the UK government is required on:
- training AI models using personal data, and whether AI models themselves process personal data. In rejecting the proposed Hamburg thesis, the EDPB has recently stated that AI models trained on personal data should be considered anonymous - that is, considered not to involve the processing of personal data - only if personal data cannot be extracted or regurgitated. It must be impossible, using all means reasonably likely to be used, to obtain personal data from the model, either through attacks which aim to extract the original training data from the model itself, or through interactions with the AI model (i.e. personal data provided in responses to prompts/queries). This is a high bar and could hinder innovation or prevent this task from being achieved compliantly and quickly. It would be helpful for the UK to give clarity on its approach and to consider the Hamburg thesis, which provides a practical, pro-innovation view;
- the incompatibility / secondary-purpose requirements in the UK GDPR / DPA 2018. Currently, the law provides that: (i) where the processing for a purpose other than that for which the personal data have been collected is not based on the data subject’s consent or law, the controller has to consider whether it is compatible with the purpose for which the personal data were initially collected; and (ii) personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. To facilitate the ambitious plans outlined in the UK AI Action Plan, the UK should give a basis in law for the reuse of data collected (e.g. initially to prevent fraud, to monitor IT systems, or to provide a health service) for the creation of a secondary data set that could be used to train an AI system, without the need for a compatibility assessment - removing this paper shield and focusing instead on appropriate controls, e.g. from a security perspective; and
- how the UK government defines academia in the context of ensuring academia's needs for high-quality data sets are met. Clearly, this would have benefits beyond the public sector and start-ups, e.g. in the Financial Services and Life Sciences sectors.
On a final note, there are references to enabling "safe AI innovation", but what UK companies (particularly those operating on a global scale) are struggling with is what benchmark to apply and how "safe" will be interpreted by different sectoral regulators and globally. Clarity on this at a UK level would enable quicker adoption, e.g. by reference to a harmonised, globally accepted standard such as ISO 42001.