The EU’s Draft AI Regulations: A Thoughtful But Imperfect Start

“The EU’s proposed rules for AI may not be perfect, but they’re just the start of a more impactful discourse on algorithmic fairness and a safer relationship with artificial intelligence.”
Blog from Catriona Campbell, member of the Scottish AI Alliance Leadership Circle and CTIO of EY UK & Ireland

In April this year, the European Commission published its much-anticipated Draft AI Regulations. Although the proposed rules haven’t yet been adopted (a process that could take until 2026), they’re the first serious attempt anywhere to regulate artificial intelligence. I recently discussed the draft regulations with fellow members of the newly formed Scottish AI Alliance Leadership Circle. As it’s a complex topic, my colleague Callum Sinclair, Head of Technology & Commercial at law firm Burness Paull, gave us his view of the draft regulations - a great summary of some key points:

  • Take a risk-based approach, distinguishing between unacceptable (and thus banned), high, limited and minimal risks, with proportionate regulatory interventions in each case

  • Likely have a scope of application much broader than the EU, as every company that does business with EU citizens will be in scope

  • Rely on Member States for enforcement

  • Place technology providers under a variety of obligations, covering high-quality data sets, user information, technical documentation, human oversight and much more

  • Carry substantial fines for non-compliance, of up to €30 million or 6% of turnover for major breaches

When asked for my thoughts on what this means for business, three areas immediately spring to mind: parallels with GDPR legislation, internal controls and talent. GDPR started as draft legislation and is now a global standard - I think AI regulation could follow the same model, which would be fantastic. Watch this space! Second, all companies need effective internal controls, and these become critical as companies scale. With the added complexities that AI will bring, companies will need time to implement the changes required to adhere to the legislation and manage their internal controls.

The final point I made during our session was that, for organisations to properly adhere to the rules, they’ll need an incredibly resilient workforce that understands this stuff. Business leaders will need to look harder to find and retain the best talent possible, which is a huge task. They’ll need plenty of time to make the change and ensure they can deliver on the promise, and for this reason I actually welcome a longer adoption period. It’s also great news for those looking to carve out a career in the field, as more talent will be needed across the sector - in start-ups and corporates alike.

It’s also great to hear leading academics, such as Michael Rovatsos, Professor of AI at the University of Edinburgh (amongst other things!), talk about what the regulations mean to them - apart from research being exempt. Michael made a fascinating observation: companies outside the traditional technology sector will now be responsible for validating their own products that contain AI elements from suppliers. This will create a need for much closer working between suppliers, end users and government - which could be an opportunity to showcase great community relationships and the democratisation of decision-making.

Feedback on the regulations from commentators has been mixed. The main sticking point for most critics is the sweeping exemptions for law enforcement using remote biometric surveillance to, say, prevent terrorist attacks or search for missing people, which they worry will lead to widespread misuse of such systems.

Another serious issue for many is that the regulations don’t guarantee support for the people they’re designed to protect. For instance, there’s no provision making it mandatory to inform people when their personal details have been processed by an algorithm, such as during the recruitment process. For this reason, one worry is that the rules will fall short of meaningfully preventing AI systems from discriminating against typically marginalised groups.

Despite some creases in an overall great plan, which I have faith the EU will iron out in the years before Member States adopt the rules, I think it’s about time we saw a major stride forward like this. The Draft AI Regulations represent the dawn of a new age, one in which there’s a more impactful discourse around artificial intelligence. In a world where there’s a lot of talk about taking action (or more often than not, a lot of talk about talking about taking action), it’s important that we do, at some point, graduate from that talk and actually take said action.  

But not everyone agrees this is the right move. In fact, some are dead set against it. It’s no major shock that the loudest protests come from the technology providers, big and small, whose AI activities the rules will limit to varying degrees. For example, perhaps two of the biggest innovations are the requirement for assessments to ensure high-risk AI systems conform to the rules before they’re sold or deployed, and the requirement for a monitoring system to spot and resolve issues once products are on the market.

Silicon Valley, at the forefront of AI development to date, has long been frank about its thoughts on the regulation of AI: lawmakers shouldn’t stand in the way of technological progress. I think it’s safe to say that, if regulators succumbed to such rhetoric, we’d end up in a world where we implement AI systems without, for example, rigorous processes to stop algorithmic bias. As we know from the recent UK Post Office scandal, blindly trusting the system can lead to massive problems and even more public distrust.

And companies can’t complain - not really. In the grand scheme of things, the Draft AI Regulations give the key players in the industry very little to keep them awake at night, even though unregulated activities give us proponents of AI regulation many a sleepless night! The rules do, however, offer the public some peace of mind - enough to sleep well until they’re tweaked to perfection.
