‘From the Lab to the Market’ – Will the EU’s proposed AI regulation set a new ‘global standard’?

[Photo: glasses in front of a computer screen, by Kevin Ku]

The newly published Artificial Intelligence (AI) regulation proposed by the EU promises to set ‘new global norms’ for the provision and application of AI and machine learning systems. But can European regulators really hope to both contain and compete with the innovators in China and the US?

European AI Regulation – Background

On 21 April 2021, the European Commission released its proposed AI regulation. The proposal sets out a nuanced regulatory position that seeks to mitigate and, where possible, eliminate potential risks by requiring providers and users of ‘high-risk’ AI systems to comply with a defined set of requirements.

A total ban is included on systems that cause or are likely to cause ‘physical or psychological’ harm through the use of ‘subliminal techniques’, as are prohibitions on the use of ‘real-time’ remote biometric identification systems in public places. Equally, the proposal classifies as ‘high-risk’ those systems that pose particular risks to established fundamental rights, such as systems used for remote biometric identification, educational or employment purposes, determining eligibility for public benefits, and credit scoring.

Providers of high-risk AI systems will need to produce technical documentation demonstrating that their systems conform to these requirements. Conformity assessments will also need to be repeated as and when modifications to a system become substantial, including as a result of its continued learning.

In many ways, the proposal builds upon the foundations laid by the General Data Protection Regulation (GDPR) back in 2018. Its focus on protecting individual rights and freedoms, transparency and the provision of information, human oversight and maintenance, and security and accuracy echoes the fundamental principles of the GDPR. The proposed creation of a European Artificial Intelligence Board, with regulatory powers to rule on interpretation and implementation within member states, also mirrors the European Data Protection Board (EDPB). Finally, the proposal provides for fines of up to €30 million or 6% of the offender’s global revenue (whichever is higher), a step up from those outlined in the GDPR (€20 million or 4% of global turnover).
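For a rough sense of how those maximum fines compare in practice, here is a minimal sketch in Python that simply takes the higher of the fixed cap and the percentage of global annual turnover under each regime. The max_fine helper and the €2bn turnover figure are purely illustrative assumptions, not part of either text.

```python
# A minimal, illustrative sketch: the caps for the proposed AI regulation
# (EUR 30m or 6% of global turnover) and the GDPR (EUR 20m or 4%) come from
# the texts discussed above; the turnover value is a made-up assumption.

def max_fine(global_turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Return the higher of the fixed cap and the percentage of global turnover."""
    return max(fixed_cap_eur, pct_of_turnover * global_turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2bn annual global turnover

ai_regulation_cap = max_fine(turnover, 30_000_000, 0.06)  # EUR 120m
gdpr_cap = max_fine(turnover, 20_000_000, 0.04)           # EUR 80m

print(f"Proposed AI regulation maximum fine: EUR {ai_regulation_cap:,.0f}")
print(f"GDPR maximum fine: EUR {gdpr_cap:,.0f}")
```

For any company with more than €500 million in turnover, the percentage-based cap is the binding one under both regimes.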

‘From Lab to Market’ – Innovation and Regulation

The fact that the European Union is leading the charge on AI regulation is hardly surprising. Brussels has consistently set the standard for innovative regulation of technological advancements: since its implementation, the GDPR has effectively become the global standard for data privacy legislation. In its accompanying press release, the Commission asserted that it hopes that, once passed into law, its AI regulation will set a similar ‘global norm’, turning Europe into the ‘global hub for trustworthy AI’.
But regulation alone will not be enough to engender this.

Indeed, Europe is in danger of falling behind its competitors on the global stage in the production and output of AI innovation. Between 2016 and 2019, China’s output of AI-related research increased by just over 120%, whereas output in the US increased by almost 70%. In 2019 alone, more AI-related research papers were published in India than in Europe. In 2020, Daniel Araya, a policy analyst at the Centre for International Governance Innovation, argued that, given Europe’s stringent data regulations, it is ‘unlikely that it will produce very sophisticated AI as a consequence’.

So how does a historically impressive regulator seek to re-establish its innovative credentials?

The Commission hopes that, by establishing a coordinated plan amongst member states and promising substantial investment to create the conditions for AI development, Europe can foster excellence ‘from the lab to the market’. This includes ensuring that the regulatory framework applies throughout the development phase: each high-risk system must meet the requirements before it can be placed on the market or put into service.

By re-establishing its influence, the EU anticipates setting a global standard for AI development and integration.

What could it mean?

The Commission’s proposed legislation will remain just that, a proposal, until it is officially codified into law by the EU. As such, we are likely to see a period of intense lobbying by private industry and public bodies in the coming years. Whilst the regulation itself is likely to have a limited effect on the day-to-day workings of Big Tech and Silicon Valley (it is far less forthcoming on algorithmic fairness and on informing data subjects), the broad scope and jurisdiction of the proposed legislation will likely have seismic effects upon the international governance landscape.

The ambiguity and flexibility of the proposal have also come under fire, particularly from human rights groups seeking a more codified commitment to offsetting potentially discriminatory practices by AI. However, this flexibility can be viewed as a positive response to the rapidly changing AI and machine learning landscape: as the technology the proposal hopes to regulate develops, its malleability allows regulatory bodies to react and adapt alongside it.

The regulation’s jurisdiction covers providers of AI systems in the EU irrespective of where the provider is located, as well as users of AI systems located within the EU, and providers and users located outside the EU ‘where the output produced by the system is used in the Union’. This is reflected in Executive Vice-President Margrethe Vestager’s claim that the legislation will be ‘future-proof and innovation-friendly, [its] rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake’. Almost any interaction with European data subjects will therefore necessitate compliance.

By positioning its regulation as a defence of European values and contrasting this with the less scrupulous approaches of other AI developers (such as those in China), the Commission may also be able to create transatlantic partnerships and opportunities for cooperation. Indeed, following the proposal’s publication, the White House’s National Security Adviser, Jake Sullivan, welcomed it and noted that it could form the basis for future cooperation.

Whilst the proposed AI regulation is another example of EU leadership in technology governance, its effects upon the global landscape are unlikely to be measurable in the immediate future. In the face of rapid non-European development and a wider global turn towards regulation, the Commission should seek to work in partnership with like-minded parties, such as the US and UK. If their respective approaches can be integrated, the AI regulation might indeed become the GDPR of machine learning.

About Us: Tacita are GDPR compliance experts who help clients achieve and maintain GDPR compliance. Get in touch to explore our range of GDPR services, including the Tacita GDPR Audit, the GDPR Consultant Service and the GDPR Toolkit.
