‘From the lab to the Market’ - Will the EU’s proposed AI regulation set a new ‘global standard’?

Photo by Kevin Ku

The newly published Artificial Intelligence (AI) regulation proposed by the EU promises to set ‘new global norms’ for the provision and application of AI and machine learning systems. But can European regulators really hope to contain and compete with the innovators in China and the US?

European AI regulation – A Background

On 21 April 2021, the European Commission released its proposed AI regulation. The proposal sets out a nuanced regulatory position that seeks to mitigate and, where possible, eliminate potential risks by requiring providers and users of ‘high-risk’ AI systems to comply with a set of requirements.

A total ban on the use of systems that cause or are likely to cause ‘physical or psychological’ harm through the use of ‘subliminal techniques’ is included, as are prohibitions against the use of ‘real-time’ remote biometric identification systems in public places. Equally, the proposal classifies as ‘high-risk’ those systems that pose particular risks to established fundamental rights, such as systems used for remote biometric identification, educational or employment purposes, eligibility for public benefits, and credit scoring.

Providers of high-risk AI systems will need to produce technical documentation demonstrating that their system conforms to these requirements. A fresh conformity assessment will also be required whenever a system is substantially modified, including through its own continued learning.

In many ways, the proposal builds upon the foundations set by the General Data Protection Regulation (GDPR) back in 2018. Its focus on protecting individual rights and freedoms, transparency and provision of information, human oversight and maintenance, and security and accuracy, all echo the fundamental principles of the GDPR. The proposed creation of a European Artificial Intelligence Board with the regulatory powers to rule on interpretations and implementation within member states can also be seen as similar to the European Data Protection Board (EDPB). Finally, potential fines of €30 million or 6% of the offender's global revenue (whichever is higher) have also been proposed – a slight increase from those outlined in the GDPR (€20 million or 4% of turnover).
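The two fine caps can be compared with a simple calculation. The sketch below is illustrative only: the euro figures and percentages come from the proposal and the GDPR, but the function names are my own.

```python
def max_ai_act_fine(global_revenue_eur: float) -> float:
    # Proposed AI regulation cap: EUR 30 million or 6% of the
    # offender's worldwide annual revenue, whichever is higher.
    return max(30_000_000, global_revenue_eur * 6 / 100)


def max_gdpr_fine(global_turnover_eur: float) -> float:
    # GDPR cap, for comparison: EUR 20 million or 4% of turnover.
    return max(20_000_000, global_turnover_eur * 4 / 100)
```

For a company with EUR 1 billion in global revenue, the percentage-based limb dominates; for smaller firms, the fixed floor of EUR 30 million applies.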

‘From Lab to Market’ - Innovation and Regulation

The fact that the European Union is leading the charge on AI regulation is hardly surprising. Brussels has consistently set the standard for innovative regulation of technological advancements; since its implementation, the GDPR has effectively become the global standard for data privacy legislation. In its accompanying press release, the Commission asserted that it hopes that (once passed into law) its AI regulation will set a similar ‘global norm’, turning Europe into the ‘global hub for trustworthy AI’.
But regulation alone will not be enough to engender this.

Indeed, Europe is in danger of falling behind its competitors on the global stage in the production and output of AI innovation. Between 2016 and 2019, China’s output of AI-related research increased by just over 120%, while output in the US increased by almost 70%. In 2019 alone, more AI-related research papers were published in India than in Europe. In 2020, Daniel Araya, a policy analyst for the Centre for International Governance Innovation, argued that because of Europe’s stringent data regulations, it is ‘unlikely that it will produce very sophisticated AI as a consequence’.

So how does a historically impressive regulator seek to re-establish its innovative credentials?

The Commission hopes that by establishing a coordinated plan amongst member states and by promising substantial investment to enable the conditions for AI development, Europe can foster excellence ‘from the lab to the market’. This includes ensuring that the regulatory framework applies throughout the development phase: each high-risk system must meet the requirements before it can be offered on the market or put into service.

By re-establishing its influence, the EU anticipates setting a global standard for AI development and integration.

Playing with the Big Boys - Global Trends and Approaches

Whilst establishing the global standard for AI regulation and legislation is a noble ideal, the EU faces a variety of barriers regarding public and private approaches to AI development. Indeed, the contrast between the regulatory stances of the EU, the United States, and China is deeply significant.

The US approach in recent years can best be described as ‘piecemeal’, with the Trump administration delegating responsibility to specific agencies and advisory bodies, such as the National Security Commission on Artificial Intelligence. In early 2020, the administration even issued a memorandum to all departments seeking commentary and thoughts on potential areas of further deregulation.

The development and implementation of AI in China has been one of the Chinese Communist Party’s (CCP) crowning achievements of the 21st Century. Their 2017 ‘Artificial Intelligence Development Plan’ (linked to their national ‘Made in China 2025’ plan) has fostered a technological boom for AI research and output. Unsurprisingly these developments have been implemented in ways antithetical to the EU’s approach, such as through the mass surveillance and social control of the CCP’s ‘Social Credit’ system.

Despite this, the Commission’s aim may be aided by a growing trend in public awareness and concern. In the West, public worries regarding AI have only increased in recent years. A 2018 study from the Future of Humanity Institute at the University of Oxford found that 82% of Americans interviewed believed “that robots and/or AI should be carefully managed”. Similarly, research undertaken by the Oxford Commission on AI and Good Governance (OxCAIGG) found that 44% of Europeans consulted were concerned about the harm caused by AI.

What could it mean?

The Commission’s proposed legislation will remain simply a proposal until it is officially codified into law by the EU. As such, it is likely that we will see a period of intense lobbying in the coming years by private industry and public bodies. Whilst the actual regulation is likely to have a limited effect upon the day-to-day workings of Big Tech and Silicon Valley (it is far less forthcoming on algorithmic fairness and informing data subjects), the broad scope and jurisdiction of the proposed legislation will likely have seismic effects upon the international governance landscape.

The ambiguity and flexibility of the proposal has also come under fire, particularly from human rights groups who seek a more codified commitment to offsetting potential discriminatory practices by AI. However, we should view this flexibility as a positive response to the rapidly changing AI and machine learning landscape. As the technology it hopes to regulate updates and develops, the malleability of the proposed legislation allows regulatory bodies to react and develop alongside it.

The regulation’s jurisdiction covers providers of AI systems in the EU irrespective of where the provider is located, as well as users of AI systems located within the EU, and providers and users located outside the EU ‘where the output produced by the system is used in the Union’. This is reflected in Executive Vice-President Margrethe Vestager’s claim that the legislation will be ‘future-proof and innovation-friendly, [its] rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake’. Almost any interaction with European data subjects will necessitate adherence.
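The territorial scope described above can be sketched as a boolean check. This is a deliberate simplification of the legal text, and the parameter names are mine rather than the proposal’s:

```python
def regulation_applies(places_on_eu_market: bool,
                       user_located_in_eu: bool,
                       output_used_in_eu: bool) -> bool:
    # The proposal is in scope if any limb of the territorial
    # test is met: a provider placing the system on the EU market
    # (wherever the provider is based), a user located in the EU,
    # or a third-country provider/user whose system output is
    # used in the Union.
    return places_on_eu_market or user_located_in_eu or output_used_in_eu
```

The third limb is the notable one: a system built and operated entirely outside Europe still falls in scope the moment its output is used in the Union.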

By positioning its regulation as a defence of European values and contrasting this with the less scrupulous approaches of other AI developers (such as those in China), the Commission might also be able to create trans-Atlantic partnerships and opportunities for cooperation. Indeed, following the proposal’s publication, the White House’s National Security Adviser (Jake Sullivan) welcomed the proposal and noted that it could form the basis for future cooperation.

Whilst the proposed AI regulation is another example of EU technology leadership, its effects upon the global landscape will likely not be measurable in the immediate future. In the face of non-European development and wider trends of regulation, the Commission should seek to work in partnership with like-minded parties, such as the US and UK. Through integrating their respective approaches, the AI regulation might indeed become the GDPR for machine learning.


About Us: Tacita is a leading General Data Protection Regulation (GDPR) compliance specialist based in the United Kingdom. Tacita helps clients maintain their GDPR compliance by undertaking independent external GDPR assessments cost-effectively and with minimal disruption. Offering clear, actionable, and unbiased recommendations, it saves clients time, money, and energy in meeting their GDPR requirements through a three-step process of assessment, recommendation, and resolution, with detailed reporting covering data processing, record-keeping, and privacy policies. Full details can be found at https://www.tacita.io/
