It is a lofty goal that the EU Commission has set itself: the Brussels institution seeks to introduce the world’s first legal framework for artificial intelligence. This is intended to make Europe the “global center for trustworthy AI” and to pave the way for ethical technology worldwide.
Photo: The draft has to be approved by the EU Parliament and the member states / Credits © Frederic Köberl on Unsplash
To put it bluntly: the potential of artificial intelligence (AI) is great, but so are the risks associated with it. While the technology can create many opportunities, such as new services or even a reduction in global greenhouse gas emissions, it can have equally far-reaching negative effects, e.g. on privacy, data protection or liability.
The new proposal on AI regulation, presented on 21 April, is mainly dedicated to strengthening general confidence in AI. Specifically, it proposes nothing less than the first EU legal framework regulating AI applications at a European level. “With artificial intelligence, trust is a must and not an accessory. With these landmark rules, the EU is at the forefront of developing new global standards to ensure that AI is trustworthy,” commented Margrethe Vestager, Vice-President of the EU Commission in charge of digital policy.
In doing so, the EU Commission has chosen a proportionate, risk-based approach following a relatively simple logic: the higher the risk of a specific type of AI use, the stricter the rules – hence the so-called risk pyramid. Together with a clear definition of “high risk”, this is intended to create legal certainty for companies and other actors.
“Can the EU catch up with the US and China and also shape global rules for AI?”
A distinction is made between four risk levels:
• Minimal risk: The vast majority of AI systems fall into this category. Applications such as AI-supported video games or spam filters can be used freely; the draft regulation does not intervene here.
• Limited risk: Specific transparency obligations apply to certain AI systems, e.g. those with a clear risk of manipulation. Users should know whenever they are dealing with a machine.
• High risk: A limited number of AI systems that adversely affect people’s safety or fundamental rights, e.g. systems used for remote biometric identification of individuals. These will be subject to strict requirements that must be met before they can be placed on the market.
• Unacceptable risk: Particularly harmful AI applications that pose a clear threat to people’s safety, livelihoods and rights, e.g. social scoring. These will be banned outright.
The rulebook has been praised from many sides; the US, for example, welcomed the approach. However, the rules on facial recognition in public spaces have met with widespread criticism – keyword: biometric mass surveillance. Another fierce debate may revolve around which uses end up banned or classified as high risk. Other critics fear that the Commission is in fact over-regulating the young technology.
With the GDPR now considered the gold standard for data protection, it remains to be seen whether this AI vision can repeat the trick. Can the EU catch up with the US and China and also shape global rules for AI? It may take up to two years before we get close to an answer: the draft still has to be approved by the EU Parliament and the member states. Feedback on the proposal can be submitted until 23 June.