What is Europe’s proposed AI law all about?
- The European Commission wants to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values”, to encourage uptake of so-called “trustworthy” and “human-centric” AI
- Non-compliant companies based in the EU or abroad could face a fine of up to €20 million, or 4% of turnover
- The legislation won’t apply to AI systems used exclusively for military purposes
Using artificial intelligence (AI) software for mass surveillance and to rank social behavior could soon be outlawed in Europe. Under a forthcoming proposed law, the European Commission intends to ban certain uses of “high-risk” AI systems outright, while barring others from the bloc’s market if they don’t meet its standards.
According to draft legislation that has been shared online, companies that don’t comply could be fined up to €20 million, or the equivalent of 4% of their turnover. The Commission is expected to unveil its final regulation today. The rules are the first of their kind: the EU doesn’t intend to leave powerful tech companies to their own devices, nor does it want to follow China in harnessing AI to build a surveillance state. Instead, the bloc says it wants a “human-centric” approach that boosts applications of the technology while keeping it from threatening the EU’s strict privacy laws.
In a nutshell, AI systems that streamline manufacturing, model climate change, or make the energy grid more efficient would be welcome. However, many of the technologies currently in use in Europe — such as algorithms used to scan CVs, assess creditworthiness, allocate social security benefits, or decide asylum and visa applications — would be labeled “high risk” and subject to extra scrutiny.
List of banned AI systems in the proposed law
The 81-page document, which was first reported by Politico, says “indiscriminate surveillance of natural persons should be prohibited when applied in a generalized manner to all persons without differentiation.” However, the use of AI in the military is exempt, as are systems used by authorities in order to safeguard public security.
The suggested list of banned AI systems includes those designed or used in a manner that manipulates human behavior, opinions, or decisions; systems used for indiscriminate surveillance applied in a generalized manner; systems used for social scoring; and those that exploit information or predictions about a person or group of persons in order to target their vulnerabilities. A report by Bloomberg suggested that other high-risk AI would include systems that could endanger people’s safety, lives, or fundamental rights, as well as applications that raise ethical quandaries in the EU, such as self-driving cars and remote surgery, among others.
Another chunk of the regulation deals with measures to support AI development in the bloc — this includes pushing EU member states to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market. Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating [in] the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.
All in all, the European Commission is trying to strike the right balance between supporting innovation and ensuring AI benefits the over 500 million inhabitants of the EU. If the proposals are adopted, Europe could set itself apart from the US and China, which have yet to introduce any serious AI regulation.