GDPR hinders EU, pushes US and China ahead in AI race
Data that big tech companies and new-age disruptive businesses collect from you is the oil that lubricates the world’s artificial intelligence (AI) machinery.
In other words, data is critical to businesses that want to experiment with and develop, test, train, and fine-tune AI-powered solutions. However, with the arrival of the General Data Protection Regulation (GDPR), companies in the European Union find that they’re increasingly struggling to acquire data to feed their AI projects.
According to the provisions of the GDPR, data that is collected from residents of the EU must be processed in a transparent manner after obtaining explicit consent to do so. And that’s exactly what brings AI algorithms to a grinding halt.
Companies that are investing heavily in AI solutions don’t want to disclose the inferences and conclusions that their algorithms will draw. Given the volume of the data that companies need to support AI projects, obtaining and verifying informed consent can prove to be difficult.
The Center for Data Innovation (CDI) conducted a study that concluded that the EU’s GDPR will have a negative impact on the development and use of AI in Europe, putting EU firms at a competitive disadvantage compared with their competitors in North America and Asia.
“GDPR’s AI-limiting provisions do little to protect consumers, and may, in some cases, even harm them. The EU should reform the GDPR so that these rules do not tie down its digital economy in the coming years,” said Nick Wallace and Daniel Castro of the CDI.
According to the CDI’s report, here are the top 5 ways in which the GDPR will harm “AI development and use in Europe”:
# 1 | Requiring companies to manually review significant algorithmic decisions raises the overall cost of AI
The most direct challenge that the GDPR puts forth is that it specifically targets the use of AI in Article 22, stating that companies must have humans review certain algorithmic decisions.
This significantly raises labor costs and creates a strong barrier to exploring AI solutions, as freeing humans up for other tasks is a key motivation for developing AI in the first place.
# 2 | The right to explanation could reduce AI accuracy
Articles 13–15 of the GDPR create an obligation for companies to provide either detailed explanations of individual algorithmic decisions or general information about how the algorithms make decisions—obviously, that’s a big challenge for most businesses working with proprietary AI algorithms.
However, the former would undermine the accuracy of algorithms and, perversely, lead to unfair decisions, as there is inherently a trade-off between accuracy and transparency in algorithmic decisions.
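To see the trade-off the CDI describes, consider a toy sketch (all data and field names hypothetical): a one-line rule is easy to explain to a data subject, while a model that memorizes its training records can fit the data more closely but offers no meaningful explanation.

```python
# Toy loan-decision records: (income, existing_debt) -> approved?
# All values are hypothetical, purely for illustration.
data = [
    ((30, 5), False), ((80, 10), True), ((45, 40), False),
    ((60, 5), True), ((70, 65), False), ((90, 60), True),
]

def transparent_rule(applicant):
    """Explainable in one sentence: approve if income is above 55."""
    income, _debt = applicant
    return income > 55

def opaque_model(applicant):
    """Nearest-neighbor 'memorizer': fits this data perfectly, but the only
    explanation it can offer is 'you resembled another applicant'."""
    income, debt = applicant
    nearest = min(data, key=lambda r: (r[0][0] - income) ** 2 + (r[0][1] - debt) ** 2)
    return nearest[1]

def accuracy(model):
    return sum(model(x) == y for x, y in data) / len(data)

# The opaque model scores higher on this data than the explainable rule,
# which misclassifies the high-income, high-debt applicant.
print(accuracy(transparent_rule), accuracy(opaque_model))
```

The example is deliberately simplistic, but it captures the tension: forcing the explainable form of the model would mean accepting the lower accuracy.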
# 3 | The right to erasure could damage AI systems
The “right to erasure” in Article 17(1) will also harm AI in Europe, says the CDI report.
All AI systems that use unsupervised machine learning—those that improve themselves by learning from the data they process, without outside help—will need to “remember” all the data they trained on in order to sustain the rules derived from that data.
However, erasing data that underpins key rules in an AI system’s behavior can both make it less accurate and limit its benefit to other data subjects—or even break it entirely.
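A minimal sketch (hypothetical numbers) of why erasure is awkward for a trained system: this toy “model” is just a threshold learned from training data, so honoring an erasure request means retraining without the erased record, and the retrained model can decide other people’s cases differently.

```python
def train(records):
    """'Learn' a spending threshold: flag anything above the mean as unusual."""
    threshold = sum(records) / len(records)
    return lambda amount: amount > threshold

# One data subject's outlier record (200) dominates the learned rule.
records = [20, 25, 30, 35, 200]
model = train(records)
print(model(50))   # False: 50 is below the learned mean of 62

# The data subject invokes the Article 17 right to erasure.
records.remove(200)
model = train(records)   # the only option is to retrain from scratch
print(model(50))   # True: 50 is now above the new mean of 27.5
```

The same transaction of 50 is judged differently after the erasure: the deleted record underpinned the rule, so removing it changed outcomes for everyone else, which is the knock-on effect the report warns about.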
# 4 | The prohibition on repurposing data will constrain AI innovation
Like its predecessor, the Data Protection Directive, the GDPR (in Article 6) imposes a general prohibition on using data for any purpose other than the one for which it was first collected, thus making it difficult for firms to innovate using data.
This restriction will limit the ability of companies developing or using AI in the EU to experiment with new functions that could improve their services. As a result, EU consumers and businesses will be slow to receive the benefits of the latest innovations in AI.
# 5 | Vague rules could deter companies from using de-identified data
Although the GDPR appropriately allows exemptions for de-identified data, it fails to clarify which standards of de-identification are acceptable. This is expected to deter companies from attempting to de-identify data — lest they face harsh enforcement by regulators.
This is also expected to undermine companies’ incentives to process and share de-identified data that could be used to improve AI systems, while at the same time driving some firms to process personal data when de-identified data would suffice, and, as a result, to incur unnecessary compliance costs and restrict their range of legal uses.
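The kind of de-identification at issue can be as simple as pseudonymization—replacing direct identifiers with salted hashes so records can still feed an AI pipeline. A minimal sketch (field names and values hypothetical) shows the mechanics; whether a step like this meets the regulation’s bar is exactly the ambiguity the report criticizes.

```python
import hashlib

# In practice the salt would be stored separately from the shared dataset.
SALT = b"rotate-me-per-dataset"

def pseudonymise(record):
    """Replace direct identifiers with truncated salted SHA-256 digests,
    leaving non-identifying attributes intact for analysis."""
    out = dict(record)
    for field in ("name", "email"):   # direct identifiers in this toy schema
        digest = hashlib.sha256(SALT + record[field].encode()).hexdigest()
        out[field] = digest[:16]
    return out

record = {"name": "Ada Lovelace", "email": "ada@example.com", "spend": 42}
print(pseudonymise(record))
```

Because the mapping is deterministic per salt, records belonging to the same person still link together for training purposes, while the identifiers themselves are no longer readable—useful for AI work, but not necessarily “anonymous” in the GDPR’s eyes.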
US and China: Winning the AI race
The US recently set up a task force to drive the country’s AI projects to success. A statement by the White House claims that the Federal Government’s investment in unclassified R&D for AI and related technologies has grown by over 40 percent since 2015, in addition to substantial classified investments across the defense and intelligence communities.
In the annual guidance to heads of executive departments and agencies, the Office of Management and Budget (OMB) and the White House Office of Science and Technology Policy (OSTP) have directed agencies to focus on emerging technologies including machine learning and autonomous systems.
In fact, the White House OSTP also led US delegations to the 2017 and 2018 G7 Innovation and Technology Ministerials, and is working with allies to recognize the potential benefits of AI and promote AI R&D.
Existing US data privacy laws, including the one recently enacted in California, don’t seem to threaten the nation’s AI projects either. And even if they did, the Trump Administration seems keen to “remove regulatory barriers to the deployment of AI-powered technologies”.
China, with the support of the Chinese Government, is also keen on catapulting itself ahead of its competitors in the international AI race.
In fact, according to the country’s official news outlet, “China has already been at the forefront of the development of artificial intelligence (AI) and will take the lead in the field over time”.
As of May this year, China had 4,040 AI enterprises. Beijing has 1,070 AI companies, accounting for 26 percent of the national total, according to a report on Beijing AI industry development, released by the Beijing Municipal Commission of Economy and Information Technology (BMCEIT).
“A number of AI products and companies have emerged in Beijing in recent years, making the city an AI innovation hub in China. Beijing has formed an AI industrial cluster, thanks to the favorable policies, an innovation and entrepreneurial atmosphere, capitals, enhanced software development and patent protection it has benefited from,” said You Jing, Deputy Director of BMCEIT’s Software Office.
China is also using AI to turbocharge its space ambitions. Recently, Zhang Duzhou, a member of the Chinese Association of Automation and the Chinese Society of Astronautics, told a space conference in Harbin, capital of northeast China’s Heilongjiang Province, that China is stepping up the development of AI technology to support its space programs.
Balancing data privacy and AI
According to an article on the subject in volume 34, issue 4 of the Santa Clara High Technology Law Journal, if the EU wants to remain competitive in the race to develop AI, it must balance its interests in protecting personal data against its interest in developing new AI technologies.
“The implementation of the GDPR, without carveouts to ease the use of personal data in AI systems, demonstrates the EU’s favor toward data privacy,” said Matthew Humerick in his article.
The GDPR’s restrictions on personal data seem to burden AI’s ability to learn and develop and, as a result, are likely to suffocate organizations seeking to create new solutions.
It might also force companies to look for ways to circumvent the GDPR’s provisions. “Until the EU recognizes and addresses the potential impact of the GDPR on its AI industry, the E.U. will fall behind in its AI efforts,” concluded Humerick.
11 December 2019