US unveils ‘light touch’ approach to AI regulation

The White House has asked agencies to avoid rules that ‘hamper’ AI innovation and growth.
10 January 2020

US White House CTO Michael Kratsios speaking at Web Summit in Lisbon, 2019. Source: AFP

The discussion of ethics is likely to give pause to any conversation about the wide adoption of artificial intelligence (AI) in the next decade.

With ‘bad’ data sets providing no shortage of tangible examples of biased AI, and with the technology permeating every industry and our daily lives, few would argue that a measured approach to its development is a bad thing.

Last year, a Vanson Bourne study revealed a sizeable majority (87 percent) of IT heads believe AI development should be regulated to ensure it serves the best interests of business, governments, and citizens alike.

Last April, the European Commission launched an independent AI ethics group aimed at achieving “trustworthy” AI development, calling the ethical dimension of AI “not a luxury feature or add-on.”

“It is only with trust that our society can fully benefit from technologies.”

The EC’s seven-point plan said developers should ensure their technology can “support human agency” rather than decrease it, and added that citizens should have full control over their data and that it will not be used in ways that discriminate against them.

The EC guidelines also said AI systems must be transparent and traceable, and should promote positive social change, sustainability, and ecological responsibility.

This week, however, the United States made its own stance on regulation known, cautioning its allies against regulatory “overreach” which, it said, could stifle innovation. While concerns abound over the weaponization and surveillance power of AI, the US is essentially advocating a hands-off approach.

In a fact sheet issued earlier this week, the Trump administration laid out 10 principles to guide the development of AI within a more “light-touch” framework.

“Regulators must conduct risk assessment and cost-benefit analyses prior to any regulatory action on AI, with a focus on establishing flexible frameworks rather than one-size-fits-all regulation,” the note stated.

Guidelines from the Office of Science and Technology Policy (OSTP), which now go out for 90 days of public feedback, similarly stated that agencies should aim to encourage qualities such as “fairness, non-discrimination, openness, transparency, safety, and security.”

But they added that any new rules should be preceded by “risk assessment and cost-benefit analyses,” and should be founded on “scientific evidence and feedback from the American public.”

The OSTP urged other nations to follow the same hands-off model of regulation: “Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach.”

That message came as the EC’s new president, Ursula von der Leyen, committed to creating new, harder-edged (even GDPR-style) regulations governing the development of AI.

“The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

The OSTP’s principles were laid out by the United States’ Chief Technology Officer, Michael Kratsios, at CES 2020 on Wednesday, accompanied by a column in Bloomberg.

In the article, entitled ‘AI That Reflects American Values’, Kratsios wrote: “The U.S. will continue to advance AI innovation based on American values, in stark contrast to authoritarian governments that have no qualms about supporting and enabling companies to deploy technology that undermines individual liberty and basic human rights.

“The best way to counter this dystopian approach is to make sure America and our allies remain the top global hubs of AI innovation. 

“Europe and our other international partners should adopt similar regulatory principles that embrace and shape innovation, and do so in a manner consistent with the principles we all hold dear.”

As to what is prompting the Trump administration’s aversion to tight regulation, VentureBeat reported that an administration official criticized regulatory efforts at local and state levels, alluding to San Francisco’s ban on facial recognition technology, which prompted other cities to follow suit.

“I think the examples in the US today at state and local levels are examples of overregulation, which you want to avoid on the national level. 

“So when particular states and localities make decisions like banning facial recognition across the board, you end up in tricky situations where civil servants may be breaking the law when they try to unlock their government-issued phone,” the official said.