The US and China will soon have laws in place to govern ChatGPT-like AI tools

The US is seeking public comments on potential accountability measures for AI tools, while the Chinese internet watchdog has unveiled a set of strict draft rules targeting ChatGPT-like services.
12 April 2023

Since the emergence of OpenAI’s ChatGPT, generative artificial intelligence (AI) tools have proliferated worldwide, especially among Big Tech companies. That has turned generative AI, a category of systems that can be prompted to create wholly novel content, into the center of the global “AI race.”

The reality is that we are only at the beginning of a revolution: generative AI is about to reorient how we work and engage with the world at a scale many cannot yet fathom. Some experts even believe we have yet to grasp the actual risks these AI systems could pose to our societies.

One certainty is that the ChatGPT frenzy has already caught some off guard, with more than 5,000 people signing an open letter urging a pause in AI development and warning that if researchers do not pull back from this “out-of-control race,” governments should step in. Italy became the first Western country to temporarily ban ChatGPT, a day after the letter was published.

What is clear is that with the launch of hugely influential text and image generative models such as GPT-4, the risks and challenges the technology poses have come into sharper focus. The open letter, penned by the Future of Life Institute, cautioned that “AI systems with human-competitive intelligence” could become a significant threat to humanity. The risks include the possibility of AI outsmarting humans, rendering us obsolete, and taking control of civilization.

That is where the question of regulating AI comes into play, a necessary but by no means easy feat. The battle over regulation has often pitted governments and large technology companies against one another, and it may well play out the same way with AI and the tools built on the technology.

China has drafted regulations for AI tools

It is unsurprising that Europe and China would be the first to chart the path of AI regulations. The Italian data protection authority recently temporarily banned ChatGPT while scrutinizing whether the generative AI chatbot complies with privacy regulations.

Italy opened an investigation into OpenAI, the company behind the massively popular chatbot, citing data privacy concerns after ChatGPT experienced a data breach involving user conversations and payment information. Italy’s decision was followed by the European Consumer Organisation (BEUC) calling on authorities to investigate all major AI chatbots.

On the other hand, China, despite ChatGPT being inaccessible there, has unveiled a new set of draft rules targeting ChatGPT-like services. According to the proposed regulation published by the Cyberspace Administration of China (CAC) on April 11, companies that provide generative AI services and tools in China must prevent discriminatory content, false information, and content that harms personal privacy or intellectual property.

In short, providers must avoid various forms of discrimination, fake news, terrorism-related material, and other anti-social content. If banned content is discovered or reported, providers must retrain their models within three months to prevent a recurrence. The draft regulations also set out detailed requirements for the manual tagging and labeling of data used to train AI models.

No other country has yet issued regulations specifically targeting generative AI tools, but China’s speed is not surprising given the government’s stance on data privacy. Violations of the rules can result in fines of up to 100,000 yuan (approximately US$14,520) and, worse, termination of the service. The draft regulations are open for public comment until May 10.

Experts told Bloomberg that China would probably even bar foreign AI services, like those from OpenAI or Google, as it did with American search and social media offerings. Separately, even the US is considering walking down the same path as China and Italy with AI tools like ChatGPT.

On the same day the Chinese regulator published its draft rules, the Biden administration said it was seeking public comments on potential accountability measures for AI tools and systems.

Reuters reported that the National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, wants to know if there are measures that could be put in place to provide assurance “that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said NTIA Administrator Alan Davidson.