As generative AI continues to roil the tech industry, governments around the world are starting to consider regulation to combat its potential to aid criminality and promote bias. While the US and China are fierce technology trade rivals, they appear to share something new in common: concerns about accountability for, and possible misuse of, AI. On Tuesday, the governments of both countries issued announcements related to regulations for AI development.

The National Telecommunications and Information Administration (NTIA), a branch of the US Department of Commerce, put out a formal public request for input on what policies should shape an AI accountability ecosystem. These include questions around data access, measuring accountability, and how approaches to AI might vary across industry sectors such as employment or health care. Written comments in response to the request must be provided to the NTIA by June 10, 2023, 60 days from the date of publication in the Federal Register.

The news comes on the same day that the Cyberspace Administration of China (CAC) unveiled a number of draft measures for managing generative AI services. The CAC said providers should be responsible for the validity of the data used to train generative AI tools and that measures should be taken to prevent discrimination when designing algorithms and training data sets, according to a report by Reuters. Firms will also be required to submit security assessments to the government before launching their AI tools to the public. If inappropriate content is generated by their platforms, companies must update the technology within three months to prevent similar content from being generated again, according to the draft rules. Failure to comply with the rules will result in providers being fined, having their services suspended, or facing criminal investigations.
Any content generated by generative AI must be in line with the country’s core socialist values, the CAC said.

China’s tech giants have AI development well under way. The CAC announcement was issued on the same day that Alibaba Cloud announced a new large language model, called Tongyi Qianwen, that it will roll out as a ChatGPT-style front end to all its business applications. Last month, another Chinese internet services and AI giant, Baidu, announced a Chinese-language ChatGPT alternative, Ernie Bot.

AI regulation vs. innovation

While the Chinese government has set out a clear set of regulatory guidelines, other governments around the world are taking a different approach. Last month, the UK government said that in order to “avoid heavy-handed legislation which could stifle innovation,” it had opted not to give responsibility for AI governance to a new single regulator, instead calling on existing regulators to come up with their own approaches that best suit the way AI is being used in their sectors.

However, this approach drew criticism from some industry experts, who argued that existing frameworks may not be able to effectively regulate AI because of the complex and multilayered nature of some AI tools, making conflation between different regimes inevitable. Furthermore, the UK’s data regulator issued a warning to tech companies about protecting personal information when developing and deploying large language, generative AI models, while Italy’s data privacy regulator banned ChatGPT over alleged privacy violations. A group of 1,100 technology leaders and scientists has also called for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4.

When it comes to technology innovation and regulation, there’s a certain natural path that most governments or legislators usually follow, said Frank Buytendijk, an analyst at Gartner.
“When there is new technology on the market, we learn how to use it responsibly by making mistakes,” he said. “That’s where we are right now with AI.”

After that, Buytendijk said, regulation starts to emerge, allowing developers, users, and the legal system to learn about responsible use through the interpretation of the law and case law. That is followed by a final phase, in which technologies have responsible use built in.

“We learn about responsible use through those inbuilt best practices, so it’s a process,” Buytendijk said.