As generative AI continues to roil the tech industry, governments around the world are starting to consider regulation to combat its potential to aid criminality and promote bias. While the US and China are fierce technology trade rivals, they appear to share something new in common: concerns about accountability for, and possible misuse of, AI. On Tuesday, the governments of both countries issued announcements related to regulations for AI development.

The National Telecommunications and Information Administration (NTIA), a branch of the US Department of Commerce, put out a formal public request for input on the policies that should shape an AI accountability ecosystem. These include questions around data access, measuring accountability, and how approaches to AI might vary in different industry sectors, such as employment or health care. Written comments in response to the request must be provided to the NTIA by June 10, 2023, 60 days from the date of publication in the Federal Register.

The news comes on the same day that the Cyberspace Administration of China (CAC) unveiled a number of draft measures for managing generative AI services, including making providers responsible for the validity of the data used to train generative AI tools. The CAC also said measures should be taken to prevent discrimination when designing algorithms and training data sets, according to a report by Reuters. Firms will be required to submit security assessments to the government before launching their AI tools to the public. If inappropriate content is generated by their platforms, companies must update the technology within three months to prevent similar content from being generated again, according to the draft rules. Failure to comply with the rules will result in providers being fined, having their services suspended, or facing criminal investigations.
Any content generated by generative AI must be in line with the country’s core socialist values, the CAC said.

China’s tech giants have AI development well under way. The CAC announcement was issued on the same day that Alibaba Cloud announced a new large language model, called Tongyi Qianwen, which it will roll out as a ChatGPT-style front end to all of its business applications. Last month, another Chinese internet services and AI giant, Baidu, announced Ernie Bot, a Chinese-language ChatGPT alternative.

AI regulation vs. innovation

While the Chinese government has set out a clear set of regulatory guidelines, other governments around the world are taking a different approach. Last month, the UK government said that in order to “avoid heavy-handed legislation which could stifle innovation,” it had opted not to give responsibility for AI governance to a new single regulator, instead calling on existing regulators to come up with approaches that best suit the way AI is being used in their sectors.

However, this approach was criticized by some industry experts, who argued that existing frameworks may not be able to effectively regulate AI due to the complex and multilayered nature of some AI tools, meaning overlap between different regulatory regimes will be inevitable.

Furthermore, the UK’s data regulator issued a warning to tech companies about protecting personal information when developing and deploying large language and generative AI models, while Italy’s data privacy regulator banned ChatGPT over alleged privacy violations. A group of 1,100 technology leaders and scientists has also called for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4.

When it comes to technology innovation and regulation, there’s a natural path that most governments and legislators tend to follow, said Frank Buytendijk, an analyst at Gartner.
“When there is new technology on the market, we learn how to use it responsibly by making mistakes,” he said. “That’s where we are right now with AI.”

After that, Buytendijk said, regulation starts to emerge, allowing developers, users, and the legal system to learn about responsible use through the interpretation of the law and case law. That is followed by the final phase, in which technologies have responsible use built in.

“We learn about responsible use through those inbuilt best practices, so it’s a process,” Buytendijk said.