Lucas Mearian
Senior Reporter

As Europeans strike first to rein in AI, the US follows

News Analysis | May 01, 2023 | 7 mins
Artificial Intelligence | Augmented Reality | Chatbots

The European Union is putting the finishing touches on legislation that would hold accountable companies that create generative AI platforms, like ChatGPT, which can draw the content they generate from unnamed sources.


A proposed set of rules by the European Union would, among other things, require makers of generative AI tools such as ChatGPT to disclose any copyrighted material used by the technology platforms to create content of any kind.

A new draft of the European Parliament’s legislation, a copy of which was obtained by The Wall Street Journal, would allow the original creators of content used by generative AI applications to share in any profits that result.

The European Union’s “Artificial Intelligence Act” (AI Act) is the first of its kind by a Western set of nations. The proposed legislation relies heavily on existing rules, such as the General Data Protection Regulation (GDPR), the Digital Services Act, and the Digital Markets Act. The AI Act was originally proposed by the European Commission in April 2021.

The bill’s provisions also require that the large language models (LLMs) behind generative AI tech, such as GPT-4, be designed with adequate safeguards against generating content that violates EU laws; that could include child pornography or, in some EU countries, denial of the Holocaust, according to The Washington Post.

Violations of the AI Act could carry fines of up to 30 million euros or 6% of global annual turnover, whichever is higher.

“For a company like Microsoft, which is backing ChatGPT creator OpenAI, it could mean a fine of over $10 billion if found violating the rules,” a Reuters report said.
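The penalty is a simple maximum of two caps. A minimal sketch of the arithmetic, assuming the 6% applies to a company's global annual revenue and using Microsoft's reported FY2022 revenue of roughly $198 billion purely as an illustration (currency conversion between euros and dollars is ignored for simplicity):

```python
# Sketch of the AI Act's penalty cap: the greater of a fixed amount
# (EUR 30 million) or a percentage (6%) of a company's global annual
# figure. The revenue number below is Microsoft's reported FY2022
# revenue (~$198 billion), used purely as an illustration.

FIXED_CAP_EUR = 30_000_000
PCT_CAP = 0.06

def max_fine(global_annual_base: float) -> float:
    """Return the maximum possible fine: the higher of the two caps."""
    return max(FIXED_CAP_EUR, PCT_CAP * global_annual_base)

# 6% of ~$198B lands near $12B, consistent with the "over $10 billion"
# figure cited in the Reuters report.
print(f"${max_fine(198e9) / 1e9:.1f}B")
```

For a small company, the fixed 30-million-euro cap dominates; for a company of Microsoft's scale, the percentage cap does.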

But the solution to keeping AI honest isn’t easy, according to Avivah Litan, a vice president and distinguished analyst at Gartner Research. It’s likely that LLM creators, such as San Francisco-based OpenAI and others, will need to develop powerful LLMs to check that the models trained initially contain no copyrighted materials. Rules-based systems to filter out copyrighted materials are likely to be ineffective, Litan said.

Meanwhile, the EU is busy refining its AI Act and taking a world-leading approach, Litan said, in creating rules that govern the fair and risk-managed use of AI going forward.

Regulators should consider that LLMs are effectively operating as a black box, she said, and it’s unlikely that the algorithms will provide organizations with the needed transparency to conduct the requisite privacy impact assessment. “This must be addressed,” Litan said.

“It’s interesting to note that at one point the AI Act was going to exclude oversight of generative AI models, but they were included later,” Litan said. “Regulators generally want to move carefully and methodically so that they don’t stifle innovation and so that they create long-lasting rules that help achieve the goals of protecting societies without being overly prescriptive in the means.”

On April 1, Italy became the first Western nation to ban further development of ChatGPT over privacy concerns (though it just reversed that decision); the initial ban occurred after the natural language processing app experienced a data breach involving user conversations and payment information. ChatGPT is the popular chatbot created by OpenAI and backed by billions of dollars from Microsoft.

Earlier this month, the US and Chinese governments issued announcements related to regulations for AI development, something neither country has established to date.

“The US and the EU are aligned in concepts when it comes to wanting to achieve trustworthy, transparent, and fair AI, but their approaches have been very different,” Litan said.

So far, the US has taken what Litan called a “very distributed approach to AI risk management,” and it has yet to create new regulations or regulatory infrastructure. The US has focused on guidelines and an AI risk management framework.

In January, the National Institute of Standards and Technology (NIST) released the Artificial Intelligence Risk Management Framework. In February, the White House issued an Executive Order directing federal agencies to ensure their use of AI advances equity and civil rights.

The US Congress is considering the federal Algorithmic Accountability Act, which, if passed, would require employers to perform an impact assessment of any automated decision-making system that has a significant effect on an individual’s access to, terms, or availability of employment. 

The National Telecommunications and Information Administration (NTIA), a branch of the US Department of Commerce, also issued a public request for comment on what policies would best hold AI systems accountable.

States and municipalities are getting into the act, too, eyeing local restrictions on the use of AI-based bots to find, screen, interview, and hire job candidates because of privacy and bias issues. Some states have already put laws on the books.

Microsoft and Google owner Alphabet have been in a race to bring generative AI chatbots to businesses and consumers. The most advanced generative AI engines can create their own content based on user prompts or input. So, for example, AI can be tasked with creating marketing or ad campaigns, writing essays, and generating realistic photo imagery and videos.

Key to the EU’s AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited, and minimal, according to the World Economic Forum.
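The four-tier framework can be sketched as a simple ranking. The example systems mapped to each tier below are hypothetical illustrations drawn from commonly cited examples, not classifications from the Act itself:

```python
# Sketch of the AI Act's four-tier risk classification, as summarized
# by the World Economic Forum. The example use cases are hypothetical
# illustrations, not categorizations taken from the Act.

RISK_TIERS = ["unacceptable", "high", "limited", "minimal"]

# Hypothetical mapping of example systems to tiers (for illustration only).
examples = {
    "social-scoring system": "unacceptable",  # banned outright
    "resume-screening bot": "high",           # strict obligations
    "customer-service chatbot": "limited",    # transparency duties
    "spam filter": "minimal",                 # largely unregulated
}

def tier_rank(tier: str) -> int:
    """Lower rank means more heavily regulated."""
    return RISK_TIERS.index(tier)

# List the example systems from most to least regulated.
for name, tier in sorted(examples.items(), key=lambda kv: tier_rank(kv[1])):
    print(f"{tier:>12}: {name}")
```

The key design point of the framework is that obligations scale with the tier: unacceptable-risk systems are prohibited, while minimal-risk systems face essentially no new requirements.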

Issues around generative AI platforms that regulators should be mindful of, according to Gartner, include:

  • GPT models are not explainable: Model outputs are unpredictable; even the model vendors don’t understand everything about how they work internally, and explainability or interpretability is a prerequisite for model transparency.
  • Inaccurate and fabricated answers: To mitigate the risks of inaccuracies and hallucinations, output generated by ChatGPT/GenAI should be assessed for accuracy, appropriateness, and actual usefulness before being accepted.
  • Potential compromise of confidential data: There are no verifiable data governance and protection assurances that confidential enterprise information (for example, in the form of stored prompts) is not compromised.
  • Model and output bias: Model developers and users must have policies or controls in place to detect biased outputs and deal with them consistent with company policy and any relevant legal requirements.
  • Intellectual property (IP) and copyright risks: Model developers and users must scrutinize their output before further use to ensure it doesn’t infringe on copyright or IP rights, and actively monitor changes in copyright laws that apply to ChatGPT/GenAI. Users are now on their own when it comes to filtering out copyrighted materials in ChatGPT outputs.
  • Cyber and fraud risks: Systems should be hardened to try to ensure criminals are not able to use them for cyber and fraud attacks.

Launched by OpenAI in November, ChatGPT immediately went viral and had 1 million users in just its first five days because of the sophisticated way it generates in-depth, human-like responses to queries. The ChatGPT website currently receives an estimated 1 billion monthly visitors with an estimated 100 million active users, according to website-testing company Tooltester.

Though the chatbot’s responses may appear human-like, ChatGPT isn’t sentient — it’s a next-word prediction engine, according to Dan Diasio, Ernst & Young global artificial intelligence consulting leader. With that in mind, he urged caution in its use.
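The "next-word prediction engine" description can be illustrated with a toy bigram model. This is a deliberately simplified sketch (real LLMs use neural networks over tokens, not word counts), and the tiny corpus and function names here are invented for the example:

```python
from collections import defaultdict, Counter

# Toy illustration of next-word prediction: a bigram model that, given
# the current word, predicts the word most often seen following it in
# a tiny training corpus. The core task -- predict the next token --
# is the same one models like GPT-4 are trained on, at vastly
# larger scale and with neural networks instead of raw counts.

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

def generate(start, length=5):
    """Greedily chain predictions to 'generate' text."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(predict_next("the"))
print(generate("the"))
```

Even this toy version shows why such a system produces fluent-looking output without any understanding: it only ever echoes statistical patterns from its training data.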

But as AI technology advances at breakneck speed, a more sophisticated algorithm is predicted to be on the horizon: artificial general intelligence, which could think for itself and become exponentially smarter over time.

Earlier this month, an open letter from thousands of tech luminaries called for a halt to the development of generative AI technology out of concern that the ability to control it could be lost if it advances too far. The letter has garnered more than 27,000 signatories, including Apple co-founder Steve Wozniak. The letter, published by the Future of Life Institute, called out San Francisco-based OpenAI Lab’s recently announced GPT-4 algorithm in particular, saying the company should halt further development until oversight standards are in place.

While AI has been around for decades, it has “reached new capacities fueled by computing power,” Thierry Breton, the EU’s Commissioner for Internal Market, said in a statement in 2021. The Artificial Intelligence Act, he said, was created to ensure that “AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”