Mike Elgan, Contributing Columnist

6 surprising facts about ChatGPT nobody told you

opinion
Feb 01, 2023 | 6 mins
Artificial Intelligence | Augmented Reality | Generative AI

Everyone is talking about this chatty AI sensation. It’s time to really understand what’s going on.

Everybody’s talking about ChatGPT, the OpenAI tool that writes poetry, prose, and even software code like a human.

The chatter mostly gravitates toward amazement, fear, and warning — amazement at what it can do, fear of cheating and replacement by “robots,” and warnings about the perils to humanity of outsourcing creativity to machines.

All that chatter misses the moment. In fact, ChatGPT (the GPT stands for Generative Pre-trained Transformer) is less amazing, less scary, and in need of fewer warnings than most of the media coverage claims. ChatGPT is an AI language model from San Francisco-based OpenAI that can converse with a person via text and generate a wide range of content on request.

Generative AI is by far the biggest trend in technology of this decade so far. So, let’s understand it better by taking a look at six ChatGPT facts you probably haven’t heard.

1. ChatGPT isn’t uniquely capable

ChatGPT is special only because it’s public. There are other AI projects just as capable, or even more capable, that you’ve never tried.

Major tech companies, notably Google and Meta, plus myriad startups and university departments, have developed generative AI tools at least as capable as ChatGPT. In some cases, competitors released limited versions of their AI; in others, they kept their technology out of public hands entirely.

Meta did release a chatbot, BlenderBot, even before ChatGPT. But the bot was so heavily moderated that users found it boring.

All that caution stems from real-world misadventures. Microsoft released a chatbot called "Tay" in 2016 that learned from its interactions on social media. The AI hoovered up content from online haters and trolls and spewed racism and vitriol, to Microsoft's embarrassment.

Google fired an engineer last year for publicly (and falsely) proclaiming that Google’s LaMDA (Language Model for Dialogue Applications) was sentient.

ChatGPT is tech's current flavor of the month because OpenAI was bold enough to release the tool to the public, and bold enough to let the bot address some, though not all, controversial subjects. Other companies have been too cautious. But now that ChatGPT is getting so much attention, they're scrambling to prepare their wares for everyone to use.

Google reportedly issued an internal “code red” to rush its AI offerings to public availability. The company plans to release some 20 new AI tools this year. Even China’s Baidu is set to release a ChatGPT competitor in March.

Meanwhile, a site called Futurepedia lists hundreds of AI tools you can try; many are based on ChatGPT, but others are powerful competitors.

ChatGPT isn’t uniquely powerful. It’s just unique among the powerful AI systems for its boldness and availability.

2. Microsoft controls OpenAI

The for-profit company that makes ChatGPT, OpenAI LP, is a subsidiary of the non-profit OpenAI Incorporated. But OpenAI Incorporated does not "own" OpenAI LP outright.

Microsoft is the biggest investor and is reportedly finalizing another $10 billion investment in OpenAI LP, which would give Microsoft a 49% stake. Only 2% of the company would be owned by its nonprofit parent; all other investors would share the remaining 49%.

Microsoft is also actively working on building ChatGPT capability into Bing, Azure, PowerPoint, Outlook, and other products.

The bottom line is that Microsoft controls OpenAI.

3. Most ChatGPT criticism isn’t really about ChatGPT

The three biggest concerns about ChatGPT are that it 1) enables students to cheat; 2) will be used for unethical purposes, such as plagiarism and social engineering; and 3) sometimes gives false information.

We can dismiss the first two. Every new generation of generative AI that works like ChatGPT will be accompanied by tools that can detect AI-generated content. And AI chatbots like ChatGPT were always inevitable and will only get better; we were always going to have to deal with their unethical use.

In any event, the moral failings of humans aren’t the fault of AI tools.

The third criticism is misguided. When ChatGPT or other chatbots say false things, it's only because false information in the training datasets hasn't yet been caught by an extensive vetting process, which includes the current public "beta." Slamming ChatGPT for bad data is like slamming movies as a medium because you saw a bad movie. And it misses the point.

In the future, companies will use generative AI either with their own data, or with thoroughly vetted data. In other words, the technology is one piece and the data is another.

Imagine an organization like the CIA pouring all its intelligence reports, political analysis, and transcripts of millions of phone calls into something like ChatGPT. The power to quickly extract insights would be enormous. The technology has breathtaking potential. But today's ChatGPT training data is irrelevant to that future.

4. ChatGPT is made of people

It’s tempting to perceive the use of text-based AI as interacting with a computer’s thoughts instead of a human’s. In fact, projects like ChatGPT use human programming to harvest human-made content, which is then vetted and prioritized by humans. It therefore contains human errors, human biases and human conclusions.

5. ChatGPT takes user skill to realize its full potential

In fact, all text-input generative AI tools need skillful inputs for maximum effect. The newish skill of "prompt engineering" (the art and science of knowing how to talk to each AI tool) will be akin to SEO: a cottage industry of tools and consultants will emerge for developing and applying knowledge about which words produce what result.

“Prompt engineering” is a new speciality. But it’s also something most of us will dabble in. You can get started by checking out this so-called “cheat sheet.”
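To make the idea concrete, here is a minimal sketch of what "engineering" a prompt can mean in practice: stating a persona, a concrete task, and explicit constraints instead of firing off a one-line request. The function and field names below are purely illustrative assumptions, not any standard API.

```python
def build_prompt(role, task, constraints=None, examples=None):
    """Assemble a structured prompt for a chatbot like ChatGPT.

    Common prompt-engineering advice: give the model a persona, a
    specific task, and explicit constraints rather than a vague ask.
    """
    lines = [f"You are {role}.", f"Task: {task}"]
    if constraints:
        lines.append("Constraints:")
        lines.extend(f"- {c}" for c in constraints)
    if examples:
        lines.append("Examples:")
        lines.extend(f"- {e}" for e in examples)
    return "\n".join(lines)


# A vague prompt vs. an engineered one:
vague = "write about our new product"
engineered = build_prompt(
    role="a senior marketing copywriter",
    task="write a 50-word product announcement for a smart thermostat",
    constraints=["friendly tone", "no jargon", "end with a call to action"],
)
print(engineered)
```

The same underlying model can produce wildly different output for these two prompts; the skill lies in the second kind of input, not in the model.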

6. Some industries already depend on ChatGPT

Some types of professional writing are "robotic," automatic, and formulaic, whether AI does the writing or not. Some professionals have already discovered that AI does this dull chore faster and cheaper.

One industry already addicted to ChatGPT is real estate. When listing a property, there are only so many parameters one can use to describe a home for sale. The fundamentals include numerical data (square footage, total acreage, number of bedrooms and bathrooms, price, etc.) and descriptive and factual terms, such as "renovated," "open concept," "formal dining room," etc.

Real estate agents now just plug all that data into ChatGPT and let the AI write the overview for the listing websites.
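The workflow is simple enough to sketch: the agent's structured data gets templated into a prompt, and the AI does the prose. This is a hypothetical illustration; the field names are made up, not a real MLS schema or any particular listing tool.

```python
def listing_prompt(data):
    """Turn raw listing parameters into a prompt that a generative AI
    such as ChatGPT could expand into a property overview.

    The dictionary keys here are illustrative only.
    """
    features = ", ".join(data["features"])
    return (
        f"Write a warm, factual real estate listing overview for a "
        f"{data['bedrooms']}-bedroom, {data['bathrooms']}-bathroom home, "
        f"{data['sqft']} sq ft on {data['acres']} acres, priced at "
        f"${data['price']:,}. Highlight: {features}."
    )


prompt = listing_prompt({
    "bedrooms": 3,
    "bathrooms": 2,
    "sqft": 1850,
    "acres": 0.25,
    "price": 450_000,
    "features": ["renovated kitchen", "open concept", "formal dining room"],
})
print(prompt)
```

The agent pastes the resulting prompt into the chatbot (or a tool calls the API with it) and gets back the kind of formulaic overview that once took half an hour to write by hand.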

Real estate professionals aren't unique; they're just ahead of the curve. Many professionals will become totally dependent on AI tools.

ChatGPT is getting everyone’s attention. But the reality is that we’re in the earliest of the early days of generative AI. And understanding the reality of what’s happening with this technology is everybody’s business.