đź”’ ChatGPT – eloquent robot or misinformation machine?

By Olivia Solon

Chatbots have been replacing humans in call centers, but they’re not so good at answering more complex questions from customers. That may be about to change, if the release of ChatGPT is anything to go by. The program, trained on vast amounts of text, generates natural-sounding responses to queries or prompts. It can write and debug code in a range of programming languages and produce poems and essays, even mimicking literary styles. Some experts have declared it a groundbreaking feat of artificial intelligence that could replace humans for a multitude of tasks, and a potential disruptor of huge businesses like Google. Others warn that tools like ChatGPT could flood the web with clever-sounding misinformation.

1. Who is behind ChatGPT?

It was developed by San Francisco-based research laboratory OpenAI, co-founded in 2015 by programmer and entrepreneur Sam Altman, Elon Musk and other wealthy Silicon Valley investors to develop AI technology that “benefits all of humanity.” OpenAI has also developed software that can beat humans at video games and a tool known as Dall-E that can generate images – from the photorealistic to the fantastical – based on text descriptions. ChatGPT is the latest iteration of GPT (Generative Pre-trained Transformer), a family of text-generating AI programs. It’s currently free to use as a “research preview” on OpenAI’s website, but the company wants to find ways to monetize the tool.

OpenAI’s investors include Microsoft Corp., which invested $1 billion in 2019, LinkedIn co-founder Reid Hoffman’s charitable foundation and Khosla Ventures. Although Musk was a co-founder and an early donor to the non-profit, he ended his involvement in 2018 and has no financial stake, OpenAI said. OpenAI shifted to create a for-profit entity in 2019, but it has an unusual financial structure: returns on investment are capped for investors and employees, and any profits beyond that go back to the original non-profit.

2. How does it work?

The GPT tools can read and analyze vast swathes of text and generate sentences that are similar to how humans talk and write. They are trained in a process called unsupervised learning, which involves finding patterns in a dataset without being given labeled examples or explicit instructions about what to look for. The most recent version, GPT-3, ingested text from across the web, including Wikipedia, news sites, books and blogs, in an effort to make its answers relevant and well-informed. ChatGPT adds a conversational interface on top of GPT-3.
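
To make that concrete, here is a minimal, purely illustrative sketch in Python of the pattern-finding idea: it counts which words follow which in a scrap of raw, unlabeled text, then samples from those counts to generate new text. Real GPT models learn with deep neural networks over billions of documents rather than simple word counts; the toy corpus and the generate function below are invented for illustration.

    import random
    from collections import defaultdict

    # Toy "training" corpus; real models ingest billions of documents.
    corpus = "the cat sat on the mat and the dog slept on the rug".split()

    # Unsupervised "training": count which word follows each word,
    # with no labels and no instructions about what to look for.
    follows = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word].append(next_word)

    # Generation: repeatedly sample a plausible next word given the last one.
    def generate(start, length=10):
        words = [start]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat on the mat and the dog slept"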

3. What’s been the response?

More than a million people signed up to use ChatGPT in the days following its launch in late November. Social media has been abuzz with users experimenting with fun, low-stakes applications of the technology. Some have shared its responses to obscure trivia questions. Others marveled at its sophisticated historical arguments, college “essays,” pop song lyrics, poems about cryptocurrency, meal plans that meet specific dietary needs and solutions to programming challenges.

“Just fell into the ChatGPT rabbit hole. Sorry, can’t help myself. London History, in the style of Dr Seuss.” (6529, @punk6529 on Twitter, December 1, 2022)

4. What else could it be used for?

One potential use case is as a replacement for a search engine like Google. Instead of leaving users to scour dozens of articles on a topic, or firing back a snippet of text lifted from a website, it could deliver a bespoke response. It could push automated customer service to a new level of sophistication, producing a relevant answer the first time so users aren’t left waiting to speak to a human. It could draft blog posts and other types of PR content for companies that would otherwise require the help of a copywriter.

5. What are its limitations?

The answers pieced together by ChatGPT from second-hand information can sound so authoritative that users may assume it has verified their accuracy. What it’s really doing is spitting out text that reads well and sounds smart but might be incomplete, biased, partly wrong or, occasionally, nonsense. The system is only as good as the data it’s trained on. Stripped of useful context, such as the source of the information, and with few of the typos and other imperfections that often signal unreliable material, the content can be a minefield for those who aren’t sufficiently well-versed in a subject to notice a flawed response. This issue led Stack Overflow, a computer programming website with a forum for coding advice, to ban ChatGPT responses because they were often inaccurate.

6. What about ethical risks?

As machine intelligence becomes more sophisticated, so does its potential for trickery and mischief-making. Microsoft’s AI bot Tay was taken down in 2016 after some users taught it to make racist and sexist remarks. Another, developed by Meta Platforms Inc., encountered similar issues in 2022. OpenAI has tried to train ChatGPT to refuse inappropriate requests, limiting its ability to spout hate speech and misinformation. Altman, OpenAI’s chief executive officer, has encouraged people to “thumbs down” distasteful or offensive responses to improve the system. But some users have found workarounds. At its heart, ChatGPT generates chains of words but has no understanding of their significance. It might not pick up on gender and racial biases that a human would notice in books and other texts. It’s also a potential weapon for deceit. College teachers worry about students getting chatbots to do their homework. Lawmakers may be inundated with letters apparently from constituents complaining about proposed legislation, with no idea whether they’re genuine or generated by a chatbot used by a lobbying firm.

© 2023 Bloomberg L.P.
