Google Exec warns of AI chatbot "hallucinations". What's that supposed to mean?

A Google executive told a German newspaper that the current form of generative AI, such as ChatGPT, can be unreliable, confidently producing answers that are detached from reality.

“This type of artificial intelligence that we are talking about right now can sometimes lead to what we call hallucinations,” Prabhakar Raghavan, senior vice president of Google and head of Google search, told Welt am Sonntag.

“It then expresses itself in such a way that a machine provides a convincing but completely made-up answer,” he said.

Indeed, many ChatGPT users, including Apple co-founder Steve Wozniak, have complained that the AI is often wrong.

AI hallucinations can arise from errors in how a model encodes text into internal representations and decodes those representations back into text.

It was unclear if Raghavan was referring to Google's own forays into generative AI.


Last week, the company announced that it was testing a chatbot called Bard. It is built on LaMDA, Google's own large language model, comparable to the model that underpins OpenAI's ChatGPT.

The presentation in Paris was considered a public relations disaster, and investors were largely disappointed.

Google developers have been under intense pressure since the launch of OpenAI's ChatGPT, which has taken the world by storm and threatens Google's core business.

"We obviously feel the urgency, but we also feel the great responsibility," Raghavan told the newspaper. "We certainly don't want to mislead the public."
