The hidden danger of ChatGPT and generative AI | The AI Beat

Since OpenAI launched its first demo of ChatGPT last Wednesday, the tool has already passed a million users, according to CEO Sam Altman. That's a milestone, he points out, that took GPT-3 nearly 24 months to reach and DALL-E over 2 months.

The "interactive and conversational model," based on the company's GPT-3.5 text generator, certainly has the tech world swooning. Aaron Levie, CEO of Box, tweeted that "ChatGPT is one of those rare times in tech where you see a glimmer of how everything is going to be different in the future." Y Combinator co-founder Paul Graham tweeted that "clearly something big is happening." Alberto Romero, author of The Algorithmic Bridge, calls it "by far the best chatbot in the world". And even Elon Musk weighed in, tweeting that ChatGPT is “scary good. We are not far from a dangerously powerful AI.

But there's a hidden problem lurking in ChatGPT: it readily spits out eloquent, confident answers that sound plausible and true even when they're not.

Like other large generative language models, ChatGPT makes up facts. Some call it "hallucination" or "stochastic parroting," but either way, these models are trained to predict the next word for a given input, not to determine whether a fact is correct.

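To make that distinction concrete, here's a minimal sketch of what "predict the next word" means in practice. It uses the public GPT-2 checkpoint via Hugging Face's transformers library as a stand-in, since ChatGPT's own model isn't publicly available (the prompt and model choice here are illustrative assumptions, not anything from OpenAI): the model scores every token in its vocabulary, and generation simply takes a likely continuation. Nothing in this step checks the claim against reality.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is a stand-in here: ChatGPT's weights aren't public, but the core
# objective -- score every possible next token, then emit a likely one --
# is the same family of technique.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The model's "answer" is just the highest-scoring next token. No step here
# verifies whether the continuation is factually true; fluency and accuracy
# can come apart.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))
```

Whatever token this prints is the statistically likely continuation of the prompt, which is exactly why the output can sound plausible whether or not it happens to be correct.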
Some have noted that what sets ChatGPT apart is that it's so good at making its hallucinations sound reasonable.

Tech analyst Benedict Evans, for example, asked ChatGPT to "write a biography for Benedict Evans." The result, he tweeted, was "plausible, almost entirely false."

More troubling is the fact that for countless queries, the user can only tell the answer is wrong if they already know the answer to the question they asked.

That's what Arvind Narayanan, a professor of computer science at Princeton, pointed out in a tweet: "People are excited to use ChatGPT to learn. It's often very good. But the danger is that you can't tell when it's wrong unless you already know the answer. I tried some basic information security questions. In most cases the answers sounded plausible but were actually BS."

No fact-checking generative AI

During the decline of print magazines in the 2000s, I spent several years as a fact-checker for publications such as GQ and Rolling Stone. Every fact had to be backed by authoritative primary or secondary sources, and Wikipedia was frowned upon.

Few publications have fact-checkers now, putting the onus on journalists and editors to get the facts right, especially at a time when misinformation already spreads like lightning.
