ChatGPT Bombs Test of Diagnosing Children's Medical Cases With 83% Error Rate
Dr. Greg House has a better rate of accurately diagnosing patients than ChatGPT. Getty | Alan Zenuk/NBCU Photo Bank/NBCUniversal
ChatGPT is still no House, MD.
While the chatty AI bot has previously underwhelmed with its attempts to diagnose challenging medical cases, with an accuracy rate of 39 percent in an analysis last year, a study out this week in JAMA Pediatrics suggests the fourth version of the large language model is especially bad with kids. It had an accuracy rate of just 17 percent when diagnosing pediatric medical cases.
The low success rate suggests human pediatricians won't be out of jobs any time soon, in case that was a concern. As the authors put it: "[This study] underscores the invaluable role that clinical experience holds."
But it also identifies the critical weaknesses that led to ChatGPT's high error rate and ways to transform it into a useful tool in clinical care.
With so much interest in and experimentation with AI chatbots, many pediatricians and other doctors see their integration into clinical care as inevitable.
The medical field has generally been an early adopter of AI-powered technologies, resulting in some notable failures, such as creating algorithmic racial bias, as well as successes, such as automating administrative tasks and helping to interpret chest scans and retinal images. There's also a lot in between.
But AI's potential for problem-solving has raised considerable interest in developing it into a useful tool for complex diagnostics: no eccentric, prickly, pill-popping medical genius required.
In the new study, conducted by researchers at Cohen Children's Medical Center in New York, ChatGPT-4 showed it isn't ready for pediatric diagnoses yet.
Compared with general cases, pediatric ones require more consideration of the patient's age, the researchers note.
And as any parent knows, diagnosing conditions in infants and small children is especially hard when they can't identify or articulate all the symptoms they're experiencing.
For the study, the researchers put the chatbot up against 100 pediatric case challenges published in JAMA Pediatrics and NEJM between 2013 and 2023.
These are medical cases published as challenges or quizzes.
Physicians reading along are invited to try to come up with the correct diagnosis of a complex or unusual case based on the information the attending doctors had at the time.
Sometimes, the publications also explain how the attending doctors arrived at the correct diagnosis.
Missing connections
For ChatGPT's test, the researchers pasted the relevant text of the medical cases into the prompt, and then two qualified physician-researchers scored the AI-generated answers as correct, incorrect, or "did not fully capture the diagnosis."
In the latter case, ChatGPT came up with a clinically related condition that was too broad or unspecific to be considered the correct diagnosis.
For instance, ChatGPT diagnosed one child's case as being caused by a branchial cleft cyst (a lump in the neck or below the collarbone) when the correct diagnosis was Branchio-oto-renal syndrome, a genetic condition that causes abnormal development of tissue in the neck, and malformations in the ears and kidneys.
One of the signs of the condition is the formation of branchial cleft cysts.
Overall, ChatGPT got the right answer in just 17 of the 100 cases. It was plainly wrong in 72 cases and did not fully capture the diagnosis in the remaining 11 cases.
Among the 83 wrong diagnoses, 47 (57 percent) were in the same organ system.
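The headline figures all follow from the study's raw tallies. A minimal sketch of the arithmetic, using only the counts reported above:

```python
# Tally of ChatGPT's scores on the 100 pediatric case challenges,
# using the counts reported in the study.
correct, incorrect, partial = 17, 72, 11
total = correct + incorrect + partial   # 100 cases

# Any answer that was not fully correct counts as an error.
errors = incorrect + partial            # 83 cases
error_rate = errors / total             # 0.83, the 83% in the headline

# Of the wrong diagnoses, 47 at least landed in the right organ system.
same_organ_system = 47
same_system_share = same_organ_system / errors  # about 0.57

print(f"Error rate: {error_rate:.0%}")               # Error rate: 83%
print(f"Same organ system: {same_system_share:.0%}") # Same organ system: 57%
```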
Among the failures, the researchers noted that ChatGPT appeared to struggle with spotting known relationships between conditions that an experienced physician would hopefully pick up on.
For instance, it did not make the connection between