The Deep Danger of Conversational AI

When considering the risks that AI poses to human civilization, researchers often refer to the "control problem." This is the possibility that an artificial superintelligence could emerge that is so much smarter than humans that we quickly lose control of it. The fear is that a sentient AI with superhuman intellect may pursue goals and interests that conflict with our own, becoming a dangerous rival to humanity.

While this is a valid concern that we should work to protect ourselves against, is it really the greatest threat AI poses to society? Probably not. A recent survey of more than 700 AI experts found that most believe human-level machine intelligence (HLMI) is at least 30 years away.

On the other hand, I am deeply concerned about a different type of control problem, one that is already within reach and could pose a major threat to society unless policymakers act quickly. I am referring to the growing possibility that currently available AI technologies can be used to target and manipulate individual users with extreme precision and efficiency. Worse still, this new form of personalized manipulation could be deployed at massive scale by corporate interests, state actors, or even rogue despots to influence large populations.

The "handling problem"

To contrast this threat with the traditional control problem described above, I call this emerging AI-related risk the "manipulation problem." It's a danger I have been tracking for nearly two decades, but over the past 18 months it has gone from a theoretical long-term risk to an urgent near-term threat.

That's because the most effective deployment mechanism for AI-driven human manipulation is conversational AI. And over the past year, a remarkable AI technology, the large language model (LLM), has rapidly matured. This has suddenly made natural conversational interactions between targeted users and AI-driven software a viable means of persuasion, coercion, and manipulation.

Of course, AI technologies are already being used to run influence campaigns on social media platforms, but these are primitive compared to where the technology is headed. Indeed, current campaigns, although described as "targeted," are more like spraying buckshot at a flock of birds: they direct a barrage of propaganda or misinformation at broadly defined groups in the hope that a few persuasive pieces will hit their mark.
