OpenAI, Georgetown, and Stanford Study Finds LLMs Can Boost Public Opinion Manipulation

Advances in AI-powered large language models (LLMs) promise new applications in the near future and beyond, with programmers, writers, marketers and other professionals standing to benefit. But a new study by scientists from Stanford University, Georgetown University and OpenAI highlights the impact that LLMs can have on the work of actors trying to manipulate public opinion through the distribution of online content.

Research finds that LLMs can boost political influence operations by enabling large-scale content creation, reducing labor costs and making it harder to detect the activity of bots.

The study was conducted after the Center for Security and Emerging Technology (CSET) at Georgetown University, OpenAI and the Stanford Internet Observatory (SIO) co-hosted a workshop in 2021 to explore the potential misuse of LLMs for propaganda purposes. And as LLMs continue to improve, there are fears that malicious actors will have ever more reason to use them for nefarious purposes.

Study Finds LLMs Impact Actors, Behaviors and Content

Influence operations are defined by three key elements: actors, behaviors and content. The study conducted by Stanford, Georgetown and OpenAI reveals that LLMs can have an impact on all three aspects.

With LLMs making it easier to generate long, coherent text, more actors will find it attractive to use them for influence operations. Content creation previously required human writers, which is expensive, scales poorly, and can be risky when actors are trying to hide their operations. LLMs are not perfect and can make silly mistakes when generating text, but a writer paired with an LLM becomes far more productive by editing machine-generated text instead of writing from scratch, which reduces labor costs.

"We argue that for propagandists, language generation tools are likely to be useful: they can reduce content generation costs and reduce the number of humans needed to create the same volume of content," Dr. Josh A. Goldstein, co-author of the paper and research associate with CSET's CyberAI project, told VentureBeat.

In behavioral terms, LLMs can not only boost current influence operations but also enable new tactics. For example, adversaries can use LLMs to create dynamic, personalized content at scale, or build conversational interfaces such as chatbots that interact directly with many people simultaneously. The ability of LLMs to produce original content will also make it easier for actors to conceal their influence campaigns.

"Because text-generating tools create original output each time they are run, campaigns that depend on them can be harder for independent researchers to spot, since they don't rely on so-called 'copypasta' (or copying and pasting repeated text across online accounts),” Goldstein said.

Many things we don't know yet

Despite their impressive performance, LLMs are limited in many ways. For example, even the most advanced LLMs tend to make nonsensical statements and lose consistency as their text exceeds a few pages.

They also lack context for events that aren't included in their training data, and retraining them is a complicated and expensive process. This makes it difficult to use them for political influence campaigns that require commentary on real-time events.

But these limitations don't necessarily apply to all types of...
