Apocalyptic panic and AI doomerism must give way to analysis of real risks


The rapid advancement of generative AI is one of the most promising technological developments of the past century. It has evoked excitement and, like almost every technological breakthrough before it, fear. It is encouraging to see Congress and Vice President Kamala Harris, among others, taking the issue seriously.

At the same time, much of the AI discourse has drifted toward alarmism, detached from the reality of the technology. Many prefer narratives that cling to familiar science fiction visions of doom and destruction. The anxiety over this technology is understandable, but the doomsday panic needs to give way to a thoughtful, rational conversation about the real risks and how to mitigate them.

So what are the risks of AI?

First, there are concerns that AI will facilitate online identity theft and the creation of content that makes it hard to distinguish real news from fake. These are legitimate concerns, but they compound existing problems rather than create new ones. Unfortunately, we already have a wealth of misinformation online: deepfakes and doctored media exist in abundance, and phishing emails date back decades.

Similarly, we know the impact algorithms can have on news bubbles, amplifying misinformation and even racism. AI may make these problems harder to manage, but it did not create them, and AI is simultaneously being used to mitigate them.


The second bucket is the more fanciful one: that AI could attain superhuman intelligence and potentially take over society. These are the kinds of worst-case scenarios that have captured society's imagination for decades, if not centuries.

We can and should consider all theoretical scenarios, but the idea that humans will accidentally create a malevolent, omnipotent AI strains credulity and strikes me as the AI version of the claim that the Large Hadron Collider at CERN could open a black hole and consume the Earth.

Technology will always advance

One proposed solution, slowing down technological development, is a crude and clumsy response to the rise of AI. Technology will always continue to advance; the real question is who develops it and how they deploy it.

The hysterical responses ignore the real opportunity for this technology to profoundly benefit society. For example, it enables the most promising advances in healthcare we've seen in over a century, and recent work suggests that the

