Dumb AI is a bigger risk than strong AI


The year is 2052. The world has averted the climate crisis thanks to the eventual adoption of nuclear energy for the majority of electricity production. The conventional wisdom now is that the complexity of nuclear power plants has been tamed; Three Mile Island is a punchline rather than a disaster. Fears about nuclear waste and plant explosions have been alleviated primarily through better software automation. What we didn't know was that the software running all nuclear power plants, made by a handful of different vendors around the world, shares the same bias. After two decades of flawless operation, several independent plants all fail in the same year. The council of nuclear power CEOs realizes that everyone who knows how to operate Class IV nuclear power plants is either dead or retired. We must now choose between modernity and unacceptable risk.

Artificial intelligence, or AI, is having a moment. After a decades-long "AI winter," machine learning has awakened from its slumber to find a world of technical advances like reinforcement learning and transformers, along with computing resources that can finally take advantage of them.

The ascendance of AI has not gone unnoticed; in fact, it has sparked much debate. The conversation is often dominated by those who are afraid of AI. These people range from ethical AI researchers worried about bias to rationalists contemplating extinction events. Their concerns tend to revolve around AI that is hard to understand or too smart to control, ultimately working against the purposes of us, its creators. AI boosters will usually respond with a techno-optimistic tack. They claim that these worriers are simply wrong, pointing to their own abstract arguments as well as hard data about the good work AI has done for us so far to imply that it will continue to do good for us in the future.

Both views miss the point. An ethereal form of strong AI isn't here yet and probably won't be for some time. Instead, we face a bigger risk, one that is here today and only getting worse: we are deploying lots of AI systems before they are fully baked. In other words, our greatest risk is not AI that is too smart but AI that is too dumb — not malicious, just dumb. And we are ignoring it.

Dumb AI is already here

Dumb AI poses a greater risk than strong AI, mainly because the former actually exists, while it is not yet clear whether the latter is actually possible. Eliezer Yudkowsky perhaps summed it up best: "The greatest danger of artificial intelligence is that people conclude too soon that they understand it."

AI is already in real use, from factory floors to translation services. According to McKinsey, 70% of companies report generating revenue through their use of AI. These aren't trivial applications, either: AI is being deployed in critical functions today, functions that most people still mistakenly believe are a long way off, and the examples are many.

The US military already deploys autonomous weapons (specifically, quadcopter mines) that do not require a human kill decision, even though we do not yet have an autonomous-weapons treaty. Amazon actually rolled out an AI-based resume-sorting tool before scrapping it when it was found to penalize women's resumes.

