'Doomer' advisor joins Musk's xAI, the 4th cutting-edge research lab focused on the AI apocalypse


Elon Musk has tapped Dan Hendrycks, a machine learning researcher who serves as director of the nonprofit Center for AI Safety, as an advisor to his new startup, xAI.

Hendrycks' organization, which sponsored an AI risk statement in May signed by the CEOs of OpenAI, DeepMind, Anthropic and hundreds of other AI experts, receives more than 90% of its funding from Open Philanthropy, a nonprofit run by Dustin Moskovitz and Cari Tuna, a prominent couple in the controversial Effective Altruism (EA) movement. The Center for Effective Altruism defines EA as "an intellectual project, using evidence and reason to determine how to benefit others as much as possible." For many EA adherents, the overriding concern facing humanity is avoiding a doomsday scenario in which a human-made AGI eradicates our species.

Musk's appointment of Hendrycks is significant because it's the clearest sign yet that four of the world's most celebrated and best-funded AI research labs — OpenAI, DeepMind, Anthropic and now xAI — are bringing ideas about existential risk from AI systems, or x-risk, into the mainstream.

That's the case even though many top AI researchers and computer scientists disagree that this "catastrophic" narrative deserves so much attention.


For example, Sara Hooker, head of Cohere for AI, told VentureBeat in May that x-risk "is a fringe topic." And Mark Riedl, a professor at the Georgia Institute of Technology, said that existential threats are "often reported as fact," which, he added, "goes a long way to normalizing, through repetition, the belief that only scenarios that endanger civilization as a whole matter and that further harm does not occur or have consequence."

Kyunghyun Cho, an AI researcher and professor at NYU, agrees, telling VentureBeat in June that he thinks these "catastrophic stories" distract from the real issues, both positive and negative, posed by today's AI.

"I'm disappointed with a lot of this discussion about existential risk; now they even call it 'extinction' in the literal sense," he said. "It sucks the air out of the room."

Other AI experts have also raised concerns, both publicly and privately, about the companies' acknowledged ties to the EA community, which supports...
