Doomer AI Advisor Joins Musk's xAI, the 4th Leading Research Lab Focused on the AI Apocalypse

Elon Musk has tapped Dan Hendrycks, a machine learning researcher who serves as director of the nonprofit Center for AI Safety, as an advisor to his new startup, xAI.

The Center for AI Safety released a Statement on AI Risk in May, which was signed by the CEOs of OpenAI, DeepMind, Anthropic, and hundreds of other AI experts. The organization receives over 90% of its funding through Open Philanthropy, a nonprofit run by Dustin Moskovitz and Cari Tuna, a couple prominent in the controversial Effective Altruism (EA) movement. The Center for Effective Altruism defines EA as "an intellectual project, using evidence and reason to determine how to benefit others as much as possible." According to many EA adherents, the overriding concern facing humanity is avoiding a doomsday scenario in which a man-made AGI eradicates our species.

Musk's appointment of Hendrycks is significant because it's the clearest sign yet that four of the world's most celebrated and best-funded AI research labs — OpenAI, DeepMind, Anthropic, and now xAI — are bringing ideas about the existential risk, or x-risk, of AI systems into the mainstream.

That's the case even though many top AI researchers and computer scientists disagree that this "catastrophic" narrative deserves so much attention.

For example, Sara Hooker, head of Cohere for AI, told VentureBeat in May that x-risk "is a fringe topic." And Mark Riedl, a professor at the Georgia Institute of Technology, said there is...
