OpenAI Trust and Safety Chief Dave Willner Steps Down

A major personnel change is underway at OpenAI, the artificial intelligence juggernaut that has almost single-handedly inserted the concept of generative AI into the global public discourse with the launch of ChatGPT. Dave Willner, an industry veteran who was head of trust and safety at the startup, announced in a post on LinkedIn last night (first spotted by Reuters) that he has left the job and transitioned into an advisory role. He plans to spend more time with his young family, he said. He had held the position for a year and a half.

His departure comes at a critical time for the world of AI.

Image credits: LinkedIn, under a CC BY 2.0 license.

Along with all the excitement about the capabilities of generative AI platforms, which rely on large language models and can produce free-form text, images, music, and more from simple user prompts at lightning speed, the list of questions grows. How do you best regulate activity and companies in this brave new world? How do you best mitigate harmful impacts across a range of issues? Trust and safety are fundamental to these conversations.

Just today, OpenAI chairman Greg Brockman is scheduled to appear at the White House alongside executives from Anthropic, Google, Inflection, Microsoft, Meta, and Amazon to endorse voluntary commitments to pursue shared safety and transparency goals ahead of an AI executive order that is in the works. That follows considerable noise in Europe regarding the regulation of AI, as well as shifting sentiments elsewhere.

None of this is lost on OpenAI, which has sought to position itself as a conscientious and responsible player in the field.

Willner doesn't refer to any of that specifically in his LinkedIn post. Instead, he keeps things high-level, noting that the demands of his OpenAI job shifted to a "high-intensity phase" after the launch of ChatGPT.

"I'm proud of everything our team has accomplished during my time at OpenAI, and while my job there was one of the coolest and most interesting jobs it is possible to have today, it has also grown tremendously in scope and scale since I joined," he wrote. While he and his wife - Chariotte Willner, who is also a trust and security specialist - are both committed to always putting family first, he said, "In the months since launching ChatGPT, I've found it increasingly difficult to hold my end of the bargain."

Willner was in his OpenAI role for only a year and a half, but he came to it from a long career in the field that includes leading trust and safety teams at Facebook and Airbnb.

His work at Facebook is particularly interesting: there, he was one of the early employees who helped define the company's first communities...
