OpenAI's Head of Trust and Security Leaves the Company

OpenAI's Chief Trust and Security Officer, Dave Willner, has left his position, as announced via a LinkedIn post. Willner will remain in an "advisory role" and has asked his LinkedIn connections to reach out about related opportunities. He says the move comes from a decision to spend more time with his family. Yes, that's what they always say, but Willner follows up with real details.

"In the months since launching ChatGPT, I've found it increasingly difficult to hold my end of the bargain," he writes. "OpenAI is going through a high-intensity phase in its development - and so are our kids. Anyone with young kids and a super-intense job can understand this tension."

He goes on to say he is "proud of everything" the company accomplished during his tenure, noting that it was "one of the coolest and most interesting jobs" in the world.

Of course, this transition follows some legal hurdles faced by OpenAI and its flagship product, ChatGPT. The FTC recently opened an investigation into the company over concerns that it violated consumer protection laws and engaged in "unfair or deceptive" practices that could harm public privacy and safety. The investigation involves a bug that leaked users' private data, which certainly falls under the purview of trust and security.

Willner says his decision was actually "a pretty easy choice to make, but not one that people in my position often make so explicitly in public." He also says he hopes his decision will help normalize more open discussions about work/life balance.

There have been growing concerns about AI security in recent months, and OpenAI is one of the companies that has agreed to place certain safeguards on its products at the behest of President Biden and the White House. These include allowing independent experts access to code, reporting societal risks such as bias, sharing security information with the government, and watermarking audio and visual content to let people know it is AI-generated.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you purchase something through one of these links, we may earn an affiliate commission. All prices correct at time of publication.
