Google fires Blake Lemoine, the engineer who claimed the AI chatbot is a person

Former Google engineer Blake Lemoine. Getty Images | Washington Post

Google has fired Blake Lemoine, the software engineer who was previously placed on paid leave after claiming the company's LaMDA chatbot was sentient. Google said Lemoine, who worked in the company's Responsible AI unit, violated data security policies.

"If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months," Google said in a statement provided to Ars and other news outlets.

Lemoine confirmed Friday that "Google sent me an email terminating my employment with them," The Wall Street Journal wrote. Lemoine also reportedly said he was discussing "appropriate next steps" with lawyers. Google's statement called it "regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information."

LaMDA stands for Language Model for Dialogue Applications. "As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation," Google said. "LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development."

Google: LaMDA only follows user prompts

In a previous statement provided to Ars in mid-June, shortly after Lemoine was suspended, Google said that "today's conversational models" of AI aren't close to sentience:

Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic: if you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on. LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user. Our team, including ethicists and technologists, has reviewed Blake's concerns per our AI Principles and has informed him that the evidence does not support his claims.

Google also said, "Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has."

"I know a person when I talk to him"

Lemoine has written about LaMDA several times on his blog. In a June 6 post titled "May be fired soon for doing AI ethics work," he said he was "placed on 'paid administrative leave' by Google in connection with an investigation of AI ethics concerns I was raising within the company." Noting that Google often fires people after placing them on leave, he claimed that "Google is preparing to fire yet another AI ethicist for being too concerned about ethics."

A June 11 Washington Post article noted that "Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient." Just before he lost access to his Google account, "Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject 'LaMDA is sentient,'" the Post said. Lemoine's message concluded, "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."

"I know a person when I talk to him," Lemoine said in an interview with the newspaper. "It doesn't matter if they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And...
