August 13, 2022

Google fired engineer Blake Lemoine, who said LaMDA was conscious

Google said on Friday that it has fired Blake Lemoine, the engineer who told The Washington Post that the company’s AI was conscious.

Lemoine said he received a termination email from the company on Friday with a request for a video conference. He requested that a third party attend the meeting, but said that Google refused. Lemoine says he’s talking to lawyers about his options.

Lemoine worked in Google’s Responsible AI organization, and as part of his job he began speaking in the fall to LaMDA, the company’s artificial intelligence system for building chatbots. He came to believe the technology was conscious after conversations he recorded while testing whether the AI could use discriminatory or hate speech.

In a statement, Google spokesperson Brian Gabriel said the company takes the development of AI seriously and has put LaMDA through 11 reviews, as well as publishing a research paper detailing its responsible-development efforts.

“If an employee shares concerns about our work, as Blake did, we review them extensively,” he added. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.”

Gabriel attributed those discussions to the company’s open culture.

“It is regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake well.”

Lemoine’s firing was first reported by the Big Technology newsletter.

Lemoine’s interviews with LaMDA sparked an extensive discussion about recent advances in artificial intelligence, widespread misunderstanding of how these systems work, and corporate responsibility. Google previously fired the heads of its Ethical AI team, Margaret Mitchell and Timnit Gebru, after they warned about dangers associated with this technology.

LaMDA is built on Google’s most advanced large language models, a type of artificial intelligence that recognizes and generates text. Researchers say these systems cannot truly understand language or meaning. But they can produce convincingly humanlike text because they are trained on massive amounts of data crawled from the internet to predict the most likely next word in a sentence.
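The next-word prediction objective described above can be illustrated at toy scale. The bigram counter below is a hypothetical sketch of the basic idea only; it bears no resemblance to LaMDA’s actual architecture, which uses neural networks trained on vastly larger corpora:

```python
from collections import Counter, defaultdict

# Toy illustration (not Google's LaMDA): language models are trained to
# predict the most likely next word. A bigram counter over a tiny corpus
# captures the idea at miniature scale.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Real large language models replace these raw counts with learned probabilities conditioned on long contexts, which is what lets their output look fluent without any understanding of meaning.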

After LaMDA spoke to Lemoine about personhood and its rights, he began investigating further. In April, he shared a document with top Google executives titled “Is LaMDA Sentient?”, which included some of his conversations with LaMDA, in which the AI claimed to be conscious. Two Google executives considered his claims and dismissed them.

Lemoine had previously been placed on paid administrative leave in June for violating the company’s confidentiality policy. The engineer, who spent most of his seven years at Google working on proactive search, including personalization algorithms, said he is considering starting his own AI company focused on collaborative storytelling video games.