Lemoine worked in Google’s Responsible AI organization, and as part of his job he began speaking with LaMDA, the company’s artificial intelligence system for building chatbots, in the fall. He signed up to test whether the AI could use discriminatory or hate speech, and in the course of that work came to believe the technology was sentient.
In a statement, Google spokesperson Brian Gabriel said the company takes AI development seriously, has reviewed LaMDA 11 times, and has published a research paper detailing its efforts toward responsible development.
“If an employee shares concerns about our work, as Blake did, we review them extensively,” he added. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.”
Gabriel attributed those months of discussion to Google’s open company culture.
“It is regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake well.”
Lemoine’s firing was first reported by the Big Technology newsletter.
Lemoine’s interviews with LaMDA sparked extensive discussion about recent advances in artificial intelligence, widespread misunderstanding of how these systems work, and corporate responsibility. Google previously fired the heads of its Ethical AI division, Margaret Mitchell and Timnit Gebru, after they warned of dangers associated with this technology.
LaMDA is built on Google’s most advanced large language models, a type of artificial intelligence that recognizes and generates text. Researchers say these systems cannot understand language or meaning. They can nonetheless produce speech that deceptively resembles human speech, because they are trained on massive amounts of text crawled from the internet to predict the most likely next word in a sentence.
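The next-word prediction described above can be illustrated, at a toy scale, with a simple bigram model. This is only a hedged sketch on a made-up corpus, not Google’s LaMDA or any real large language model, which instead use neural networks trained on vastly more data; but the core idea is the same: count which words tend to follow which, and predict the most frequent continuation.

```python
from collections import Counter, defaultdict

# Hypothetical toy training text (real models use huge internet-scale corpora).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word is observed following each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than "mat" or "fish"
```

Such a model reproduces plausible-looking phrases purely from observed word frequencies, with no representation of meaning, which is the point researchers make about larger systems as well.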
After LaMDA spoke to Lemoine about personhood and its rights, he began investigating further. In April, he shared with top executives a Google document titled “Is LaMDA Sentient?” that included some of his conversations with LaMDA, in which it claimed to be conscious. Two Google executives examined his claims and dismissed them.
Lemoine had previously been placed on paid administrative leave in June for violating the company’s confidentiality policy. The engineer, who spent most of his seven years at Google working on proactive search, including personalization algorithms, said he is considering starting his own AI company focused on video games with collaborative storytelling.