Google has fired Blake Lemoine from the company’s Responsible AI division, after previously placing him on leave. He made headlines in June with claims that the LaMDA language model had taken on a life of its own and become self-aware.
Lemoine’s departure was first reported by Big Technology, based on a then-unpublished podcast episode in which the engineer discussed his recent firing from Google. The company confirmed his departure and provided a detailed statement, quoted below:
“If an employee shares a concern about our work, as Blake did, we review it extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and spent months trying to make that clear to him. These conversations were part of the open culture that helps us innovate responsibly. It’s unfortunate that, despite his long engagement with this topic, Blake still chose to persistently violate clear employment and data security policies, including the need to safeguard product information. We will continue our careful development of language models, and we wish Blake the best.”
The case started in June and revolves around LaMDA, a conversational language model that has been trained on large amounts of text. In chat transcripts, LaMDA wrote, among other things, that it sees itself as a person and wants the same rights as other Googlers. Together with a colleague, Lemoine came to the conclusion that the model was self-aware. The two Google employees tried to convince a company vice president and the internal head of Responsible Innovation, but both rejected the claims.
Lemoine then went public, publishing “an interview” with LaMDA — essentially a composite of multiple chat sessions — and speaking to The Washington Post, claiming that the language model had become self-aware. For violating his confidentiality obligations, the engineer was soon placed on leave.
Google unveiled the Language Model for Dialogue Applications, or LaMDA, at its I/O 2021 conference. The model allows for fluent conversations on many topics, similar to the way people talk to each other online, and is trained on large amounts of conversational data.