
Blake Lemoine, a software engineer at Google, claimed that a conversation technology called LaMDA had reached a level of consciousness after exchanging hundreds of messages with it.
Google confirmed it had first put the engineer on leave in June. The company said it dismissed Lemoine's "wholly unfounded" claims only after reviewing them extensively. He had reportedly been at Alphabet for seven years. In a statement, Google said it takes the development of AI "very seriously" and that it is committed to "responsible innovation."
Google is one of the leaders in innovating AI technology, which includes LaMDA, or "Language Model for Dialogue Applications." Technology like this responds to written prompts by finding patterns and predicting sequences of words from large swaths of text, and the results can be disturbing for humans.
LaMDA replied: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."
But the wider AI community has held that LaMDA is not near a level of consciousness.
It is not the first time Google has faced internal strife over its foray into AI.
"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," Google said in a statement.
Lemoine said he is in discussions with legal counsel and unavailable for comment.
CNN’s Rachel Metz contributed to this report.