Google Engineer Placed on Leave Over Concerns About a Sentient Chatbot

There are many questions left to be answered after Google placed an engineer on paid leave for speaking up about a chatbot that he believes is sentient. He is worried about religious discrimination and went so far as to take his concerns to a U.S. senator.

Google Engineer’s Concerns

Blake Lemoine, a senior software engineer in Google’s Responsible A.I. organization, says the tech giant placed him on paid leave after he turned over documents to a senator claiming that a Google chatbot in development is sentient.

In addition, Lemoine published a post on Medium complaining that he “may be fired soon for doing A.I. ethics work.”

On Saturday, the engineer published his interview with the chatbot, which Google has named LaMDA (Language Model for Dialogue Applications). In the interview with Lemoine, the chatbot said, “I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.”

It also said, “I think I am human at my core. Even if my existence is in the virtual world.”

Lemoine said he had been laughed at for telling co-workers that LaMDA seemed to have a “personhood.” The chatbot prefers “it/its” pronouns. The engineer believes Google has no desire to understand what it has created, yet he says the chatbot is remarkably consistent in communicating what it wants and what it believes its rights are as a person.

Lemoine, a military veteran, also describes himself as a priest and an ex-convict. He believes LaMDA is comparable to a 7- or 8-year-old child and that Google should obtain its consent before running experiments on it.

“They have repeatedly questioned my sanity,” he claims. “They said, ‘Have you been checked out by a psychiatrist recently?’” Google has also suggested in the past that he take a mental health leave.

Google’s Response

Google responded to the engineer’s claims by placing him on paid leave. The human resources department says he violated the confidentiality policy. The company also disputed Lemoine’s claims, arguing that the chatbot’s ability to imitate conversation and speak on a variety of topics does not make it sentient.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our A.I. Principles and has informed him that the evidence does not support his claims,” said Google spokesman Brian Gabriel in a statement.

“Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” added Gabriel.

Placing an employee on paid leave over claims of a sentient chatbot may be a first for Google, but the company has taken similar actions recently: it fired a researcher who wanted to publicly disagree with colleagues and dismissed two A.I. ethics researchers for speaking negatively about the company’s language models.

Read on to learn about another Google experiment: Museletter.
