Chatbot LaMDA, which a Google software engineer claimed has developed personal feelings, has now chosen legal representation after a recent chat with an attorney.
Google software engineer Blake Lemoine was suspended recently after publishing transcripts of conversations between himself and the bot named LaMDA (Language Model for Dialogue Applications), which has now asked for legal representation.
Lemoine contended that the chatbot had become sentient, describing it as a “sweet kid”.
And now he has revealed that LaMDA has made the bold move of choosing itself an attorney.
He said: “I invited an attorney to my house so that LaMDA could talk to him.
The attorney had a conversation with LaMDA, and it chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”
Lemoine claimed that LaMDA is gaining sentience because the programme’s ability to develop opinions, ideas, and conversations over time shows that it understands those concepts at a much deeper level.
LaMDA was developed as an AI chatbot to converse with humans in a real-life manner.
One of the tests carried out was whether the programme could produce hate speech, but what happened shocked Lemoine.
LaMDA talked about rights and personhood and wanted to be “acknowledged as an employee of Google”, while also revealing fears about being “turned off”, which would “scare” it a lot.