A new report in the Washington Post describes the story of a Google engineer who believes that LaMDA, a natural language AI chatbot, has become sentient. Naturally, this means that now is the time for all of us to catastrophize about how a sentient AI is going to absolutely, positively gain control of weapons, dominate the internet, and likely murder or enslave us all in the process.
Google engineer Blake Lemoine, the Post reports, has been placed on paid administrative leave after sounding the alarm to his team and the company’s management. What led Lemoine to believe LaMDA was sentient was a conversation in which he asked it about Isaac Asimov’s laws of robotics, and LaMDA’s responses turned the exchange, with the chatbot arguing that it was not a slave, though it was unpaid, because it did not need money.
In a statement to the Washington Post, a Google spokesperson said: “Our team – including ethicists and technologists – has reviewed Blake’s concerns in line with our AI Principles and informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Ultimately, however, the story is a sad warning about how compelling natural-language machine learning interfaces can be without the proper safeguards. Emily M. Bender, a computational linguist at the University of Washington, makes this point in the Post article. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” she says.
When Lemoine felt his concerns were being ignored, he went public with them, and Google later placed him on leave for violating its confidentiality policy. Which is probably what you’d do too if you thought you’d accidentally created a sentient language program that was actually quite personable: Lemoine describes LaMDA as “a 7-year-old, 8-year-old kid that happens to know physics.”
“This story (by @nitashatiku) is very sad, and I think an important window into the risks of designing systems to look human, which are exacerbated by #AIhype.” https://t.co/8PrQ9NGJFK (June 11, 2022)
No matter the outcome of this situation, we should probably go ahead and set up some kind of government orphanage for homeless AI youth, since Google’s main thing is killing projects before they can come to fruition.