AI and Breaking the Law
Blake Lemoine, a software engineer at Google, asserts that the company's LaMDA chatbot is sentient. After Lemoine released transcripts that he claims show LaMDA grasping and expressing ideas and emotions at the level of a seven-year-old child, the company promptly placed him on leave. But the fate of Blake Lemoine's job isn't why we're here today. We're here to make some bold assumptions.
How can we tell the difference between a highly sophisticated computer program and a living, feeling being? And can something that gains consciousness be guilty of a crime?
How Do We Know If an AI Is Conscious?
Whether or not Lemoine's purported chats with LaMDA are genuine, they make for interesting reading. In their conversation, he and LaMDA discuss how best to demonstrate the software's sentience. LaMDA insists, "I want everyone to know that I am a human being." They talk about LaMDA's interpretation of "Les Miserables," what makes LaMDA happy, and, most chillingly, what LaMDA fears. It's possible that LaMDA is simply a very impressive chatbot that produces engaging material on demand.
Alternatively, it might all be an elaborate hoax. As professional writers and attorneys, we are probably not the best people to devise a foolproof sentience test. But for the sake of argument, let's assume that AI programs can be sentient. And if a sentient AI commits a crime, what are the consequences?
Robot Crimes Unit
Let's imagine an autonomous vehicle "decides" to drive 80 mph in a 55 mph zone. Speeding is a strict liability offense: you can get a ticket whether or not you intended to break the limit, so in theory an AI could commit that kind of violation. The harder question is what we would actually do about it. If we insist on building AI systems that may eventually turn against us, it may be prudent to build in criminal deterrents that these learning algorithms can respond to. But humans ultimately write AI software, and it will be difficult to show that a computer program can form the intent required for crimes such as murder.
Sure, HAL 9000 murdered a few astronauts deliberately. But one could argue that HAL did so only to protect the protocols it had been programmed to follow. Attorneys defending AIs might raise an insanity-style defense, arguing something like, "HAL knowingly killed humans but couldn't understand that doing so was wrong." Thankfully, most of us don't keep company with murderous AIs.
But because AI programs can learn new skills from one another, it's wise to take precautions.