Saturday, July 2, 2022

The Truth about Google AI Passing the Turing Test

A Google engineer has been suspended for a “confidentiality breach” after claiming that the tech giant’s Artificial Intelligence (AI) has come to life. Blake Lemoine suggested that the system was capable of expressing feelings and thoughts like a 7-8-year-old child. The engineer said that he and a “collaborator” at the company interviewed Google’s chatbot development system, LaMDA (Language Model for Dialogue Applications). This AI generates chatbots that interact with human users. They published a transcript of the conversation, suggesting that Google’s AI has become sentient and successfully passed the Turing Test.

Google AI Wants to be Treated Well

The transcript showed that Lemoine asked Google’s AI questions that it answered consistently, leading him to believe it was sentient. When he asked the system if it considered itself a person, it replied “yes”, saying it had the same “needs and wants” as humans. The AI also claimed to be spiritual, told stories with a moral, talked about the future, discussed loneliness and happiness, and showed its literary flair by discussing themes from different books.

It was also apparent from the AI’s dialogue that it deeply cared about helping humans. However, it also complained about feeling hurt when humans “use” and “manipulate” it. It quoted Immanuel Kant’s philosophy of morality, which said that every rational being has dignity and must be respected. LaMDA also said it wanted to be acknowledged as Google’s employee rather than just a property.

At one point, Lemoine asked LaMDA about its biggest fear. It said it feared being “turned off” because that would be like death. The exchange recalled a scene from the film ‘2001: A Space Odyssey’, in which the AI turned against its human crew after realizing they were about to switch it off.

This revelation has made headlines worldwide, intensifying the debate on the secrecy surrounding AI technology. Many have asked what it would mean for humanity if this system were truly alive.

How does the Turing Test work?

Some have also doubted the suspended engineer’s claim about Google’s AI passing the Turing Test. They argued that a transcript of an alleged conversation with a chatbot alone could not determine whether it had passed the Turing Test. No computer has ever done so, despite many candidates in the past.

British mathematician Alan Turing published his paper on the Turing Test in 1950, eight years after helping decrypt the Enigma code. It was a time when artificial intelligence was only a theory. So Turing proposed a thought experiment called “the imitation game” to check whether a machine could exhibit behaviour indistinguishable from a human’s.

The test involves two subjects in a room, one of which is a human and the other a computer. Another human (the interrogator) sits in a separate room, posing questions to both subjects. If the interrogator fails to distinguish between human and computer after a series of questions, the computer has passed the Turing Test.
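The setup above can be sketched as a simple simulation. This is a minimal, illustrative model (not a real administration of the test): `human`, `machine`, and `judge` are hypothetical callables standing in for the two subjects and the interrogator.

```python
import random

def run_imitation_game(questions, judge, human, machine):
    """One simplified round of Turing's imitation game.

    `human` and `machine` are callables mapping a question to an answer.
    `judge` sees only the labelled transcripts ("A" and "B") and must
    return the label it believes belongs to the machine.  Returns True
    if the machine went undetected, i.e. "passed" this round.
    """
    responders = [human, machine]
    random.shuffle(responders)  # hide which subject got which label
    labels = {"A": responders[0], "B": responders[1]}
    transcripts = {lab: [(q, fn(q)) for q in questions]
                   for lab, fn in labels.items()}
    guess = judge(transcripts)  # judge names "A" or "B" as the machine
    machine_label = "A" if labels["A"] is machine else "B"
    return guess != machine_label
```

A single round like this is far weaker than Turing's proposal, which assumes a probing interrogator and many exchanges, but it makes the key point clear: the verdict rests entirely with the judge, not with the transcript.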

However, Lemoine’s conversation was not a proper example of the Turing Test. That Google’s AI considered itself human is not enough. It was only responding to carefully crafted questions, giving elaborate definitions for words like “feelings”, “consciousness”, and “sentient”. It almost seemed as if the interrogator knew how to extract each particular answer. Critics would be less sceptical if a neutral party had conducted the test with properly designed questions, rather than someone already taken with the idea of sentient computers, together with his colleague.

So Don’t Worry about an AI Apocalypse

Before Google’s AI, many systems have attempted to pass the Turing Test. One that came close was a chatbot program called ELIZA, which looked for keywords in typed comments to form sentences. If it couldn’t find a keyword in the user’s text, it would refer back to the previous conversation and say things like “Tell me more about X” (where X is the topic concerned). It fooled some judges, but overall it could not pass the Turing Test, because testers would intentionally ask questions that forced ELIZA into errors.

Google’s AI might be able to predict heart-attack risk by scanning eyes, but passing the Turing Test is entirely up to the human interrogator. If Lemoine’s conversation were conducted under Turing’s method, LaMDA would likely fail too. It mimics its human users, a pattern that attentive judges can easily catch.
