In TIME’s conversation with ChatGPT, the chatbot offered answers on how it works, the risks that could arise from the spread of this new technology, and how humans should respond to potential problems. As the bot itself notes, its responses should not be taken as accurate facts, nor as evidence of a thinking mind. One thing seems clear at the end of 2022: large language models are here to stay. If, as some observers suggest, they prove as disruptive to society in the 2020s as social media platforms were in the 2010s, then understanding their capabilities and limitations is crucial.
In 1950, the British computer scientist Alan Turing devised a test he called the imitation game: Could a computer program convince a person that they were talking to another human rather than to a machine?
The Turing test, as it is now known, is often thought of as a test of whether a computer can really “think”. But Turing actually intended it to illustrate that, whether or not computers could truly think, they might one day convince humans that they do. He seemed to understand that the human brain is hardwired to communicate through language; he could hardly have imagined how quickly computers would use language to persuade humans that they could think.
More than 70 years later, in 2022, even the most advanced artificial intelligence (AI) systems still cannot match the human brain. But they easily pass the Turing test. This summer, Google fired an engineer who claimed one of the company’s chatbots had become sentient. For years, AI researchers have grappled with the ethical consequences of releasing a program that convinces people they are talking to another human. Such a machine might lead people to believe false information, persuade them to make unwise decisions, or even inspire false feelings of love in lonely or vulnerable people. Releasing such a program would surely be highly unethical. The chatbot that convinced the Google engineer it was sentient remains locked away inside the company, while ethicists work on ways to make it safer.
But on November 30th OpenAI, another leading AI lab, unveiled its own chatbot. The program, called ChatGPT, is more advanced than any other chatbot available for public interaction, and many observers say it represents a major change for the industry. “Talking” to it can be fascinating. The app is good at party tricks (one viral tweet shows it convincingly composing a Bible verse “explaining how to remove a peanut butter sandwich from a VCR”), but it can also often answer questions more effectively than Google’s search engine and respond to nearly any prompt with credible text or computer code written to specification. In a recent interview with TIME, ChatGPT said that in the future, “large language models could be used to generate fact-checked, reliable information to help stop the spread of misinformation.”