Sounding the alarm about artificial intelligence has become a popular pastime in the ChatGPT era, with notables such as industrialist Elon Musk, leftist intellectual Noam Chomsky and retired statesman Henry Kissinger engaging in it.
But insider concerns in the AI research community are drawing particular attention. Pioneering researcher and AI godfather Geoffrey Hinton stepped down from Google to speak more freely about the dangers of the technology he helped create.
Over the years, Mr. Hinton’s pioneering work in deep learning and neural networks has helped lay the foundation for much of the AI technology we see today.
AI adoption has surged in recent months. San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, unveiled its latest artificial intelligence model, GPT-4, in March. Other tech giants have rolled out competing tools, including Google’s Bard.
Some of the dangers of AI chatbots are “pretty scary,” Hinton told the BBC. “Right now they are no smarter than us, as far as I can tell. But I think they may be soon.”
In an interview with MIT Technology Review, Mr. Hinton also pointed to “malicious people” who could use AI in ways that could have detrimental effects on society, such as to manipulate elections or incite violence.
Mr. Hinton says he left Google so that he could speak openly about the technology’s potential risks as someone who no longer works for the tech giant.
“I want to talk about AI safety issues without worrying about how it interacts with Google’s business,” he told the MIT Technology Review. “As long as Google is paying me, I can’t do that.”
After announcing his departure, Mr. Hinton said that Google had “acted very responsibly” with respect to AI. He told the MIT Technology Review that there are also “a lot of good things” about Google that he would like to talk about, but that those comments would be “much more credible if I don’t work at Google anymore.”
Google has confirmed that Mr. Hinton has retired after 10 years leading the Google research group in Toronto.
Mr. Hinton declined to comment further on Tuesday, but said he would speak more about it at a conference on Wednesday.
At the heart of the debate about the state of AI is the question of whether the main dangers exist in the future or in the present. On the one hand, there are hypothetical scenarios of existential risk caused by computers that surpass human intelligence. On the other hand, there are concerns about automated technologies that are already widely used by businesses and governments and could cause real harm.
“For better or for worse, the chatbot moment has made AI a national and international conversation that involves more than just AI experts and developers,” said Alondra Nelson, who until February headed the White House Office of Science and Technology Policy and its effort to develop guidelines for the responsible use of AI tools.
“AI is no longer abstract, and we have the opportunity, I think, to start a new conversation about how we want to see a democratic future and a future without exploitation with technology,” Ms. Nelson said in an interview last month.
A number of AI researchers have long expressed concerns about racial, gender and other forms of bias in AI systems, including large language models that are trained on vast troves of human writing and can amplify the discrimination that exists in society.
“We need to take a step back and really think about whose needs come to the fore when discussing risk,” said Sarah Myers West, managing director of nonprofit AI Now Institute. “The harm caused by AI systems today is actually unevenly distributed. This greatly exacerbates existing patterns of inequality.”
Mr. Hinton was one of three AI pioneers who received the 2019 Turing Award, an honor that has come to be known as the technology industry’s version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also raised concerns about the future of AI.
Mr. Bengio, a professor at the University of Montreal, signed a petition in late March urging tech companies to agree to a six-month pause on the development of powerful artificial intelligence systems, while Mr. LeCun, a top AI scientist at Facebook parent Meta, has taken a more optimistic view.
The story was reported by the Associated Press. AP technology reporter Matt O’Brien reported from Cambridge, Massachusetts.