Geoffrey Hinton was a pioneer in the field of artificial intelligence. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created a technology that has become the intellectual basis for artificial intelligence systems that the tech industry’s largest companies believe is key to their future.
However, on Monday he formally joined a growing chorus of critics who say these companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he could speak freely about the dangers of AI. A part of him, he said, now regrets his life’s work.
Dr. Hinton made his remarks during a lengthy interview last week in the dining room of his home in Toronto, a short distance from where he and his students made their breakthrough.
Dr. Hinton’s journey from AI pioneer to doomsayer marks a remarkable moment for the technology industry at perhaps its most significant inflection point in decades. Industry leaders believe that new AI systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative AI can already be a tool for disinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a danger to humanity.
“It’s hard to see how you can prevent bad actors from using them for bad things,” said Dr. Hinton.
After the San Francisco startup OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because AI technologies pose “profound risks to society and humanity.”
Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released a letter of their own warning of the risks of AI. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI technology across a wide range of products, including its Bing search engine.
Dr. Hinton, who is often called the “godfather of artificial intelligence,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had left his job. He notified the company last month that he was resigning, and on Thursday, he spoke on the phone with Sundar Pichai, CEO of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.
“We remain committed to a responsible approach to AI, constantly learning to understand emerging risks while also innovating boldly,” Google Chief Scientist Jeff Dean said in a statement.
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career has been driven by his personal convictions about the development and use of artificial intelligence. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network: a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but he left for Canada because he said he was reluctant to take funding from the Pentagon. At the time, most AI research in the United States was funded by the Department of Defense. Dr. Hinton remains deeply opposed to the use of artificial intelligence on the battlefield, what he calls “robot soldiers.”
In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, created a neural network that could analyze thousands of images and teach itself to identify common objects, such as flowers, dogs, and cars.
Google spent $44 million to acquire the company started by Dr. Hinton and his two students. Their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become the chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
Around the same time, Google, OpenAI, and other companies began building neural networks that learned from vast amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but that it was inferior to the way humans handle language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
He believes that as companies improve their AI systems, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” of the technology, taking care not to release something that could cause harm. But now that Microsoft has beefed up its Bing search engine with a chatbot, a challenge to Google’s core business, Google is racing to deploy the same kind of technology. Dr. Hinton said the tech giants are locked in a competition that may be impossible to stop.
His immediate concern is that the Internet will be inundated with fake images, videos, and texts, and the average person “will not be able to tell what is true any longer.”
He is also worried that AI technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators, and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he worries that future versions of the technology will pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually to run that code on their own. And he fears a day when truly autonomous weapons, those killer robots, become reality.
“The idea that this stuff could actually get smarter than people, a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google, Microsoft and others will escalate into a global race that will not stop without some kind of global regulation.
But that may be impossible, he said. Unlike with nuclear weapons, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope, he said, is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He doesn’t say that anymore.