Geoffrey Hinton, a pioneer in machine learning who is best known for his research on neural networks, has resigned from Google to speak freely about the dangers of AI.
Hinton, 75, a computer science professor at the University of Toronto and a former top researcher at Google, began studying neural nets long before they were in vogue. He has spent decades developing novel artificially intelligent algorithms and architectures, creating techniques to train models and process data. His research paved the way for the current machine learning boom.
In 2018, he won the prestigious Turing Award alongside Yoshua Bengio, a computer science professor at the University of Montreal, and Yann LeCun, chief AI scientist at Meta, for their work on deep learning.
Hinton said he resigned from Google last month and that a part of him now regrets his lifelong work in the field. “I console myself with the normal excuse: if I hadn’t done it, somebody else would have,” he told former Reg vulture Cade Metz, who now writes for the NYT.
Hinton said he has grown progressively concerned about the risks of AI, especially after Google began building and rolling out Bard, its own web search chatbot interface, to rival Microsoft's machine-learning-powered Bing.
To us at least, it appears Microsoft grabbed the chatbot ball from OpenAI and ran with it, very publicly splashing the tech all over its software empire to impress netizens, and forcing Google to reluctantly play the game of catch up. Even though Google invented the transformer architecture underneath today’s chat interfaces, uses machine learning widely behind the scenes, and seems uncertain about the positive impact of this technology on the world, the search king is nonetheless keen to give the appearance that it is not left behind. Which means chat interfaces in everything Google now does.
Commercial interest drives competition between companies, which inevitably advances the technology and pushes it into everyday life, making it difficult to mitigate its impact on society, as Hinton put it.
“Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary,” he said.
Hinton added that generative AI tools make it so easy for anyone to create fake images, text, video, and audio that people won't be able to tell what's true or not on the internet anymore.
These types of models can be instructed to write code, too, and he suggested they could autonomously run their own programs in the future. If left unchecked, the technology could one day create software and machines, so-called killer robots, to the detriment of humans, he thought.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Today's ML tools are being deployed across various industries and are affecting mainly white-collar jobs, he opined. Analysts have predicted such models will increase the productivity of employees and companies; the technology could replace some jobs while creating new ones. Hinton said AI "takes away the drudge work," but warned that the disruption of labor "might take away more than that."
Hinton said he quit Google so that he could talk about the dangers of AI without upsetting his now-former employer, and added that the Silicon Valley goliath has "acted very responsibly." ®