
CONCERNS
Some concerns about AI + Quantum Computing
Geoffrey Hinton
"I think they're very close to that now and they're going to be a lot smarter than us in the future. How are we going to survive that?"
Hinton is not alone in his concerns. In February, even Sam Altman, chief executive of OpenAI, the company behind ChatGPT, said the world may not be "that far away from potentially scary" AI tools, and that regulation will be critical but will take time to work out.
Shortly after the Microsoft-backed start-up launched its latest AI model, GPT-4, in March, more than 1,000 researchers and technologists signed a letter calling for a six-month pause in AI development, arguing that the technology poses "profound risks to society and humanity".
Here are the main concerns expressed by Hinton and other experts:
1. AI may already be smarter than us
Our human brains can solve equations, drive cars and binge-watch Netflix series thanks to their innate talent for organizing and storing information and finding solutions to thorny problems.
The approximately 86 billion neurons we have in our skull - and, more importantly, the 100 trillion connections that these neurons establish with each other - make this possible.
By contrast, the technology underlying ChatGPT has between 500 billion and a trillion connections, Hinton said. While this seems to put it at a huge disadvantage relative to us, Hinton notes that GPT-4, OpenAI's latest AI model, knows "hundreds of times more" than any human. Perhaps, he suggests, it has a "much better learning algorithm" than ours, making it more efficient at cognitive tasks.
Researchers have long noted that artificial neural networks take much longer to absorb and apply new knowledge than people do, as training them requires enormous amounts of energy and data.
This is no longer the case, Hinton argues: systems like GPT-4 can pick up new things very quickly once researchers have properly trained them, much as a trained physicist can wrap their head around new experimental findings far faster than a typical high school science student.
This leads Hinton to conclude that AI systems may already be smarter than us: not only can they learn things faster, they can also share copies of their knowledge with each other almost instantly.
"It's a completely different form of intelligence," he told MIT Technology Review. "A new and better form of intelligence".
2. AI can "enhance" the spread of disinformation
What might AI systems smarter than humans actually do? One disturbing possibility is that malicious individuals, groups or nation-states could simply co-opt them to further their own goals.
Dozens of fake news websites have now spread across the web in multiple languages, some publishing hundreds of AI-generated articles a day, according to a new report from NewsGuard, which rates the credibility of websites and tracks misinformation on the internet.
Hinton is particularly concerned that AI tools could be trained to influence elections and even wage wars.
Election misinformation spread through AI chatbots, for example, could be the future version of election misinformation spread through Facebook and other social media platforms.
And this could be just the beginning.
"Don't think for a moment that Putin wouldn't create hyperintelligent robots with the aim of killing Ukrainians," Hinton said in the article. "He wouldn't hesitate."
3. Will AI make us redundant?
OpenAI estimates that 80% of workers in the United States could see their jobs affected by AI, and a report from Goldman Sachs states that the technology could put 300 million full-time jobs at risk around the world.
Humanity's survival is threatened when "smart things manage to outsmart us," according to Hinton.
"It's possible they'll keep us here for a while to keep the power plants running," Hinton told MIT Technology Review's EmTech Digital conference on Wednesday from his home via video. "But after this, maybe not."
"These things will have learned from us, by reading every novel that ever existed and everything Machiavelli ever wrote, how to manipulate people," Hinton said. "Even if they can't pull the levers directly, they can certainly get us to pull the levers."
4. We don't really know how to stop it
"I wish I had a nice, simple solution I could come up with, but I don't," Hinton said.

Born: Geoffrey Everest Hinton, 6 December 1947, Wimbledon, London, England
Known for: applications of backpropagation
Awards: AAAI Fellow (1990); Rumelhart Prize (2001); IEEE Frank Rosenblatt Award (2014); James Clerk Maxwell Medal (2016); Turing Award (2018); Dickson Prize (2021); Princess of Asturias Award (2022)
Institutions: Google
Thesis: Relaxation and its role in vision (1977)
Doctoral advisor: Christopher Longuet-Higgins[2][3][4]
Geoffrey Everest Hinton CC FRS FRSC[12] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. From 2013 to 2023, he divided his time working for Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the risks of artificial intelligence (AI) technology.[13] In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto.[14][15]
With David Rumelhart and Ronald J. Williams, Hinton was co-author of a highly cited paper published in 1986 that popularised the backpropagation algorithm for training multi-layer neural networks,[16] although they were not the first to propose the approach.[17] Hinton is viewed as a leading figure in the deep learning community.[18][19][20][21][22] The dramatic image-recognition milestone of the AlexNet designed in collaboration with his students Alex Krizhevsky[23] and Ilya Sutskever for the ImageNet challenge 2012[24] was a breakthrough in the field of computer vision.[25]
Hinton received the 2018 Turing Award (often referred to as the "Nobel Prize of Computing"), together with Yoshua Bengio and Yann LeCun, for their work on deep learning.[26] They are sometimes referred to as the "Godfathers of Deep Learning",[27][28] and have continued to give public talks together.[29][30]
In May 2023, Hinton announced his resignation from Google to be able to "freely speak out about the risks of A.I."[31] He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence.[32]
Education
Hinton was educated at King's College, Cambridge. After repeatedly changing his degree between different subjects like natural sciences, history of art, and philosophy, he eventually graduated in 1970 with a bachelor of arts in experimental psychology.[11] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1978 for research supervised by Christopher Longuet-Higgins.[2][33]
Career and research
After his PhD, Hinton worked at the University of Sussex and, after difficulty finding funding in Britain,[34] at the University of California, San Diego and Carnegie Mellon University.[11] He was the founding director of the Gatsby Charitable Foundation Computational Neuroscience Unit at University College London[11] and is a professor in the computer science department at the University of Toronto.[35] He holds a Canada Research Chair in Machine Learning and is an advisor for the Learning in Machines & Brains program at the Canadian Institute for Advanced Research. Hinton taught a free online course on neural networks on the education platform Coursera in 2012.[36] He joined Google in March 2013 when his company, DNNresearch Inc., was acquired, planning at the time to "divide his time between his university research and his work at Google".[37]
Hinton's research concerns ways of using neural networks for machine learning, memory, perception, and symbol processing. He has written or co-written more than 200 peer reviewed publications.[1][38] At the Conference on Neural Information Processing Systems (NeurIPS) he introduced a new learning algorithm for neural networks that he calls the "Forward-Forward" algorithm. The idea of the new algorithm is to replace the traditional forward-backward passes of backpropagation with two forward passes, one with positive (i.e. real) data and the other with negative data that could be generated solely by the network.[39]
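To make the idea concrete, here is a minimal NumPy sketch of a single Forward-Forward-style layer. It is an illustration of the core mechanism only, not Hinton's implementation: the layer sizes, learning rate, and synthetic "positive" and "negative" data are invented for the example, and the paper's threshold-based logistic loss is simplified to a plain gradient on the goodness.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One layer trained with a Forward-Forward-style local rule (sketch)."""
    def __init__(self, n_in, n_out, lr=0.05):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.lr = lr

    def acts(self, x):
        # Length-normalize the input so only its direction is passed on,
        # then apply a ReLU.
        xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return xn, np.maximum(0.0, xn @ self.W)

    def train_step(self, x_pos, x_neg):
        # "Goodness" = sum of squared activities. Raise it for positive
        # (real) data, lower it for negative data -- a purely local update,
        # with no backward pass through other layers.
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            xn, h = self.acts(x)
            self.W += sign * self.lr * (2.0 * xn.T @ h) / len(x)

def goodness(layer, x):
    _, h = layer.acts(x)
    return (h ** 2).sum(axis=1).mean()

layer = FFLayer(4, 8)
x_pos = rng.normal(+1.0, 0.2, (16, 4))  # stand-in for real data
x_neg = rng.normal(-1.0, 0.2, (16, 4))  # stand-in for negative data
for _ in range(100):
    layer.train_step(x_pos, x_neg)
g_pos, g_neg = goodness(layer, x_pos), goodness(layer, x_neg)
print(g_pos > g_neg)  # positive data should end up with higher goodness
```

Because each layer needs only its own activities to compute its update, no error signal has to be propagated backwards, which is the contrast with backpropagation that the algorithm's name highlights.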
While Hinton was a postdoc at UC San Diego, he, David E. Rumelhart, and Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks. Their experiments showed that such networks can learn useful internal representations of data.[16] In a 2018 interview,[40] Hinton said that "David E. Rumelhart came up with the basic idea of backpropagation, so it's his invention". Although this work was important in popularising backpropagation, it was not the first to suggest the approach.[17] Reverse-mode automatic differentiation, of which backpropagation is a special case, was proposed by Seppo Linnainmaa in 1970, and Paul Werbos proposed using it to train neural networks in 1974.[17]
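The mechanism that the 1986 paper popularised can be sketched in a few lines: run a forward pass, then propagate the error gradient backwards through the layers via the chain rule. The toy below trains a two-layer sigmoid network on XOR, a classic demonstration that hidden units learn useful internal representations; the layer sizes, learning rate, and loss are illustrative choices for this sketch, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # XOR inputs
y = np.array([[0.], [1.], [1.], [0.]])                   # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

losses = []
for _ in range(10000):
    # forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # backward pass: chain rule, layer by layer, output to input
    d_out = (out - y) * out * (1.0 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1.0 - h)    # error signal at the hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(losses[0], "->", losses[-1])  # the squared error typically shrinks
```

The hidden-layer error `d_h` is obtained by pushing the output error back through `W2`, which is exactly the "reverse-mode" structure that makes backpropagation a special case of reverse-mode automatic differentiation.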
During the same period, Hinton co-invented Boltzmann machines with David Ackley and Terry Sejnowski.[41] His other contributions to neural network research include distributed representations, time delay neural network, mixtures of experts, Helmholtz machines and Product of Experts. In 2007, Hinton coauthored an unsupervised learning paper titled Unsupervised learning of image transformations.[42] An accessible introduction to Geoffrey Hinton's research can be found in his articles in Scientific American in September 1992 and October 1993.[43]
In October and November 2017 respectively, Hinton published two open access research papers on the theme of capsule neural networks,[44][45] which according to Hinton, are "finally something that works well".[46]
In May 2023, Hinton publicly announced his resignation from Google. He explained his decision by saying that he wanted to "freely speak out about the risks of A.I." and added that a part of him now regrets his life's work.[13][31]
Notable former PhD students and postdoctoral researchers from his group include Peter Dayan,[47] Sam Roweis,[47] Max Welling,[47] Richard Zemel,[2][5] Brendan Frey,[6] Radford M. Neal,[7] Yee Whye Teh,[8] Ruslan Salakhutdinov,[9] Ilya Sutskever,[10] Yann LeCun,[48] Alex Graves,[47] and Zoubin Ghahramani.

Michio Kaku (Japanese: カク ミチオ, 加來 道雄, /ˈmiːtʃioʊ ˈkɑːkuː/; born January 24, 1947) is an American theoretical physicist, activist, futurologist, and popular-science writer. He is a professor of theoretical physics in the City College of New York and CUNY Graduate Center. Kaku is the author of several books about physics and related topics and has made frequent appearances on radio, television, and film.
He is also a regular contributor to his own blog, as well as other popular media outlets. For his efforts to bridge science and science fiction, he is a 2021 Sir Arthur Clarke Lifetime Achievement Awardee.[1]
His books Physics of the Impossible (2008), Physics of the Future (2011), The Future of the Mind (2014), and The God Equation: The Quest for a Theory of Everything (2021) became New York Times best sellers. Kaku has hosted several television specials for the BBC, the Discovery Channel, the History Channel, and the Science Channel.
As part of the research program in 1975 and 1977 at the department of physics at the City College of the City University of New York, Kaku worked on research on quantum mechanics.[11][12]
He was a Visitor and Member (1973 and 1990) at the Institute for Advanced Study in Princeton and New York University. As of 2014, he holds the Henry Semat Chair and Professorship in theoretical physics at the City College of New York.[13]
Between 1970 and 2000, Kaku had papers published in physics journals covering topics such as superstring theory, supergravity, supersymmetry, and hadronic physics.[14] In 1974, Kaku and Prof. Keiji Kikkawa of Osaka University co-authored the first papers describing string theory in a field form.[15]
Kaku is the author of several textbooks on string theory and quantum field theory. An explicit description of the second-quantization of the light-cone string was given by Kaku and Keiji Kikkawa.[16][17]
Kaku is most widely known as a popularizer of science[18] and physics outreach specialist. He has written books and appeared on many television programs as well as film. He also hosts a weekly radio program.
Kaku is the author of various popular science books:
- Beyond Einstein: The Cosmic Quest for the Theory of the Universe (with Jennifer Thompson) (1987)
- Hyperspace: A Scientific Odyssey through Parallel Universes, Time Warps, and the Tenth Dimension (1994)
- Visions: How Science Will Revolutionize the 21st Century (1997)
- Einstein's Cosmos: How Albert Einstein's Vision Transformed Our Understanding of Space and Time (2004)
- Parallel Worlds: A Journey through Creation, Higher Dimensions, and the Future of the Cosmos (2004)
- Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (2011)
- The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind (2014)
- The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond Earth (2018) ISBN 978-0525589532
- The God Equation: The Quest for a Theory of Everything (2021) ISBN 978-0385542746
- Quantum Supremacy: How the Quantum Computer Revolution Will Change Everything (2023) ISBN 9780593744239
Hyperspace was a bestseller and voted one of the best science books of the year by The New York Times[18] and The Washington Post. Parallel Worlds was a finalist for the Samuel Johnson Prize for nonfiction in the UK.[19]
His 2023 book Quantum Supremacy was criticized by quantum computer scientist Scott Aaronson on his blog. Aaronson wrote that "Kaku appears to have had zero prior engagement with quantum computing, and also to have consulted zero relevant experts who could’ve fixed his misconceptions."[20]