He holds a
Canada Research Chair in Machine Learning and is an advisor for the Learning in Machines & Brains program at the
Canadian Institute for Advanced Research. Hinton taught a free online course on Neural Networks on the education platform
Coursera in 2012.[37] He joined Google in March 2013 when his company, DNNresearch Inc., was acquired, and was at that time planning to "divide his time between his university research and his work at Google".[38]
Hinton's research concerns ways of using neural networks for
machine learning,
memory,
perception, and symbol processing. He has written or co-written more than 200
peer-reviewed publications.[1][39] At the
2022 Conference on Neural Information Processing Systems (NeurIPS), he introduced a new learning algorithm for neural networks that he calls the "Forward-Forward" algorithm. The idea is to replace the traditional forward and backward passes of backpropagation with two forward passes: one with positive (i.e. real) data and the other with negative data that could be generated solely by the network.[40]
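As a rough illustration of the idea (a toy sketch, not Hinton's reference implementation; the layer size, threshold, learning rate, and data below are all invented for the example), a single layer trained with a Forward-Forward-style local rule might look like this:

```python
import math
import random

# Toy sketch of the Forward-Forward idea: one layer is trained locally to
# give high "goodness" (sum of squared activations) on positive (real)
# data and low goodness on negative data. No backward pass crosses layers;
# a real multi-layer version would also normalize each layer's input.
random.seed(0)
DIM, THRESHOLD, LR = 4, 2.0, 0.1
weights = [[random.uniform(0.0, 0.5) for _ in range(DIM)] for _ in range(DIM)]

def forward(x):
    # Linear layer followed by ReLU.
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in weights]

def goodness(h):
    return sum(v * v for v in h)

def train_step(x, positive):
    h = forward(x)
    # Logistic probability that this sample is "positive".
    p = 1.0 / (1.0 + math.exp(-(goodness(h) - THRESHOLD)))
    dg = (p - 1.0) if positive else p   # gradient of log-loss w.r.t. goodness
    for i, row in enumerate(weights):
        if h[i] > 0.0:                  # ReLU gate: only active units learn
            for j in range(DIM):
                row[j] -= LR * dg * 2.0 * h[i] * x[j]

pos = [1.0, 1.0, 0.0, 0.0]  # stands in for real data
neg = [0.0, 0.0, 1.0, 1.1][:4]  # stands in for negative data
neg = [0.0, 0.0, 1.0, 1.0]
for _ in range(200):
    train_step(pos, True)
    train_step(neg, False)

print(goodness(forward(pos)) > goodness(forward(neg)))  # positive goodness should win
```

Because each layer's update depends only on its own activations, no error signal has to travel backward through the network, which is the contrast with backpropagation that the paper emphasizes.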
While Hinton was a postdoc at UC San Diego,
David E. Rumelhart, Hinton, and
Ronald J. Williams applied the
backpropagation algorithm to multi-layer neural networks. Their experiments showed that such networks can learn useful
internal representations of data.[16] In a 2018 interview,[41] Hinton said that "
David E. Rumelhart came up with the basic idea of backpropagation, so it's his invention". Although this work was important in popularising backpropagation, it was not the first to suggest the approach.[17] Reverse-mode
automatic differentiation, of which backpropagation is a special case, was proposed by
Seppo Linnainmaa in 1970, and
Paul Werbos proposed to use it to train neural networks in 1974.[17]
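The relationship between backpropagation and reverse-mode automatic differentiation can be sketched in a few lines of code. The toy scalar autodiff below is purely illustrative (the class, the example neuron, and all values are invented; the simple traversal only handles chain-structured graphs like this one): local derivatives are recorded on the forward pass, and adjoints are accumulated on the reverse pass.

```python
import math

# Minimal reverse-mode automatic differentiation sketch. Each operation
# records its parents and the local derivative with respect to each;
# backward() then applies the chain rule from the output toward the inputs.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def tanh(self):
        t = math.tanh(self.value)
        return Var(t, [(self, 1.0 - t * t)])

def backward(output):
    # Reverse pass: accumulate d(output)/d(node). This plain stack walk is
    # enough for a chain; general graphs need a topological ordering.
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local
            stack.append(parent)

# One "neuron": y = tanh(w*x + b), differentiated with respect to w.
x, w, b = Var(0.5), Var(2.0), Var(-1.0)
y = (w * x + b).tanh()
backward(y)
print(round(w.grad, 4))  # → 0.5, since dy/dw = x * (1 - tanh(w*x + b)**2)
```

Backpropagation is this same procedure applied to a neural network's loss, which is why it is described as a special case of reverse-mode automatic differentiation.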
In October and November 2017, Hinton published two
open access research papers on the theme of
capsule neural networks,[45][46] which according to Hinton, are "finally something that works well".[47]
In May 2023, Hinton publicly announced his resignation from Google. He explained his decision by saying that he wanted to "freely speak out about the risks of A.I." and added that a part of him now regrets his life's work.[13][31]
Geoffrey E. Hinton is internationally distinguished for his work on artificial neural nets, especially how they can be designed to learn without the aid of a human teacher. This may well be the start of autonomous intelligent brain-like machines. He has compared effects of brain damage with effects of losses in such a net, and found striking similarities with human impairment, such as for recognition of names and losses of categorisation. His work includes studies of mental imagery, and inventing puzzles for testing originality and creative intelligence. It is conceptual, mathematically sophisticated, and experimental. He brings these skills together with striking effect to produce important work of great interest.[51]
He won the
BBVA Foundation Frontiers of Knowledge Award (2016) in the Information and Communication Technologies category, "for his pioneering and highly influential work" to endow machines with the ability to learn.[58]
Together with
Yann LeCun, and
Yoshua Bengio, Hinton won the 2018
Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.[59][60][61]
In 2023, Hinton expressed concerns about the rapid
progress of AI.[32][31] Hinton previously believed that
artificial general intelligence (AGI) was "30 to 50 years or even longer away."[31] However, in a March 2023 interview with
CBS, he stated that "general-purpose AI" may be fewer than 20 years away and could bring about changes "comparable in scale with the
Industrial Revolution or
electricity."[32]
In an interview with The New York Times published on 1 May 2023,[31] Hinton announced his resignation from Google so he could "talk about the dangers of AI without considering how this impacts Google."[66] He noted that a part of him now regrets his life's work because of these concerns, and he expressed fears about a race between Google and
Microsoft.[31]
In early May 2023, Hinton said in an interview with the BBC that AI might soon surpass the information capacity of the human brain. He described some of the risks posed by AI chatbots as "quite scary". Hinton explained that chatbots can learn independently and share knowledge: whenever one copy acquires new information, it is automatically disseminated to the entire group, allowing AI chatbots to accumulate knowledge far beyond the capacity of any individual.[67]
Existential risk from AGI
Hinton expressed concerns about
AI takeover, stating that "it's not inconceivable" that AI could "wipe out humanity."[32] Hinton states that AI systems capable of
intelligent agency will be useful for military or economic purposes.[68] He worries that generally intelligent AI systems could "create sub-goals" that are
unaligned with their programmers' interests.[69] He states that AI systems may become
power-seeking or prevent themselves from being shut off, not because programmers intended them to, but because those sub-goals are
useful for achieving later goals.[67] In particular, Hinton says "we have to think hard about how to control" AI systems capable of
self-improvement.[70]
Catastrophic misuse
Hinton worries about deliberate misuse of AI by malicious actors, stating that "it is hard to see how you can prevent the bad actors from using [AI] for bad things."[31] In 2017, Hinton called for an international ban on
lethal autonomous weapons.[71]
Economic impacts
Hinton was previously optimistic about the economic effects of AI, noting in 2018: "The phrase 'artificial general intelligence' carries with it the implication that this sort of single robot is suddenly going to be smarter than you. I don't think it's going to be that. I think more and more of the routine things we do are going to be replaced by AI systems."[72] Hinton also previously argued that AGI won't make humans redundant: "[AI in the future is] going to know a lot about what you're probably going to want to do... But it's not going to replace you."[72]
In 2023, however, Hinton became "worried that AI technologies will in time upend the job market" and take away more than just "drudge work."[31]
Politics
Hinton moved from the U.S. to Canada in part due to disillusionment with
Ronald Reagan-era politics and disapproval of military funding of artificial intelligence.[35]
Personal life
Hinton's second wife, Rosalind Zalin, died of
ovarian cancer in 1994; his third wife, Jackie, died in September 2018, also of cancer.[73]
Hinton is the great-great-grandson of the mathematician and educator
Mary Everest Boole and her husband, the logician
George Boole,[74] whose work eventually became one of the foundations of modern computer science. Another great-great-grandfather of his was the surgeon and author
James Hinton,[75] who was the father of the mathematician
Charles Howard Hinton.
^ a b Zemel, Richard Stanley (1994). A minimum description length framework for unsupervised learning (PhD thesis). University of Toronto. OCLC 222081343. ProQuest 304161918.
^ a b Frey, Brendan John (1998). Bayesian networks for pattern classification, data compression, and channel coding (PhD thesis). University of Toronto. OCLC 46557340. ProQuest 304396112.
^ a b Neal, Radford (1995). Bayesian learning for neural networks (PhD thesis). University of Toronto. OCLC 46499792. ProQuest 304260778.
^ Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (3 December 2012). "ImageNet classification with deep convolutional neural networks". In F. Pereira; C. J. C. Burges; L. Bottou; K. Q. Weinberger (eds.). NIPS'12: Proceedings of the 25th International Conference on Neural Information Processing Systems. Vol. 1. Curran Associates. pp. 1097–1105. Archived from the original on 20 December 2019. Retrieved 13 March 2018.
^ Hinton, Geoffrey E. (6 January 2020). "Curriculum Vitae" (PDF). University of Toronto: Department of Computer Science. Archived (PDF) from the original on 23 July 2020. Retrieved 30 November 2016.
Rothman, Joshua, "Metamorphosis: The godfather of A.I. thinks it's actually intelligent – and that scares him", The New Yorker, 20 November 2023, pp. 29–39.