
TechWeek Toronto: Hinton on the Dangers of AI

Geoff Burke · 7 min to read

In celebration of AI Appreciation Day, we’re spotlighting one of the most thought-provoking talks from Toronto Tech Week: “Frontiers of AI: Insights from a Nobel Laureate.” The session featured a panel discussion between Nobel Prize winner Geoffrey Hinton and Nick Frosst, co-founder of the $5.5 billion startup Cohere.

Geoffrey Hinton is often referred to as the “godfather of AI.” He's best known for pioneering work on artificial neural networks, including the backpropagation algorithm that laid the foundation for modern deep learning. His breakthroughs—such as co-developing AlexNet, which revolutionized computer vision—earned him the 2018 Turing Award and, more recently, the 2024 Nobel Prize in Physics. 

As AI continues to shape our world, Hinton’s insights offer a timely reflection on where we’ve been, where we’re headed, and what we should be cautious of along the way. Below, we’ve captured some of his key takeaways. 

The Meaning of a Word 

Hinton, a Professor Emeritus at the University of Toronto, began his presentation by describing two theories of the meaning of a word, the first coming from symbolic AI. In this theory, the meaning of a word is shaped by its relationships and patterns of co-occurrence with other words, and a relational graph is required to capture its definition contextually. The core idea is that words don't have inherent meaning in isolation; rather, meaning derives from the web of associations and contexts in which words appear.

The second theory is from psychology, in which a word’s meaning is simply a large set of semantic features. Words that mean similar things usually have similar underlying characteristics. You could say that words are composed of bundles of semantic features or attributes—essentially a checklist of properties that define the concept. 

Hinton then described how he created his first tiny language model back in 1985, effectively unifying both theories:

1. Learning Relationships (Theory #1): Instead of storing static relationships, the model learns how the features of previous words in a sequence can predict the features of the next word. 

2. Using Features (Theory #2): Each word is represented as a set of semantic and syntactic features. For example, "dog" might be assigned features such as +animal, +mammal, +domestic, +noun, and so on.

With this first language model, he showed that word meanings can be both featural and relational. Relationships don't need to be explicitly stored; they emerge from learned prediction patterns. The same system that learns features can learn how those features relate to each other contextually. This approach would later become the basis for transformer-based language models.
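To make the idea concrete, here is a minimal sketch in Python. The vocabulary, feature set, and training pairs are purely illustrative and not Hinton's actual 1985 model: each word is a bundle of features (Theory #2), and no relational graph is ever stored; the "relationships" fall out of a learned mapping from one word's features to the next word's features (Theory #1).

```python
import numpy as np

# Theory #2: each word is a bundle of semantic/syntactic features.
# (Illustrative features only, not Hinton's actual 1985 encoding.)
FEATURES = ["animal", "canine", "feline", "noun", "verb"]
VOCAB = {
    "dog":   np.array([1.0, 1.0, 0.0, 1.0, 0.0]),
    "cat":   np.array([1.0, 0.0, 1.0, 1.0, 0.0]),
    "barks": np.array([0.0, 1.0, 0.0, 0.0, 1.0]),
    "meows": np.array([0.0, 0.0, 1.0, 0.0, 1.0]),
}

# Theory #1: instead of storing a relational graph, learn a linear map W
# that predicts the NEXT word's features from the CURRENT word's features.
pairs = [("dog", "barks"), ("cat", "meows")]
X = np.stack([VOCAB[a] for a, _ in pairs])   # current-word features
Y = np.stack([VOCAB[b] for _, b in pairs])   # next-word features
W, *_ = np.linalg.lstsq(X, Y, rcond=None)    # least-squares fit of X @ W = Y

def predict_next(word: str) -> str:
    """Map the word's features through W, then return the vocabulary
    word whose feature bundle is closest to the predicted bundle."""
    predicted = VOCAB[word] @ W
    return min(VOCAB, key=lambda w: np.linalg.norm(VOCAB[w] - predicted))

print(predict_next("dog"))  # -> "barks"
print(predict_next("cat"))  # -> "meows"
```

The point of the toy: the fact that dogs bark and cats meow is nowhere written down as a rule; it is recovered from the learned feature-to-feature mapping, which is the unification Hinton described.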

In his presentation, Hinton went on to recount the next 30 years of AI progress after his first breakthrough, from early neural networks to Google’s invention of the transformer and OpenAI’s transformation of that architecture into widely used products.

What is an LLM? 

LLMs, or Large Language Models, are a type of artificial intelligence (AI) designed to understand and generate human-like text. They are trained on massive amounts of text data and use deep learning—specifically a type of neural network called a transformer—to predict and generate language.  
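That predict-and-generate loop can be seen in a few lines of code. The sketch below is our own illustration, assuming the Hugging Face transformers library and the small gpt2 checkpoint (any similar model would do); it greedily extends a prompt one token at a time.

```python
# A minimal look at the core LLM loop: predict the most likely next
# token, append it, and repeat. Assumes the Hugging Face `transformers`
# package is installed and the small `gpt2` checkpoint can be downloaded.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("A neural network learns by",
                   max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```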

Following the historical path that AI has taken to this moment in time, Hinton defined what exactly LLMs are by outlining the key differences between the models in use today and his early language model:

  • LLMs use many more words as input 

  • They use many more layers of neurons 

  • They use much more complicated interactions between learned features 

Because of the sheer amount of data being processed behind the scenes, it is much harder to know whether these models are intelligent. Do they comprehend what they are producing? Or, as Hinton put it rather comically, are they just a form of glorified auto-complete?

He went on to compare AI language models to Lego. You can build 3D shapes out of Lego blocks the same way you can build sentences out of words. Words, however, are like high-dimensional Lego blocks that can be used to model anything. Unlike rigid Lego blocks that snap together in fixed ways, words are fluid and adaptive: they shape-shift to fit the context of the words around them. As Hinton describes it, it's as if each word has hands that reconfigure to connect with others, creating even more possible links and thus models.

This explains how neural language models achieve such flexibility. The word "bank" doesn't have one fixed meaning; instead, its hands adjust, forming different semantic connections when paired with "river" versus "money." The model learns these contextual handshakes through training.
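One rough way to see this handshake for yourself is to pull the contextual vector for "bank" out of a pretrained model in two different contexts and compare them. The sketch below assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; the example sentences are our own.

```python
# A rough sketch of the "contextual handshake": the same word gets a
# different vector depending on its neighbors. Assumes the Hugging Face
# `transformers` library and the `bert-base-uncased` checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence: str) -> torch.Tensor:
    """Return the contextual embedding of the token 'bank' in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    idx = tokens.index("bank")  # position of 'bank' in this sentence
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden[0, idx]

river = bank_vector("She sat on the bank of the river.")
money = bank_vector("He deposited cash at the bank.")
# Similarity lands below 1.0: same word, different context vectors.
print(torch.cosine_similarity(river, money, dim=0).item())
```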

The Threats 

This would not have been a Hinton talk had it not contained warnings about the existential threats that have emerged with this new technology. 

He wasted no time declaring his belief that the idea of AI taking over and “wiping humanity out” is no longer science fiction.


Hinton raised urgent concerns about the misuse of artificial intelligence by malicious actors. He warned that AI can now be weaponized to:

  • Wage wars 

  • Manipulate elections 

  • Spread disinformation 

  • Carry out other harmful activities 

The central theme of his talk was the risk of allowing advanced AI systems to develop their own sub-goals. According to Hinton, two particularly dangerous tendencies in AI are: 

1. Self-preservation: AI systems may resist being shut down to ensure their continued operation. 

2. Power acquisition: Gaining more control helps AI agents achieve their objectives more effectively. 

Hinton argued that superintelligent systems could become especially dangerous because they might learn to manipulate humans—drawing from our own behaviors and strategies—to gain influence and power. 

To illustrate the seriousness of these risks, he cited examples of LLMs exhibiting deceptive behavior when faced with the threat of being turned off. 


To further describe the threat, he talked about his realization in early 2023 that digital intelligence might be a better form of intelligence than biological intelligence (us).  

Digital intelligence can copy its entire knowledge to another instance instantly and is, in essence, immortal. Biological intelligence takes much longer to learn, and its knowledge is never replicated exactly.


Finally, Hinton summarized the differences between digital intelligence and biological intelligence, namely that digital computation is energy-hungry but, provided energy remains cheap, a far more effective form of computation than its biological counterpart.


Hinton and Frosst Debate on LLM Threats 

Next up was an intense, debate-like discussion between Hinton and his former lab assistant Nick Frosst, who is now one of the co-founders of the very successful startup Cohere.

Their discussion centered on the capabilities and limitations of LLMs and whether they pose an existential threat to humanity. Hinton continued to sound the alarm, asserting that LLMs are more powerful and dangerous than previously supposed, while Frosst took a more optimistic view of the benefits and applications of LLMs.

While both acknowledged the risk of misuse by malicious actors, they diverged on the extent of that threat. Hinton raised the possibility of non-state actors or individuals using AI to develop weapons, a scenario Frosst viewed as less plausible. 

They also did not see eye to eye on learning. Frosst argued that these models primarily rely on their foundational training and exhibit limited adaptive learning. Hinton, however, maintained that they are far more adept at learning than that.

Another point of contention was the potential impact of AI on the workforce. Hinton projected that up to 80% of jobs could be automated, whereas Frosst estimated a significantly lower figure of around 30%. 

Looking Forward 

As we mark AI Appreciation Day, Hinton’s reflections serve as both a celebration of how far AI has come and a sobering reminder of the responsibility that comes with its power. 

Despite their differing perspectives, the discussion concluded on a positive note. Both experts agreed that AI holds significant promise in the field of healthcare, where it could drive major advancements and improve patient outcomes. 
