Jeff Hawkins Explores a New Theory of Cortical Function

My Brain Science podcast was partially inspired by Jeff Hawkins' bestseller On Intelligence, so I am very pleased to post a new interview (BS 139) in which we discuss the exciting work he has been doing at Numenta. Hawkins is committed to understanding how the neocortex generates intelligence, and he feels that his latest paper marks an important milestone in that work.

We started our conversation by discussing some of the work Hawkins published in 2016, including two key papers. One presents a new model of the neuron that incorporates active dendrites and the idea that not all synapses can trigger an action potential.

The Numenta model proposes that more distally located synapses help prepare the neuron to fire, providing a prediction signal that incorporates context. Another important concept is sparseness, which Hawkins explains very clearly in this interview.
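Sparseness can be made concrete with a small sketch. In a sparse distributed representation (SDR), only a small fraction of bits are active at any time, and two representations are compared by counting their shared active bits. The sizes and function names below are illustrative assumptions, not Numenta's actual code; HTM models typically use roughly 2% sparsity, which is what this toy uses.

```python
import random

# Illustrative sparse distributed representation (SDR):
# N_BITS total bits, of which only N_ACTIVE are on (~2% sparsity).
N_BITS = 2048
N_ACTIVE = 40

def random_sdr(rng):
    """Return the set of active bit indices for a random SDR."""
    return set(rng.sample(range(N_BITS), N_ACTIVE))

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return len(a & b)

rng = random.Random(0)
a = random_sdr(rng)
b = random_sdr(rng)

# Two unrelated random SDRs share almost no active bits, so
# accidental matches are vanishingly unlikely -- the robustness
# property Hawkins highlights.
print(overlap(a, a))  # 40 (an SDR matches itself perfectly)
print(overlap(a, b))  # near 0 for unrelated SDRs
```

Because matches are judged by overlap rather than exact equality, an SDR can lose or gain a few active bits (noise, subsampling) and still be recognized, which is part of why sparse codes are so fault tolerant.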

Finally, we discuss the latest paper (and model), which proposes that a key feature of cortical function is a second input to the cortical column (besides the primary sensory signal) that represents the location of the signal relative to the object itself.

Hawkins explained, "as soon as you add this location signal, then all kinds of things make sense, and all kinds of mysteries get resolved, and all of a sudden we can understand what all these layers are doing, and it tells us that the cortex, even a single column of the cortex, is much more powerful than people thought.  It allows even a small section of the cortex to model complete objects and know the entire structure of objects."

According to Hawkins, this "totally flips around the way we think about the neocortex."
"Today's theory, what most people think about it, is like each region in the cortex is extracting some small feature, and it sends it to the next region, which extracts bigger features, which sends it to the next region, which extracts bigger features. And somewhere up in the hierarchy of neocortical regions, you recognize the coffee cup" (for example).

But once you realize that a lot of neural machinery is dedicated to this location signal, Hawkins says, "you start thinking about the brain completely differently, you start thinking about regions differently, you start thinking about columns differently, you start thinking about the hierarchy differently, and it kind of comes together. It's not what we thought."
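The location idea can be sketched in a few lines: if a column learns an object as a set of (location, feature) pairs, then each new sensation eliminates objects inconsistent with what was felt at that location. The object names, features, and grid coordinates below are invented for illustration; this is a toy of the paper's concept, not Numenta's implementation.

```python
# Toy sketch: an object is learned as a set of (location, feature)
# pairs, where locations are relative to the object itself.
objects = {
    "coffee_cup": {((0, 0), "curved_surface"), ((0, 1), "rim"),
                   ((1, 0), "handle")},
    "bowl":       {((0, 0), "curved_surface"), ((0, 1), "rim"),
                   ((1, 0), "curved_surface")},
}

def infer(sensations):
    """Keep only the objects consistent with every (location, feature)
    sensation so far -- each touch narrows the candidate set."""
    candidates = set(objects)
    for sensation in sensations:
        candidates = {name for name in candidates
                      if sensation in objects[name]}
    return candidates

# One ambiguous touch leaves both objects possible...
print(sorted(infer([((0, 0), "curved_surface")])))
# ...but feeling a handle at a known location disambiguates.
print(sorted(infer([((0, 0), "curved_surface"), ((1, 0), "handle")])))
```

This is why, in the model, even a single column can recognize complete objects: the location signal lets a sequence of local sensations accumulate into evidence about one whole structure.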

An important feature of this model is that it aims to represent the actual structure and function of the mammalian neocortex. This means the model generates testable predictions. It also makes the model very different from the oversimplified neuron models used in artificial intelligence.

How to get this episode:

  • FREE: audio mp3 (click to stream, right click to download)
  • Episode Transcript [Buy for $2]
  • Premium Subscribers have unlimited access to ALL old episodes and transcripts, as well as extra content for selected episodes. BS 139 Premium Transcript (for subscribers)
  • New episodes of Brain Science are ALWAYS FREE. The most recent 50 episodes are also free. See the individual show notes for links to the audio files.

References and links

  • On Intelligence by Jeff Hawkins (Discussed in BS 2 and BSP 38)
  • Video demonstration: Why Does the Neocortex Have Layers and Columns?
  • Hawkins, J., Ahmad, S., & Cui, Y. "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World." Frontiers in Neural Circuits, October 25, 2017.
  • Hawkins, J., & Ahmad, S. "Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex." Frontiers in Neural Circuits, March 30, 2016.
  • Ahmad, S., & Hawkins, J. "How Do Neurons Operate on Sparse Distributed Representations? A Mathematical Theory of Sparsity, Neurons and Active Dendrites." May 2016. arXiv:1601.00720 [q-bio.NC].
  • All papers are available at
  • See Episode Transcript for additional links and references.