My Notes About On Intelligence Book By Jeff Hawkins. Part 1

Fri Sep 04 2020 15:49:05 GMT+0200 (Romance Daylight Time), David Ibáñez



Amazon page of the book

"On Intelligence" by Jeff Hawkins (@JeffCHawkins on Twitter) and Sandra Blakeslee (@bysblakeslee) is a very inspiring book for me.

This book encouraged me to start my new path into neuroscience. As Jeff Hawkins writes in the prologue, I'm one of those readers!

"I hope that some readers will be inspired to focus their careers on building intelligent machines based on the principles outlined in these pages."
Hawkins, Jeff. On Intelligence (p. 4). Henry Holt and Co.. Kindle Edition.

With this admittedly "naive" goal in mind, I started digging into neuroscience more seriously, learning on my own through online courses, books, and papers.

What I will post here are my notes and highlights from the chapters of the book, occasionally mixed with some of my own naive thoughts about the content.

Let's start then!


In the prologue Jeff Hawkins explores the then-current (2004) state of the machine learning field and, in general, the question of whether a computer can be intelligent.

On the question of whether computers can be intelligent, he argues that brains and computers do fundamentally different things, and that without understanding what the brain does it is not possible to design a thinking machine.

The study of how the brain works is probably so hard partly because of incorrect assumptions, for instance the belief that intelligence is defined by intelligent behavior.

What is intelligence?
"It is the ability to make predictions about the future that is the crux of intelligence"
Hawkins, Jeff. On Intelligence (p. 6). Henry Holt and Co.. Kindle Edition.

Talking about how the brain works, the author thinks that "the seat of intelligence is the neocortex" and that it is "surprisingly regular in its structural details." The different parts of the neocortex work on the same principles, no matter whether a region is related to vision, touch, or hearing. "The key to understanding the neocortex is understanding these common principles and, in particular, its hierarchical structure"

The goal of the book is to explain his theory of intelligence and how the brain works, which can help explain how we are creative, how we learn, why we feel conscious, and why older people have more trouble remembering new things.

And one of the fundamental questions of the book is:

Can we build intelligent machines?
Yes. We can and we will.
Hawkins, Jeff. On Intelligence (p. 7). Henry Holt and Co.. Kindle Edition.

Chapter 1. Artificial Intelligence

Possible misconceptions

  • Neurons work as AND and OR gates to produce logic.
  • Understanding cannot be measured by external behavior. It is instead an internal measure of how the brain remembers things and uses its memories to make predictions.

Chapter 2. Neural Networks

Missing pieces in Neural Networks (2004)

  1. Inclusion of time in brain function: Real brains process rapidly changing streams of information. There's nothing static about the flow of information into and out of the brain.

  2. Importance of feedback: The brain is saturated with feedback connections. For example, in the circuit between the neocortex and the thalamus, connections going backward exceed those going forward by a factor of ten.

  3. A model of the brain should account for the physical architecture of the brain. The neocortex is organized as a repeating hierarchy.

  4. Neural networks have backpropagation, but that is not feedback: it only operates during training. There's no feedback between outputs and inputs; the relation between the two is totally static. There's no history or record in the network of what happened even a short time earlier.

  5. There are some models that don't focus on behavior. They are called Auto-Associative Memories (AAM). These models take feedback into account: they consist of simple neurons connected to each other that fire when they reach a certain threshold. An AAM feeds the output of each neuron back into its input. When a pattern of activity is imposed on the artificial neurons, they form a memory of this pattern. AAMs present the following properties:

    • The most important property is that you don’t have to have the entire pattern you want to retrieve in order to retrieve it.
    • Auto-associative memory can be designed to store sequences of patterns, or temporal patterns.
    • These models take into account the potential importance of feedback and time-changing inputs, which are largely ignored in neural networks.
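The book describes auto-associative memories only in prose, but the classic example of the idea is a Hopfield-style network, and a tiny sketch makes the "retrieve a whole pattern from a partial cue" property concrete. The class name and structure below are my own illustration, not anything from the book:

```python
import numpy as np

class AutoAssociativeMemory:
    """Minimal Hopfield-style auto-associative memory.

    Patterns are vectors of +1/-1. Each neuron's output is fed back
    as input to every other neuron, as described in the chapter.
    """

    def __init__(self, size):
        self.weights = np.zeros((size, size))

    def store(self, pattern):
        p = np.asarray(pattern)
        # Hebbian rule: strengthen connections between co-active neurons
        self.weights += np.outer(p, p)
        np.fill_diagonal(self.weights, 0)  # no self-connections

    def recall(self, cue, steps=10):
        p = np.asarray(cue, dtype=float)
        for _ in range(steps):
            # each neuron fires (+1) when its weighted input reaches threshold
            p = np.where(self.weights @ p >= 0, 1, -1)
        return p

mem = AutoAssociativeMemory(8)
stored = np.array([1, -1, 1, -1, 1, -1, 1, -1])
mem.store(stored)

# Key property: a corrupted partial cue still retrieves the full pattern
noisy = stored.copy()
noisy[0] = -1  # flip one bit
recalled = mem.recall(noisy)
print(recalled)  # settles back onto the stored pattern
```

The feedback loop is the `recall` iteration: outputs are repeatedly re-fed as inputs until the activity settles, which is exactly the behavior the chapter credits AAMs with (and which a purely feed-forward network lacks).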

An artificial system that uses the same functional architecture as an intelligent, living brain should likewise be intelligent. As a functionalist, the author believes there is nothing inherently special or magical about the brain that allows it to be intelligent.

You can find part 2 here.

Sources and other resources

Amazon page of the book

Jeff Hawkins @JeffCHawkins on Twitter

Sandra Blakeslee @bysblakeslee on Twitter

Numenta Homepage

About me

I'm David Ibáñez from Barcelona. Feel free to get in touch here, or:

  • You can get in touch with me on Twitter

