I finished my second read-through of On Intelligence by Jeff Hawkins several weeks ago. I got a lot out of this reading, and was even able to follow the details of Chapter 6, which eluded me the first time. Despite good intentions I neglected to write down my much-provoked thoughts until now. I have been spending most of that time indexing the new edition of Windows Internals by Mark Russinovich et al., which is interesting in an entirely different way: one sees the products of human intelligence, but it is obvious that there is no danger of a Windows operating system of the current style ever becoming intelligent or conscious.
In addition to following Jeff's suggestions about noticing how my own mind works, I have been thinking about these matters while watching my dog, Hugo. Let's say he represents mammals in general. He may not have the big old cortex that Jeff admires so much, but he seems to be constantly using his little one to make predictions. Hugo has to make a lot of decisions, and he often freezes in place while making them. Come when called? Maybe, maybe not. A treat in hand might just mean being captured and taken indoors, or left out of a car ride. To make such decisions, I believe, Hugo has to predict outcomes.
Like most dogs, Hugo likes to chase thrown toys. He has come to associate arm movements with probable outcomes. He knows which direction you are throwing, and begins his run in that direction without waiting for the toy to be released. He expects the toy to appear in front of him. If it does not, he looks back. Will I go ahead and throw the toy past him, or throw it in another direction?
Do this a few times, and he stops dashing as soon as my arm is moving. He waits to see if and where I actually throw the toy.
I also believe Hugo has a construct of the world very similar to our human construct. He navigates the real world with an ease that can only come from having an internal map of the world. He understands the three-dimensional nature of the world, and in particular that obstacles like a tree or a house can have space behind them.
All this means that if we want to build cortex-like machine designs, we can do a lot without having to recreate a human brain. A car as smart as Hugo could go anywhere it wanted on roads without smashing into other cars. This reminds me of science fiction stories where human brains are disembodied and plugged directly into space ships.
So maybe our first goal should be to create animal-brain equivalents and see what can be done with them.
I just happen to be reviewing the branch of mathematics that deals with changes of coordinates. I've always wanted to understand quantum physics and general relativity better, so I occasionally break open a math book, because at some point you must do the math to know what the smart guys are talking about. It is clear to me that human brains, and probably mammal brains too, are pretty good at changes of coordinates. The fact that we construct a mental map of the world and so easily map visual, audio, and tactile coordinates to it and back again is pretty remarkable.
I am again about to review how invariants are treated in tensor mathematics. Maybe that has nothing to do with the ability of the cortex to navigate the world, but it just might. Our brains certainly are good at creating invariant memories and comparing them to real world experiences.
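To make the connection between these two ideas concrete, here is a minimal sketch of the simplest possible case: a change of coordinates in two dimensions by rotation, and a quantity (Euclidean length) that stays invariant under it. The point values and angle are purely illustrative.

```python
import math

def rotate(point, theta):
    """Rotate a 2-D point by theta radians about the origin --
    a simple change of coordinates from one frame to another."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def length(point):
    """Euclidean length: an invariant -- the number a tensor
    treatment guarantees is the same in every rotated frame."""
    return math.hypot(*point)

# A landmark described in one frame...
landmark = (3.0, 4.0)
# ...and the same landmark described in a frame rotated 90 degrees.
rotated = rotate(landmark, math.pi / 2)

# The coordinates change completely, but the distance does not.
print(landmark, length(landmark))  # (3.0, 4.0) 5.0
print(rotated, length(rotated))    # roughly (-4.0, 3.0), still 5.0
```

Something like this, scaled up enormously, may be what a brain has to do every time it maps a sound or a sight onto its internal model of the world and back.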
Using the cortex to analyze the cortex: now that is a wonder.