Monday, May 10, 2010

New Algorithms from Numenta

My study of Machine Understanding was on pause for a couple of weeks while I compiled an index for an 802.11n networking book. On May 5 I received a Numenta Newsletter, the key point of which is that Jeff Hawkins and crew have been working on a better algorithm for their HTM systems. Sadly, I still have not gone into the details of the old algorithm!

I'll quote the key passage from Jeff:

"Last fall I took a fresh look at the problems we faced. I started by
returning to biology and asking what the anatomy of the neocortex
suggests about how the brain solves these problems. Over the course
of three weeks we sketched out a new set of node learning algorithms
that are much more biologically grounded than our previous algorithms
and have the promise of dramatically improving the robustness and
performance of our HTM networks. We have been implementing these new
algorithms for the past six months and they continue to look good."

Sure. Even my own limited reading of mostly-outdated neurology texts seemed to indicate that the early versions of HTM are simplistic (compared to systems of human brain neurons). The new version, styled FDR (Fixed-sparsity Distributed Representation), is somewhat more complicated, but Jeff believes it is more capable. In particular, it deals better with noise and with variable-length sequences.
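I have not seen Numenta's new algorithms in detail, so the following is only a toy sketch of the general idea behind sparse distributed representations and why they tolerate noise: when only a small fraction of bits are active in a large binary vector, two unrelated patterns almost never share many active bits, so even a corrupted copy of a pattern still overlaps its original far more than chance. The sizes and functions below are my own illustrative choices, not anything taken from Numenta.

import random

def random_sdr(size=2048, active_bits=40):
    """A sparse distributed representation: ~2% of bits active,
    stored as a set of active indices (illustrative sizes only)."""
    return set(random.sample(range(size), active_bits))

def add_noise(sdr, size=2048, flip=10):
    """Corrupt an SDR by moving `flip` of its active bits to random positions."""
    noisy = set(sdr)
    noisy -= set(random.sample(sorted(noisy), flip))
    while len(noisy) < len(sdr):
        noisy.add(random.randrange(size))
    return noisy

def overlap(a, b):
    """Similarity is simply the number of shared active bits."""
    return len(a & b)

if __name__ == "__main__":
    original = random_sdr()
    noisy = add_noise(original)
    unrelated = random_sdr()
    print("overlap with noisy copy:   ", overlap(original, noisy))      # stays high (~30 of 40)
    print("overlap with unrelated SDR:", overlap(original, unrelated))  # near zero

Running this a few times shows the point: the noisy copy still shares most of its active bits with the original, while an unrelated pattern shares almost none, which is the kind of robustness Jeff is claiming for the new approach.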

On the other hand, we are certainly hoping to get machines to actually understand the world without having to duplicate (in software) a human brain molecule by molecule.

Jeff gave a lecture on the new algorithms at the University of British Columbia; the video will have to do for the rest of us until details are posted at the Numenta web site:

http://www.youtube.com/watch?v=TDzr0_fbnVk

See also my Machine Understanding main web page.

In the meantime, in addition to doing my own thinking & tinkering, I intend to resume my program of working through the already-posted examples for the earlier version of HTM.
