Tuesday, January 12, 2010

Bayesian Wasteland?

Despite being overloaded with other concerns, I continue to slog through two books that may (or may not) be helpful with the Machine Understanding project. On the information technology front we have Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference by Judea Pearl. I'm at page 116 as of last night. On the biological front I continue From Neuron to Brain by Stephen W. Kuffler; I'm on page 386.

Regarding Probabilistic Reasoning, so far I have seen a lot of interesting work on the problem of combining probability calculations with logic. I just finished the section on Markov Networks and am about to read up on Bayesian Networks. My problem so far is that I don't see any advantage to marrying anything I've seen of probabilistic reasoning to Jeff Hawkins's theory of predictive memory, despite having read "Towards a Mathematical Theory of Cortical Micro-circuits," as reported in previous entries. Then again, sometimes I have trouble taking up novel ideas. But my impression so far is that neural networks do not operate on a probability basis. The closest I can get, so far, to that kind of model is a signal-mixing basis, in which analog functions might represent Pearl's probabilities. Nor do I see how the probability network models can cope with invariants; the word invariant is not even found in the index to Probabilistic Reasoning. I have higher hopes for a tensor model, even though my own work on that is very preliminary. [I am reminded of the two seemingly totally different mathematical methods used in early quantum mechanics, which were later proven to be equivalent.]
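For readers who haven't gotten to that part of Pearl's book, the flavor of a Bayesian network can be shown with the smallest possible example: two nodes, A causing B, with inference running both "forward" (prediction) and "backward" (diagnosis via Bayes' rule). The probability values below are made up purely for illustration.

```python
# Minimal two-node Bayesian network A -> B, in the spirit of Pearl's
# book. All probability values here are invented for illustration.

p_a = 0.3                      # P(A = true)
p_b_given = {True: 0.9,        # P(B = true | A = true)
             False: 0.2}       # P(B = true | A = false)

# Predictive direction: marginal P(B = true), summing over states of A.
p_b = p_a * p_b_given[True] + (1 - p_a) * p_b_given[False]

# Diagnostic direction via Bayes' rule: P(A = true | B = true).
p_a_given_b = p_a * p_b_given[True] / p_b

print(p_b)  # 0.41
```

The appeal of the network formalism is that this two-way traffic of belief generalizes to large graphs; whether anything like it maps onto cortical circuits is exactly the question I am stuck on.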

In From Neuron, I have been reading astonishing details about how synapses and single nerve cells work, including, in mind-numbing detail, how the experiments were conducted. I am just getting to how neurons and sets of neurons have been shown to operate, the first example being neurons that sense the stretching of muscles. Again I am pointed to tensors, which can be used to represent how multiple muscles, each corresponding to a degree of freedom of motion, can lead to a coherent knowledge of where a body part is in three-dimensional, Euclidean-modeled space.
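The simplest version of that tensor idea is just a fixed linear map from stretch-receptor readings to a position estimate. Here is a toy sketch; the matrix entries and receptor values are arbitrary illustrative numbers, not physiology.

```python
# Toy sketch of the tensor idea: a fixed linear transformation taking
# muscle-stretch receptor readings (one per degree of freedom) to a
# position estimate in 3-D space. All numbers are arbitrary.

def apply(matrix, vector):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

# Readings from four hypothetical stretch receptors.
stretch = [1.0, 0.5, 0.0, 2.0]

# 3x4 transformation: 4 muscle degrees of freedom -> (x, y, z).
M = [[0.5, 0.0, 0.0, 0.25],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.5]]

position = apply(M, stretch)
print(position)  # [1.0, 0.5, 1.0]
```

A real limb would of course need a nonlinear, state-dependent transformation, but the attraction of the tensor view is that even that case is a coordinate change between a muscle-centered frame and a Euclidean one.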

More oddly, Neuron has basically nothing about Hebbian learning. True, the book is dated 1984, but did no one even try to find a physical basis for Hebbian learning as of that date? If you know of a definitive paper that appears to prove a biochemical mechanism for Hebbian learning, let me and my readers know.
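For reference, the rule itself is almost trivially simple to state, which makes its missing biochemical basis all the more striking. In its barest form, the weight change at a synapse is proportional to the product of presynaptic and postsynaptic activity; the learning rate and activity values below are arbitrary.

```python
# The classic Hebbian rule in its barest form -- "cells that fire
# together wire together." Learning rate and activities are arbitrary.

def hebbian_update(w, pre, post, eta=0.5):
    """Return the new weight after one Hebbian step: dw = eta * pre * post."""
    return w + eta * pre * post

w = 0.0
w = hebbian_update(w, pre=1.0, post=1.0)  # correlated firing strengthens
w = hebbian_update(w, pre=0.0, post=1.0)  # presynaptic silence: no change
print(w)  # 0.5
```

Note that this bare rule only ever strengthens synapses; the biologically interesting questions are what bounds or decays the weights, and what molecular machinery at the synapse could implement the multiplication.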

I had intended to explain what I was reading, in suitable chunks, in this blog, but at this point it is easier to just keep reading rather than write about details I am not even sure are important yet. I keep finding I have to go back to basics. Today, worrying about tensors, muscles, feedback, and a neuron-level learning model, I am revisiting a two-neuron learning model that, oddly, I first worked on back in the 1980s. If anything comes of it, I'll let you know here.