Monday, December 26, 2011

Stanford AI Class wrap up

I managed to muddle my way through the free Internet version of the Stanford Introduction to Artificial Intelligence (AI) course. "Congratulations! You have successfully completed the Advanced Track of Introduction to Artificial Intelligence ..." says my Statement of Accomplishment.

Before putting on my analyst mask, I would like to thank Stanford University, and particularly the instructors, Sebastian Thrun and Peter Norvig, for conducting the course, and especially for making it free of charge. I am hoping they will leave the instruction videos up for a while; there are some I would like to go over again.

I got a good review of Bayes Rule, basic probability, and some simple machine learning algorithms. I had not worked on Planning algorithms before, so that was of some interest. Markov Models had always been a bit vague to me, so that section helped me nail down the idea. Games and game theory seem to have made little progress since the 1940s, but I guess they have to be covered, and I did get clear on how Minimax works (sketched below). Computer vision seemed kind of weak, but then you can't assume students know basic optics, and at least we learned how to recognize simple features. Robotics was a prior interest of mine, and I did not know about Thrun's obvious favorite, Particle Filters, which are a useful paradigm for spatial positioning (aka localization).
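
To check that the idea stuck, here is a minimal Minimax sketch in Python. The toy game tree and its payoff values are made up for illustration; the course's own examples were different.

    # A minimal Minimax sketch on a hand-built game tree.
    # Leaves are numbers (payoffs to the maximizing player);
    # internal nodes are lists of child subtrees.
    # The tree and its values are made up for illustration.

    def minimax(node, maximizing):
        if not isinstance(node, list):   # leaf: return its payoff
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # The maximizer moves first; the minimizer replies.
    tree = [[3, 12], [2, 8], [14, 1]]
    print(minimax(tree, True))   # -> 3: the branch whose worst case is best

The maximizer picks the branch whose worst-case outcome is best; the whole idea fits in one line of recursion.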

The last two units were on Natural Language Processing, and that is a good place to start a critique (keeping in mind that all this material was introductory). Apparently you can do a lot of tricks processing language, both in the form of sounds/speech and written text, without the algorithms understanding anything. They showed a way to do pretty decent machine translation between human languages, but its usefulness depends on humans being able to understand at least one of the languages.
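
To illustrate, here is a minimal sketch of one such trick that involves no understanding at all: guessing a text's language from raw letter counts, using a Laplace-smoothed Naive Bayes model. The tiny training snippets are made up for illustration; a real system would train on far more data.

    # Guess a text's language from letter counts alone, using a
    # Laplace-smoothed Naive Bayes over characters. The training
    # snippets below are made up for illustration.
    from collections import Counter
    import math

    training = {
        "english": "the quick brown fox jumps over the lazy dog",
        "spanish": "el veloz zorro marron salta sobre el perro perezoso",
    }
    counts = {lang: Counter(text.replace(" ", ""))
              for lang, text in training.items()}

    def score(text, lang):
        c, total = counts[lang], sum(counts[lang].values())
        # log P(text | lang), with add-one smoothing over a 26-letter alphabet
        return sum(math.log((c[ch] + 1) / (total + 26))
                   for ch in text if ch.isalpha())

    def guess(text):
        return max(counts, key=lambda lang: score(text.lower(), lang))

    print(guess("perro grande"))   # -> spanish
    print(guess("big dog"))        # -> english

Nothing in there knows what a dog is; it just counts letters, and it still usually guesses right. That is the sense in which these tricks work without understanding.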

Plenty of humans do plenty of things, including paid work, without understanding what they are doing. I suppose that could be called a form of artificial intelligence. Pay them and feed them and they'll keep up those activities. But when people do things without understanding (I am pretty sure some of my math teachers fell into that category), danger lurks.

The Google Car that drives itself around San Francisco (just like Science Fiction!) demonstrates mainly that driving a Porsche proves little about your intelligence. Robot auto-driving was a difficult problem for human engineers to solve, and they were able to solve it because they understood a whole lotta stuff. Particle Filters, which combine probability techniques with sensory feedback to map and navigate an environment, are a cool part of the solution (a one-dimensional toy version appears below). If I say "I understand now: I have been walking through a structure, and to get to the kitchen I just turn left at the Picasso reproduction," I may be using the word "understand" in a way that compares well with what we call the AI capabilities of the Google Car. Still, I don't think the Car meets my criteria for machine understanding. The Car might even translate from French to English for its human cargo, but I would still classify it as dumb as a brick.
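
For the curious, here is a minimal sketch of the particle-filter idea in one dimension. The corridor, the door positions, and the noise levels are all made up for illustration; the real Car does this in a much richer space with real sensors.

    # A minimal particle-filter sketch in one dimension. A robot moves
    # along a circular corridor of 20 cells, some of which have doors.
    # Its only sensor says "door" or "no door", and is sometimes wrong.
    # The world and the noise levels are made up for illustration.
    import random

    N_CELLS = 20
    DOORS = {2, 7, 13}        # cells that contain a door
    SENSOR_RIGHT = 0.9        # probability the door sensor is correct

    def sense(cell):
        truth = cell in DOORS
        return truth if random.random() < SENSOR_RIGHT else not truth

    def particle_filter(true_start, steps, n_particles=1000):
        true_cell = true_start
        particles = [random.randrange(N_CELLS) for _ in range(n_particles)]
        for _ in range(steps):
            # Move: the robot advances one cell; each particle does the
            # same, with a little motion noise (sometimes 0 or 2 cells).
            true_cell = (true_cell + 1) % N_CELLS
            particles = [(p + random.choice([0, 1, 1, 1, 2])) % N_CELLS
                         for p in particles]
            # Sense, then weight each particle by how well it explains
            # the reading.
            z = sense(true_cell)
            weights = [SENSOR_RIGHT if (p in DOORS) == z else 1 - SENSOR_RIGHT
                       for p in particles]
            # Resample: particles that explain the sensor survive and multiply.
            particles = random.choices(particles, weights=weights, k=n_particles)
        return true_cell, particles

    random.seed(1)
    true_cell, particles = particle_filter(true_start=5, steps=25)
    estimate = max(set(particles), key=particles.count)
    # The estimate usually lands on or near the true cell.
    print("true cell:", true_cell, " most common particle:", estimate)

Particles that explain the sensor readings survive resampling and multiply, so the cloud collapses onto the places the robot could actually be. That is the localization trick in a nutshell.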

Hurray! Despite my advancing age, lack of a Ph.D., less-than-brilliant business model, and tendency to be interested in too many different things to be successful in this age of specialization, I may still have a shot at the big problem: no one seems to have gotten to the essence of how humans understand, and are aware of, the world and themselves.

If the human brain, or its neural network subcomponents, did something like Particle Filters, how would that work? I know from much practice that bumping around in the dark can lead to orientation or disorientation, depending on circumstances. On the other hand, the random micro-fine movements of the eye might be a physical way of generating randomness to test micro-hypotheses that we are not normally consciously aware of.

We sometimes say (hear Wittgenstein in my voice) that someone has a shallow understanding of something. "Smart enough to add columns of numbers, not smart enough to do accounting," or "Good at game basics, but unable to make strategic decisions." Let me put it another way: in some ways the course itself was an intelligence test. I imagine it would be very rough for anyone without a background in algebra and basic probability theory. The students in the class already knew a lot, and had to learn difficult things.

I want to know how our bodies, our brains, learn difficult things. The only way I will ever be sure I understand how that is done is to build a machine that can do the same thing.