Aping as the Basis of Intelligence (cont.)
Specifications for the Language Machine
Typically, in artificial-intelligence systems designed for language there is a front-end feature-detection stage. The slight fluctuations in air pressure we call sound are analyzed for features. In human language these features are often quite complex, but by now they are well studied, and detectors have been devised for common syllables and voice ranges.
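A minimal sketch of what a front-end feature detector might compute, assuming only NumPy: the spectrum of a short audio frame is split into coarse frequency bands, and the energy in each band serves as a crude feature vector. (The function name and parameters are illustrative, not from the text.)

```python
import numpy as np

def band_energies(samples, n_bands=8):
    """Split one audio frame's power spectrum into coarse frequency
    bands and return the energy in each band -- a crude feature vector."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.sum() for b in bands])

# A synthetic 440 Hz tone stands in for a fragment of voiced speech:
# one 25 ms frame at a 16 kHz sampling rate.
t = np.arange(0, 0.025, 1 / 16000)
frame = np.sin(2 * np.pi * 440 * t)
features = band_energies(frame)
print(features.argmax())  # 0 -- the lowest band dominates for a 440 Hz tone
```

A real speech front end would use finer-grained features (for example mel-scaled filter banks), but the principle is the same: reduce raw pressure fluctuations to a small vector a detector can compare against.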
In a developing human there are likely some very generalized feature detectors, but they are also very flexible. The same appears true in mammals and birds that have shown they can learn some human words. Thus a human baby can learn a click-based language of southern Africa, the simple syllables of modern English, or a tonal Asian language. In effect, feature detectors evolve based on exposure to language.
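One toy model of detectors shaped by exposure rather than fixed in advance is simple clustering: detector "prototypes" drift toward whatever feature clusters are actually present in the input. The sketch below uses naive k-means over synthetic feature vectors; the names and data are illustrative assumptions, not anything from the text.

```python
import numpy as np

def adapt_detectors(frames, n_detectors=3, iterations=20):
    """Naive k-means: each detector prototype moves toward the mean of
    the input frames nearest to it, so repeated exposure reshapes the
    detectors to fit the language actually being heard."""
    rng = np.random.default_rng(0)
    prototypes = frames[rng.choice(len(frames), n_detectors, replace=False)]
    for _ in range(iterations):
        # assign each frame to its nearest prototype
        dists = np.linalg.norm(frames[:, None] - prototypes[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each prototype toward the mean of its assigned frames
        for k in range(n_detectors):
            if (labels == k).any():
                prototypes[k] = frames[labels == k].mean(axis=0)
    return prototypes

# Exposure: synthetic 2-D feature vectors drawn from three sound "types".
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(c, 0.1, (50, 2))
                  for c in ((0, 0), (5, 5), (10, 0))])
prototypes = adapt_detectors(data)
```

The same exposure-driven drift would apply whether the input clusters correspond to clicks, English syllables, or tones.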
Voicing is also complex, controlled by a wide range of muscles. It too is learned, and requires considerable practice to perfect. Aping the voices of other humans is the primary way of learning to speak so as to be understood.
Four major input/output streams can be defined for a human-like language machine. There is audio input from the ears. There is output to the variety of muscles that produce sounds and speech. There are other inputs ultimately external to the body, such as touch, needed to provide positive and negative behavioral reinforcement. And there are internal desire (or rejection) inputs, notably hunger and other discomforts or wants. There is also a need for decision making: given all the other inputs, deciding what sounds to make and when. This decision making could be incorporated into the language machine or be external to it; in humans it is probably some combination of both.
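The four streams and the decision step above can be sketched as a skeleton interface. All names here are illustrative assumptions, and the decision rule is a deliberate placeholder; the point is only the shape of the machine, not its behavior.

```python
from dataclasses import dataclass, field

@dataclass
class LanguageMachine:
    # The four major input/output streams identified above.
    audio_in: list = field(default_factory=list)          # sound from the ears
    motor_out: list = field(default_factory=list)         # commands to speech muscles
    reinforcement_in: list = field(default_factory=list)  # e.g. touch: reward/punishment
    drive_in: list = field(default_factory=list)          # internal wants: hunger, discomfort

    def decide(self):
        """Decision making: given all inputs, choose what sounds to make
        and when. Placeholder rule: vocalize whenever a drive is active."""
        if self.drive_in:
            self.motor_out.append("vocalize")
        return self.motor_out

machine = LanguageMachine()
machine.audio_in.append("da-da")   # a heard sound
machine.drive_in.append("hunger")  # an internal want
print(machine.decide())            # ['vocalize']
```

Whether `decide` lives inside the machine, as here, or is called by an external controller is exactly the design choice the paragraph leaves open.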