Wednesday, July 11, 2012

Constructing the ApeWorm World

In this draft I will try to avoid committing to any particular programming language to illustrate the mechanisms of action; where something needs to be concrete, I will use pseudocode or a short illustrative sketch. When the analysis is finished and it is time to run simulations, I will try to remember to insert links to the code samples.

The ApeWorm world (or dual worlds) does not need to be defined explicitly. The upper and lower limits for the ApeWorm coordinates will suffice. As shown in Figure 1, the allowable coordinates run from 0 to 4 on both axes.

Figure 1, ApeWorm
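To make this concrete, here is a rough Python sketch (Python standing in for pseudocode here; the names are mine and nothing in it is fixed by the design). The world is nothing more than a pair of limits and a test that a point lies inside them:

    # World limits: in this first build the allowable coordinates run
    # from 0 to 4 on both the x1 and x2 axes.
    X_MIN, X_MAX = 0, 4

    def in_world(point):
        """Return True if an (x1, x2) grid point lies inside the ApeWorm world."""
        x1, x2 = point
        return X_MIN <= x1 <= X_MAX and X_MIN <= x2 <= X_MAX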

ApeWorms themselves have four segments but require five coordinate points to describe them fully. Each point has an x1 component and an x2 component. The points will be called A0(x1,x2), A1(x1,x2), A2(x1,x2), A3(x1,x2), and A4(x1,x2).
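In the same sketch style, this god's-eye description is just a tuple of five grid points (the particular coordinates below are illustrative, not read off Figure 1):

    # One ApeWorm written as five (x1, x2) points A0..A4.
    ape = ((1, 0), (1, 1), (2, 1), (2, 0), (3, 0))
    A0, A1, A2, A3, A4 = ape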

The data we gods will use to track ApeWorms are not the same data that the ApeWorms themselves keep track of. An ApeWorm will know where its head, or segment one, is on the world grid; in other words, it will know A0(x1,x2) and A1(x1,x2). Each intersegment node (joint) will be able to convey to the control system one of three states: left, center, or right (L, C, R). Handedness will be determined looking from the higher-numbered segment: for the 1 joint, we look toward the 1 segment from the 2 segment. Thus in the curled ApeWorm example in Figure 1 the joints are all in the right, or R, configuration.
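One way to compute a joint's L/C/R state from the god's-eye coordinates is the sign of a two-dimensional cross product, looking from the higher-numbered segment toward the lower as just described. This is only a sketch; which sign counts as left is my own convention, not something the design fixes.

    def joint_state(points, i):
        """Return 'L', 'C', or 'R' for joint i (1 <= i <= 3) of an ApeWorm.

        Joint i sits at points[i]; segment i runs from points[i-1] to points[i],
        and segment i+1 runs from points[i] to points[i+1]. We look along
        segment i+1 toward the joint and ask which way segment i bends.
        """
        ax, ay = points[i + 1]            # far end of the viewing segment
        bx, by = points[i]                # the joint itself
        cx, cy = points[i - 1]            # far end of the continuing segment
        dx, dy = bx - ax, by - ay         # viewing direction
        wx, wy = cx - bx, cy - by         # continuation direction
        cross = dx * wy - dy * wx
        if cross > 0:
            return 'L'                    # assumed convention: positive cross = left
        if cross < 0:
            return 'R'
        return 'C'                        # collinear segments read as centered

For the five-point worm sketched above, joint_state(ape, 1), joint_state(ape, 2), and joint_state(ape, 3) give the three joint states in order.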

Each joint is controlled by a pair of opposing virtual muscles. When a muscle receives no signal it is relaxed, and it contracts as the signal increases. An algorithm will determine which of the three states the joint is in based on the relative strengths of the control signals to the two muscles.
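A sketch of such an algorithm, assuming each joint delivers two signal strengths and using a small dead band around equality (the threshold value and the left/right assignment are my assumptions):

    DEAD_BAND = 0.1   # assumed tolerance within which the joint reads as centered

    def joint_from_muscles(left_signal, right_signal):
        """Map two opposing muscle signals to a joint state 'L', 'C', or 'R'."""
        difference = left_signal - right_signal
        if difference > DEAD_BAND:
            return 'L'    # the left-bending muscle pulls noticeably harder
        if difference < -DEAD_BAND:
            return 'R'
        return 'C'        # roughly balanced signals leave the joint centered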

The ApeWorm's ability to "view" another ApeWorm is stereoscopic but otherwise simple. The virtual retinas coincide with the x1 and x2 axes. Each unit segment of these axes acts as a sensor that can only "see" a body segment in its own row or column, and if there are two body segments in that row or column it sees only the closest one. [But a second segment in a column might be inferred by the control system.] A sensor reports the segment number, if any, and the distance, that is, the cross-coordinate on which the segment lies. Thus in the upper-left example in Figure 1, the sensors on the x1 (horizontal) axis will record nothing. The x2 sensor closest to the origin will record segment 4 at x1 = 1. The x2 sensor between 1 and 2 will record segment 3 at x1 = 2, and so on.
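Here is a sketch of one retina, the eye lying along the x2 (left) axis, assuming every body segment spans a single grid edge; the x1 eye would be the mirror image. For each unit row the eye reports the number and cross-coordinate of the nearest broadside segment, or nothing at all.

    def scan_x2_eye(points):
        """One reading of the x2-axis retina for the worm given by points A0..A4.

        Returns a list with one entry per unit row of the grid: either None or
        a (segment number, x1 coordinate) pair for the closest broadside segment.
        """
        readings = [None] * 4                      # rows 0-1, 1-2, 2-3, 3-4
        for seg in range(1, 5):                    # segment seg runs points[seg-1] -> points[seg]
            (x1a, x2a), (x1b, x2b) = points[seg - 1], points[seg]
            if x1a != x1b:
                continue                           # horizontal segment: not broadside to this eye
            row = min(x2a, x2b)                    # the unit row the segment spans
            if readings[row] is None or x1a < readings[row][1]:
                readings[row] = (seg, x1a)         # keep only the closest segment in the row
        return readings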

Next: ApeWorm Brain Overview (to be constructed)

Tuesday, July 10, 2012

ApeWorm, an Initial Machine Ape Project

The core of the system we seek is quite complex. We would like to minimize the inputs and outputs in order to focus our first project on the core aping system. At the same time, we want a general solution that is scalable to projects with larger, more complex input/output and memory requirements.

We will call our first attempt ApeWorms. These creature constructs will exist on a 5 x 5 grid and will have, for bodies, four connected line segments, the ends of which line up on the grid. Any two adjacent line segments will be able to have three relative positions: left, straight, and right {L, S, R}. An ApeWorm cannot have two segments overlapping each other. Figure 1 shows a few possible ApeWorm configurations.

Figure 1, ApeWorm
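Using the five-point description from the post above, and assuming each segment spans one grid edge, the no-overlap rule can be checked directly (a sketch; the function name is mine):

    def segments_do_not_overlap(points):
        """True if no two of the worm's four segments occupy the same grid edge."""
        edges = set()
        for i in range(1, 5):
            edge = frozenset((points[i - 1], points[i]))
            if edge in edges:
                return False        # this grid edge is already taken by an earlier segment
            edges.add(edge)
        return True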


While ApeWorms are physically simple, their neural networks, or control software, can be as complex as we need. What should an ApeWorm know about itself, and what should it be allowed to know about an exemplar ApeWorm, if aping it is to be a non-trivial task requiring an extensible aping machine?

The ApeWorm will have one end designated the zero {0} end, with joints {1, 2, 3} and a {4} end. Each segment will be designated by its terminal joint, {1, 2, 3, 4}. Each joint will signal its relative position, going from the lower to the higher numbers, as L, S, or R (after training). The zero end and the 1 joint will be able to sense their positions in the grid, again after training. For illustration purposes the joints may be color-designated: 1 black, 2 green, 3 blue, 4 red. [Training adds a degree of complexity, which I suspect will be important as we move to systems more complex than ApeWorm.]
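Taken together, what a trained worm can sense about itself might be sketched as a small record: the grid positions of the zero end and the 1 joint, plus the three reported joint states (the field names are mine):

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class SelfReport:
        """What an ApeWorm can sense about itself after training (a sketch, not a spec)."""
        a0: Tuple[int, int]               # grid position of the zero end
        a1: Tuple[int, int]               # grid position of the 1 joint
        joints: Tuple[str, str, str]      # joints 1, 2, 3 reported as 'L', 'S', or 'R'

    # A worm whose three joints all bend the same way, for illustration.
    report = SelfReport(a0=(1, 0), a1=(1, 1), joints=('R', 'R', 'R'))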

How would the ApeWorm know about its exemplar? Note that it would be a trivial task, if exact information about the exemplar is known, to assign that information to the ApeWorm directly in software. But if we built two ApeWorms in the flesh, so to speak, they would be autonomous systems that require such information to cross space, so that information about position must be encoded in some other form.

We will allow the ApeWorm to have two "eyes" on the Exemplar worm. The eyes (retinas) will lie along the bottom and left sides of the grid. Each eye can detect segments that are broadside to it, locating them by row or column and by distance from the eye.

It is tempting to ignore the temporal aspect in the first build of ApeWorm. It seems difficult enough to construct a general aping ability when the minimum requirement is only to detect the position of the Exemplar worm and then have the ApeWorm conform itself to a copy of that position. Going forward, however, we will want temporal input and temporal aping.

Consider that the aping instinct evolved in animals only after basic lower functions were in place. An animal has to sense the environment, coordinate itself, catch food and escape predators just to survive. All that puts a high premium on temporal capabilities. Aping sits atop all those capabilities, just as language and culture sit atop aping. We have freed our ApeWorms from the need to eat, but we run the danger of making our aping system hard to integrate into robots and more complex programs if it is unable to at least deal with time sequences.

Next: Constructing the ApeWorm World