I've been reading Without Miracles. I had just read about children learning labels. The example was a child looking at a bird and being told it was called a "bird", then looking at a different bird and being told it was also a "bird". The child then has to generalize and consider that other small flying creatures are likely to be "birds" as well. However, upon seeing a butterfly and being told it was not a bird, the child has to refine his understanding of what a "bird" is.

This form of learning, with overgeneralization followed by correction, may well be faster than undergeneralizing. For example, what would happen if the child only believed the two creatures he saw were birds, and did not label other small flying creatures as birds until explicitly told they were birds? He'd learn more slowly.

I was at the beach, looking at tide pools, when I realized there's a similarity to the child's learning. Rivers flow in one direction (usually) and have a certain amount of life in them. Beaches have water flowing in both directions, due to waves (short time scale), tides (medium time scale), and seasons (long time scale). The diversity of life in tide pools is far greater than what I've seen in rivers.

In the case of the child's learning, generalization (classifying everything as a bird) is one direction and correction (being told that some creatures previously thought to be birds are not) is the other direction. Could it be that oscillation leads to faster "learning" than a steady stream?

The advantage of learning and then correcting mistakes is that you can learn faster than if you learned cautiously enough to avoid making mistakes in the first place. Many AI algorithms are of the cautious sort. It may be better to have our games learn patterns very quickly and then keep a set of exceptions. In Simulated Annealing, for example, we slowly lower the temperature until the system stabilizes; it might be better to lower the temperature quickly, then raise it again, and keep oscillating for some time. In Neural Networks we change the neuron parameters very slowly; it might be better to change them quickly, then change them back if needed.
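Here's a rough sketch of what the annealing version might look like. None of this is from the book; the energy function, the neighbor function, and the particular damped-sawtooth schedule are placeholders I made up to show the idea.

```python
import math
import random

def anneal_oscillating(energy, neighbor, state, steps=10000, cycles=5):
    """Simulated annealing with a damped, oscillating temperature.

    A conventional schedule would be t = 1 - i/steps (steady cooling).
    Here the temperature drops quickly within each cycle and is then
    raised again, with the cycles shrinking so the system eventually
    settles.  `energy`, `neighbor`, and the schedule shape are all
    placeholders for whatever the game actually optimizes.
    """
    best = state
    for i in range(steps):
        phase = (i * cycles / steps) % 1.0   # position within the current cool/reheat cycle
        damping = 1.0 - i / steps            # cycles shrink over time so the oscillation dies out
        t = damping * (1.0 - phase)          # fast drop, then a reheat at the start of the next cycle
        candidate = neighbor(state)
        delta = energy(candidate) - energy(state)
        # Always accept improvements; accept mistakes with a probability that
        # grows with temperature -- the "overgeneralize now, correct later" part.
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-9)):
            state = candidate
            if energy(state) < energy(best):
                best = state
    return best
```

The interesting knob is the schedule: a steady decline versus a sawtooth that keeps reheating the system but with less heat each time.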

I'm enjoying the book a great deal but I haven't yet figured out how I might use this for my own games. My intuition tells me that there's something very useful here, but I haven't pinpointed anything specific.


1 comment:

Anonymous wrote at December 01, 2006 10:03 AM

This is very similar to successive over-relaxation, which is used to solve nasty numerical problems (fluid dynamics, etc.). You have to be careful that the overgeneralizations damp down, as otherwise the predictive power gets worse over time.
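A minimal sketch of successive over-relaxation, assuming a small linear system A x = b; omega is the over-correction factor (omega = 1 is plain Gauss-Seidel, and the method blows up if the over-corrections don't damp down):

```python
def sor(A, b, omega=1.5, iterations=50):
    """Successive over-relaxation for A x = b (minimal sketch)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            # Gauss-Seidel update for component i...
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - sigma) / A[i][i]
            # ...then over-relax: deliberately step past it by a factor of omega.
            x[i] = (1 - omega) * x[i] + omega * gs
    return x

# Example: a small diagonally dominant system.
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 12.0]
print(sor(A, b, omega=1.25))   # approaches the exact solution [1.833..., 1.666...]
```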