## Monday, January 04, 2010

### Fitness functions

I'm reading up on genetic algorithms, and the most important choice for these algorithms is the fitness function: a function that scores how well a particular candidate performs as a solution to, or closest approximation of, a given problem.

For some mathematical problems, if you have a set of data you wish to approximate, this fitness function can be, for example, the root mean square error between the calculated results and the known results. Of course, a neural network wouldn't be used if the actual function were available, so let's assume the problem is hard enough that it cannot be approximated well by either a rule or a simple function approximation.
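An RMSE-based fitness function can be sketched in a few lines. This is a minimal illustration, not any particular library's API; the names `rmse_fitness`, `candidate` and `data` are my own:

```python
import math

def rmse_fitness(candidate, data):
    """Fitness of a candidate approximator: lower RMSE means a better fit.

    `candidate` is any callable mapping an input to a predicted output;
    `data` is a list of (input, known_output) pairs.
    """
    squared_errors = [(candidate(x) - y) ** 2 for x, y in data]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Example: f(x) = 2x scored against data that follows y = 2x exactly.
data = [(1, 2), (2, 4), (3, 6)]
print(rmse_fitness(lambda x: 2 * x, data))  # perfect fit -> 0.0
```

Since a GA usually maximizes fitness, in practice you would negate or invert this score so that smaller errors rank higher.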

Neural networks are sometimes trained using genetic algorithms by encoding the neuron weights as a genome, randomizing these in the first generation, and then incrementally mutating, crossing over and selecting network instances until one network is found that performs appropriately.
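The mutate/mix/select loop can be sketched as a bare-bones GA over real-valued genomes (such as a flattened vector of network weights). This is a toy sketch under my own assumptions, not a reference implementation; all names and the penalty-free toy fitness are illustrative:

```python
import random

def evolve(fitness, genome_length=10, pop_size=20, generations=50,
           mutation_rate=0.1):
    """Minimal GA over real-valued genomes; `fitness` maps a genome
    (list of floats) to a score, higher is better."""
    # First generation: random weights.
    pop = [[random.uniform(-1, 1) for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_length)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.5)      # mutation
                     if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: genomes closer to all-zeros score higher.
best = evolve(lambda g: -sum(x * x for x in g))
```

Because the best genome always survives into the next generation, the top score never degrades; only the fitness function decides what "appropriate" means.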

Eventually, after 50 generations or so, networks emerge that exhibit the desired behaviour expressed in the fitness function. But if we take this as an analogue of the biological world (which is where both ANNs and GAs draw their inspiration), then this fitness function isn't a single expression.

If what Darwin says is true, imagine the time it must have taken to evolve from simpler organisms into more complex ones, and the changes that must have occurred in populations along the way. The effects of genetic change are measured over the entire population, not on individuals.

For computers, we assume the role of the overall biological system (which in practice means defining the fitness function), but I argue that this is only suitable for the simplest of cases.

Besides the plain survivability of a particular individual, the social behaviour of the overall group should also be considered (the generation that the individual net is part of). Some animals, for example, increase their survival rates by occasionally working together, which may leave room for other highly favourable developments to occur (even ones that negatively impact individual survivability). And if one individual succeeds in subverting or influencing another to its own gain, that too should be factored into the fitness function.

Come to think of it, many GAs are only used during a training session and discarded otherwise. Most neural network training so far has been about pursuing a particular unknown function using a large amount of observational data, and each network is generally constructed as a lone individual in the entire population. Still, there is at least some research in the area of neural network ensembles, where cooperation is sometimes used to solve sub-problems, or in environments where multiple objectives are present. There, successful combinations of neural networks are what count most, not individual results. So cooperation becomes an important factor for individual survival.
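The ensemble idea can be shown in miniature: score the combination, not the member. The averaging scheme and the toy "networks" below are stand-ins of my own, not any specific ensemble method from the literature:

```python
def ensemble_predict(networks, x):
    """Combine several networks by averaging their outputs.

    `networks` is a list of callables standing in for trained nets.
    A fitness function would score this combined output, so an
    individual net survives by complementing the others.
    """
    outputs = [net(x) for net in networks]
    return sum(outputs) / len(outputs)

# Three crude "networks" for y = 2x: each errs alone, together the
# opposing biases average out.
nets = [lambda x: 2 * x + 0.3, lambda x: 2 * x - 0.3, lambda x: 2 * x]
print(ensemble_predict(nets, 1.0))  # -> 2.0
```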

As with the previous post, for anything interesting to come out of neural networks, it may be necessary to find a fitness or cost function for a very complex environment in which the network is supposed to execute actions. This sounds easy, but consider the example of a robot that needs to find its way through a corridor as fast as possible.

A simple fitness function is the amount of time taken to reach the end of the corridor. We feed the robot some sensory data, for example when it bumps into a wall, and we always start it off facing the correct direction. The robot can rotate two tracks, one on either side. Rotating one track faster than the other makes the robot turn to that side, giving it a steering ability; rotating both tracks at the same speed makes it go forward.
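The track-steering behaviour is the standard differential-drive model, and a single simulation step of it is short enough to write out. The function name, the track `width` and the step size `dt` are assumptions for illustration:

```python
import math

def drive_step(x, y, heading, v_left, v_right, width=1.0, dt=0.1):
    """One step of an idealized differential-drive robot.

    Equal track speeds move it straight ahead; unequal speeds make
    it turn. `width` is the distance between the two tracks.
    """
    v = (v_left + v_right) / 2           # forward speed
    omega = (v_right - v_left) / width   # turn rate
    heading += omega * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading

# Equal speeds: straight ahead. Unequal speeds: the heading changes.
print(drive_step(0.0, 0.0, 0.0, 1.0, 1.0))  # -> (0.1, 0.0, 0.0)
```

Running many such steps until the robot crosses the end of the corridor (or a time limit expires) yields exactly the time-taken number the fitness function needs.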

You may think that the fastest robot wins, but then the scientist sees that the robot has so much speed that it hits the wall with unforgiving force and breaks off a track. A very stern measure would be to restrict the speed of the tracks so that no individual can move that fast. A different solution, however, is to somehow factor this speed or wall-hitting force into the fitness function and penalize those robots where it occurs often. Then we may have two different kinds of winners:

- Robots that maintain a very high speed, but sometimes hit the walls.
- Robots that maintain lower speeds and gradually arrive at the end of the corridor in good time.
Both strategies have their advantages and disadvantages. The faster one can also run faster from danger and may therefore survive better; the more careful one has less chance of coming to considerable harm.
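The penalized fitness function described above could look like the sketch below, where lower scores are better. The penalty weights are arbitrary assumptions, and tuning them is exactly what decides which of the two strategies wins:

```python
def corridor_fitness(time_taken, wall_hits, max_impact,
                     hit_penalty=2.0, impact_penalty=5.0):
    """Penalized fitness for the corridor robot; lower is better.

    Rather than hard-capping track speed, collisions and impact
    force are folded into the score. Penalty weights are arbitrary.
    """
    return time_taken + hit_penalty * wall_hits + impact_penalty * max_impact

# A fast but reckless robot versus a slower, careful one:
reckless = corridor_fitness(time_taken=10.0, wall_hits=4, max_impact=0.9)
careful = corridor_fitness(time_taken=15.0, wall_hits=0, max_impact=0.0)
print(reckless, careful)  # -> 22.5 15.0
```

With these particular weights the careful robot wins despite being slower; halve the penalties and the reckless one would win instead.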

So the jury is still out on all of this technology. For specific tasks it works, because we can think of a suitable fitness function. As soon as things get tougher and broader, though, the environment itself needs to penalize and reward individuals appropriately. Good environments also allow for diversity in the population. And then one wonders whether such diversified elements would continue to specialize in populations of their own and wander away from the original ones, a bit like evolution outside of their normal populations.