The picture to the left is an LSTM cell, which can be used in some neural networks to 'remember' sequences of previous inputs. It can therefore reproduce outputs similar to those observed in previous runs, or it can be used to associate one thing with another in temporal terms. The reason for posting today is that there are particular dynamics one should understand in order to choose the right 'kind' of network, or even to decide whether to use a neural network at all. The more obvious design criteria relate to the explainability of certain classifications or outputs. If you expect a neural network to give you reasons why it indicated a certain classification or output, you're out of luck. Neural networks therefore have limited use in knowledge-based systems, where there must be traceability of the observations that led to some conclusion, or of the observations that still need to be made in order to derive a certain final conclusion (enrichment and convergence towards a certain diagnosis, for example). In such cases, you should definitely use a different technology.
Input signals can be interpreted or preprocessed in many ways, and the exact method you choose to model your network may complicate the design of the neural network that is supposed to solve the problem. For example, the mine sweeper at http://www.ai-junkie.com/ is given its own orientation and the direction to the closest mine as input. These pass through a set of weights, and the end result appears at two output terminals, defined as the activation of the right and the left track of the mine sweeper. The overall goal is to get mine sweepers that consistently reorient themselves to match the direction of the closest mine, so that they can clean up the mines as fast as possible.
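To make this concrete, here is a minimal sketch of such a controller as a single forward pass. This is not the ai-junkie implementation (which evolves the weights with a genetic algorithm rather than hand-picking them); the input layout, layer sizes, and function names are assumptions for illustration.

```python
import numpy as np

def feedforward_controller(inputs, w_hidden, w_output):
    """One forward pass of a tiny feedforward controller (sketch).

    inputs: e.g. [look_at_x, look_at_y, to_mine_x, to_mine_y]
    Returns activations in (0, 1) for the left and right tracks.
    """
    hidden = np.tanh(inputs @ w_hidden)                  # hidden layer
    outputs = 1.0 / (1.0 + np.exp(-(hidden @ w_output))) # sigmoid outputs
    left_track, right_track = outputs
    return left_track, right_track

rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(4, 6))   # 4 inputs -> 6 hidden units (arbitrary)
w_output = rng.normal(size=(6, 2))   # 6 hidden -> 2 track outputs

# Tank facing +x, closest mine directly ahead of it
left, right = feedforward_controller(np.array([1.0, 0.0, 1.0, 0.0]),
                                     w_hidden, w_output)
```

With random weights the outputs are of course meaningless; the point is only that everything the controller needs (orientation plus direction to the mine) is present in one frame, so a memoryless pass can decide the track activations.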
However, another possible design choice is to use the distance to the closest mine instead. This really changes the entire problem scope, because a single 'frame' of the situation no longer provides all the information necessary to direct the tank towards the mine. With the direction/orientation approach, the reorientation may not succeed from frame 1 to frame 2, but you can rest assured that the network can converge to matching the two, and that it can be trained to exhibit this obvious correlation (and thus the desired behaviour).
In the case of distance, however, the mine may lie anywhere on a circle around the tank at that distance. The only way the tank can find out more about the actual location of the mine is to choose a single action and observe how the distance changes. That action is therefore a key decision that determines what the tank should do in the frame thereafter. (The time between frames in a simulation needn't be 0.0000001 seconds; we could easily take 1 second and be happy with it. The resolution does affect the accuracy of the end solution, but even minor fluctuations should provide this information.)
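A short sketch of why the distance formulation demands memory: a scalar distance says nothing about direction, but the change in distance after one action does. The rollout below is hypothetical (positions and the action are made up for illustration).

```python
import numpy as np

def distance_to_mine(tank_pos, mine_pos):
    """Euclidean distance between tank and mine."""
    return float(np.linalg.norm(mine_pos - tank_pos))

mine = np.array([10.0, 0.0])
tank = np.array([0.0, 0.0])

d_before = distance_to_mine(tank, mine)   # all the tank knows in frame 1
tank = tank + np.array([1.0, 0.0])        # take one action: move along +x
d_after = distance_to_mine(tank, mine)    # observe again in frame 2

delta = d_after - d_before
# delta < 0 means the chosen action moved the tank closer to the mine.
# A memoryless network only ever sees d_after; a network with memory can
# also represent d_before, and hence this delta.
```

Here the action happened to point at the mine, so the distance drops from 10 to 9; had the tank moved perpendicular to the mine, the delta would have been positive, and only a controller that remembers the previous frame could tell the two cases apart.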
So the reasoning that takes place here cannot be executed successfully by observing a single frame of information. In other words, you need changes in the external environment, sometimes induced by your own actions, in order to determine what happens next. This often occurs in robotics and other control situations, where the relation or 'directionality' is not always known. Sometimes you can derive it, but you don't know how you should react to it. Changes induced by other forces in the environment will then allow the system to react to those in similar terms, since the observation is changing.
The point is that there are certain dynamics in real-life scenarios that standard feed-forward neural networks can never deal with, because FFNs must have full information available in a single frame that can conclusively converge to some expected output. In cases where the external environment has a longer reaction period, for example the temperature of the cooling water for a very large diesel engine, these networks will very likely never react successfully, because they do not understand or embed the concept of memory.
This is why the type of the network is important, and why there has been so much research into Recurrent Neural Networks. RNNs have the capability to store observations of a number of past events in the memory of the network, so that observations made in the past influence the actual decisions taken at the output. Standard RNNs can remember information for about 10 timesteps (whatever time resolution you've chosen), whereas LSTMs have the capability to remember significant events up to 1000 timesteps. The idea is that significant recurrences of some event cause these cells to suddenly behave differently and indicate to the output neurons that the event is recurring, inducing a different kind of response than in other, more regular or random situations.
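For readers who want to see what the cell in the picture actually computes, here is one step of a standard LSTM cell written out in numpy. This is a sketch of the textbook formulation with randomly initialised weights, not a trained or tuned model; the variable names and sizes are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step (standard formulation, stacked gate weights).

    x: input vector; h_prev, c_prev: previous hidden and cell state.
    W maps [x, h_prev] to the four gates; b is the gate bias.
    """
    z = W @ np.concatenate([x, h_prev]) + b
    n = len(h_prev)
    i = sigmoid(z[0*n:1*n])    # input gate: admit new information
    f = sigmoid(z[1*n:2*n])    # forget gate: keep or erase old memory
    o = sigmoid(z[2*n:3*n])    # output gate: expose memory to the output
    g = np.tanh(z[3*n:4*n])    # candidate cell content
    c = f * c_prev + i * g     # cell state carries the long-range memory
    h = o * np.tanh(c)         # hidden state is the visible output
    return h, c

rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(10):            # feed a short sequence of random inputs
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
```

The additive update of `c` is the mechanism behind the long memory span mentioned above: as long as the forget gate stays close to 1, the cell state can carry a past observation across many timesteps without it being squashed away, which a plain recurrent layer cannot do.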
Now... there's a worm on this earth called Caenorhabditis elegans, which has exactly 302 neurons. This worm reacts to the smell (chemical concentration) of bacterial residue, because it knows it can feed on the bacteria that are necessarily present there. Some bacteria infect the worm and harm it, whereas others are excellent food. It innately avoids the smell of some types of harmful bacteria, but there are two types of harmful bacteria that it is especially attracted to. Lab research has shown that after exposure to these two harmful bacteria, it learns to avoid them later. Other behaviour includes social feeding, where worms gather together on large piles of food, up to some threshold determined by the amount of oxygen these worms observe through one or two sensor neurons that detect oxygen.
If you ever wonder what a neural network in biology looks like, here's a full description of such a network. You can trace it from sensor neurons to motor neurons and the actual muscles. But beware! You'll also see that this type of network is extremely intricate.
This shows that the dynamics of particular problems aren't necessarily obvious. The things you can observe, and the way they interact with the other dynamics of the problem, together shape the solution that has to be found: one that doesn't violate critical constraints, yet still allows the problem to converge to some optimal solution.