I'm very skeptical about these approaches at the moment, but I don't totally discard them. The problem with a computer is that it is a fairly linear device. Most programs today run by means of a stack, onto which information about the current execution context is pushed. Basically, the stack temporarily stores the contexts of previous actions, so that the CPU can either descend into more specific tasks or revert to a previous context and continue from there.
I'm not sure whether we're looking to change this computing concept significantly in the future. A program is basically something that starts up and then, in general, proceeds deeper to process more specific actions, winds back, and then processes more specific actions of a different nature.
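To make that concrete, here is a minimal sketch of that kind of linear, stack-driven execution; the function names are made up purely for illustration.

```python
# A toy illustration of stack-driven, linear execution: each call pushes a
# frame with the current context, descends into more specific work, then
# unwinds back to the caller. Function names are invented for illustration.

def handle_request(request):
    parsed = parse(request)         # descend one level deeper
    return respond(parsed)          # unwind, then descend along another branch

def parse(request):
    return request.strip().lower()  # the most specific action on this branch

def respond(parsed):
    return f"handled: {parsed}"

if __name__ == "__main__":
    # The CPU follows exactly one path: push, descend, return, continue.
    print(handle_request("  Hello World  "))
```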
This concept also more or less holds for distributed computing, at least in many of the ways it is implemented today. If you look at Google's MapReduce, for example, it reads input, processes that input and converts it to another representation, then stores the output of the process to a more persistent medium, for example GFS.
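As a rough sketch of that read/process/store shape (this only mimics the pattern of MapReduce; it is not Google's API, and the output file name is just an example):

```python
# A toy map/reduce-style pipeline: read input, map it to another
# representation, reduce, then write the result to more persistent storage.
# This only imitates the shape of MapReduce, not Google's implementation.
from collections import Counter

def map_phase(lines):
    # emit (word, 1) pairs
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return counts

if __name__ == "__main__":
    lines = ["the plane landed", "the plane took off"]
    result = reduce_phase(map_phase(lines))
    # "store the output to a more persistent medium" -- here just a local file
    with open("wordcount.out", "w") as f:
        for word, count in sorted(result.items()):
            f.write(f"{word}\t{count}\n")
```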
In the next paragraphs I imagine a certain model, which is not an exact representation of the brain or how it works, but which serves the purpose of understanding things better. Perhaps analogies can be made to specific parts of the brain later to explain this model.
I imagine that the brain and different kinds of processing work by signalling many nodes of a network at the same time, rather than choosing one path of execution. There are exceptionally complex rules for event routing and management, and not all events will necessarily arrive, but each event may induce another node, which may become part of the storm of events until the brain reaches more or less a steady state.
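To make this a bit more concrete, here is a very crude spreading-activation sketch of that kind of parallel signalling; the network, weights, and threshold are all invented for illustration and are not a claim about how the brain actually routes events.

```python
# A crude spreading-activation sketch: many nodes are signalled at once,
# each sufficiently active node may induce its neighbours, and the "storm"
# of events settles into a roughly steady state. Everything below is made up.

def settle(edges, activation, threshold=0.2, steps=20):
    for _ in range(steps):
        incoming = {node: 0.0 for node in activation}
        for (src, dst), weight in edges.items():
            if activation[src] > threshold:       # only firing nodes propagate
                incoming[dst] += weight * activation[src]
        # a node keeps its activation and may be pushed higher by neighbours
        new = {node: min(1.0, max(activation[node], incoming[node]))
               for node in activation}
        if new == activation:                     # more or less steady state
            break
        activation = new
    return activation

if __name__ == "__main__":
    edges = {("plane", "airport"): 0.8, ("airport", "travel"): 0.6,
             ("plane", "geometry"): 0.3, ("geometry", "surface"): 0.7}
    start = {"plane": 1.0, "airport": 0.0, "travel": 0.0,
             "geometry": 0.0, "surface": 0.0}
    print(settle(edges, start))
```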
In this model, the events fire at the same time and very quickly resolve to a certain state that induces a certain thought (or memory?). Even though this sounds very random, there is one thing that gives these states meaning (in this model): the process of learning. That is the process by which we remember what a certain state means, because we pull a similar state from memory, and that state, in another time or context, induced a certain meaning. Analogy is then pulling a more or less similar state from memory, analyzing its meaning again and comparing that with the actual context we are in at the moment. The final conclusion may be wrong, but in that case we have one more experience (or state) to store, which allows us to better define the differences in the future.
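Continuing the same toy model, learning and analogy could be sketched as storing settled states together with what they meant at the time, and later pulling out the most similar stored state; the similarity measure and the stored states here are invented illustrations, not claims about how memory works.

```python
# A toy memory of labelled states: analogy as "pull the most similar stored
# state and reuse its meaning". The similarity measure is just an example.

def similarity(a, b):
    # overlap of two activation patterns (dicts of node -> activation)
    keys = set(a) | set(b)
    return sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)

class StateMemory:
    def __init__(self):
        self.states = []                      # list of (state, meaning) pairs

    def learn(self, state, meaning):
        self.states.append((state, meaning))  # remember what this state meant

    def recall(self, state):
        # analogy: the most similar remembered state supplies the meaning
        return max(self.states, key=lambda sm: similarity(sm[0], state))[1]

if __name__ == "__main__":
    memory = StateMemory()
    memory.learn({"plane": 0.9, "airport": 0.7}, "air travel")
    memory.learn({"plane": 0.6, "surface": 0.8}, "geometry")
    print(memory.recall({"plane": 0.8, "airport": 0.5}))  # -> "air travel"
```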
So, in this model, rather than processing many linear functions to produce a result, it's as if networks with different purposes interact to give us the context or semantics of a certain situation. I am not entirely sure yet whether this means thought, or whether it is the combination of thought and feeling. Let's see if I can analyze the different components of this model:
- Analysis
- Interpretation
- Memory
- Instinct, feeling, emotion, fear, etc.
Well, the difference that this model exposes is that semantic analysis talks about generally accepted meaning rather than individual meaning. The generally accepted meaning can be resolved by voting, or by allowing people to indicate their association when a word appears on screen. This seems totally wrong. If, for example, a recent event like 9/11 occurs and the screen shows "plane", most people would type "airplane", and that association would very quickly distort the other possible meanings: a surface, an "astral" plane, a geometric plane, a compass plane, etc. Meaning by itself doesn't seem to bear any relationship to frequency.
If this holds true, then as soon as any model that shapes semantic analysis in computers relies on frequency, that model or its implementation is flawed.
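To make the objection concrete, this is roughly what such a frequency-driven association model looks like; the vote counts are invented, and the point is only that a recent, popular association eclipses every other sense.

```python
# A minimal frequency/voting model of word association, of the kind argued
# against above: whatever sense is typed most often "wins", so a recent
# event can drown out every other meaning. The counts below are invented.
from collections import Counter

votes = Counter()

def vote(word, association):
    votes[(word, association)] += 1

def dominant_meaning(word):
    candidates = {assoc: n for (w, assoc), n in votes.items() if w == word}
    return max(candidates, key=candidates.get)

if __name__ == "__main__":
    for _ in range(950):
        vote("plane", "airplane")          # the recent-event association
    for _ in range(30):
        vote("plane", "geometric plane")
    for _ in range(20):
        vote("plane", "compass plane")
    # frequency alone makes one sense eclipse the others
    print(dominant_meaning("plane"))       # -> "airplane"
```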