The difference between computer memory and organic memory is a strange thing. Computer memory is linearly addressed and stores elements very differently from organic memory. Where the storage area in computers is not large enough, we tend to use databases to make the memory "searchable" and malleable.
Computers generally need to trawl through the entire space in order to find something, albeit with clever algorithms rather than an exhaustive search. We model the world around us into specific definitions to work with, and then compare new input against those things we know.
Organic memory is very different. Given the input of elements and situations, the recall of other elements in memory seems to be activated automatically, without the need for a search. It is as if very small biological elements recognize features in the stream of information and respond. This may cause other cells to become activated too, thereby linking the recognition of one set of features to previous experiences. The recognition drifts up automatically, rather than a general CPU searching for meaning based on a description or broken-down representation.
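This kind of content-addressable recall has a classic computational analogue in associative memories such as the Hopfield network, where stored patterns live in the connection weights and a partial cue settles onto the nearest memory without any search. A minimal sketch (the patterns and sizes are invented for illustration):

```python
# Content-addressable recall in the spirit of a Hopfield network:
# memories are stored in the weights, and a noisy cue "drifts" back
# to the nearest stored pattern -- no addresses, no scan.
# Unit states are +1/-1; all values here are illustrative.

def train(patterns):
    """Hebbian learning: strengthen connections between co-active units."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=10):
    """Let each unit repeatedly respond to its weighted inputs until stable."""
    state = list(cue)
    for _ in range(steps):
        for i in range(len(state)):
            total = sum(w[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if total >= 0 else -1
    return state

# Store two 8-unit "memories", then present a corrupted cue.
memories = [[1, 1, 1, 1, -1, -1, -1, -1],
            [1, -1, 1, -1, 1, -1, 1, -1]]
w = train(memories)
cue = [1, 1, 1, -1, -1, -1, -1, -1]   # first memory with one unit flipped
print(recall(w, cue))                  # settles back onto the stored pattern
```

The point of the sketch is the dynamics: recognition emerges from local units responding to their inputs, not from a central processor comparing descriptions.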
The problem with computers, then, is our limitation, or lack of imagination, in representing memory in a non-linear form. The way computers are modeled requires the programmer to define concepts, elements and situations explicitly, or less explicitly using rules, and then to search within a certain space for similar situations and reason further from there. The machine is totally lost if the situation cannot be mapped to anything.
I have been thinking lately that it would be a ground-breaking discovery if memory could be modeled in ways similar to organic memory. That is, access to memory being non-linear and network-like, rather than driven by a linear access algorithm. If you consider a certain memory space in which a network may position its recognition elements (neurons?), then the connection algorithm of a current computer should map the linear memory into a more diffuse space of an inter-connected network. Basically, I'm not saying anything different from "create a neural network" at this point, but I'm considering other possibilities: using the mechanical properties of memory access in a clever way, so as to reduce the memory required for storing connections, and finding a way to indicate "similarity" by the proximity of memory address locations.
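The "similarity as address proximity" idea has at least one existing relative: locality-sensitive hashing. A toy sketch using random-hyperplane hashing, where similar feature vectors tend to land at nearby bit-string addresses (the vectors and dimensions below are made up for illustration):

```python
import random

# Random-hyperplane hashing: each hyperplane contributes one bit of the
# address, so vectors pointing in similar directions get addresses that
# differ in few bits. Similarity becomes address proximity, with no scan.
random.seed(42)
DIM, BITS = 8, 32
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def address(vec):
    """Hash a feature vector to an integer address: one bit per hyperplane."""
    bits = 0
    for plane in planes:
        dot = sum(p * v for p, v in zip(plane, vec))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

def hamming(a, b):
    """Distance between two addresses = number of differing bits."""
    return bin(a ^ b).count("1")

cat      = [1.0, 0.9, 0.1, 0.0, 0.8, 0.1, 0.0, 0.2]
kitten   = [0.9, 1.0, 0.2, 0.1, 0.7, 0.0, 0.1, 0.1]  # close to "cat"
airplane = [0.0, 0.1, 1.0, 0.9, 0.0, 0.8, 0.9, 0.0]  # unrelated

print(hamming(address(cat), address(kitten)))    # few differing bits
print(hamming(address(cat), address(airplane)))  # many differing bits
```

This is far from a neural memory, but it shows that linear hardware addresses can be arranged so that neighborhood in address space reflects similarity in feature space.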
Alternatively, use a neural network to determine a set of index vectors that map into a large linear space. The index vectors can be compared to a semantic signature of any element. This signature should be developed in such a way that it categorizes the element from various perspectives. Basically, this is the idea behind semantic indexing of text, where the technique is used to find texts that are semantically similar.
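To make the signature idea concrete, here is a minimal sketch: each element gets a vector scoring it from several perspectives, and lookup means comparing signatures by cosine similarity rather than matching descriptions exactly. The perspectives, elements and scores below are invented for illustration:

```python
import math

# Semantic signatures: each element is scored from several perspectives.
# Perspectives: [animal, vehicle, indoors, dangerous] -- all illustrative.
signatures = {
    "house cat":    [0.9, 0.0, 0.8, 0.1],
    "grizzly bear": [0.9, 0.0, 0.0, 0.9],
    "motorcycle":   [0.0, 0.9, 0.0, 0.6],
}

def cosine(a, b):
    """Similarity of two signatures: 1.0 means same direction, 0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar(query_sig):
    """Return the indexed element whose signature best matches the query."""
    return max(signatures, key=lambda name: cosine(signatures[name], query_sig))

# A query signature meaning roughly "wild animal, outdoors, dangerous":
print(most_similar([0.8, 0.0, 0.1, 0.8]))  # -> grizzly bear
```

A query never has to match any stored element exactly; it only has to point in a similar direction in the signature space.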
The larger the neural network, the finer its ability to recognize features. But our minds do not allocate all 100 billion neurons to the same ability of pattern recognition. Thus, you could speak of specialized sub-networks of analysis that together define a certain result (the binding problem).
But perhaps we're again thinking too much in terms of input-processing-output, as I've indicated before. We like things to be explicitly defined, since that provides a method of understanding. What if the networks don't work together in a hierarchy (input -> network1 -> network2 -> output -> reasoning), but instead form a network themselves?
This would mean that such a network could aggregate information from different sub-networks into a complicated mesh of its own. The activation of certain elements in one part could induce the activation of neurons in another, leading to a new sequence of activation: reasoning over very complicated inputs from other neuronal networks. For example, what if thinking about a bear induces our vision-analysis network to fire up and produce a picture of such a bear?
Imagine a core of a couple of neural networks that have complicated information available to them from "processing" neural networks in front of them. If those core networks are interconnected in intricate ways and influence one another, then it's likely that a smell triggers the memory of a sight or a sound in another network, albeit slightly weaker than normal.
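The cross-induction idea above can be caricatured as spreading activation between named sub-networks, where activity in one node leaks across connections and induces weaker activity elsewhere. The nodes, links and weights below are entirely invented for illustration:

```python
# Spreading activation across cross-wired sub-networks: activation in
# one modality induces weaker activation in others. All illustrative.
links = {
    "smell:woodsmoke": [("memory:campfire", 0.8)],
    "memory:campfire": [("vision:flames", 0.7), ("hearing:crackling", 0.6)],
}

def spread(start, strength=1.0, threshold=0.3):
    """Propagate activation along weighted links until it fades below threshold."""
    activations = {start: strength}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for target, weight in links.get(node, []):
            induced = activations[node] * weight
            if induced > activations.get(target, 0) and induced >= threshold:
                activations[target] = induced
                frontier.append(target)
    return activations

print(spread("smell:woodsmoke"))
# The smell induces progressively weaker activations in memory,
# then in the vision and hearing networks.
```

Each hop multiplies the signal by a link weight, which is one crude way to get the "slightly weaker than normal" cross-modal recall described above.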
Leaving this thought alone for now...