Confabulation theory is a theory by Robert Hecht-Nielsen about the cognitive function of the brain, or in other words, the workings of thought. It is a theory, not a proven explanation. Confabulation theory basically works by processing lots of information and, from this information, finding out which symbols belong together. Which symbols are often seen together (and in some cases at what distance) is the information contained in the network. Confabulation can then produce new sentences by continuously generating possible phrases based on the preceding context.
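To make the co-occurrence idea concrete, here is a toy Python sketch (not Hecht-Nielsen's actual formulation, just my simplification): count how often word pairs appear within a fixed distance, then pick the continuation best supported by all the context words together. The corpus and the `support` scoring rule are made up for illustration.

```python
from collections import defaultdict

def train(sentences, window=3):
    """Count word-pair co-occurrences within a fixed distance."""
    counts = defaultdict(int)
    for words in sentences:
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                counts[(w, words[j])] += 1
    return counts

def confabulate_next(counts, context):
    """Pick the candidate most supported by the whole context
    (a crude stand-in for the theory's notion of cogency)."""
    candidates = {b for (a, b) in counts}
    def support(c):
        s = 1
        for w in context:
            s *= 1 + counts[(w, c)]
        return s
    return max(candidates, key=support)

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
model = train(corpus)
print(confabulate_next(model, ["the", "cat", "sat", "on"]))
```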
In effect, confabulation theory uses an architecture that can produce entirely new sentences that are plausible in context. The interesting thing is that these sentences are also grammatically and syntactically correct. Thus, the rules of a language seem to be embedded within the network.
This does not mean that the machine 'understands' the language, or that it is conscious of the sentences it produces. I think the results should be considered the production of a thoughtless brain, one that only has the capacity to produce sentences without understanding what they mean. It's probably comparable to the Chinese Room thought experiment, where the machine looks at streams of Chinese symbols. At some point, the machine learns the order in which the symbols may appear and which symbols are often seen together. When asked to produce a sentence on its own, it uses this knowledge to produce one.
What interested me in the theory is how this network differs from other networks that can fantasize, like the RBM (Restricted Boltzmann Machine). The RBM is a network that stores knowledge by observing examples and can then complete the signal it is receiving. The confabulation network is slightly different, in the sense that it can project continuations (say, a hypothesis) that are very plausible. So, if you were building a network that can produce responses to sentences, the confabulation network is likely to perform better. But if you ask a confabulation network to recognize a face, it might have considerable difficulty, and the RBM might be better.
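As a sketch of what "completing the signal" means for an RBM, here is a minimal Python example of the mechanics: clamp the known visible units, sample the hidden layer, then resample the unknowns. The weights here are random placeholders; in a real RBM they would be learned (e.g., with contrastive divergence), and biases are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.1, (n_visible, n_hidden))  # would normally be trained

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def complete(v, known, steps=10):
    """Fill in unknown visible units by alternating Gibbs sampling,
    re-clamping the observed units after every step."""
    v = v.copy()
    for _ in range(steps):
        h = (sigmoid(v @ W) > rng.random(n_hidden)).astype(float)
        v_new = sigmoid(h @ W.T)
        v = np.where(known, v, v_new)  # keep observed values fixed
    return v

observed = np.array([1, 0, 1, 0, 0, 0], dtype=float)
known = np.array([True, True, True, False, False, False])
print(complete(observed, known))
```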
The RBM is a flatter network (judging the system as a whole) in comparison to the confabulation network. The confabulation network just runs competitions between symbols within modules and always takes the highest value, but since the winning signals proceed from those results towards other modules, it is in a sense hierarchical.
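A tiny sketch of that competition step, as I understand it: each module holds competing symbols, the most excited symbol wins, and the winner passes excitation to the next module over links. The symbols and link strengths below are entirely made up for illustration.

```python
def winner(module_excitations):
    """Each symbol in a module competes; the highest excitation wins."""
    return max(module_excitations, key=module_excitations.get)

# Illustrative link strengths from a winning symbol to the symbols of
# the next module (invented numbers, just to show the flow).
links = {"cat": {"sat": 0.9, "ran": 0.4},
         "dog": {"sat": 0.3, "ran": 0.8}}

module_1 = {"cat": 0.7, "dog": 0.5}
w1 = winner(module_1)              # "cat" wins its module
module_2 = links[w1]               # excitation flows downstream
print(w1, "->", winner(module_2))  # cat -> sat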
It'd be really interesting to identify the specific properties of each network and then see if they can be used together. It's also possible that we're thinking about this the wrong way. The continuous processing of the confabulation network is quite different from the others. We like to think from one static situation to the next; perhaps the whole thing is more dynamic than that, and we should focus on generating states, looping back and reprocessing the results, thus continuously adding more results to some hypothesis.
Since A.I. is also a lot about search in large search spaces (think chess!), a neural network could be used to generate a hypothesis step by step, until a particular branch is deemed unlikely to produce a good result, at which point it can be terminated.
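A short sketch of that idea: grow hypotheses step by step with some scoring function (a neural network in the framing above; a toy scorer here) and prune branches whose score falls below a threshold. Both `expand` and `score` are hypothetical stand-ins.

```python
def expand(hypothesis):
    """Toy generator of possible next steps; a network would propose these."""
    return [hypothesis + [s] for s in ("a", "b", "c")]

def score(hypothesis):
    """Stand-in for a learned evaluation; prefers 'a' and 'b' steps."""
    return sum(1.0 if s in ("a", "b") else -1.0 for s in hypothesis)

def search(depth=3, threshold=0.0):
    frontier = [[]]
    for _ in range(depth):
        candidates = [h for hyp in frontier for h in expand(hyp)]
        # prune: keep only branches still deemed promising
        frontier = [h for h in candidates if score(h) >= threshold]
    return max(frontier, key=score)

print(search())
```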