John Holland wrote one of the most interesting books I've read so far, "Emergence". And it's not even the size of the Bible. :).
My previous musings on cognitive science and neural networks and artificial reasoning are greatly influenced by this book.
As I've stated in an earlier post on this blog, I've sketched out an argument that the "output-as-we-know-it" from artificial networks isn't all that useful from a reasoning perspective; the state of the network tells us a lot more about "meaning" than measuring signals at the output tendrils does. I'm not sure whether very large, very complicated neural networks would even have output in the usual sense.
The book "Emergence" provides a potential new view on this topic. It makes clear that feed-forward networks (as used in some A.I. implementations) cannot have indefinite memory. Indefinite memory is basically the ability of a network to start reverberating once it recognizes excitation at the input, and to sustain that reverberation afterwards. The capabilities of a network without memory are greatly reduced, and after reading the text I dare say that pure feed-forward networks are very unlikely to be at the base of intelligence.
Indefinite memory is caused by feedback loops within the network: a neuron connects back to a neuron in the input layer or an earlier hidden layer, thereby increasing the likelihood that neuron will fire in the next cycle.
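A two-neuron toy model (my own sketch, not from the book; the threshold and weight values are invented) shows how such a feedback loop yields indefinite memory: a single input pulse keeps both neurons firing indefinitely, while without the feedback connection the activity dies out immediately after the pulse.

```python
# Minimal sketch of indefinite memory through a feedback loop: two toy
# neurons excite each other, so one external input pulse keeps them
# firing ("reverberating") long after the input has gone.

def run(steps, pulse_at=0, threshold=0.5, weight=1.0):
    fired_a, fired_b = False, False
    history = []
    for t in range(steps):
        ext = 1.0 if t == pulse_at else 0.0  # external input only once
        # Each neuron fires if external input plus the other's
        # feedback crosses its threshold.
        next_a = (ext + (weight if fired_b else 0.0)) >= threshold
        next_b = (ext + (weight if fired_a else 0.0)) >= threshold
        fired_a, fired_b = next_a, next_b
        history.append((fired_a, fired_b))
    return history

print(run(5))             # reverberates: every step is (True, True)
print(run(5, weight=0.0)) # no feedback: firing stops right after the pulse
```

With `weight=0.0` the loop is severed and the network behaves like a pure feed-forward stage: it reacts to the pulse and then forgets it.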
A feedback network does, however, require additional features. The first is a fatigue factor: as a neuron fires continuously, it becomes fatigued, which gradually decreases the likelihood that it will fire in subsequent rounds. This dampens the effect of continuous excitation (and may explain boredom). The second is a refractory period: a neuron that has just fired raises its firing threshold significantly for the next couple of rounds (about 3-4), further reducing the chance that reverberation sweeps across the network in a kind of epileptic state.
The end result is a network with three important features: synchrony, anticipation and hierarchy. Synchrony means that certain neurons or cell assemblies in the network may start to reverberate together (through the loops). Synchrony is in turn an important factor in anticipation, where cell assemblies reduce their activation thresholds so that they become more sensitive to certain potential patterns; it's as if the network anticipates something to be there, a memory of where things might lead in some context. Hierarchy arises where cell assemblies excite other assemblies, which may then represent a concept slightly higher in the hierarchy (for example a sentence as opposed to a word).
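Anticipation as threshold-lowering can be made concrete with a tiny priming sketch. The assembly names, associations and numbers below are entirely my own invention for illustration: once a "dog" assembly reverberates, a related "bone" assembly becomes sensitive enough that a weak input triggers it, while an unrelated assembly stays silent.

```python
# Toy sketch of anticipation: a reverberating assembly lowers the
# activation thresholds of assemblies it is associated with, making
# them easier to trigger by weak input.

thresholds = {"word_dog": 1.0, "word_bone": 1.0, "word_piano": 1.0}
primes = {"word_dog": ["word_bone"]}  # invented association table

def reverberate(assembly):
    # Prime associated assemblies by lowering their thresholds.
    for other in primes.get(assembly, []):
        thresholds[other] -= 0.4

reverberate("word_dog")
weak_input = 0.7
print(weak_input >= thresholds["word_bone"])   # primed, so it fires
print(weak_input >= thresholds["word_piano"])  # unprimed, stays silent
```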
As discussed in the post on the implementation of humor, we can derive that humor is probably induced by felt changes in the network (electricity and fast-shifting reverberations to other cell assemblies) as changes in context cause sudden changes in excitation across the network.
Thus, humor can be described as a recalibration of part of the network: close enough to the original reverberation pattern, but not so distant as to become incomprehensible.
The final assumption I'm going to make, then, is that a certain state of the network (its reverberating assemblies) corresponds to a particular meaning. There is indeed a kind of anticipation in this network, and recently reverberated assemblies might reverberate very quickly again in the near future (short-term memory).
Then perhaps memory is not so much concerned with remembering every trait and feature as it is observed, but with storing and creating paths of execution and cell assemblies throughout the network, and making sure they reverberate when they're supposed to. Memory then isn't "putting a byte of memory into neuron A"; it's the reverberation of cell assemblies in different parts of the network. Categorization is then basically recognizing that certain cell assemblies are reverberating, thus detecting similarities. We've already seen that anticipation reduces the threshold of other assemblies to reverberate, although it doesn't necessarily excite them.
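If categorization means noticing which assemblies reverberate together, then similarity reduces to overlap between sets of active assemblies. The assembly names and the overlap measure (a Jaccard index) below are my own illustration, not anything from the book:

```python
# Toy sketch: categorization as detecting overlap between the sets of
# cell assemblies that a pattern causes to reverberate.

def similarity(active_a, active_b):
    """Jaccard overlap between two sets of reverberating assemblies."""
    if not active_a and not active_b:
        return 0.0
    return len(active_a & active_b) / len(active_a | active_b)

seen_cat = {"fur", "four_legs", "whiskers", "meow"}
seen_dog = {"fur", "four_legs", "bark", "tail_wag"}
novel_animal = {"fur", "four_legs", "whiskers"}

# The novel pattern shares more active assemblies with "cat" than with
# "dog", so it is categorized as cat-like.
print(similarity(novel_animal, seen_cat))  # 0.75
print(similarity(novel_animal, seen_dog))  # 0.4
```

Note that nothing here stores the novel animal explicitly; the "category" is just the degree of co-reverberation with patterns seen before.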
The question then is, of course, how the brain detects which assemblies are reverberating. For this theory to make any sense, it requires a detector with knowledge of activity across the entire brain, as if it knows where activity is taking place in order to attach a kind of meaning to it. The meaning doesn't need to be translated into words yet; it's just knowing that something looks like (or is exactly like) something seen before.
Actually, another interesting thing about memory is that different paths can lead to the same excitation. So the smell of grass, the sight of grass, the word grass, the sound of the word and other representations may all be somehow connected.
In this thought-model, if we formed sentences by attaching nouns to reverberating assemblies, it might be possible to utter sounds from wave-forms attached to those concepts, and perhaps to use the path of context modification (how the reverberating assemblies shift to new parts of the network) to choose the correct wording. Or rather, I can imagine that multiple assemblies are active at the same time, each also modifying the context.
Multiple active assemblies seem the more plausible suggestion. They would enable higher levels of classification in different ways, although this does not yet explain our mind's ability to re-classify items based on new knowledge. Do we reshape our neural network that quickly? I must say that we do seem to keep repeating previous mistakes for a certain period of time, until at some point we unlearn them and relearn things properly. Unlearning something has always been known to be more difficult than learning it.
A very interesting thought here is the idea of the referee. If the network is allowed to reverberate into a specific state, how do we learn so effectively? We continuously seem to test our thoughts against reason and against an explanation of how things should be. Is there a separate neural network on the side that tests the state of the main network against an expected state? That would, however, require two brains inside one, with one being perfect and correct so it can measure the output of the other, which invalidates the model. Perhaps the validity of the network can instead be tested against its own tacit knowledge. Does it make sense for certain categories or cell assemblies to reverberate in unison? If they have never done so before, then perhaps incorrect conclusions are being drawn, which should cause the network to discard the possibility, reduce the likelihood of reverberation of a certain cell assembly, and keep looking for sensible co-reverberation.
To finalize the topic for now... Emergence requires a network of agents that interact through a set of simple rules. The rules I found most interesting are described in this blog post. But I can't help wondering about the role of DNA. DNA is said to have its own memory, and it's also known to represent a kind of blueprint. Recently, some researchers have stated that DNA isn't necessarily fixed and static, but that parts of it can become modified within a person's lifetime. That would be a very interesting discovery.
Anyway, if we take DNA as the building blocks for a person's shape, features and biological composition (aside from influences such as bad eating habits and so on), then we have certain body features that are controlled by DNA, and probably certain human behaviour that is reflected in our children ("he takes after him/her").
Just the recognition that behaviour can be passed on to children makes a strong case that the building of the human brain is determined by rules prescribed by the DNA: a kind of "brain blueprinting", a recipe for how to build a brain through a set of rules.
So, we could create a neural network through entirely random rules and see what happens, but we could also imagine the construction of that network following rules determined through evolution, which would make a particular network more effective with each generation. It's a big question. Real connections are formed by neurons that just happen to be close to one another, and I cannot imagine a neuron on one side of the brain managing to connect to a neuron at a significant distance.
Maybe the construction of this network is governed by a lower level of emergence, driven by smaller elements like DNA and whatever else is there at an organism level. Perhaps our consciousness starts with those minuscule elements?
Or just maybe the growth of the brain is entirely random. We could then consider the possibility that neurons exist somewhere and grow towards one another. Then, through Hebb's rule, the network might continuously attempt to reverberate, killing off those axons between neurons that never reverberate together (and thus have no useful interconnection with one another). Especially in the first four years, these connections (axons) grow like wildfire, continuously. It takes four years for a network of 50 billion neurons to start producing sensible results; we generally kick-start an artificial network and almost expect it to produce something interesting after five minutes.
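The grow-then-prune idea can be sketched as follows. This is a crude caricature, not a faithful model: the network size, learning rate and decay rate are all invented, and "Hebb's rule" is reduced to its slogan (links between co-firing neurons strengthen, unused links decay and are pruned).

```python
import random

# Toy sketch of growth followed by Hebbian pruning: connections start
# out random and weak, co-firing strengthens them, disuse decays them.

N = 20
random.seed(1)
weights = {(i, j): random.random() * 0.1
           for i in range(N) for j in range(N) if i != j}

def hebbian_round(weights, fired, lr=0.05, decay=0.01):
    for (i, j), w in weights.items():
        if i in fired and j in fired:
            weights[(i, j)] = w + lr               # fire together, wire together
        else:
            weights[(i, j)] = max(0.0, w - decay)  # unused links wither

# One small assembly repeatedly co-fires; everything else stays silent.
assembly = set(range(5))
for _ in range(50):
    hebbian_round(weights, assembly)

# Prune "dead" axons: links whose weight has decayed to zero.
pruned = {k: w for k, w in weights.items() if w > 0.0}
print(len(pruned))  # only within-assembly links survive: 5 * 4 = 20
```

Out of 380 random initial connections, only the 20 links inside the co-firing assembly survive; the rest are pruned away, exactly the "kill axons that never reverberate together" process described above.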
It would be very interesting research to find out whether this kind of growth/evolution can be jump-started and done in much less time through the application of a computer cluster (or whether the brain can run on clusters in the first place :).