Monday, May 19, 2008

Quantum consciousness, curling and glials...

Prof. Penrose's book is a very interesting read. For core A.I. people who believe that everything is perfectly computational, human thought included (leaving aside memory, processing and storage resources), it can be a bit disheartening. Prof. Penrose asserts that there are things about the mind that can be simulated, though not perfectly, but that simulating them does not by itself make a machine aware.

This of course depends on the definition of awareness, on how awareness arises in the human brain, and on whether awareness is real or whether we merely think we're aware while being in the same computed state as anything else.

Another interesting thought here concerns the ability to steer thought and to visualize things "in the mind's eye", as it's called. The interesting point is that besides regular perception, which reacts to stimuli, we can also invoke stimuli in our own brain, mostly for thought experiments or for dreaming. Although the last objective is nicer :).

I refer to the last post, "The plot thickens". I imagine that glial cells may have a more active role than previously perceived. Neuroscientists have, for example, long asserted that the glial cells are mere cleaners.

Ever heard of the sport curling? A heavy stone of about 20 kg is "thrown" so that it slides along the ice. A curling team has four players; for each throw, one player delivers the stone (the neuron) while two sweep the ice in front of it. The sweepers can affect the trajectory of the stone, lengthening its path or straightening its curl, and thereby affect the nature of the game.

One could imagine that the sweepers in curling have a role similar to that of the glial cells. If that's true, it certainly makes things more interesting, and the search for the real "consciousness" would also just be starting. Would it be something in the brain? Would it be more like "waves"? Those glials then sort of become the manipulators of the neuron cells, like an overwatch or a teacher. If one maintains Hebb's theory, the plot thickens indeed. Learning then isn't necessarily triggering neurons based on input and making the cells fit; learning would be controlled by a supporting network ten times the size of the neuron population, which seems to be driven by some other force.

At this point I'm not sure how to imagine consciousness, nor do I understand very well the ability and power of the glial cells to influence the behaviour of neurons (slight modifications of a neuron's firing pattern, or perhaps modification of the network itself?). Is this where consciousness is really located?

Some people resort to quantum consciousness.

Friday, May 09, 2008

The plot thickens...

The plot thickens as they say... And this time it's about glue!

Only fairly recently did scientists discover that the glial cells around the neurons may be a bit more active than just removing waste and feeding the neurons. Read more about glial cells here.

Actually, there are 10-15 times more glial cells than neurons. So if you thought 100 billion was a lot of cells, there are 10-15 times that number in supporting cells.

The cells are mostly responsible for maintenance: they regulate chemicals, clean up waste and regulate the blood supply. But they also produce the myelin sheath around the axon (speeding up its signal conduction) and can act as scaffolding for the generation of new neurons.

So, you could say that glials are the implementation of the rules that create and maintain the network on a microscopic level. Throughout the network's lifetime, they continuously check the environment and can actually reshape the network locally. This is interesting, because it means that complicated and active maintenance functions are taking place on the neural network.

Thursday, May 08, 2008

Neural Network Hierarchies

The book I read discussed the possibility of cell assemblies and cell assembly resonance through recursive loops in the neural network. It also stated the possibility of cooperating neural networks that are each allocated a specific function. See the following page:

http://faculty.washington.edu/chudler/functional.html

And then the following:

http://faculty.washington.edu/chudler/lang.html

I'm not sure if anyone has ever considered joining neural networks together in a sort of serial combination. The difference between these networks is that, for example, vision is only allocated the task of recognizing images / shapes / forms / colors and translating them into numbers, while the auditive system processes sounds. If you look at the images closely, you see that there are actually two kinds of networks for each perception method: an associative cortex and a primary cortex.

If you look at the production of speech, it's a number of different areas all working together. This gives us clues about how human speech is really put together.

Imagine those networks all working together. As a matter of fact, there are more neurons at work than just the brain: the eyes also have neurons and already form the first stage of processing optical information. Suppose we'd like to make a computer utter the word "circle" without simply recognizing the circle and playing a wave file. We'd have to make it learn to do so:
  • Convert the pixels (camera?) to a stream of digital information, which can be processed by the visual cortex.
  • Analyze the shapes or image at the very center of the image (see motor reflexes and voluntary movement of the eye to accomplish the scanning of the environment for receiving more information).
  • The visual cortex will then produce different signals as part of this exercise and the reverberating cell assemblies generate new input signals for the more complex processing of such information (memory? context?)
  • This is then output to the speech area, where the "words" are selected to produce (mapping of signals to concepts).
  • The information is then passed to the Broca area, where it is contextualized and put into grammar.
  • The instructions of the Broca area (which could have a time-gating function and verify the spoken word against the words that should be uttered) are sent to the primary motor cortex, which has learned through frequent practice to produce speech.
  • The speech organs then move in concert, driven by the signals sent to them.
The above sequence displays a very interesting point. Wernicke's area is involved in understanding heard words, Broca's area is involved in producing words.

So, this sequence shows that these areas work together and that together, the emergent phenomenon can produce very interesting behaviours.
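The serial combination sketched above can be caricatured as a chain of stage functions. Every stage name and mapping below is a hypothetical stand-in for the corresponding cortical network, not a model of it; the point is only the plumbing of output-to-input between stages:

```python
# Hypothetical stand-ins for the cortical stages in the list above.

def visual_cortex(pixels):
    # Stages 1-3: turn raw "pixel" input into a recognized concept.
    return "circle" if pixels == "o" else "unknown"

def speech_area(concept):
    # Stage 4: map the recognized concept to a candidate word.
    return {"circle": "circle"}.get(concept, "?")

def broca_area(word):
    # Stages 5-6: wrap the word in minimal grammar.
    return f"a {word}"

def motor_cortex(utterance):
    # Stage 7: emit per-symbol articulation commands for the organs.
    return list(utterance)

# Serial combination: each network's output feeds the next one.
pipeline = [visual_cortex, speech_area, broca_area, motor_cortex]

signal = "o"
for stage in pipeline:
    signal = stage(signal)
print("".join(signal))  # articulation commands spelling "a circle"
```

A failure anywhere in the chain corrupts everything downstream, which is exactly the verification problem discussed below: only the end-to-end result can be checked.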

I'm not sure if these networks can be built by just trying them out at random. There's also a huge problem with verifying the validity of a network. We can only validate it when the output at the other side (hearing the word) makes sense given the input (the visual image of a circle); anything that happens in between can introduce problems into this chain. Also, a very long learning period is presumably required to produce the above scenario. Remember that children learn to speak only after about 1.5-2 years, and then only produce words like 'mama' / 'papa' (as if those words were embedded as memory in DNA).

Important numbers and statements

The following are statements that are important to remember and re-assess for validity:
  1. The brain consumes 12W of energy. Ideally, artificial simulations of the brain should respect this energy consumption level, but this seems far from possible because the individual elements used in artificial intelligence consume far more power, and the factors involved in this calculation run into the thousands.
  2. It should be parallel in nature, similar to neuron firings (thread initiations) that fire along dendrites and synapses. If not, the model should assess scheduling in a single thread of operation.
  3. It should be stack-less and not have function unwinds.
  4. The brain has about 100 billion neurons.
  5. The fanout (connections) with other neurons is between 1,000 and 10,000 (others report this to be 20,000).
  6. It's not so much the working neural network that is interesting, but the DNA or construction rules that generate the working neural network. That is, it's more interesting to come up with rules that determine how a network is to be built than build a network that works and not being able to reconstruct it elsewhere.
  7. How can the state of the network be observed in an intelligent way, so that conclusions can be deduced from the observation by the network? (Does it need to do so?)
  8. It is possible that successfully working networks can only evolve / develop over a certain time period and that the initial results look like nothing interesting at all. This statement can be explored further by observing the development of infants.
  9. How does the state of a brain transcend into consciousness? (or is thinking the re-excitation of network assemblies by faking nerve input, imagination, so that images and audio seem to be there?)
  10. Zero-point measurement: My computer (a dual intel E6850 with 2GB low-latency memory) can process 500,000,000 (500 million) neuron structures in 0.87 seconds. That is about 1.14 cycles per second on 500,000,000 neurons. That is still a factor of 100 * 1000 = 100,000 slower than the human brain, assuming it re-evaluates all neurons in one sweep.
  11. For a very simple neuron structure on a 50 billion neuron network that does not yet contain connection information, but stores 3 bytes per neuron for threshold, fatigue and excitation information, 140 GB of memory is required to store the network in memory.
  12. In 2 GB of memory, you can fit 715,000,000 neurons without connection information.
  13. 50 billion neurons need 186404 GB of memory to store an average of 1,000 connections at a pointer size of 4 bytes per neuron.
  14. On my CPU (E6850) with a single thread/process, about 400,000 neurons can reasonably be processed in one sweep. That makes about 1,500 sweeps per second across the entire neuron array.
  15. In 2GB of memory, it's possible to fit 500,000 neurons with connection information.
I'm therefore choosing 500,000 neurons as the basis of the network, which might eventually translate to a frequency of about 1,000 Hz if the sweeps are designed more carefully. (The 1,000 Hz is derived from the extremely high firing rates observed in the human brain, about 200 pulses per second; add the absolute refractory period, which lasts 3-4 cycles, and 1,000 Hz emerges.)

500,000 seems to be the limit, due to memory and due to the CPU cycles needed to attain the same frequency. That is a factor of roughly 100,000 lower than the human brain, and it's more or less maxing out the machine.
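Items 11-15 can be reproduced with a little arithmetic. The sketch below assumes, as the list does, 3 bytes of state per neuron, 4-byte connection pointers, an average fanout of 1,000, and binary gigabytes:

```python
GIB = 2**30  # the post's "GB" figures match binary gigabytes

def state_only_bytes(neurons, bytes_per_neuron=3):
    """Memory for per-neuron state (threshold, fatigue, excitation)."""
    return neurons * bytes_per_neuron

def with_connections_bytes(neurons, fanout=1000, ptr_size=4,
                           bytes_per_neuron=3):
    """Memory including an average fanout of connection pointers."""
    return neurons * (bytes_per_neuron + fanout * ptr_size)

# Item 11: 50 billion neurons, state only
print(state_only_bytes(50_000_000_000) / GIB)        # ~140 GB

# Item 12: neurons that fit in 2 GB without connection info
print(2 * GIB // 3)                                  # ~715 million

# Item 13: 50 billion neurons with 1,000 connections each
print(with_connections_bytes(50_000_000_000) / GIB)  # ~186,404 GB

# Item 15: neurons with connection info that fit in 2 GB
print(2 * GIB // (3 + 1000 * 4))                     # ~536,000
```

The last figure shows why 500,000 neurons is a comfortable choice within 2 GB once connection information is included.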

Wednesday, May 07, 2008

The emergence of intelligence

John Holland wrote one of the most interesting books I've read so far, "Emergence". And it's not even the size of the Bible. :).

My previous musings on cognitive science and neural networks and artificial reasoning are greatly influenced by this book.

As I've stated in one of the posts on this blog, I've sketched out an argument that the "output-as-we-know-it" from artificial networks isn't as useful from a reasoning perspective; the state of the network says a lot more about "meaning" than measuring output at output tendrils does. I'm not sure whether very complicated and very large neural networks would even have a kind of output.

The book "Emergence" provides a potential new view on this topic. It makes clear that feed-forward networks (as used in some A.I. implementations) cannot have indefinite memory. Indefinite memory is basically the ability of a network to start reverberating once it recognizes excitation at the input, and to keep that activity going afterwards. The capabilities of a network without memory are greatly reduced, and after reading the text I dare say that pure feed-forward networks are very unlikely to be at the base of intelligence.

Indefinite memory is caused by feedback loops within the network: a neuron connects back to some neuron in a previous input or hidden layer, thereby increasing the likelihood that that neuron will fire in the next cycle.

There are, however, additional features required for a feedback network: a fatigue factor, and a sharply raised firing threshold for a short period after a neuron has fired. As neurons fire continuously they become fatigued, gradually decreasing the likelihood that they fire in subsequent rounds. This dampens the effect of continuous excitation (and may explain boredom). In addition, neurons that have just fired raise their firing threshold significantly for the next few rounds (about 3-4), further reducing the chance of the network reverberating in a kind of epileptic state.
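A minimal sketch of such a unit, with illustrative (made-up) values for the fatigue increment, the recovery rate and the refractory penalty:

```python
class Neuron:
    """Unit with fatigue and an absolute refractory period.
    All parameter values below are illustrative guesses."""

    def __init__(self, threshold=1.0):
        self.base_threshold = threshold
        self.fatigue = 0.0   # grows with repeated firing
        self.refractory = 0  # cycles left at a raised threshold

    def step(self, input_sum):
        # A recently fired neuron needs far stronger input to fire.
        threshold = self.base_threshold + self.fatigue
        if self.refractory > 0:
            threshold *= 10.0
            self.refractory -= 1
        if input_sum >= threshold:
            self.fatigue += 0.2  # continuous firing dampens the cell
            self.refractory = 4  # ~3-4 cycles of raised threshold
            return True
        self.fatigue = max(0.0, self.fatigue - 0.05)  # slow recovery
        return False

# Constant strong input: the neuron fires, then stays quiet while
# refractory and fatigued, instead of reverberating epileptically.
n = Neuron()
pattern = [n.step(1.5) for _ in range(12)]
print(pattern)  # fires only on cycles 1, 6 and 11
```

Even under relentless input the unit settles into sparse, periodic firing, which is the anti-epileptic effect described above.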

The end result for such a network is three important features: synchrony, anticipation and hierarchy. Synchrony means that certain neurons or cell assemblies in the network may start to reverberate together (through the loops). Anticipation means that cell assemblies reduce their activation thresholds, becoming more sensitive to certain potential patterns; it's as if the network anticipates something to be there, a memory of where things might lead in some context. Hierarchy means that cell assemblies may excite other assemblies, which may then represent a concept slightly higher in the hierarchy (for example a sentence as opposed to a word).
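The anticipation mechanism in particular can be sketched as one assembly lowering another's activation threshold without exciting it directly. Assembly names, the link table and the sensitization factor are all made up for illustration:

```python
# Two hypothetical cell assemblies; A "anticipates" B.
thresholds = {"A": 1.0, "B": 1.0}
links = {"A": ["B"]}

def reverberate(assembly, input_strength):
    """Fire if input reaches the threshold; on firing, sensitize
    (but do not excite) the assemblies this one anticipates."""
    fired = input_strength >= thresholds[assembly]
    if fired:
        for other in links.get(assembly, []):
            thresholds[other] *= 0.7  # lower threshold, no excitation
    return fired

print(reverberate("B", 0.8))  # False: B's threshold is still 1.0
print(reverberate("A", 1.2))  # True: A fires and sensitizes B
print(reverberate("B", 0.8))  # True: the same input now triggers B
```

The same weak input fails before A has fired and succeeds afterwards, which is the "context makes the network expect something" effect.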

As discussed in the post on the implementation of humor, we can surmise that humor is probably induced by the felt changes in the network (electricity and fast-changing reverberations shifting to other cell assemblies) as changes in context develop sudden changes in excitation across the network.

Thus, humor can be described as a recalibration of part of the network that is close enough to the original reverberation pattern, but not as distant as to become incomprehensible.

The final assumption I'm going to make, then, is that a certain state of the network (its reverberating assemblies) corresponds to a particular meaning. There is indeed a kind of anticipation in this network, and recently reverberated assemblies might reverberate again very quickly in the near future (short-term memory).

Then perhaps memory is not so much concerned with remembering every trait and feature as it is observed, but more with storing and creating paths of execution and cell assemblies throughout the network, and making sure they reverberate when they're supposed to. Memory then isn't "putting a byte of memory into neuron A"; it's the reverberation of cell assemblies in different parts of the network. Categorization is then basically recognizing that certain cell assemblies are reverberating, thus detecting similarities. We've already seen that anticipation reduces the threshold for other assemblies to reverberate, although it doesn't necessarily excite them.

The question then, of course, is how the brain detects which assemblies are reverberating. For this theory to make any sense, it requires a detector with knowledge spanning the entire brain, as if it knows where activity is taking place around the network so as to assign a kind of meaning to it. The meaning doesn't need to be translated into words yet; it's just knowing that something looks like (or is exactly like) something seen before.

Actually, another interesting thing about memory is that different paths can lead to the same excitation. So the smell of grass, the sight of grass, the word grass, the sound of grass and other representations may all somehow be connected.

In this thought-model, if we would form sentences by attaching nouns to reverberating assemblies, it may be possible to utter sounds from wave-forms attached to those concepts and perhaps use the path of context modification (how the reverberating assemblies shift to new parts) to choose the correct wording. Or actually, I can imagine that multiple assemblies are active at the same time, also modifying the context.

Multiple active assemblies seem like the more plausible suggestion. They would enable higher levels of classification in different ways, although this does not yet explain the ability of our mind to re-classify items based on new knowledge. Do we reshape our neural network that quickly? I must say that we do seem to repeat old mistakes for a certain period of time, until at some point we unlearn the behaviour and relearn it properly. Unlearning something has always been known to be more difficult than learning it.

A very interesting thought here is the idea of the referee. If the network is allowed to reverberate in a specific state, how do we learn so effectively? We continuously seem to test our thoughts against reason and against explanations of how things should be. Is there a separate neural network on the side that tests the state of the network against an expected state? That would require two brains inside one, with one being perfect and correct in order to measure the output of the other, which invalidates that model. Perhaps the validity of the network can at some point be tested against its own tacit knowledge: does it make sense that certain categories or cell assemblies reverberate in unison? If they have never done so before, then perhaps incorrect conclusions are being drawn, which should cause the network to discard the possibility, reduce the likelihood of reverberation of a certain cell assembly, and keep looking for sensible co-reverberation.

To finalize the topic for now: emergence requires a network of agents that interact through a set of simple rules. The rules I found most interesting so far are described in this blog post. But I can't help wondering about the role of DNA. DNA is said to have its own memory, and it's also known to represent a kind of blueprint. Recently, some researchers have stated that DNA isn't necessarily fixed and static, but that parts of it can be modified within a person's lifetime. That would be a very interesting discovery.

Anyway, if we take DNA as the building blocks for a person's shape, features and biological composition (besides the shape influences due to bad eating habits and so on), then we have certain body features that are controlled by DNA and probably certain human behaviour that is reflected in our children ( "he takes after him/her" ).

Just the recognition that behaviour can be passed on to children makes a strong case that the building of the human brain is determined by rules prescribed by the DNA, a kind of "brain blue-printing": a recipe for how to build a brain through a set of rules.

So, we could create a neural network through entirely random rules and see what happens, but we could also think of the construction of that network as following certain rules determined through evolution, which would make a particular network more effective with each generation. It's a big question. Real connections are formed by neurons that happen to be close to one another, and I cannot imagine a neuron on one side of the brain managing to connect to a neuron at a significant distance.

Maybe the construction of this network is determined by a lower level of emergence, which is determined by smaller elements like DNA and whatever else is there at an organism level. Perhaps our consciousness starts with those minuscule elements?

Or just maybe the growth of the brain is entirely random. We could then consider the possibility that neurons exist somewhere and grow towards one another. Then, through Hebb's rule, the network might continuously attempt to reverberate and kill off those axons between neurons that never lead to reverberating together (and thus have no useful interconnection with one another). Especially in the first four years, these connections (axons) grow continuously, like wildfire. It takes four years for a network of 50 billion neurons to start producing some sensible results; we generally kick-start a network and almost expect it to produce something interesting after five minutes.
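The grow-at-random-then-prune idea can be sketched as a toy pass: grow random axons, record which neurons happen to fire together over a number of sweeps, and remove axons whose endpoints rarely co-fire. The sizes, probabilities and the 0.1 cutoff are all illustrative, and the random activity stands in for real stimulation:

```python
import random

random.seed(1)
n_neurons = 20

# Random initial growth: axons sprout between arbitrary neuron pairs.
axons = {(i, j) for i in range(n_neurons) for j in range(n_neurons)
         if i != j and random.random() < 0.3}

# Record which neurons fire in each of 50 random activity sweeps.
history = [{i for i in range(n_neurons) if random.random() < 0.4}
           for _ in range(50)]

def co_fire_rate(i, j):
    """Fraction of sweeps in which both endpoints fired."""
    return sum(1 for fired in history
               if i in fired and j in fired) / len(history)

# Hebb's rule in reverse: keep only axons whose endpoints
# fire together often enough; the rest never "wire together".
pruned = {(i, j) for (i, j) in axons if co_fire_rate(i, j) >= 0.1}
print(len(axons), "->", len(pruned))
```

Run long enough with structured (rather than random) activity, the surviving axons would reflect the statistics of the input, which is the intuition behind letting the network develop over years rather than minutes.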

It would be very interesting research to find out whether this kind of growth/evolution can be jump-started and completed in much less time on a computer cluster (or whether the brain can run on clusters in the first place :).

Monday, May 05, 2008

On content and process

I recently read a very interesting post regarding the difference between content and process. Process is basically determining action based on the context, and is very much done in the here and now. Content has to do with the analysis of concepts and the relationships between them, and could be taken as learning experience. Process can also be learned, but as the enhancement of action (reflex) upon the perception of content identified in some way. Content itself is deep-rooted knowledge of how a concept might have gotten somewhere, or how it might relate to other concepts (in various possible ways).

Maybe if you don't appreciate the arts, you're a person who strongly prefers process (objectives): getting things done, moving from A to B without caring much about the how and where. People who really dig art and content may not be as efficient at getting things done, but they understand the relations between concepts better and "enjoy the journey" :).

This is an interesting differentiation, of course. If not achieving your objective causes frustration, then this might also explain why some people feel depressed, frustrated or stressed more than others. Some are just there for the journey and the pleasure; others always want to be somewhere else just as they've gotten somewhere.

The argument of the article's author was that long, continuous exposure to video games and films, for example, doesn't sufficiently train the content-analyzing capabilities. It trains people only to get things done, without training them in the pleasure of analyzing the interrelationships and contents of things.

Historically, humans have mostly lived together and generally spent a lot of time interacting with one another, developing and improving interpersonal relationships. Virtual environments, however, are loaded with objectives "just to make it interesting", so the argument that such social environments improve relationships isn't a natural one; they might just provide another excuse for achieving objectives.

The article also argued that the development of the individual (the recognition of who you are, your "self-idea") isn't as advanced. Stated another way, you're not sufficiently self-aware or "individuated". This creates pressure to make more of an effort to be recognized as a specific type of person, a projected self-image.

This lack of individuality could then be compensated for by collecting status symbols, generally projected symbols of what one considers success. Those symbols are basically material trophies like cars, houses and other things thought to add up to one's identity. Sad to say, though, one can never gather enough items for individualization; there's always room for more "self-articulation", which explains the inexhaustible search for new items to conquer.

Thursday, May 01, 2008

Cocomo II implementation on Project Dune

As discussed in previous posts, I've been designing an approach for Cocomo II software project estimations. The implementation has already been added to Project Dune on the main branch and is destined for finalization in a new major version of the project, Project Dune 2.0.

The first step in estimation is to determine the project size:

http://gtoonstra.googlepages.com/sizing.png

The next step in this approach is to determine the effort multipliers. Those multipliers scale the required development effort linearly.

http://gtoonstra.googlepages.com/effortadjustment.png

The scale factors in the next tab increase effort exponentially, so if those end up really high, the required effort soars as well:

http://gtoonstra.googlepages.com/scalefactors.png

After the factors are supplied, the outcome of Cocomo II is a set of numbers. The most useful numbers at this time are listed here:

http://gtoonstra.googlepages.com/result.png

Notice in the different screenshots how the factor being manipulated is explained in the tooltip. The text in the middle is a reflection of the correct assessment of that factor.

The results show the person-months required to develop the project, the nominal person-months (if all scale factors and multipliers were nominal), and the time-to-develop together with the number of staff required to implement the project.
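For reference, the numbers behind these screens follow the standard COCOMO II post-architecture equations. The sketch below uses the published COCOMO II.2000 calibration constants (A=2.94, B=0.91, C=3.67, D=0.28) and, in the example call, the nominal scale-factor ratings; treat the exact constants as assumptions to be checked against the model definition manual:

```python
# COCOMO II post-architecture model (COCOMO II.2000 calibration).
A, B, C, D = 2.94, 0.91, 3.67, 0.28

def cocomo2(ksloc, scale_factors, effort_multipliers):
    """Return (person-months, time-to-develop in months, avg staff)."""
    # Scale factors act exponentially on size...
    E = B + 0.01 * sum(scale_factors)
    pm = A * ksloc ** E
    # ...while effort multipliers scale effort linearly.
    for em in effort_multipliers:
        pm *= em
    # Schedule equation (no schedule compression assumed).
    F = D + 0.2 * (E - B)
    tdev = C * pm ** F
    return pm, tdev, pm / tdev

# Example: a 50 KSLOC project, nominal scale-factor ratings
# (PREC, FLEX, RESL, TEAM, PMAT) and all 17 multipliers nominal.
pm, tdev, staff = cocomo2(50, [3.72, 3.04, 4.24, 3.29, 4.68], [1.0] * 17)
print(round(pm, 1), round(tdev, 1), round(staff, 1))
```

With everything nominal this lands at roughly 200-odd person-months over well under two years, which is the kind of result the last screenshot reports.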