Tuesday, December 18, 2007

Architecture Of Mind

Previous posts discussed many individual aspects of the inner workings of the mind that I have read about so far. As a SW architect, I prefer pictures over words to convey meaning. I've been working on a picture that is reminiscent of the OSI layer model in Computer Science, which describes how communication takes place over the Internet. The picture is here:


Based on the books, it shows that at the physical layer, we have neurons and chemicals that somehow interact together. Much like a computer, this layer only becomes interesting for absolute experts. It operates at such a low level that understanding it might make clear how things work mechanically, but since it is so complex, executes at a very high frequency and is most probably parallel, it is difficult to develop expectations at that level of operation, unless you subdivide the most basic functions (CPU instructions) into grouped functions with a particular purpose, and generalize further from there.

In this picture, I see neurons as transistors or silicon. So they are basically thousands and thousands of black boxes that interoperate, and the sum of all their minuscule operations has a certain effect. The silicon accommodates the flow of current for communication between transistors, while the transistors modify that flow. The same is probably true for neurons and synapses. This is about where the direct analogy should stop, as I believe that the current household computer as we know it has a totally different architecture than the human mind, due to its requirements of determinism and finiteness. The human mind could be infinite (?) in its capability to produce new thoughts and new goals, based on previous contexts. Computer programs are mostly single-goal oriented and are generally not engineered to produce new goals along the way.

The physical layer thus contains neurons (or "mush") that accommodate emotions, feelings, the mind's eye, pattern recognition, rotation, language(?), reasoning and thus intelligence. Intelligence can probably also be described as the interaction between pattern recognition and reasoning (which is setting new goals or imagining consequences, i.e. developing expectations, based on previous experiences). Inventions and innovations are the acts of making new associations where previously there were none. With each newly created association, we likely become more intelligent. One's aptness to develop associations is probably one's IQ. Emotional Intelligence is probably the sympathy in observing another's behaviour and developing expectations based on those observations through other associations, plus one's aptness to read behaviour (be attentive to signals) and so on.

In the previous post it was said that the main goals of our being are determined by our emotions and feelings. Feeling hungry means instinctively searching for food that is edible. If food is not around, it invokes our intelligent system to determine the best course of action to get it (or to zero the emotion if the effort is larger than the desire). If you "feel" lazy, you might want to buy something at the local gas station rather than go to the supermarket. So the emotional system works very closely together with the intelligent system to resolve problems. If you have access to a car, you might want to go into town, park the car and do some shopping. Unless you forgot where you put your car keys, in which case frustration might set in and a new goal is set: taking the bike. Unless it looks like rain outside and the feeling of wetness is not a very pleasant foresight, which might cause you to look into the freezer and defrost some ready-made meal after all, even though the initial goal was looking forward to something fresher. The interaction of emotions, feelings and intelligence is clear. The discovery, through our intelligence and memory, that something is impossible might cause feelings of frustration, which might release chemicals that put our brain into a higher state of awareness, much like adrenaline prepares the body for a fight.
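To make that interaction concrete, here is a minimal sketch in Python of the effort-versus-desire arbitration described above. The plans, the numbers and the choose_action function are all hypothetical illustrations of mine, not something taken from the books.

```python
# A minimal sketch of effort-versus-desire arbitration.
# All names and numbers here are invented for illustration.

def choose_action(desire, plans):
    """Pick the least-effort plan, or drop the goal if even that
    costs more than the desire is worth ("zero the emotion")."""
    best = min(plans, key=lambda p: p["effort"])
    if best["effort"] > desire:
        return None  # effort outweighs desire: the goal is abandoned
    return best["name"]

# Feeling hungry (strong desire) versus feeling lazy (effort weighs heavier).
plans = [
    {"name": "walk to supermarket", "effort": 5.0},
    {"name": "buy at gas station", "effort": 2.0},
    {"name": "defrost ready-made meal", "effort": 1.0},
]
print(choose_action(desire=4.0, plans=plans))  # -> 'defrost ready-made meal'
print(choose_action(desire=0.5, plans=plans))  # -> None (too lazy to bother)
```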

The main goal in this case is always set by emotions or feelings. I haven't yet thought deeply enough to find cases where a main goal is 100% determined by intelligence. Intelligence can set subgoals to achieve the main goal. The subgoals are determined by imagining how the main goal can be achieved most efficiently, which is done by looking for a path to that goal. This mostly means looking at historic events and how well things worked out in the past. If we encounter a barrier that blocks access to the main goal, we invoke our associative mind, our reasoning and our expectations of outcomes to try to remove that barrier. A sense of urgency might change how we reach those goals.
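Viewed this way, subgoal selection looks a lot like path search over remembered transitions. Below is a small sketch under that assumption; the memory graph, the find_path helper and the "lost car keys" barrier are invented for illustration.

```python
# Subgoal selection sketched as path search over remembered transitions.
from collections import deque

def find_path(graph, start, goal, blocked=frozenset()):
    """Breadth-first search; blocked edges model barriers we discover."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], []):
            if nxt in seen or (path[-1], nxt) in blocked:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no path at all: frustration, set a new goal

memory = {  # remembered ways of getting from one situation to another
    "home": ["car", "bike"],
    "car": ["town"],
    "bike": ["town"],
    "town": ["food"],
}
print(find_path(memory, "home", "food"))                     # via the car
print(find_path(memory, "home", "food", {("home", "car")}))  # keys lost: via the bike
```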

So reasoning and assessment rely heavily on the services of the mind's eye, imagination and memory. The logic revolves around imagination and the analogy (pattern similarity) to other outcomes. In the long run, the main goal always determines our eventual behaviour for the event. Subgoals might change our behaviour slightly, or modify it entirely for a while, but should always serve the main goal in the long run.

The picture is not quite complete, as there is closer interaction between the emotional system and behaviour. One can imagine that our body has been programmed to react instinctively to another person's behaviour, a direct response instead of a response evaluated by the mind (if not, we would probably look and act like robots).

There is a continuous interchange between the emotional system and the "Intelligence" system. The "depth" of recursion in the intelligence system (the ability to resolve complex cases like "if this, then that, and then that, however when, if not, etc."), the congruity that a particular mind needs (or the incongruity it can suffer) to find associated material in memory, and the amount of experience are probably the best factors that determine intelligence.
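If the "depth" of recursion really is such a factor, it could be pictured as a depth limit on a recursive evaluator: a shallower limit gives up on nested cases that a deeper one can still resolve. The toy resolve function and the nested-rule structure below are pure assumptions on my part.

```python
# "Depth of recursion" rendered as a depth limit on nested case resolution.

def resolve(case, facts, depth):
    """Resolve a nested (condition, then_case, else_case) structure."""
    if depth == 0:
        return None  # too complex for this mind: no conclusion reached
    if isinstance(case, str):
        return case  # reached a concrete conclusion
    condition, then_case, else_case = case
    branch = then_case if facts.get(condition) else else_case
    return resolve(branch, facts, depth - 1)

# "if raining, then (if keys found, drive, else defrost), else walk"
case = ("raining", ("keys_found", "drive", "defrost"), "walk")
facts = {"raining": True, "keys_found": False}
print(resolve(case, facts, depth=1))  # None: gave up halfway
print(resolve(case, facts, depth=3))  # 'defrost'
```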

As I have said in previous posts, I imagine that the brain is thus not a 'stack-based' computer, but a machine that always moves forward within its own context. I also imagine the context as something fluid rather than carved in stone (as is the case with computers). The following picture shows how I regard memory, which I call "associative memory", since it recalls symbols as we listen to a piece of music, hear speech, see a scene or "think about" / "imagine" things.


The picture shows a line, the "thread" of a conversation or the "thread" of an observation/thought. We have the interesting capability to "steer" our thoughts into new directions. Are we modifying or creating goals at the same time?

The items in associative memory are not just in "drawers" or memory locations like in a computer. A computer may use lists, linear memory or hash keys to organize information, but no matter which method is chosen, there is always a cost to look things up. This cost is expressed in O-notation: a linear scan costs O(n), a balanced tree O(log n), and even a hash table, which approaches O(1), pays for it in memory and collision handling. In general, the more information you store, the higher the cost. This has the nasty side-effect that becoming smarter means becoming slower, and in certain cases some problems become unresolvable unless you make many machines work together. Google is one perfect example: the system basically stores the information on the Internet and uses many, many computers to open up that information to others. However, Google cannot "reason" with that information; it can only process it and modify its relationships for a particular purpose.
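For what it's worth, the lookup-cost point is easy to demonstrate with textbook data structures: a plain list pays O(n) per lookup, while a hash table stays near O(1). The snippet below only illustrates that contrast; it says nothing about how the brain stores anything.

```python
# Lookup cost in a list (O(n)) versus a hash table (roughly O(1)).
import timeit

n = 1_000_000
as_list = list(range(n))
as_dict = {i: i for i in range(n)}

# Searching for the last element: the list scans everything, the dict hashes.
print(timeit.timeit(lambda: (n - 1) in as_list, number=10))  # grows with n
print(timeit.timeit(lambda: (n - 1) in as_dict, number=10))  # roughly constant
```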

So the model above shows a kind of memory that does not store information and make it accessible through keys. It shows memory where elements are naturally associated with one another and where these associated elements are automatically brought forward. The "thread" determines the direction of the context and how further associations are made. The further away from the thread, the lower the activation of that particular synapse or memory element. The context is thus basically the elements that were invoked and what we know about them. The context (the associations) also gives us rules. If a certain (new) association cannot be made, the thread must be redirected or halted and a solution suggested by the reasoning part of the brain. When reading nonsense, we cannot allow the nonsense to be stored as reality in our brain, since that would taint our model of the real world. How do we prevent this from happening? By testing the thread against our current model and seeing how it complies. Changing a belief then means changing certain associations that we took for granted.
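This "thread with decaying activation" idea resembles what the AI literature calls spreading activation, so here is a minimal sketch in that style. The association graph, the weights and the decay factor are arbitrary choices of mine, not claims about real synapses.

```python
# Spreading activation from the current "thread": activation decays the
# further an element lies from the thread.

def spread(graph, sources, decay=0.5, rounds=3):
    activation = {node: 1.0 for node in sources}  # the thread itself
    for _ in range(rounds):
        updates = {}
        for node, level in activation.items():
            for neighbour, weight in graph.get(node, {}).items():
                spilled = level * weight * decay
                updates[neighbour] = max(updates.get(neighbour, 0.0), spilled)
        for node, level in updates.items():
            activation[node] = max(activation.get(node, 0.0), level)
    return activation

associations = {  # weighted links between memory elements
    "grass": {"smell": 0.9, "green": 0.6},
    "smell": {"summer": 0.8},
    "summer": {"holiday": 0.7},
}
print(spread(associations, sources=["grass"]))
# 'smell' comes up strongly, 'holiday' only faintly: further from the thread.
```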

It is also possible that strong beliefs are formed when certain associations are often walked by threads. So associations probably have weights. It is not uncommon for us to believe very strongly that something is associated, only to discover that the association is invalid. We resist breaking the association very strongly, because the new evidence is still a very weak association that does not have many associations with other elements. Only when we forge other associations with the new evidence do we accept it taking the place of the old element. We don't just "forget" the other element either: it becomes like a ghost image superimposed on the initial association, tagged as a false belief.
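A crude way to picture those weights is a reinforcement rule where every traversal strengthens the link a little, so often-walked associations saturate into strong beliefs while fresh counter-evidence starts out weak. The walk function and its update rule below are assumptions on my part, roughly Hebbian in spirit.

```python
# Association weights that strengthen each time a thread walks them.

weights = {}  # (element, element) -> association strength in [0, 1)

def walk(a, b, rate=0.2):
    """Strengthen the association each time a thread traverses it."""
    w = weights.get((a, b), 0.0)
    weights[(a, b)] = w + rate * (1.0 - w)  # saturates rather than growing forever

for _ in range(10):                  # an association walked often...
    walk("grass", "summer")
walk("grass", "allergy")             # ...versus new, weak counter-evidence
print(weights[("grass", "summer")])  # ~0.89: hard to displace
print(weights[("grass", "allergy")]) # 0.2: needs more associations to stick
```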

A computer thus cannot replicate this behaviour easily. There are constructs like linked lists and so on, but linear memory (the way memory is built today) is not ideal as a storage medium for associative memory. It would be easier to think of associative memory as elements that somehow form tiny threads between them (which can strengthen into cables) and probably move closer together.

It is difficult for many human beings to do two things at once, to think two things at once. This suggests we have the analogue of a single CPU available to us. Other research shows that when we process information, we can only deeply focus on 3-5 pieces of information and derive results from those. That is analogous to a CPU that has about 4 registers available for processing. However, Intel processors work intensively with stacks, which means that things unwind and continue where they left off. I think of the mind as a CPU that always moves forward, does not have a stack, and just finds new goals and conclusions and stores them as associations in memory. In computer lingo: functions that return values, or allow output parameters or pointers, simply do not exist.
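As a thought experiment, such a forward-only machine might look like the loop below: no call stack, no return values, just an agenda of goals, a working set capped at a handful of items, and conclusions written straight into memory as associations. The rules table and the FOCUS_LIMIT constant are invented for the sketch.

```python
# A forward-only, stackless "mind loop": goals in, associations out.
from collections import deque

FOCUS_LIMIT = 4           # the 3-5 item working set mentioned above
memory = set()            # conclusions accumulate; nothing ever "returns"
agenda = deque(["hungry"])

rules = {  # goal -> (items needed in focus, conclusion, follow-up goals)
    "hungry": ([], "want food", ["find food"]),
    "find food": (["want food"], "going to gas station", []),
}

while agenda:
    goal = agenda.popleft()
    needed, conclusion, follow_ups = rules.get(goal, ([], None, []))
    focus = needed[:FOCUS_LIMIT]  # never attend to more than a few items
    if conclusion and all(item in memory for item in focus):
        memory.add(conclusion)    # stored as an association, never returned
    agenda.extend(follow_ups)     # new goals discovered along the way

print(memory)  # {'want food', 'going to gas station'}
```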

The point of this whole post is to reason about the architecture of mind as if it were possible to build it into a computer. I see points in the logical function of the mind that are incompatible with the current Intel architecture I am familiar with. Memory is linear, but should be associative and fluid. The computer/OS is stack-based and always "attempts" to resolve subgoals at that point in time. I think that stack-based computing is the barrier to further intelligent systems. These intelligent systems are difficult to imagine, because we are not familiar with them; their architecture still needs to be thought out. Maybe they become easier to build in the long run than deterministic systems (who knows?), or maybe they are significantly harder to program. That can be expected when you model the human mind to some degree. But full artificial intelligence (reasoning systems) requires associative memory that can forget, the ability to form (new) goals based on things perceived on the outside, and so on.

Another thing on the "mind's eye" that I find incomplete... Some psychological or "HR" knowledge states that some people are "auditive" or "emotive". These labels relate to our senses: vision, gustation (taste), olfaction (smell), audition (hearing) and somatosensation, the last being a fancy term for everything we sense in the body (allow me to include "emotional state" in that sensation as well). The mind's eye, however, has been described as a purely visual operation. I don't know whether anyone has ever done research on how blind people, for example, handle their "mind's eye".

I can also personally reflect on the "mind's eye" myself. It is basically imagination itself. But I can imagine not only visual elements (images, a word that probably invited this whole oversight); I can also imagine auditory elements, music, feeling a hot pan, feeling a cold pan, smelling grass, smelling strawberry and more... I can store those senses as elements in memory. So, rather than thinking about memory as a set of images, it is a set of experienced senses at some point in time that are inter-related and give me a more complete picture of some event or thing. When I store the element of "feeling intensely happy" with the image of freshly cut grass, and especially with the smell of it, it is not difficult to imagine that the smell of freshly cut grass can somehow evoke the same feelings in the future.
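One could model such memory elements as bundles of experienced senses rather than single images, so that a cue in any one modality brings the whole bundle forward. The experiences list and the recall helper below are, of course, made up to illustrate the idea.

```python
# Memory elements as bundles of post-processed sense impressions.

experiences = [
    {   # one stored event, as a bundle of sense elements
        "vision": "freshly cut grass",
        "smell": "cut-grass smell",
        "feeling": "intensely happy",
    },
    {
        "sound": "rain on the window",
        "feeling": "cosy",
    },
]

def recall(modality, cue):
    """Bring forward every experience matching the cue in one modality."""
    return [e for e in experiences if e.get(modality) == cue]

# The smell alone evokes the whole event, including the stored feeling.
for event in recall("smell", "cut-grass smell"):
    print(event["feeling"])  # -> intensely happy
```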

That latter part suggests that our mind works even more intricately. It is as if the "processed" elements of information that we store in memory (not the raw signals, but perceived signals, post-processed by organs like the nose, the very signals that are sent to the brain for further processing) cause our senses to relive the stored event when they are recalled in our imagination. Some research I read at some point stated that when we speak, our brain temporarily reduces our auditory senses, so that we can recognize our own voice; if not, we would be startled every time we make a noise, thinking a stranger is in the room. Maybe this mechanism is more intricate and we are actually reusing those processing parts of the brain for reasoning itself, and the brain is not just one big mush (or maybe only in the physical sense, not the logical sense). We would actually have dedicated areas that we can reuse in our imagination as well.

So I make the point that the "mind's eye" sounds incomplete, and that the only way we can make deductions and reason is through previously experienced things that are stored in memory, not only as images, but also as smells, auditory information and anything else we can perceive about the situation.

It would also make the case that a computer cannot become "aware" unless it is given multiple senses itself. A sense for emotion would be very difficult, as emotion is entirely internal and I doubt it can ever be re-engineered (it doesn't look like anything from the outside). We could perhaps simulate it by perceiving behaviour. But given audio, images and other things we can externally observe, maybe it is possible to store things together and in the future build a computer that is capable of doing similar things with that information, through associative memory and the development of a context in which observations are assessed and reasoned with.
