Friday, December 28, 2007


Previous articles discussed issues of pattern recognition. Patterns are all around us: you could say that the whole visual world is composed of patterns, audio has its own patterns, and there are presumably patterns in smell. You could even speak of touch patterns, which are essentially the structures of surfaces.

Patterns are mostly used for recognition within margins of error. But besides material patterns for material objects, we can also talk about motivational patterns and contextual patterns. A contextual pattern, for example, would determine within a scene where you are and at what time you are there, and it might then form an expectation, or induce a goal to gather the information needed to develop such expectations. For example, at a train station you would find certain events quite surprising, whilst in a different context the same events make absolute sense. The emotion of surprise is a measure of how your expectations are formed and of how these expectations are not always borne out, due to random events.

Is consciousness analogous to pattern recognition? Consciousness can probably be re-defined in this context as: "knowingly(?) manipulating the symbols of input into the biological neural networks in your brain until you measure the correct (expected?) output". Well, the first question then is whether consciousness IS an output of a neural network, or whether consciousness is an encompassing ability to monitor the network from afar. The latter possibility is uncomfortably close to the idea of a homunculus.

Therefore, I take it that the output of the neural network is consciousness, but that we have the ability to feed the output back onto the input and thereby steer the 'thought' process in our brains. Certainly, there are other forces at work that have an effect on the output. Consciousness is thus the output itself, while the production of that output in the neural network might be described as the subconscious.

In order to steer output, one must have an idea of the expected output. Therefore, another (separate?) network should exist that recognizes the output and compares it to the expected result. The concept of learning teaches us that such a neural network might modify its weights and values to comply with the expectation. Therefore, given a certain situation, and understanding that certain outputs occur, we have the ability to train our network to form a certain expectation (generate an output), given a set of input factors. The generalization of these "ideas" (inputs) is very important. For example, you'd expect a phone in a train station. But one should therefore also expect a phone in a metro station, since both can be categorized as transportation. Once you think about how you derive the reasoning that a phone should exist in both stations, you will notice that it relies heavily on persisted knowledge and (unconscious?) categorization of what you experience in your environment.
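The idea that a comparison between actual and expected output drives the adjustment of weights can be sketched in a few lines. This is a toy single-neuron example, not a model of the brain; the learning rate and the OR-style "expectation" it is taught are arbitrary illustrative choices:

```python
def train_to_expectation(samples, lr=0.5, epochs=50):
    """Adjust weights until the network's output matches the expected
    output for each input -- the comparison itself drives the learning."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), expected in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = expected - out          # compare output with expectation
            w[0] += lr * error * x1         # recalibrate the weights
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Teach the net a simple expectation: "signal present" when either input fires.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_to_expectation(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, the network reproduces the expectation it was compared against, which is all that "learning" means here.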

In this model of consciousness, you might say that there is a certain truth in saying that the network IS your consciousness. But it is a dangerous argument in that it is likely regressive (one network monitoring another, but who monitors the final network?).

First off, any system of reasoning will have to start with a final goal. It may be broken down into subgoals. Subgoals are necessary when there is a lack of information to complete the goal directly. Our mind may have a limit on how many subgoals it can keep in memory (after which a problem is so difficult that it is impossible to solve). But a computer need not have this limit.
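A computer version of such a bounded subgoal memory is trivial to sketch. The limit of four is an arbitrary illustration, not a claimed human capacity:

```python
class GoalStack:
    """A bounded stack of subgoals; the mind seems to have a small limit,
    while a computer's limit is just a parameter."""
    def __init__(self, limit=4):
        self.limit = limit
        self.goals = []

    def push(self, goal):
        if len(self.goals) >= self.limit:
            raise OverflowError("problem too deep to keep track of")
        self.goals.append(goal)

    def achieve(self):
        # Completing a subgoal returns us to the enclosing goal.
        return self.goals.pop()

    def current(self):
        return self.goals[-1] if self.goals else None

plan = GoalStack(limit=4)
plan.push("catch the train")
plan.push("notify the people expecting me")
plan.push("find a working phone")
```

Pushing past the limit raises an error, which is roughly the "What was I doing again?" moment.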

Let's say that you are at a train station and a train towards your destination is delayed. Previous experience has shown us that the people who expect you at a certain time may get concerned if you do not notify them of the change. Our brain now has a goal to achieve and we start to assess our options to complete it. First, do we have a mobile phone? Is it active and functioning? Is there coverage in the area? We can find out by trial and error. How much time is left to make the phone call? If we make a phone call from a fixed phone elsewhere, will we return in time to catch the train? All of these questions are assessed and estimated. Let's say there are 20 minutes. We would expect a phone to be somewhere in the vicinity of, or on, the station (that is a reasonable expectation). So we scan for a phone (colour? shape? brand?) within the station. Notice here that as soon as you have set your mind to recognize these shapes, it is easy to overlook phones that look entirely different than those you'd expect (one may also argue whether young children, who do not yet have such goals, have a different 'state' of mind in which the objective is to consume as much information as possible about the environment, sometimes to the irritation of the parents).

Having found the phone, we need to find out how it works. We expect to have to pay. We need to see whether it accepts credit cards, coins or special phone cards. Once we have found that out, we'll need to ascertain whether it actually works (that is, whether the phone behaves as we expect it to). Notice that in this case we mostly use auditory information to discriminate between busy signals and other kinds of signals that are not known in advance (a defect?).

Then there's a regular conversation on the phone; the initial goal is met by communicating the delay to the recipient (the goal retrieved from memory) and we return to the next goal, catching the delayed train.

Whereas pattern recognition surely abounds in the above story, one cannot overlook certain capabilities that seem to surpass pattern recognition. Knowledge, definitions and so on are continuously retrieved from memory and used in the network to make correct assumptions and expectations. Not finding the general "UK" model of a phone (red booth) around, one might decide to scan for a more global shape of a phone, a scan which generally takes longer to complete as more potential matches have to be made.

Eventually, memory also needs to keep up with the final goal, notifying the other person. There is thus some kind of stack that keeps track of the subgoals. People sometimes ask: "What was I doing again?". This shows me that losing track of goals is easier than we think, especially when other goals interfere with our current processing.

But what/which/who detects when a certain goal has been achieved? Is that what consciousness really is? How does it know for sure that it has been done?

The objective of this article is to reason about the scope of consciousness. How much of consciousness is not pattern recognition processing and how much is?

Perhaps we should consider the possibility that our brain can switch between modes immediately. A learning mode, which uses exploratory senses and allows certain networks and memories to absorb information. Another mode that is used to form expectations based on certain inputs. It is probably not useful to mix these modes of operation at the same time.

This way, if we find that there are no "sensible" expectations that we can form, we know that we need to gather information and learn about the environment. We might learn it incorrectly, but eventually we'll find out about this incongruity and correct it (you'll find that your friends have incomplete ideas/beliefs about things, just as you do).

Within this model, then, pattern recognition can be very efficient, provided there are methods to limit the analysis of context. This means that the analysis of your environment, and what draws your attention, should be limited to the completion of your current goal; otherwise your brain gets overloaded with processing or learning, or it makes incorrect assumptions. Perhaps categorizing items helps enormously here to eliminate those issues that are not important at the time. But this requires careful analysis of the goal and the expected path of resolution.

Is it possible, therefore, that we create these paths (with further analogies to other similar paths) in our minds, which can thereafter be "reused" as paths of resolution? Basically... learning? If we analyze the environment in which things happen and absorb everything that we think is important, then, when we are inserted into an environment with similar attributes, can we reuse the remembered events in a simulation to form expectations of what is about to happen? Can we "simulate" (think through, develop future expectations of) what is going to happen if we apply a certain action?

The above suggests that real intelligence is not the ability to manipulate symbols, but the ability to simulate hypothetical scenarios and expectations and adjust your goals accordingly to the expected outcome of each. The dexterity and precision of these scenarios is then leading.

Dogs can learn things as well, but is it the same kind of learning? Why are we able to use language while dogs can't? Our awareness is better, and so are our methods for making assumptions and expectations about the environment. Is this only because we have larger brains? How can we explain the behaviour of a guide dog for the blind? It has learned to behave a certain way based on input signals, and surely therefore it must interpret the environment somehow. Does it act instinctively, or does it possess a kind of mind that is much simpler than, but similar to, ours? Is it fully reactive to impulses or does it have expectations as well? Surely it must be able to deal with goals, which in the case of the dog are received from its master.

Monday, December 24, 2007

Modularity of Mind

Merry Christmas everybody. I'm just writing up some recent thoughts.

Some books I was looking at on Amazon consider the mind as a thing with a modular composition. A module for language, another for reasoning, and so on. Logically it may be possible to dissect it this way, but I don't think it should be confused with physical modularity so quickly.

The previous post considered pattern recognition as the main topic of reasoning. I thought about this more and more, and I just felt as if something else was missing. Pattern recognition is all around us and necessary, but it just doesn't feel like AI and neural networks are the meat of what our minds are about. I miss something that constitutes logic. Because even if we have the ability to recognize words from a stream of noise, visual patterns in what we see, or even objects and so on, it does not yet allow us to manipulate those things and combine them with other items.

Or, in other words... in my meanderings I missed the element of "consciousness", what it is about and how it is intertwined with our abilities to recognize patterns. I also think of consciousness as the ability to learn, identify and establish new patterns. For, in order for an artificial network to learn things, something must exist that compares the output with the expected result and recalibrates the network. What is that thing inside our mind?

An easier way to think about this is skill acquisition. When we learn to drive a car or ride a bike, we combine certain inputs together (balance, sight, motor control, accuracy, action/consequence patterns, danger recognition) and eventually patterns are created which allow us to perform the task 'automatically'. Before it gets there, however, we consciously accompany each action and consciously make adjustments until we finally get it. So it feels as if, besides pattern recognition that acts as some kind of auto-pilot, we consciously need to evaluate the world around us to learn from it. And even then, we consistently apply consciousness throughout a journey, for example when encountering new territory, when certain elements have changed, or when traffic is significantly dense. (It is next to impossible to execute other tasks in those events.)

So I am basically concluding that pattern recognition by itself is not sufficient for the human mind. But I would not go as far as saying that the mind can be thought of as a physically modular kind of thing. I'd rather think of it as a richer neural network than AI constitutes, probably something that still contains other elements for reasoning, logic and learning that we are as yet unable to perceive. Memory (and retrieval) is another thing that I had started to neglect.

The symbols that flow through the network may not be numbers. But if they are not numbers, what are they? If I consider an AI network in a computer that does not use numbers, but keys or some gibberish that I make equivalent to some kind of symbol, will the output be a sensible product after the network has manipulated and processed it? It sounds too random for that to be true, unless the output is somehow matched to something else. Maybe these outputs are basically non-sensical symbols that are keyed to some kind of knowledge. Whereas knowledge in AI networks is embedded in the weights, I think of knowledge slightly differently when applied to neuroscience.

Friday, December 21, 2007

The Irrational Android

I've written the last few posts from a philosophical perspective on how the mind works, with input from some different sources and books and some of my own reasonings and reflections.

It is a very difficult thing to introspect the mind. Reverse-engineering yourself is an absurd idea. A computer might be able to register and find out how other computers work, but it is very unlikely to gain some kind of consciousness or knowledge about the state of a single transistor in itself, unless it was specifically designed to do so (and such a design generally has a reason).

Artificial intelligence uses "neural networks" to solve some specific types of problems, which in general is a fancy way to express "pattern-seeking". Some pattern exists in some glob of data, and the network is trained to process the information and produce, as its output, some new parameters or data. Sounds pretty simple. Many networks like this have only 16 neurons, yet are capable of inferring quite some knowledge. For a good example, read the following example on neural networks and download the example executable:


So, in the example you could see that the minesweeper only has as its input the direction to the closest mine. There is no algorithm or other function that changes the direction of the minesweeper directly. Each minesweeper has its own neural network (its brain). The output of the brain is directly used to control the side-tracks (left and right). A very simple physics engine then calculates the rotation of the minesweeper and where it goes next.

The first generation is quite useless, but then the winners that picked up at least one mine are promoted to the next generation, where they may produce children. Eventually, after 50 generations of "natural" selection (of the fittest), the minesweepers exhibit rather intelligent behaviour with regard to the mines: many more minesweepers become very efficient at picking them up, even changing direction immediately to the next available mine as soon as the one in front is picked up by another.
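The promote-the-winners-and-mutate loop described above can be sketched independently of the minesweeper physics. The fitness function here is a stand-in I invented for illustration: instead of counting mines picked up, it rewards genomes close to an arbitrary target, but the evolutionary loop is the same shape:

```python
import random

random.seed(42)

def fitness(genome):
    # Stand-in for "mines picked up": closer to an arbitrary target scores higher.
    target = [0.5, -0.3, 0.8]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=20, genes=3, generations=50, mutation=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        winners = pop[: pop_size // 2]      # promote the fittest half
        children = [[g + random.gauss(0, mutation) for g in parent]
                    for parent in winners]  # each winner produces a mutated child
        pop = winners + children
    return max(pop, key=fitness)

best = evolve()
```

Because the winners are carried over unchanged, the best genome never gets worse from one generation to the next, which is why 50 generations are enough to look "intelligent".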

A neural network is modeled after the biological brain, which has roughly 100 billion neurons, with dendrites (the inputs to the neurons) and axons (their outputs). Something happens inside the neuron that results in the neuron triggering an output charge or not. This only happens after a neuron receives an input charge (or several).
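The charge-in, maybe-charge-out behaviour of a single neuron can be caricatured as a leaky integrate-and-fire unit. The threshold and leak values are arbitrary illustrative choices, not biology:

```python
class Neuron:
    """Toy integrate-and-fire unit: incoming charges accumulate, and the
    neuron emits a charge of its own only once a threshold is crossed."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak        # stored charge decays between inputs
        self.charge = 0.0

    def receive(self, charge):
        self.charge = self.charge * self.leak + charge
        if self.charge >= self.threshold:
            self.charge = 0.0   # reset after firing
            return 1.0          # output charge sent down the axon
        return 0.0

# Five identical sub-threshold inputs: the neuron fires only once enough
# charge has accumulated, then starts over.
n = Neuron()
outputs = [n.receive(0.4) for _ in range(5)]
```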

The most interesting question here is whether these billions of neurons are the human brain, are human thought and allow for reasoning, or are only part of this. Emotions, as we call them, and the way we sometimes uncontrollably display our emotional state, suggest that it is not just a computational function. In other blog posts, I reasoned that emotions are the driving forces behind humanity. I also challenge the line of thought that rational thought really is rational in nature (void of emotion), because I recognize that most of our human actions and interactions are somehow based on emotional action or response, to which some reasoning adds or subtracts (withheld or exaggerated emotions). I think every one of our decisions, which people sometimes label as rational, is actually an emotional decision with a cover of argument. Only in the case where we reason within a factual model (science, maths, etc., which are generally highly deterministic and consistent) can we state that our reasoning is void of emotion (2+2=4 and will never be 5, whereas our human decisions in similar situations can differ greatly through the prioritization of emotional importances, which are then argued further as if they were intellectually considered points).

Should an android like Data from Star Trek exist, then I would not expect him to display emotional state, nor to advise anyone on the best course of action through a lot of reasoning. One could not ask Data if he wanted to go out for dinner, because he wouldn't be able to resolve the question: he doesn't feel anything and cannot reason within the emotional domain. One could not ask him if he liked the colour red, or if he wanted to take care of little androids in his life. To want and to like are unimaginable and unresolvable concepts for the android.

So... what does this mean for us human beings? The brain is a bunch of neurons, and outside the microscope we see a hump of meat (humor: sentient meat).

Is it really possible that billions of neurons connected together can fool us into thinking that we are conscious beings? Is consciousness embedded within these neurons, or is consciousness yet another force in our brains that uses the neurons for reasoning and recognizing patterns?

If it is true that the neurons drive us, then we are basically nothing but pattern-recognizing machines. Everywhere we go, we quickly recognize the world around us and continuously reinforce new patterns when experiencing things never experienced before. It is basically input -> processing -> output for everything we do, look at, hear and so on. To ourselves, our thoughts claim that we look at absolute objects, that we have absolute knowledge of the world around us. That is, we recognize that a brick is a brick and cannot be something else. But maybe, *inside* our neurons, the real way we look at the world is through pattern recognition. If we then consider "thought" as the output of the network, it is clear that the inner working lies hidden from us. We don't have specific knowledge or awareness of the process within the network, since the output is what we measure.

This might mean that the experience of this world is highly driven through (partial?) neuron activity. You see something, it gets processed, it activates other neurons, and so on, until at the output we recognize it as something.

A mystifying factor here, of course, is the ability to learn. Not "learning" in the sense that artificial networks do, but learning in the sense of having an interest in understanding something. An ANN is a network that has a very specific purpose and is conditioned to execute its purpose with a training set. It recognizes only that pattern; don't ask it to reason about anything else.

Reasoning can probably also be expressed as the activity of comparing pattern recognition programs with one another, in the hope of reusing them or bootstrapping a new capability in a new network.

Another limiting thing for androids might be that I do not expect them to start asking questions. Would Data ask anybody else about a particular event? He can be given a goal to execute, the result can be right/wrong or informative, but will Data ask questions from the environment to enrich his own knowledge? Or will he actively seek this information on the Internet?

Looking at a baby developing into a toddler, there are many interactions with the environment. Given a clean slate, the baby starts to develop an interest in the environment. When it is born, it cannot focus its eyes or move its muscles in a coordinated way. These can probably also be seen as "patterns", where sight and hearing are paired together and then compiled in a network whose output is directly used to control the muscles. Research here into the area of 'clock frequency', or how fast our movements are adjusted based on changes, would be interesting.

In further development, children are shown children's books and we point at pictures and say "zebra" or "tiger". At some point children get it. Then ask a child... "Can lions walk?" and the child uses reasoning to find this out. Uncertainty is a feeling one can have about the validity of a certain answer. This has a direct relationship with the strength of one's belief. If a belief is formed by the strength of the trigger of a set of neurons, then you could say that it is also how strongly a pattern has been recognized. If a lion has legs that bend backwards and looks somewhat like a cat or dog, and things that have legs can walk, and cats and dogs do walk, then with a good amount of certainty it can be said that lions walk too. Now consider the parent: there are many different little bits of knowledge, paired together in reasoning, that the parent uses to teach the child (with that usual mother-kind-of-voice): "The lion has legs and manes and big teeth. It can bite and is dangerous".

If this pattern recognition is at the source of intelligence, it also explains why we are so given to categorization. By forming larger categories, we collect truths in the same basket and then test other things against that rule. That greatly reduces the need for storage space in the brain and makes it possible for us to make assumptions about things never encountered in this world.
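Storing truths on the category rather than on each individual thing can be sketched directly. The categories and their properties here are invented for illustration:

```python
# Categories hold shared truths; individual things only record their category.
# This saves storage and lets us make assumptions about things never seen.
categories = {
    "mammal": {"has_legs": True, "can_walk": True},
    "station": {"has_phones": True, "has_clocks": True},
}
members = {
    "lion": "mammal",
    "zebra": "mammal",
    "train station": "station",
    "metro station": "station",
}

def assume(thing, prop):
    """Answer a question about a thing by consulting its category."""
    return categories[members[thing]].get(prop, False)
```

Even if a child has never seen a metro station, filing it under "station" immediately yields the expectation that a phone is there.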

Problem-solving skills are the next step, then. Given a particular goal, we can think and think and come up with a solution to meet that goal. We could say that we have the ability to 'imagine' things we have seen happen in real life and then try to replicate that same event, behaviour or property with other means. Problem-solving depends highly on the ability to create different hypotheses and to recognize (cross-pollinate) ideas from different analogous areas.

It sounds, thus, as if the brain has very powerful pattern recognition abilities, but is helped and supplemented by additional capabilities that use this network as a tool. Imagery patterns, auditory patterns, behaviour patterns and so forth. When studying a child, one can clearly see that it is a lengthy process to get our neurons in order. Learning takes a long time to finish, and even then, we humans do not all develop in the same way or exhibit the same behaviour. Each one of us is unique. Our ability to deal with noise that clouds the pattern is amazing.

An android might actually have some advantages in learning. It should be possible for a computer to expose the internal states of neurons and visualize them, or to execute "what-if" scenarios (since these are merely computed, they need not have an effect on the state or quality of the network), so that we can steer the development of the network or understand it better. One can also imagine a desktop that displays the processed elements with interactive handles to improve the network, the same way a parent would point at things in pictures to explain why certain associations are true, and how the same properties in other pictures, which may look somewhat different, also demonstrate the same behaviour or constitute the same thing.

Thursday, December 20, 2007

The Mind Computer

The last few posts were about the human mind and contained a number of reflections on the issue from various other writers and my own hypothesis.

As we go from place to place, we take in a very large amount of sensory information from the environment. Not *all* information is indefinitely stored in our memory, but a very large number of specific landmarks are, or we recognize things around the house, and so on.

If you were to take a picture of all the things we see and store it as a binary blob of information, it is absolutely incredible how much information the brain can contain. And those are only images, not smells or auditory information. If the brain were to store it as such information, is there any limit to how much we can store? Can we express the storage capacity of the human brain in megabytes, gigabytes, terabytes or petabytes? And what happens when we reach the limit? Will we ever reach the limit or will we push out other information that we don't need?
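A back-of-envelope calculation shows why storing raw images would be absurd; every figure here is an assumption I picked only to get an order of magnitude, not a measurement:

```python
# Back-of-envelope only; all figures are assumptions, not measurements.
width, height = 1000, 1000        # say each remembered "frame" is 1 megapixel
bytes_per_pixel = 3               # uncompressed RGB
frames_per_day = 10_000           # rough guess at distinct glances per day

bytes_per_frame = width * height * bytes_per_pixel          # 3 MB per frame
bytes_per_year = bytes_per_frame * frames_per_day * 365

terabytes_per_year = bytes_per_year / 10**12                # ~11 TB per year
```

Around eleven terabytes a year for vision alone, under these crude assumptions, which suggests the brain cannot be storing anything like raw pixels.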

I can more or less vividly remember (and imagine, in fact, see it as if I were to look at it right now), certain scenes of the ships I sailed on, the engine room, the cabin, the bridge, the horizon at night, my car in the UK, even the house as I knew it.

The impressive thing is that the picture need not be entirely identical for us to make a match with history. It may actually have quite some differences. A computer, by contrast, relies on pixel-by-pixel verification and cannot recognize objects or entities from photos automatically, unless it runs a highly specialized algorithm. Some really interesting work was recently presented on content-aware image resizing that provides some clues about the "real" information contained within an image.

Perhaps the technology may prove useful for image analysis in the future, where it could reduce the unnecessary parts of the image and keep the landmarks. When you describe to another person how to drive somewhere, you make use of landmarks. You don't generally rely on distances or boring stretches. Finding your way in a city is often more difficult, unless you rely on specific monumental references.

What if the mind does not store the image itself, but a processed image? An image decomposed into ... numbers even, where the numbers can then be compared against other numbers that are derived from another processed image.

A big question is whether the image processors / sound processors that are in our heads are re-used at the time we look up our stored media. Are we reconstructing the images from (poorer) stored material? The more (useless) information you let go, the easier it is to find a match. The interesting thing here is also that we can rotate items in our mind's eye and thereby form expectations of what the back of something looks like.
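One way to picture matching on processed, reduced material rather than raw pixels: boil each "image" down to a tiny histogram signature and compare signatures. This is a deliberately crude sketch (real perception is obviously far richer), but it shows how discarding detail makes near-matches easy:

```python
def signature(pixels, buckets=4):
    """Reduce an 'image' (a list of brightness values in [0, 1)) to a tiny
    normalized histogram. Matching signatures instead of pixels tolerates
    small differences between two views of the same scene."""
    hist = [0] * buckets
    for p in pixels:
        hist[min(int(p * buckets), buckets - 1)] += 1
    total = len(pixels)
    return [round(h / total, 2) for h in hist]

def similar(a, b, tolerance=0.2):
    return sum(abs(x - y) for x, y in zip(a, b)) <= tolerance

scene = [0.1, 0.2, 0.8, 0.9, 0.85, 0.15]
same_scene_later = [0.12, 0.18, 0.82, 0.88, 0.9, 0.2]   # slightly different pixels
```

A pixel-by-pixel comparison would call these two scenes different; their signatures are the same.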

Does this mean that the brain itself is a biological digital computer of numbers as well? Or is it an analog machine like a valve amplifier or a CRT? How is it possible to remember things and create imaginations based on descriptions?

Cognitive Science has theories on a language of mind called "mentalese". This is a symbol language, but still "imagined". We cannot look into the mind to find out what it looks like; I don't even know if it looks like anything. It might be just mush and numbers, or perhaps be equated with numbers... But then, how would one convert these 'symbols' or 'numbers' or whatever they are into symbols that a computer can work with? The computer is a number machine, so it would require numbers in whatever representation to be able to do something with them. The function of the mind is then an algorithm, neural network, matrix or whatever construction that turns numbers into something else, manipulates them, stores them and generates meaningful output.

Tuesday, December 18, 2007

Architecture Of Mind

Previous posts discussed many individual aspects of the inner workings of the mind that I have read about so far. As a SW architect, I prefer pictures over words to convey meaning. I've been working on a picture that is reminiscent of the OSI layer model in Computer Science, which describes how communication takes place over the Internet. The picture is here:

Based on the books, it shows that at the physical layer we have neurons and chemicals that somehow interact together. Much like in a computer, this layer only becomes interesting to absolute experts. It is at such a low level that understanding how it works might make clear how things operate; but since it is so complex, executes at a very high frequency and is most probably parallel, it is difficult to develop expectations at that level of operation, unless you subdivide the most basic functions (CPU instructions) into grouped functions with a particular purpose, and generalize further from there.

In this picture, I see neurons as transistors or silicon. So they are basically thousands and thousands of black boxes that interoperate, and the sum of all their minuscule operations has a certain effect. The silicon accommodates the flow of currents for communication between transistors, where transistors modify that flow. The same is probably true for neurons and synapses. This is about where the direct analogy (should) stop(s), as I believe that the current household computer as we know it has a totally different architecture than the human mind, due to its requirements of determinism and finiteness. The human mind could be infinite (?) in its capability to produce new thoughts and new goals based on previous contexts. Computer programs are mostly single-goal oriented and are generally not engineered to produce new goals along the way.

The physical layer thus contains neurons (or "mush") that accommodate emotions, feelings, the mind's eye, pattern recognition, rotation, language(?), reasoning and thus intelligence. Intelligence can probably also be described as the interaction between pattern recognition and reasoning (which is setting new goals or imagining consequences (developing expectations) based on previous experiences). Inventions and innovations are the acts of making new associations where previously there were none. With each newly created association, we are likely to become more intelligent. One's aptness to develop associations is probably one's IQ. Emotional intelligence is probably the sympathy in observing another's behaviour and developing expectations based on those observations through other associations, plus one's aptness to read behaviour (being attentive to signals) and so on.

In the previous post it was said that the main goals of our being are determined by our emotions and feelings. Feeling hungry means instinctively searching for food that is edible. If food is not around, we invoke our intelligent system to determine the best course of action to get it (or zero out the emotion if the effort is larger than the desire). If you "feel" lazy, you might want to buy something at the local gas station rather than go to the supermarket. So the emotional system works very closely together with the intelligent system to resolve problems. If you have access to a car, you might want to go into town, park the car and do some shopping. Unless you forgot where you put your car keys, in which case frustration may arise and a new goal is set: taking the bike. Unless it seems to be raining outside and the feeling of wetness is not a very pleasant foresight, which might cause you to look into the freezer and defrost some ready-made meal instead after all, even though initially the goal was looking forward to something fresher. The interaction of emotions, feelings and intelligence is clear. The discovery, through our intelligence and memory, that something is impossible might cause feelings of frustration, which might release chemicals that put our brain into a higher state of awareness, much like adrenaline prepares the body for a fight.

The main goal in this case is always set by emotions or feelings. I haven't yet thought deeply enough to find cases where a main goal is 100% determined by intelligence. Intelligence can set subgoals to achieve the main goal. The subgoals are determined by imagining how the main goal can most efficiently be achieved, which is done by looking for a path to that goal. This is mostly done by looking at historic events and how well things worked out in the past. If we encounter a barrier that blocks access to the main goal, we invoke our associative mind, reasoning and expectations of outcomes to try to remove that barrier. A sense of urgency might change how we reach those goals.
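Looking for a path to a goal through remembered transitions is, in computing terms, a graph search. A minimal sketch, with invented situations and actions standing in for remembered experience (echoing the freezer-versus-supermarket example above):

```python
from collections import deque

# Remembered "paths": each action maps a situation to a new situation.
actions = {
    "hungry": [("drive to supermarket", "at shop"), ("open freezer", "has meal")],
    "at shop": [("buy food", "has meal")],
    "has meal": [("cook and eat", "fed")],
}

def plan(start, goal):
    """Breadth-first search over remembered transitions: the shortest
    imagined sequence of actions that reaches the goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for action, result in actions.get(state, []):
            if result not in seen:
                seen.add(result)
                queue.append((result, path + [action]))
    return None     # no remembered path: a barrier, and perhaps frustration
```

Breadth-first search naturally finds the least-effort path, which here means raiding the freezer beats driving to the supermarket.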

So reasoning and assessment rely heavily on the services of the mind's eye, imagination and memory. The logic revolves around imagination and analogy (pattern similarity) with other outcomes. In the long run, the main goal always determines our eventual behaviour for the event. Subgoals might change our behaviour slightly, or modify it entirely for a while, but should always serve the main goal in the long run.

The picture is not quite complete, as there is closer interaction between the emotional system and behaviour. One can imagine that our body has been programmed to react instinctively to another person's behaviour: a direct response instead of a response evaluated by the mind (if not, we would probably look and act like robots).

There is a continuous interchange between the emotional system and the "intelligence" system. The "depth" of recursion in the intelligence system (the ability to resolve complex cases like "if this, then that, and then that, however when, if not, etc."), the congruity a particular mind needs (or the incongruity it can tolerate) when finding associated material in memory, and the amount of experience are probably the best factors that determine intelligence.

As I have said in previous posts, I imagine that the brain is thus not a 'stack-based' computer, but a machine that always moves forward within its own context. I also imagine the context as something fluid rather than set in stone (as is the case with computers). The following picture shows how I regard memory, which I call "associative memory", since it recalls symbols as we listen to a piece of music, hear speech, see a scene or "think about" / "imagine" things.

The picture shows a line, the "thread" of a conversation or the "thread" of an observation/thought. We have the interesting capability to "steer" our thoughts into new directions. Are we modifying or creating goals at the same time?

The items in associative memory are not just in "drawers" or memory locations like in a computer. A computer may use lists, linear memory or hash keys to organize information. But no matter which method is chosen, there is always an incremental cost to look things up. This cost is expressed in O-notation, and in general, the more information you store, the higher the cost. This has the nasty side-effect that becoming smarter means becoming slower, and in certain cases some problems become unresolvable unless many machines work together. Google is a perfect example: the system basically stores the information on the Internet and uses many, many computers to open up that information to others. However, Google cannot "reason" with that information; it can only process it and modify its relationships for a particular purpose.
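The lookup-cost point above can be made concrete with a tiny sketch. This is a generic illustration of my own, not anything from the post's sources: a plain list must be scanned element by element (O(n) in the worst case), while a hash-based set answers in roughly constant time no matter how much is stored.

```python
# Toy illustration of lookup cost: scanning a list is O(n),
# while a hash-based set lookup is amortized O(1).
def linear_lookup(items, target):
    # Cost grows with the number of stored items: in the worst
    # case we inspect every element before answering.
    for item in items:
        if item == target:
            return True
    return False

n = 100_000
items_list = list(range(n))
items_set = set(items_list)

# The worst case for the list is the last element; the set finds
# it immediately regardless of how large n grows.
assert linear_lookup(items_list, n - 1)
assert (n - 1) in items_set
```

Even the constant-time hash lookup only retrieves by exact key, which is precisely the limitation the paragraph points at: neither structure surfaces *related* items on its own.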

So the model above shows a kind of memory that does not store information and make it accessible through keys. It shows memory where elements are naturally associated with one another and where these associated elements are automatically brought forward. The "thread" determines the direction of the context and how further associations are made. The further away from the thread, the lower the activation of that particular synapse or memory element. The context is thus basically the set of elements that were invoked and what we know about them. The context (associations) also gives us rules. If a certain (new) association cannot be made, the thread must be redirected or halted and a solution suggested by the reasoning part of the brain. When reading nonsense, we cannot allow the nonsense to be stored as reality in our brain, since that would taint our model of the real world. How do we prevent this from happening? By testing the thread against our current model and seeing how it complies. Changing a belief then means changing certain associations that we took for granted.
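The "activation falls off with distance from the thread" idea resembles what is usually called spreading activation. Here is a minimal toy model of it; the association graph and decay factor are invented for illustration, not taken from any real system:

```python
# A toy spreading-activation model: elements directly associated
# with the current thread get high activation; each further hop
# decays the activation by a fixed factor.
associations = {
    "train station": ["platform", "ticket", "phone"],
    "ticket":        ["price", "conductor"],
    "phone":         ["call", "booth"],
}

def spread_activation(start, decay=0.5, depth=2):
    """Breadth-first spread from the thread's current element."""
    activation = {start: 1.0}
    frontier = [start]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbour in associations.get(node, []):
                strength = activation[node] * decay
                # Keep the strongest activation reaching each element.
                if strength > activation.get(neighbour, 0.0):
                    activation[neighbour] = strength
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return activation

act = spread_activation("train station")
# Direct associations are more active than two-hop ones.
assert act["phone"] > act["call"]
```

The elements never need a "key" to be found: activating "train station" automatically brings "phone" partway forward, which matches the expectation example from the introduction.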

It is also possible that strong beliefs are formed when certain associations are often walked by threads. So associations probably have weights. It is not uncommon for us to believe very strongly that two things are associated, but then discover that the association is invalid. We resist breaking the association, because the new evidence is itself still a very weak association that does not connect to many other elements. Only when we forge other associations with the new evidence do we accept it taking the place of the old element. Yet we don't simply "forget" the old element either. It becomes like a ghost image superimposed on the initial association, tagged as a false belief.
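The weight-strengthening and "ghost image" ideas can be sketched together. This is my own toy encoding of the paragraph, with made-up numbers: each traversal strengthens a link, and contradicting evidence does not delete the link but tags it.

```python
# Toy model: associations carry weights that grow with use;
# a contradicted association is kept but tagged as a false belief.
class Association:
    def __init__(self, a, b):
        self.pair = (a, b)
        self.weight = 0.1        # new associations start weak
        self.false_belief = False

    def traverse(self):
        # Frequently-walked associations strengthen into beliefs.
        self.weight = min(1.0, self.weight + 0.1)

    def contradict(self):
        # We don't forget; we superimpose a "false belief" tag,
        # the ghost image over the original association.
        self.false_belief = True

link = Association("swan", "white")
for _ in range(5):
    link.traverse()
link.contradict()           # a black swan appears
assert link.weight > 0.5    # the old association is still strong
assert link.false_belief    # but now tagged as false
```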

A computer thus cannot replicate this behaviour easily. There are constructs like linked lists and so on, but linear memory (the way memory is built today) is not ideal as a storage element for associative memory. It would be easier to think of associative memory as elements that somehow form tiny threads between them (which can strengthen into cables) and probably move closer together.

It's difficult for many human beings to do two things at once, to think two things at once. This suggests we have the analogue of a single CPU available to us. Other research shows that when we process information, we can only deeply focus on 3-5 pieces of information and derive results from those. That is analogous to a CPU that has about 4 registers available for processing. However, Intel processors work intensively with stacks, which means computations unwind and continue where they left off. I think of the mind as a CPU that always moves forward, does not have a stack, and just finds new goals and conclusions and stores them as associations in memory. In computer lingo: functions that return values or allow output parameters or pointers simply do not exist.
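The contrast can be made explicit with a sketch. This is my own illustration of the forward-only idea, not a real architecture: instead of nested calls that return to a caller, every step leaves a trace in memory and subgoals become new forward goals.

```python
# A sketch of a "forward-only" machine: no call stack, nothing
# ever returns to a caller. Each conclusion is appended to memory
# as a new association, and subgoals are just new forward goals.
def forward_only_machine(goal, memory):
    agenda = [goal]                # pending goals, handled strictly forward
    while agenda:
        current = agenda.pop(0)
        conclusion = f"resolved:{current}"
        memory.append(conclusion)  # every step leaves a trace
        # A subgoal does not suspend the current goal; it is simply
        # queued as another forward step (here: a hypothetical example).
        if current == "get food":
            agenda.append("find keys")
    return memory

memory = forward_only_machine("get food", [])
assert memory == ["resolved:get food", "resolved:find keys"]
```

Note the design choice: a conventional recursive solver would push "find keys" onto a stack, resolve it, and *pop back* to "get food"; here nothing unwinds, the machine only accumulates conclusions.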

The point of this whole post is to reason about the architecture of the mind as if it were possible to build it into a computer. I see points in the logical function of the mind that are incompatible with the current Intel architecture I am familiar with. Memory is linear, but should be associative and fluid. The computer/OS is stack-based and always "attempts" to resolve subgoals at that point in time. I think that stack-based computing is the barrier to further intelligent systems. These intelligent systems are difficult to imagine, because we are not familiar with them. Their architecture still needs to be thought out. Maybe in the long run they become easier to build than deterministic systems (who knows?), or maybe they turn out significantly harder to program. That can be expected when you model the human mind to some degree. But full artificial intelligence (reasoning systems) requires associative memory that can forget, the ability to form (new) goals based on things perceived on the outside, and so on.

Another thing on the "mind's eye" that I find incomplete... Some psychological or "HR knowledge" states that some people are "auditive" or "emotive". These categories relate to our senses: vision, gustation (taste), olfaction (smell), audition (hearing) and somatosensation, the last being a fancy term for everything we sense in the body (allow me to include "emotional state" in that sensation as well). The mind's eye, however, has usually been described as a purely visual operation. I don't know whether anyone has ever researched how blind people, for example, handle their "mind's eye".

I can also personally reflect on the "mind's eye" myself. It is basically imagination itself. But I can not only imagine visual elements (images, a word that probably invoked this whole oversight); I can also imagine auditory elements, music, feeling a hot pan, feeling a cold pan, smelling grass, smelling strawberry and more... I can store those senses as elements in memory. So rather than thinking about memory as a set of images, it's a set of experienced senses at some point in time that are inter-related and give me a more complete picture of some event or thing. When I store the element of "feeling intensely happy" with the image of freshly cut grass, and especially with the smell of it, it is not difficult to imagine that the smell of freshly cut grass can evoke the same feelings in the future.
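The grass example suggests a memory element as a bundle of senses rather than a single image. Here is a hypothetical sketch of that idea; the structure, field names and cues are my own invention for illustration:

```python
# A memory element as a bundle of senses around one event,
# so recalling by any single cue (e.g. the smell) retrieves the
# whole bundle, including the associated feeling.
event = {
    "image":   "freshly cut grass",
    "smell":   "cut-grass scent",
    "feeling": "intensely happy",
}
memory_store = [event]

def recall_by(sense, cue):
    """Return every stored event in which the given sense matches the cue."""
    return [e for e in memory_store if e.get(sense) == cue]

# The smell alone brings back the complete multi-sensory event.
recalled = recall_by("smell", "cut-grass scent")
assert recalled[0]["feeling"] == "intensely happy"
```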

That latter part suggests that our mind works even more intricately. The elements of information we store in memory are not raw signals but perceived signals, post-processed by organs like the nose: the very signals that are sent to the brain for further processing. When these post-processed elements are recalled in our imagination, the recall causes our senses to relive the stored event. Some research I read at some point stated that when we speak, our brain temporarily dampens our auditory senses, so that we recognize our own voice. If not, we would be startled every time we make noise, thinking a stranger is in the room. Maybe this mechanism is more intricate and we actually reuse those sensory-processing parts of the brain for reasoning itself, so the brain is not just one big mush (or maybe just in the physical sense, not the logical sense). We would actually have dedicated areas that we can reuse in our imagination as well.

So I make the point that the "mind's eye" sounds incomplete and that the only way we can make deductions and reason is through previously experienced things stored in memory: not only as images, but also as smells, auditory information and anything else we can perceive about the situation.

It would also make the case that a computer cannot become "aware" unless it is given multiple senses itself. A sense for emotion would be very difficult, as emotion is entirely internal and I doubt it can ever be re-engineered (it doesn't look like anything from the outside). We could perhaps simulate it by perceiving behaviour. But given audio, images and other things we can externally observe, maybe it's possible to store these together and in the future build a computer capable of doing similar things with that information, through associative memory and the development of a context in which the observations are assessed and reasoned with.

Monday, December 17, 2007

Emotions as genetic instruments

I finished one of Steven Pinker's books yesterday, "How the Mind Works". The final chapters are a wonderful excursion into an explanation of emotions, their raison d'être, and how DNA provides the potential building blocks for having these emotions.

In this perspective, Pinker and others attribute the existence of emotions to better chances for survival and reproduction: genes are the building blocks that lead to having emotions. As usual with science, philosophy and especially cognitive science, all sorts of basic questions pop up about things you generally take for granted when you grow up.

Marriage is a very common concept all over the world, in almost any society or community. The explanation is that marriage is an "intelligent" method to reserve the attention of a spouse, or to reserve the use of a uterus for the reproduction of your own genes. Marriage is, in this context, also a contract of property on a woman. This paragraph is very unromantic; it views the concept of marriage from a purely biological perspective. Please don't consider this an attack on morality or ethics, or a claim that those things should be forgotten; they are entirely different discussions. Marriage is a public declaration by both spouses that a woman is dedicated to the reproduction and sharing of the genes. The man dedicates his attention and protection to the reproduction as well. The idea of marriage is that this treaty cannot easily be broken by outsiders, further demonstrated by the wearing of a wedding ring. We construct further laws around the idea of marriage. I can imagine different social structures, for example harems where the men fight amongst themselves and the non-winners become expendable armies, driven by the leader to protect the pack of women. A full explanation of marriage and the social structure we ultimately developed is written in the book, so I urge people to read those chapters instead of this blog for further clarification.

Men don't "feel" that they should flee when war breaks out, whereas women generally try to find the first hiding place. Men are thus biologically equipped to fight threats, even though, if you think rationally, war doesn't provide good odds for survival. Men are generally more violent than women (I did not say, however, that women are always non-violent). In most societies, only men are expected to go to war; women are expected to stay at home with the children. It is morally repugnant to murder children and women, while it is only a shame, but morally acceptable and not that shocking, when men are killed in the course of violence. In the context of gene reproduction and evolution, a woman is far more attached to the consequences of the mating decision than a man: it takes nine months, and much more dedication afterwards. It is only natural for women to seek out partners willing to provide the after-care, attention and protection. Hence courtship. Courtship is the declaration by a man that he is willing to make the investment, and a better courter stands a better chance. It is thus not always the strongest man in a group who has the best chances, although other zoological families do prefer the strongest. Since humans have different problems to solve after birth, the (biological) needs are different.

Which brings us to further interesting points, pornography for example. Why is almost all pornography aimed at men? Pornography for women is much less abundant, almost non-existent; Playgirl is mostly read by gay men. There are plenty of bars with female exotic dancers. How many bars are there with dancing males? An explanation from a genetic perspective is that men can in theory mate with many women, but only so far as they can still guarantee protection for the upbringing, so this is not unlimited; the point is that the possibility is there. A woman can typically mate with one man, but is then tied to her decision for at least a number of years. Some research has been done on this topic: men perceived a naked, anonymous, unknown woman as an opportunity, and almost all of them were aroused. Women, however, perceived a naked, handsome, unknown, anonymous man not instantly as an opportunity, but firstly as a threat. This doesn't mean that women always need men around them that they know, but the first reaction to naked men isn't generally immediate and uncontrollable attraction.

Another point in the book regards adultery. A man's worst fear is the act of adultery itself by his spouse. For the woman, the worst fear is not necessarily the act itself, but the loss of commitment and attention by the husband, if he decides to redirect his attention to someone else. This is "worst fear"; I did not say that adultery committed by men is something women would typically allow.

Wealth, status and dominance are all measures of fitness; beauty is a measure of health. Bringing these factors together in our society: we still pursue wealth like crazy, men compete strongly with other men, and women dress up nicely, make themselves beautiful and thus compete with other women. If we were solely rational, thinking beings, without emotions having a strong say in the forming of our thoughts, we wouldn't need to compete and make ourselves beautiful. It would surely save a lot of time in a day.

In another blog post I mentioned that we think we are smart, but are actually still very much subject to emotions. This shows up sometimes when whole societies or nations go to war with one another. Why feel strongly about the piece of earth you grew up on? If someone attacks you, it might be much more efficient to just pack up and leave. We like to think that our actions are 100% determined by intelligence. Yet if you think rationally and consider the same situation for someone else, a radical thinker might just discover that for a host of generally accepted reactions, the best course of action is different. Some of our mundane and dark desires still bubble up all the way to the surface, and where we notice they are rather basic, we often try to cloud them with "reason". Can reason be an initiator for an action? I think of reasons as explanations for behaviour, or demonstrations that certain instinctive behaviours (emotionally driven actions) also make sense from an intelligence perspective.

It'd be rather difficult to think of a human being as 100% guided by intelligence. The reason is that intelligence doesn't really provide a goal. As soon as a goal is set, however, intelligence helps enormously in achieving it (consider how humanity evolved over the past decades and millennia). But to set a goal...? Are goals set by intelligence, or are they actually set by some emotion, some underlying biological drive? If all our goals are somehow based on emotion, then it is fairly logical to conclude that we cannot be 100% intelligence-driven. So when I say that humanity is emotionally driven and probably cannot act on intelligence alone, does this make the picture look bleak? After all, ethics and morality are intellectual constructs, not emotional or instinctive ones. Why do we want humanity to be intelligence-driven? And does justice take into account emotional actions and reactions and protect them, or does it counter direct emotional actions? Is intelligence then, in this context, partly a plot in our brain to counter and control our biological purpose?

It is far easier for humans to remember negative events. We have twice as large a vocabulary for negative emotions as for positive ones. Humanity has been subject to a large number of very negative events that in certain cases encompassed the whole world, for example World Wars I and II. The film "The Fifth Element" paints a picture of humans and humanity as wild, savage beasts that do nothing but engage in warfare. We do see frequent wars and atrocities. Books like "Humanity" by Jonathan Glover are interesting reads on the sociological causes of war and the circumstances in which war can occur. Thinking entirely and 100% rationally, war doesn't make sense: there are always ways to avert it, if both sides think rationally. We may feel the urge to submit another tribe or nation to our will, religion or way of living, but is it sensible? Intellectually, war is rather stupid. Then where does that feeling of "pleasure" or "need" in war originate?

On a positive note, however, other things can be said about wars. The impact, size and involvement of wars has risen due to our possibilities for communication, ally-seeking, treaties and agreements. Worldwide mass media and communication definitely help to involve every country. Just think about it: a global war, a world war, is a direct demonstration of reciprocal aggressive behaviour that doesn't make much sense if you think about it rationally. But how does the picture look when we remove the effects of worldwide communication and globalization from the equation? Has war really become "worse" compared to other centuries? And the frequency? And what about the reasons for going to war?

Some experiments have been conducted where a class of students was divided by some imaginary or real construct; in one case, a coin was flipped in front of the whole class. Being part of a tribe, even an imaginary one, seems to be very important for people. People will modify their behaviour accordingly and in many cases will try to subvert others to the same division, or attack them. In these experiments, fights broke out or torture took place, carried out by people who would not normally torture in other circumstances. That, to me, is the strongest evidence that we are very much guided by our emotions and that our goals are not determined by our intellect. The Rwanda atrocities happened because the people were divided partly by physical traits such as height. This division was entrenched during the Belgian colonization, where the Belgians classified one side as Hutus and the other as Tutsis; the sad part is that these were people of the same community. Years later, based on this artificial division, and on the propaganda and behaviour of those who felt part of this new "tribe", the people killed each other.

Continuing on the positive note, there are also great achievements of humanity that are not often highlighted: health care plans, women's right to vote, the abolition of torture (even though not practiced everywhere), welfare rules, government public services, courts and the justice/legal system. Free speech. Free thought. Free press. And so forth. So before anyone concludes that humanity is doomed because the wars never stop, there are other things to consider that are not as easily remembered, but are very important to insert into the equation. We like to think negatively in many cases, but we should also think positively about achievements: not take them for granted, but treat them the same as the negative events and celebrate them more.

It will be difficult to understand to what degree our thoughts, opinions and declarations are based on our emotions rather than our intelligent thought. I'm not sure a separation is even possible, since thought is given a direction by goals: a goal to convince, a goal to entertain, a goal to improve status, a goal to...? I am not sure, therefore, where the future will bring us. Should we aim to think 100% rationally, or would that just destroy us, since emotions are better guarantors of survival? Can we develop methods in the future to separate emotion from rationality?

If, at the very depth of our being, we are driven by emotions and this is what gives us goals, would a life without emotion, of pure intelligence only, become meaningless?

Wednesday, December 05, 2007

TomTom roll-out in Brazil

I'm doing some work for TomTom at the moment, and they recently announced a roll-out of their devices in Brazil. Coverage will be mostly SP and Rio, amongst other cities and the general routes in the country. So it certainly doesn't include everything, the country is too large for that at this time, but you can already benefit somewhat from the device over there. Or... knowing how people break open cars for car radios and cellphones, this may be yet another reason to have your car broken into.

End of year is coming up. I'll be visiting Brazil again for family mostly. Back before New Year's.

I ordered my Honda Hybrid yesterday and am expecting it in January at the earliest, February most probably. That'll be the car for the next 4 years.