Friday, December 28, 2007

Consciousness

Previous articles discussed pattern recognition. Patterns are all around us: you could say that the whole visual world is composed of patterns, audio has its own patterns, and there are presumably patterns in smell. You could even speak of touch patterns, which are essentially the structures of surfaces.

Patterns are mostly used for recognition within margins of error. But besides material patterns for material objects, we can also talk about motivational patterns and contextual patterns. A contextual pattern, for example, would determine within a scene where you are and at what time, and from that it might form an expectation, or induce a goal to gather the information needed to form such expectations. For example, at a train station you would find certain events quite surprising, whilst in a different context the same events make perfect sense. The emotion of surprise is a measure of how your expectations are formed and how these expectations are not always borne out, due to random events.
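One common way to formalize that idea is to treat surprise as the improbability of an observed event under the current context. A minimal sketch (all contexts, events and probabilities are invented for illustration):

```python
import math

# A toy table of contextual expectations: each context assigns
# probabilities to events. Everything here is made up.
EXPECTATIONS = {
    "train_station": {"train_arrives": 0.60, "announcement": 0.35,
                      "horse_gallops_by": 0.001},
    "race_track": {"horse_gallops_by": 0.70, "announcement": 0.25},
}

def surprise(context, event):
    """Surprise in bits: -log2 of the event's probability in this context.
    Unknown events get a tiny floor probability, so they come out as
    highly surprising."""
    p = EXPECTATIONS.get(context, {}).get(event, 1e-6)
    return -math.log2(p)

print(surprise("train_station", "train_arrives"))     # ~0.7 bits: expected
print(surprise("train_station", "horse_gallops_by"))  # ~10 bits: surprising
```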

Is consciousness analogous to pattern recognition? Consciousness can probably be re-defined in this context as: "knowingly(?) manipulating the symbols of input into the biological neural networks in your brain until you measure the correct (expected?) output". Well, the first question then is whether consciousness IS an output of a neural network, or whether consciousness is an encompassing ability to monitor the network from afar. The latter possibility is uncomfortably close to the idea of the homunculus.

Therefore, I take it that the output of the neural network is consciousness, but that we have the ability to back-propagate the output onto the input and thereby steer the 'thought' process in our brains. Certainly, there are other forces at work that have an effect on the output. In this view, consciousness is the output itself, while the production of the output within the neural network might be described as the subconscious.

In order to steer output, one must have an idea of the expected output. Therefore, another (separate?) network should exist that recognizes the output and compares it to the expected result. The concept of learning teaches us that a neural network can modify its weights and values to comply with the expectation. Therefore, given a certain situation, and understanding that certain outputs occur, we have the ability to train our network to form a certain expectation (generate an output) given a set of input factors. The generalization of these "ideas" (inputs) is very important. For example, you'd expect a phone in a train station. But one should therefore also expect a phone in a metro station, since both can be categorized as transportation. Once you think about how you derive the reasoning that a phone should exist in both stations, you notice that it relies heavily on persisted knowledge and the (unconscious?) categorization of what you experience in your environment.
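Here is a minimal sketch of that comparator idea in code: a single linear unit stands in for a whole network (the learning rate and all numbers are invented), and the error between its output and the expected output is what nudges the weights:

```python
# Delta-rule learning: a comparator measures expected - actual and
# the weights are adjusted in proportion to that error.
def train_step(weights, inputs, expected, lr=0.1):
    output = sum(w * x for w, x in zip(weights, inputs))
    error = expected - output            # the "comparison" with expectation
    return [w + lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(50):
    weights = train_step(weights, inputs=[1.0, 0.5], expected=0.8)
print(weights)  # the unit now reproduces the expected output for this input
```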

In this model of consciousness, you might say that there is a certain truth in saying that the network IS your consciousness. But it is a dangerous argument in that it is likely regressive (one network monitoring another, but who monitors the final network?).

First off, any system of reasoning has to start with a final goal, which may be broken down into subgoals. Subgoals are necessary when there is a lack of information to complete the goal directly. Our mind may have a limit on how many subgoals it can keep in memory (after which a problem is so difficult that it becomes impossible to solve). A computer need not have this limit.

Let's say that you are at a train station and the train towards your destination is delayed. Previous experience has shown us that the people who expect you at a certain time may get concerned if you do not notify them of the change. Our brain now has a goal to achieve, and we start to assess our options for completing it. First, do we have a mobile phone? Is it active and functioning? Is there coverage in the area? We can find out by trial and error. How much time is left to make the phone call? If we make a call from a fixed phone elsewhere, will we return in time to catch the train? All of these questions are assessed and estimated. Let's say there are 20 minutes. We would expect a phone to be somewhere in the vicinity of, or on, the station (that is a reasonable expectation). So we scan for a phone (colour? shape? brand?) within the station. Notice here that as soon as you have set your mind to recognize these shapes, it is easy to overlook phones that look entirely different from those you'd expect (one may also argue whether young children, who do not yet have these goals, are in a different 'state' of mind, in which the objective is to consume as much information as possible about the environment, sometimes to the irritation of the parents).

Then, having found the phone, we need to find out how it works. We expect to have to pay. We need to see if it accepts credit cards, money or special phone cards. Once we have found that out, we'll need to ascertain whether it actually works (that is, whether the phone behaves as we expect it to). Notice that in this case we mostly use auditory information to discriminate between busy signals and other kinds of signals that are not known in advance (a defect?).

Then there's a regular conversation on the phone; the initial goal is met by communicating the change to the recipient (the goal retrieved from memory), and we return to the next goal: catching the delayed train.

While pattern recognition surely abounds in the above story, one cannot miss certain capabilities that seem to surpass it. Knowledge, definitions and so on are continuously retrieved from memory and used in the network to make correct assumptions and expectations. Not finding the typical "UK" model of a phone (red booth) around, one might decide to scan for a more general shape of a phone, a scan which generally takes longer to complete as more potential matches have to be made.

Eventually, memory also needs to keep track of the final goal, notifying the other person. There is thus some kind of stack that keeps track of the subgoals. People sometimes ask: "What was I doing again?". This shows me that losing track of goals is easier than we think, especially when other goals interfere with our current processing.
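That subgoal stack is easy to sketch. A toy version, with an invented capacity limit, shows how an old goal can silently drop off when too many new ones intervene:

```python
# A hypothetical goal stack with limited capacity; all names are
# illustrative. When the stack overflows, the oldest goal is lost:
# "What was I doing again?"
class GoalStack:
    def __init__(self, capacity=4):
        self.goals, self.capacity = [], capacity

    def push(self, goal):
        if len(self.goals) >= self.capacity:
            self.goals.pop(0)        # the oldest goal is forgotten
        self.goals.append(goal)

    def complete(self):
        # Finish the most recent subgoal first, then resume the one below.
        return self.goals.pop() if self.goals else None

stack = GoalStack()
for g in ["notify friend", "find phone", "check coins", "read instructions"]:
    stack.push(g)
print(stack.complete())  # "read instructions", then back to "check coins"
```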

But what/which/who detects when a certain goal has been achieved? Is that what consciousness really is? How does it know for sure that it has been done?

The objective of this article is to reason about the scope of consciousness. How much of consciousness is not pattern recognition processing and how much is?

Perhaps we should consider the possibility that our brain can switch modes instantly: a learning mode, which uses exploratory senses and allows certain networks and memories to absorb information, and another mode that is used to form expectations based on certain inputs. It is probably not useful to run both modes of operation at the same time.

This way, if we find that there are no "sensible" expectations we can form, we know that we need to absorb information and learn about the environment. We might learn it incorrectly, but eventually we'll find out about the incongruity and correct it (you'll find that your friends have incomplete ideas/beliefs about things, just as you do).

Within this model, pattern recognition can be very efficient, provided there are methods to limit the analysis of context. This means that the analysis of your environment, and what draws your attention, should be limited to the completion of your current goal; otherwise your brain gets overloaded with processing or learning, or it makes incorrect assumptions. Categorizing items probably helps enormously to eliminate those issues that are not important at the time. But this requires careful analysis of the goal and the expected path of resolution, as in the sketch below.
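A toy sketch of such goal-directed filtering (the goal, categories and percepts are all invented): the current goal selects which categories of percepts even reach further processing.

```python
# Hypothetical mapping from goals to the categories relevant to them.
GOAL_CATEGORIES = {"make_phone_call": {"phone", "coins", "sign"}}

def attend(percepts, goal):
    """Keep only percepts whose category matters for the current goal."""
    relevant = GOAL_CATEGORIES[goal]
    return [p for p in percepts if p["category"] in relevant]

scene = [{"category": "phone", "where": "east wall"},
         {"category": "pigeon", "where": "platform 2"},
         {"category": "sign", "where": "overhead"}]
print(attend(scene, "make_phone_call"))  # the pigeon never gets processed
```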

Is it possible, therefore, that we create these paths (with analogies to other similar paths) in our minds, which can thereafter be "reused" as paths of resolution? Basically... learning? If we analyze the environment in which things happen and absorb everything we think is important, then, when we are inserted into an environment with similar attributes, can we reuse the recorded events in a simulation to form expectations of what is about to happen? Can we "simulate" (think, develop future expectations of) what is going to happen if we apply a certain action?

The above suggests that real intelligence is not the ability to manipulate symbols, but the ability to simulate hypothetical scenarios and expectations and adjust your goals according to the expected outcome of each. The fidelity and precision of these scenarios is then what matters.

Dogs can learn things as well, but is it the same kind of learning? Why are we able to use language while dogs can't? Our awareness is better, as are our methods for making assumptions and expectations about the environment. Is this only because we have larger brains? How can we explain the behaviour of a guide dog for the blind? It has learned to behave a certain way based on input signals, so surely it must interpret the environment somehow. Does it act instinctively, or does it possess a kind of mind that is much simpler than, but similar to, the human one? Is it fully reactive to impulses, or does it have expectations as well? Surely it must be able to deal with goals, which in the case of the dog are received from its owner.

Monday, December 24, 2007

Modularity of Mind

Merry Christmas everybody. I'm just writing up some recent thoughts.

Some books I was looking at on Amazon consider the mind as a thing with a modular composition: a module for language, another for reasoning, and so on. Logically it may be possible to dissect it this way, but I don't think this should be confused so quickly with physical modularity.

The previous post considered pattern recognition as the main vehicle of reasoning. I thought about this more and more, and I just felt as if something was missing. Pattern recognition is all around us and necessary, but it doesn't feel like AI and neural networks are the core of what our minds are about. I miss something that constitutes logic. Because even if we have the ability to recognize words from a stream of noise, visual patterns in what we see, or objects and so on, that does not yet allow us to manipulate those things and combine them with other items.

Or, in other words... in my meanderings I missed the element of "consciousness": what it is about and how it is intertwined with our ability to recognize patterns. I also think of consciousness as the ability to learn, identify and establish new patterns. For an artificial network to learn things, something must exist that compares output with input and recalibrates the network. What is that thing inside our mind?

An easier way to think about this is skill acquisition. When we learn to drive a car or ride a bike, we combine certain inputs (balance, sight, motor control, accuracy, action/consequence patterns, danger recognition) and eventually patterns are created which allow us to perform the task 'automatically'. Before it gets there, however, we consciously accompany each action and consciously make adjustments until we finally get it. So it feels as if, besides pattern recognition acting as a kind of auto-pilot, we consciously need to evaluate the world around us to learn from it. And even then, we keep applying consciousness throughout a journey, for example when entering new territory, when certain elements have changed, or when traffic is significantly dense (it is next to impossible to execute other tasks in those situations).

So I am basically concluding that pattern recognition by itself is not sufficient for the human mind. But I would not go as far as saying that the mind can be thought of as a physically modular kind of thing. I'd rather think of it as a richer neural network than current AI provides, probably something that contains other elements for reasoning, logic and learning that we are as yet unable to perceive. Memory (and retrieval) is another thing that I had started to neglect.

The symbols that flow through the network may not be numbers. But if they are not numbers, what are they? If I consider an AI network in a computer that does not use numbers, but keys or some gibberish that I declare equivalent to some kind of symbol, will the output be sensible after the network has manipulated and processed it? It sounds too random for that to be true, unless the output is somehow matched to something else. Maybe these outputs are basically nonsensical symbols that are keyed to some kind of knowledge. Whereas knowledge in AI networks is embedded in the weights, I think of knowledge slightly differently when applied to neuroscience.

Friday, December 21, 2007

The Irrational Android

I've written the last few posts from a philosophical perspective on how the mind works, with input from some different sources and books and some of my own reasoning and reflections.

It is a very difficult thing to introspect the mind; reverse-engineering yourself is an absurd idea. A computer might be able to register and find out how other computers work, but it is very unlikely to gain any kind of consciousness or knowledge about the state of a single transistor in itself, unless it was specifically designed that way (and such a design generally has a reason).

Artificial intelligence uses "neural networks" to solve some specific types of problems, which in general is a fancy way of saying "pattern-seeking". Some pattern exists in some glob of data, and the network is trained to process the information and produce, as its output, some new parameters or data. Sounds pretty simple. Many networks like this have only 16 neurons, yet are capable of inferring quite a lot. For a good example, read the following article on neural networks and download the example executable:

http://www.ai-junkie.com/ann/evolved/nnt1.html

More links:

http://www.ai-junkie.com/links.html

http://halo.bungie.org/misc/gdc.2002.haloai/talk.html?page=2

So, in the example you can see that each minesweeper has as its input only the direction to the closest mine. There is no algorithm or other function that changes the direction of the minesweeper directly. Each minesweeper has its own neural network (its brain). The output of the brain is used directly to control the side tracks (left and right). A very simple physics engine then calculates the rotation of the minesweeper and where it goes next.

The first generation is quite useless, but the winners that picked up at least one mine are promoted to the next generation and may produce children. Eventually, after 50 generations of "natural" selection (of the fittest), the minesweepers exhibit rather intelligent behaviour with regard to the mines: many more minesweepers become very efficient at picking them up, even changing direction immediately to the next available mine as soon as the one in front is picked up by another.
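The generational loop can be sketched in a few lines. This is only an approximation of what the demo does (the real version also breeds by crossover, and scores a brain by mines actually swept in the physics simulation; my toy_fitness is an invented stand-in):

```python
import random

# Each "brain" is a vector of network weights.
def toy_fitness(brain):
    return -sum((w - 0.5) ** 2 for w in brain)  # stand-in for "mines swept"

def evolve(population, fitness, generations=50, mutation=0.1):
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        winners = ranked[:len(ranked) // 2]        # "natural" selection
        # Children are mutated copies of randomly chosen winners.
        children = [[w + random.gauss(0, mutation) for w in random.choice(winners)]
                    for _ in range(len(population) - len(winners))]
        population = winners + children
    return population

population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(30)]
best = max(evolve(population, toy_fitness), key=toy_fitness)
```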

Artificial neural networks were modeled after the biological brain, which has on the order of 100 billion neurons, with dendrites (the inputs to the neurons) and axons (their outputs). Something happens inside the neuron that results in it triggering an output charge or not. This only happens after the neuron receives an input charge (or several).
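In code, the textbook abstraction of such a neuron is just a weighted sum of input charges passed through a threshold (the weights and threshold below are arbitrary):

```python
# A single artificial neuron: fires (1) if the summed weighted
# input charge reaches the threshold, otherwise stays silent (0).
def neuron(inputs, weights, threshold=1.0):
    charge = sum(x * w for x, w in zip(inputs, weights))
    return 1 if charge >= threshold else 0

print(neuron([1, 1, 0], [0.7, 0.6, 0.9]))  # fires: 1.3 >= 1.0
print(neuron([1, 0, 0], [0.7, 0.6, 0.9]))  # silent: 0.7 < 1.0
```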

The most interesting question here is whether those billions of neurons are the human brain, are human thought and allow for reasoning, or are only part of this. Emotions, as we call them, and the way we sometimes uncontrollably display our emotional state, suggest that it is not just a computational function. In other blog posts I reasoned that emotions are the driving forces behind humanity. I also challenge the line of thought that rational thought really is rational in nature (void of emotion), because I recognize that most of our human actions and interactions are somehow based on emotional action or response, with some reasoning used to amplify or dampen the response (withheld or exaggerated emotions). I think every one of our decisions, which people sometimes label as rational, is actually an emotional decision with a cover of argument. Only when we reason within a factual model (science, maths, etc., which are generally highly deterministic and consistent) can we state that our reasoning is void of emotion: 2+2=4 and will never be 5, whereas our human decisions in similar situations can differ greatly through the prioritization of emotional importances, which are then argued for as if they were intellectually considered points.

Should an android like Data from Star Trek exist, I would not expect him to display emotional state, nor to advise anyone on the best course of action through a lot of reasoning. One could not ask Data if he wanted to go out for dinner, because he would not be able to resolve the question: he doesn't feel anything and cannot reason within the emotional domain. One could not ask him if he liked the colour red, or if he wanted to take care of little androids in his life. To want and to like are unimaginable and unresolvable concepts for the android.

So... what does this mean for us human beings? The brain is a bunch of neurons, and outside the microscope we see a lump of meat (humor: sentient meat):

http://www.netjeff.com/humor/item.cgi?file=spacetravellers.txt

Is it really possible that billions of neurons connected together can fool us into thinking that we are conscious beings? Is consciousness embedded within these neurons, or is consciousness yet another force in our brains that uses the neurons for reasoning and recognizing patterns?

If it is true that the neurons drive us, then we are basically nothing but pattern-recognizing machines. Everywhere we go, we quickly recognize the world around us and continuously reinforce new patterns when experiencing things never experienced before. It is basically input -> processing -> output for everything we do, look at, hear and so on. To ourselves, our thoughts claim that we look at absolute objects and have absolute knowledge of the world around us. That is, we recognize that a brick is a brick and cannot be something else. But maybe the real way we look at the world, *inside* our neurons, is through pattern recognition. If we then consider "thought" as the output of the network, it is clear that the inner workings lie hidden from us. We have no specific knowledge or awareness of the process within the network, since the output is all we measure.

This might mean that the experience of this world is largely driven by (partial?) neuron activity. You see something, it gets processed, it activates other neurons, and so on, until at the output we recognize it as something.

A mystifying factor here, of course, is the ability to learn. Not "learning" in the sense that artificial networks do, but learning in the sense of having an interest in understanding something. An ANN is a network that has a very specific purpose and is conditioned to execute that purpose with a training set. It recognizes only that pattern; don't ask it to reason about anything else.

Reasoning can probably also be expressed as the activity of comparing pattern-recognition programs with one another, in the hope of reusing one or pre-initializing a new capability in a new network.

Another limiting factor for androids might be that I do not expect them to start asking questions. Would Data ask anybody else about a particular event? He can be given a goal to execute, and the result can be right/wrong or informative, but will Data ask questions of his environment to enrich his own knowledge? Or actively seek this information on the Internet?

Watching a baby develop into a toddler, there are many interactions with the environment. Given a clean slate, the baby starts to develop an interest in the environment. When it is born, it cannot focus its eyes or move its muscles in a coordinated way. These abilities can probably also be seen as "patterns", where sight and hearing are paired together and then compiled into a network whose output is used directly to control the muscles. Research here into the 'clock frequency', i.e. how fast our movements are adjusted based on changes, would be interesting.

In further development, children are shown children's books and we point at pictures and say "zebra" or "tiger". At some point children get it. Then ask a child... "Can lions walk?" and the child uses reasoning to find this out. Uncertainty is a feeling one can have about the validity of a certain answer; it has a direct relationship with the strength of a belief. If a belief is formed by the strength of the triggering of a set of neurons, then you could say it is also how strongly a pattern has been recognized. If a lion has legs that bend backwards and looks somewhat like a cat or dog, and things that have legs can walk, and cats and dogs do walk, then with a good amount of certainty it can be said that lions walk too. Now consider the parent... there are many different little bits of knowledge, which we pair together in our reasoning, that the parent uses to teach the child (in that usual motherly voice): "The lion has legs and manes and big teeth. It can bite and is dangerous".

If this pattern recognition is at the source of intelligence, it also explains why we are so given to categorization. By forming larger categories, we collect truths in the same basket and then test other things against that rule. That greatly reduces the need for storage space in the brain and makes it possible for us to make assumptions about things never before encountered in this world.
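A toy sketch of that basket-testing (the taxonomy and facts are invented for illustration): store each truth once, at the category level, and let specific things inherit it by walking up the chain.

```python
# Hypothetical taxonomy: each thing points at its category.
CATEGORY_OF = {"lion": "quadruped", "cat": "quadruped", "dog": "quadruped",
               "quadruped": "animal"}
# Facts are stored once per category, not per individual thing.
FACTS = {"quadruped": {"can_walk", "has_legs"}, "animal": {"breathes"}}

def knows(thing, fact):
    """Walk up the category chain until the fact is found (or the chain ends)."""
    while thing is not None:
        if fact in FACTS.get(thing, set()):
            return True
        thing = CATEGORY_OF.get(thing)
    return False

print(knows("lion", "can_walk"))  # True, without storing anything about lions
```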

Problem-solving skills are the next step. Given a particular goal, we can think and think and come up with a solution to meet that goal. We could say that we have the ability to 'imagine' things we have seen happen in real life and then try to replicate that same event, behaviour or property by other means. Problem-solving depends highly on the ability to create different hypotheses and to recognize (cross-pollinate) ideas from different analogous areas.

It thus sounds as if the brain has very powerful pattern recognition abilities, helped and supplemented by additional capabilities that use this network as a tool: imagery patterns, auditory patterns, behaviour patterns and so forth. When studying a child, one can clearly see that it is a lengthy process to get our neurons in order. Learning takes a long time to finish, and even then we humans do not all develop in the same way or exhibit the same behaviour. Each one of us is unique. Our ability to deal with noise that clouds a pattern is amazing.

An android might actually have some advantages in learning. It should be possible for a computer to expose the internal states of its neurons and visualize them, or to execute "what-if" scenarios (since these are merely computed, they need not have any effect on the state or quality of the network), so that we can steer the development of the network or understand it better. One can also imagine a desktop that displays the processed elements with interactive handles for improving the network, the same way a parent points at things in pictures to explain why certain associations are true, and how the same properties in other pictures, which may look somewhat different, demonstrate the same behaviour or constitute the same thing.

Thursday, December 20, 2007

The Mind Computer

The last few posts were about the human mind and contained a number of reflections on the issue from various other writers and my own hypothesis.

As we go from place to place, we absorb a very large amount of sensory information from the environment. Not *all* of it is stored indefinitely in our memory, but a very large number of specific landmarks are, and we recognize things around the house, and so on.

If you were to take a picture of all the things we see and store it as a binary blob of information, it is absolutely incredible how much information the brain can contain. And those are only images, not smells or auditory information. If the brain were to store it as such, is there any limit to how much we can store? Can we express the storage capacity of the human brain in megabytes, gigabytes, terabytes or petabytes? And what happens when we reach the limit? Will we ever reach it, or will we push out other information that we don't need?
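A back-of-envelope calculation, with invented but plausible numbers, gives a feel for the petabyte question if the brain really stored raw pictures:

```python
# Assume one uncompressed 640x480 colour frame per second,
# 16 waking hours per day. All figures are illustrative.
frame_bytes = 640 * 480 * 3               # ~0.9 MB per RGB frame
per_day = frame_bytes * 60 * 60 * 16      # one frame per second, 16 hours
print(per_day / 1e9)                      # ~53 GB per day
print(per_day * 365 * 70 / 1e15)          # ~1.4 PB over a 70-year life
```

Which suggests that raw storage is wildly implausible, and strengthens the idea below that what gets stored is something far more compact.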

I can more or less vividly remember (and imagine, in fact, see it as if I were to look at it right now), certain scenes of the ships I sailed on, the engine room, the cabin, the bridge, the horizon at night, my car in the UK, even the house as I knew it.

The impressive thing is that the picture need not be exactly equal for us to make a match with history; it may actually differ quite a bit. A computer, on the contrary, relies on pixel-by-pixel verification and cannot recognize objects or entities in photos automatically, unless it runs a highly specialized algorithm. Some really interesting work was recently presented on content-aware image resizing that provides some clues about the "real" information contained within an image:

http://www.youtube.com/watch?v=vIFCV2spKtg

Perhaps the technology may prove useful for image analysis in the future, where it would reduce the unnecessary parts of the image and keep the landmarks. When you describe to another person how to drive somewhere, you make use of landmarks; you don't generally rely on distances or the boring parts. Finding your way in a city is often more difficult, unless you rely on specific monumental references.

What if the mind does not store the image itself, but a processed image? An image decomposed into... numbers even, where the numbers can then be compared against other numbers derived from another processed image.

A big question is whether the image processors / sound processors in our heads are re-used at the time we look up our stored media. Are we reconstructing the images from (poorer) stored material? The more (useless) information you let go, the easier it is to find a match. The interesting thing here is also that we can rotate items in our mind's eye and thereby form expectations of what the back of something looks like.
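A minimal sketch of that numbers-instead-of-pixels matching (the feature vectors below are invented stand-ins for whatever the brain actually extracts): reduce each scene to a short vector and compare vectors, so near-matches still score high even when the pixels differ.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors: 1.0 is identical."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

stored_scene = [0.9, 0.1, 0.4, 0.7]   # e.g. remembered "engine room" features
new_glimpse  = [0.8, 0.2, 0.4, 0.6]   # similar, but not pixel-identical
print(cosine(stored_scene, new_glimpse))  # close to 1.0: a match
```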

Does this mean that the brain itself is a biological digital computer of numbers as well? Or is it an analog machine like a valve amplifier or a CRT? How is it possible to remember things and create imaginations based on descriptions?

Cognitive science has theories about a language of mind called "mentalese". This is a symbol language, but still "imagined": we cannot look into the mind to find out what it looks like. I don't even know if it looks like anything; it might be just mush and numbers, or perhaps something that can be equated with numbers... But then, how would one convert these 'symbols' or 'numbers' or whatever they are into symbols that a computer can work with? The computer is a number machine, so it would require numbers in some representation to be able to do something with them. The function of the mind is then an algorithm, neural network, matrix or whatever construction that turns numbers into something else, manipulates them, stores them and generates meaningful output.

Tuesday, December 18, 2007

Architecture Of Mind

Previous posts discussed many individual aspects of the inner workings of the mind that I have read about so far. As a SW architect, I prefer pictures over words to convey meaning. I've been working on a picture that is reminiscent of the OSI layer model in computer science, which describes how communication takes place over networks. The picture is here:


Based on the books, it shows that at the physical layer we have neurons and chemicals that somehow interact. Much like in a computer, this layer only becomes interesting for absolute experts. It is at such a low level that understanding how it works might make clear how things operate; but since it is so complex, executes at a very high frequency and is most probably parallel, it is difficult to develop expectations at that level of operation, unless you subdivide the most basic functions (CPU instructions) into grouped functions with a particular purpose, and generalize further from there.

In this picture, I see neurons as transistors or silicon. They are basically thousands and thousands of black boxes that interoperate, and the sum of all their minuscule operations has a certain effect. The silicon accommodates the flow of currents for communication between transistors, where transistors modify that flow. The same is probably true for neurons and synapses. This is about where the direct analogy (should) stop(s), as I believe that the household computer as we know it has a totally different architecture from the human mind, due to its requirements of determinism and finiteness. The human mind could be infinite (?) in its capability to produce new thoughts and new goals based on previous contexts. Computer programs are mostly single-goal oriented and are generally not engineered to produce new goals along the way.

The physical layer thus contains neurons (or "mush") that accommodate emotions, feelings, the mind's eye, pattern recognition, rotation, language(?), reasoning and thus intelligence. Intelligence can probably also be described as the interaction between pattern recognition and reasoning (which is setting new goals or imagining consequences, i.e. developing expectations, based on previous experiences). Inventions and innovations are the acts of making new associations where previously there were none. With each newly created association, we likely become more intelligent. One's aptness at developing associations is probably one's IQ. Emotional intelligence is probably the sympathy in observing another's behaviour and developing expectations based on those observations through other associations, one's aptness at reading behaviour (being attentive to signals), and so on.

In the previous post it was said that the main goals of our being are determined by our emotions and feelings. Feeling hungry means instinctively searching for food that is edible. If food is not around, we invoke our intelligent system to determine the best course of action to get it (or to zero the emotion if the effort is larger than the desire). If you "feel" lazy, you might want to buy something at the local gas station rather than go to the supermarket. So the emotional system works very closely with the intelligent system to resolve problems. If you have access to a car, you might want to go into town, park the car and do some shopping. Unless you forgot where you put your car keys, in which case frustration may set in, along with a new goal: taking the bike. Unless it looks like rain outside and the feeling of wetness is not a very pleasant foresight, which might cause you to look into the freezer and defrost some ready-made meal after all, even though initially the goal was something fresher. The interaction of emotions, feelings and intelligence is clear. The discovery, through our intelligence and memory, that something is impossible might cause feelings of frustration, which might release chemicals that put our brain into a higher state of awareness, much like adrenaline prepares the body for a fight.

The main goal in this case is always set by emotions or feelings. I haven't yet thought deeply enough to find cases where a main goal is 100% determined by intelligence. Intelligence can set subgoals to achieve the main goal. The subgoals are determined by imagining how the main goal can most efficiently be achieved, which is done by looking for a path to that goal, mostly by considering historic events and how well things worked out in the past. If we encounter a barrier that blocks access to the main goal, we invoke our associative mind, our reasoning and our expectations of outcomes to try to remove that barrier. A sense of urgency might change how we reach those goals.

So reasoning and assessment rely heavily on the services of the mind's eye, imagination and memory. The logic revolves around imagination and the analogy (pattern similarity) to other outcomes. The main goal always determines our eventual behaviour for the event in the long run. Subgoals might change our behaviour slightly, or modify it entirely for a while, but should always serve the main goal in the long run.

The picture is not quite complete, as there is closer interaction between the emotional system and behaviour. One can imagine that our body has been programmed to react instinctively to another person's behaviour: a direct response instead of a response evaluated by the mind (if not, we would probably look and act like robots).

There is a continuous interchange between the emotional system and the "intelligence" system. The "depth" of recursion in the intelligence system (the ability to resolve complex cases like "if this, then that, and then that, however when, if not, etc."), the congruity that a particular mind needs (or the incongruity it can suffer) to find associated material in memory, and the amount of experience are probably the best factors determining intelligence.

As I have said in previous posts, I imagine that the brain is thus not a 'stack-based' computer, but a machine that always moves forward within its own context. I also imagine the context as something fluid rather than set in stone (as is the case with computers). The following picture shows how I regard memory, which I call "associative memory", since it recalls symbols as we listen to a piece of music, hear speech, see a scene or "think about" / "imagine" things.


The picture shows a line, the "thread" of a conversation or the "thread" of an observation/thought. We have the interesting capability to "steer" our thoughts into new directions. Are we modifying or creating goals at the same time?

The items in associative memory are not just in "drawers" or memory locations like in a computer. A computer may use lists, linear memory or hash keys to organize information, but no matter which method is chosen, there is always an incremental cost to look things up. This cost is expressed in O-notation, and in general, the more information you store, the higher the cost. This has the nasty side effect that becoming smarter means becoming slower, and in certain cases some problems become unsolvable unless you spread the work over many machines. Google is a perfect example: the system basically stores the information on the Internet and uses many, many computers to open up that information to others. However, Google cannot "reason" with that information; it can only process it and modify its relationships for a particular purpose.

So the model above shows a kind of memory that does not store information and make it accessible through keys. It shows memory where elements are naturally associated with one another and where these associated elements are automatically brought forward. The "thread" determines the direction of the context and how further associations are made. The further away from the thread, the lower the activation of that particular synapse or memory element. The context is thus basically the set of elements that were invoked and what we know about them. The context (the associations) also gives us rules. If a certain (new) association cannot be made, the thread must be redirected or halted and a solution suggested by the reasoning part of the brain. When reading nonsense, we cannot allow the nonsense to be stored as reality in our brain, since that would taint our model of the real world. How do we prevent this from happening? By testing the thread against our current model and seeing how it complies. Changing a belief then means changing certain associations that we took for granted.

It is also possible that strong beliefs are formed when certain associations are often walked by threads. So associations probably have weights. It is not uncommon for us to believe very strongly in an association, but then discover that it is invalid. We resist breaking the association very strongly, because the new evidence is still a very weak association that does not have many links to others. Only when we forge other associations with the new evidence do we accept it taking the place of the old element. And we don't just "forget" the old element either: it becomes like a ghost image superimposed on the initial association, tagged as a false belief.

A computer thus cannot replicate this behaviour easily. There are constructs like linked lists and so on, but linear memory (the way memory is built today) is not ideal as a storage substrate for associative memory. It would be easier to think of associative memory as elements that form tiny threads between them (which can strengthen into cables) and that probably move closer together.
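A sketch of such memory as a weighted graph (all nodes and weights are invented): recall spreads activation from a cue along associations, fading with distance from the thread, and every traversal slightly strengthens the thread it used.

```python
# Hypothetical associative memory: nodes linked by weighted "threads".
ASSOC = {"grass": {"smell_of_cut_grass": 0.9, "green": 0.7},
         "smell_of_cut_grass": {"feeling_happy": 0.8},
         "green": {}, "feeling_happy": {}}

def recall(cue, activation=1.0, threshold=0.3, seen=None):
    """Spread activation outward from a cue; weak echoes die below the
    threshold. Returns every element the cue brought forward."""
    seen = seen if seen is not None else {}
    if activation < threshold or seen.get(cue, 0) >= activation:
        return seen
    seen[cue] = activation
    for neighbour, weight in ASSOC[cue].items():
        ASSOC[cue][neighbour] = min(1.0, weight + 0.01)  # use strengthens the thread
        recall(neighbour, activation * weight, threshold, seen)
    return seen

print(recall("grass"))  # grass activates the smell, which activates the feeling
```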

It's difficult for many human beings to do two things at once, to think two things at once. This suggests we have something analogous to a single CPU available to us. Other research shows that when we process information, we can only deeply focus on 3-5 pieces of information and derive results from those. That is analogous to a CPU that has about 4 registers available for processing. However, Intel processors work intensively with stacks, which means computations unwind and continue where they left off. I think of the mind as a CPU that always moves forward, has no stack, and simply finds new goals and conclusions and stores them as associations in memory. In computer lingo, functions that return parameters or allow output parameters or pointers simply do not exist.

The point of this whole post is to reason about the architecture of mind as if it were possible to build it into a computer. I see points in the logical function of the mind that are incompatible with the current Intel architecture I am familiar with. Memory is linear, but should be associative and fluid. The computer/OS is stack-based and always "attempts" to resolve subgoals at that point in time. I think that stack-based computing is the barrier to further intelligent systems. These intelligent systems are difficult to imagine because we are not familiar with them; their architecture needs to be thought out. Maybe they become easier to build in the long run than deterministic systems (who knows?), or maybe they are significantly harder to program. That is to be expected when you model the human mind to some degree. But full artificial intelligence (reasoning systems) requires associative memory that can forget, the ability to form (new) goals based on things perceived on the outside, and so on.

Another thing on the "mind's eye" that I find incomplete... Some psychological or "HR knowledge" states that some people are "auditive" or "emotive". These labels relate to our senses: vision, gustation (taste), olfaction (smell), audition (hearing) and somatosensation, the last being a fancy term for everything we sense in the body (allow me to include "emotional state" in that sensation as well). The mind's eye, however, has been described as a purely visual operation. I don't know whether anyone has ever done research on how blind people, for example, handle their "mind's eye".

I can also personally reflect on the "mind's eye" myself. It is basically imagination itself. But I cannot only imagine visual elements (images, a word which probably invoked this whole oversight); I can also imagine auditory elements, music, feeling a hot pan, feeling a cold pan, smelling grass, smelling strawberry and more... I can store those senses as elements in memory. So, rather than thinking about memory as a set of images, it is a set of experienced senses at some point in time, which are inter-related and give me a more complete picture of some event or thing. When I store the element of "feeling intensely happy" together with the image of freshly cut grass, and especially with the smell of it, it is not difficult to imagine that the smell of freshly cut grass can evoke the same feelings in the future.

That latter part suggests that our mind works even more intricately. It is as if the "processed" elements of information that we store in our memory (not the raw signals, but perceived signals, post-processed by our organs like the nose etcetera, the very signals that are sent to the brain for further processing), when recalled in our imagination, cause our senses to relive the stored event. Some research I read at some point stated that when we speak, our brain temporarily dampens our auditory senses so that we can recognize our own voice; if not, we would be startled every time we make a noise, thinking a stranger is in the room. Maybe this mechanism is more intricate, and we actually reuse those processing parts of the brain for reasoning itself, and the brain is not just one big mush (or maybe just in the physical sense, not the logical sense). We would actually have dedicated areas that we can reuse in our imagination as well.

So I make the point that the "mind's eye" sounds incomplete, and that the only way we can make deductions and reason is through previously experienced things that are stored in memory, not only as images, but also as smells, auditory information and anything else we can perceive about the situation.

It would also make the case that a computer cannot become "aware" unless it is given multiple senses itself. A sense for emotion would be very difficult, as emotion is entirely internal and I doubt it can ever be re-engineered (it doesn't look like anything). We could perhaps simulate it by perceiving behaviour. But given audio, images and other things we can externally observe, maybe it is possible to store things together and, in the future, build a computer that is capable of doing similar things with that information, through associative memory and the development of a context in which the observations are assessed and reasoned about.

Monday, December 17, 2007

Emotions as genetic instruments

I finished one of Steven Pinker's books yesterday, "How the Mind Works". The final chapters are a wonderful excursion into an explanation of emotions, their raison d'être, and how DNA provides the potential building bricks for having these emotions.

In this perspective, Pinker and others attribute the reason for having emotions to better chances of survival and reproduction. The explanation is that genes are the building bricks that lead to having emotions. As usual with science, philosophy and especially cognitive science, all sorts of basic questions pop up about things you generally take for granted as you grow up.

Marriage is a very common concept all over the world, in almost any society or community. The explanation is that marriage is an "intelligent" method to reserve the attention of a spouse, or to reserve the use of a uterus for the reproduction of your own genes. Marriage is, in this context, also a contract of property over a woman. This paragraph is very unromantic; it views the concept of marriage from a purely biological perspective. Please don't consider this an attack on morality or ethics, or a suggestion that those things should be forgotten; they are entirely different discussions. Marriage is a public declaration by both spouses that a woman is dedicated to the reproduction of their shared genes, and the man dedicates his attention and protection to that reproduction as well. The idea of marriage is that this treaty cannot easily be broken by outsiders, further demonstrated by the wearing of a wedding ring. We construct further laws around the idea of marriage. I can imagine different social structures, for example harems, with the men fighting amongst themselves, where the non-winners become expendable armies driven by the leader to protect the pack of women. A full explanation of marriage and the social structure we ultimately developed is given in the book, so I urge people to read those chapters rather than this blog for further clarification.

Men don't "feel" that they should flee when war breaks out, whereas women generally try to find the first hiding place. Men are thus biologically equipped to fight threats, even though, if you think rationally, war doesn't provide good odds for survival. Men are generally more violent than women (which is not to say that women are always non-violent). In most societies, only men are expected to go to war; women are expected to stay at home with the children. It is morally repugnant to murder children and women, while it is only a shame, but morally acceptable and not that shocking, when men are killed in the course of violence. In the context of gene reproduction and evolution, a woman is far more attached to the consequences of the decision than a man: it takes nine months, and much more dedication afterwards, for a woman. It is only natural for women to seek out partners who are willing to provide the after-care, attention and protection. Hence courtship. Courtship is the declaration of a man that he is willing to make the investment. A better courter has better chances. It is thus not always the strongest man of a group who has the best chances, although other zoological families prefer the strongest. Since humans have different problems to solve after birth, the (biological) needs are different.

Which brings us to further interesting points, pornography for example. Why is almost all pornography aimed at men? Pornography for women is much less abundant, almost non-existent; Playgirl is mostly read by gay men. There are plenty of bars with female exotic dancers; how many bars are there with dancing males? An explanation from a genetic perspective is that men can in theory mate with many women, but only insofar as they are still able to guarantee protection for the upbringing, so this is not unlimited; the point is that the possibility is there. A woman can typically mate with one man, and after that is tied to her decision for at least a number of years. Some research has been done on this topic: a naked, anonymous, unknown woman was felt by men as an opportunity and aroused almost all of them. Women, however, perceived a naked, handsome, unknown, anonymous man not instantly as an opportunity, but firstly as a threat. This doesn't mean that women always need men around them that they know, but the first reaction to naked men isn't generally immediate and uncontrollable attraction.

Another point in the book concerns adultery. A man's worst fear is the act of adultery itself by his spouse. For the woman, the worst fear is not necessarily the act itself, but the loss of commitment and attention by the husband, should he decide to redirect his attention to someone else. This is about "worst fears"; I did not say that adultery committed by men is something women would typically allow.

Wealth, status and dominance are all measures of fitness; beauty is a measure of health. Bringing these factors together in our society: we still pursue wealth like crazy and compete strongly with other men, while women dress up nicely, make themselves beautiful and thus compete with other women. If we were solely rational, thinking beings, without emotions having a strong say in the forming of our thoughts, we wouldn't need to compete and make ourselves beautiful; it would surely save a lot of time in a day.

In another blog post I mentioned that we think we are smart, but are actually still very much subject to emotions. This shows up sometimes when whole societies or nations go to war with one another. Why feel strongly about the piece of earth you grew up on? If someone attacks you, it might be much more efficient to just pack up and leave. We like to think that our actions are 100% determined by intelligence. Yet if you think rationally and consider the same situation for someone else, a radical thinker might just discover that for a host of generally accepted reactions, the best course of action is actually different. Some of our mundane and dark desires still bubble up all the way to the surface, and where we notice they are rather basic, we often try to cloud them with "reason". Can reason be an initiator of action? I think of reasons as explanations for behaviour, or demonstrations that certain instinctive behaviours (emotionally driven actions) also make sense from an intelligence perspective.

It'd be rather difficult to think of a human being 100% guided by intelligence. The reason is that intelligence doesn't really provide a goal. As soon as a goal is set, however, intelligence helps enormously in achieving it (consider how humanity evolved over the past decades and millennia). But to set a goal...? Are goals set by intelligence, or are they actually set by some emotion, some underlying biological drive? If all our goals are somehow based on emotion, then it is fairly logical to conclude that we cannot be 100% intelligence-driven. So when I say that humanity is emotionally driven and probably cannot act on intelligence alone, does this make the picture look bleak? After all, ethics and morality are intellectual constructs, not emotional or instinctive ones. Why do we want humanity to be intelligence-driven? And does justice take emotional actions/reactions into account and protect them, or does it counter direct emotional actions? Is intelligence then, in this context, partly a plot in our brain to counter/control our biological purpose?

It is far easier for humans to remember negative events. Our vocabulary for negative emotions is twice as large as the one for positive emotions. Humanity has been subject to a large number of very negative events, in certain cases encompassing the whole world, for example World Wars I and II. The film "The Fifth Element" paints a picture of humans and humanity as wild, savage beasts that do nothing but engage in warfare. We do see frequent wars and atrocities. Books like "Humanity" by Jonathan Glover are interesting reads on the sociological causes of war and the circumstances in which war can occur. Thinking entirely and 100% rationally, war doesn't make sense: there are always ways to avert it, if both sides think rationally. We may feel the urge to submit another tribe/nation to our will, religion or way of living, but is it sensible? Intellectually, war is rather stupid. Then where does that feeling of "pleasure" or "need" in war originate?

On a more positive note, other things can be said about wars. The impact, size and involvement of wars has risen with our possibilities for communication: ally-seeking, treaties and agreements. Worldwide mass media and communication definitely help to involve every country. Just think about it... a global war, a world war, is a direct demonstration of reciprocal aggressive behaviour that doesn't make much sense if you think about it rationally. However, how does the picture look when we remove the very effects of worldwide communication and globalization from the equation? Has war really become "worse" compared to other centuries? And the frequency? And what about the reasons for going to war?

Experiments have been conducted in which a class of students was divided by some imaginary or real construct; in one case, a coin was flipped in front of the whole class. Being part of a tribe, even an imaginary one, seems to be very important for people. People will modify their behaviour accordingly, and in many cases will try to subvert others to the same division, or turn on them. That, to me, is the strongest evidence that we are very much guided by our emotions and that our goals are not determined by our intellect. In these experiments, fights broke out, and people tortured others whom they would not normally torture in other circumstances. The Rwandan atrocities happened because the people had been divided by physical criteria such as height. This division occurred through the Belgian colonization, where the Belgians called one side Hutus and the other Tutsis. The sad part is that these were people of the same community; it was an intentional division by the Belgians. Years later, based on this artificial division, the people killed each other: because of that division, and the resulting propaganda and behaviour of those who felt part of this new "tribe".

Continuing on the positive note, there are also great positive achievements of humanity that are not often highlighted: health care plans, women's right to vote, abolishment of torture (even though not practiced everywhere), welfare rules, public government services, courts and the justice/legal system. Free speech. Free thought. Free press. And so forth. So before anyone concludes that humanity itself is doomed because the wars don't stop, there are other things to consider as well, things not as easily remembered but very important to insert into the equation. We tend to think negatively in many cases, but we should also think positively about achievements: not take them for granted, but treat them the same as the negative events and celebrate them more.

It will be difficult to understand to what degree our thoughts, opinions and declarations are based on our emotions rather than our intelligent thought. I'm not sure a separation is even possible, since thought is given direction by goals: a goal to convince, a goal to entertain, a goal to improve status, a goal to...? I am not sure, therefore, where the future will bring us. Should we aim to think 100% rationally, or is that just going to destroy us, since emotions are better guarantors of survival? Can we develop methods in the future to separate emotion from rationality?

If, at the very depth of our being, we are driven by emotions and this is what gives us goals, would a life without emotion, of pure intelligence only, become meaningless?

Wednesday, December 05, 2007

TomTom roll-out in Brazil

I'm doing some work for TomTom at the moment, and recently they announced a roll-out of their devices in Brazil. These will cover mostly São Paulo and Rio, amongst other cities, plus the general routes in the country. So it certainly doesn't include everything; the country is too large for that at this time. But you can already benefit somewhat from the device over there. Or... knowing how people break open cars for car radios and cellphones, this may be yet another reason to have your car broken into.

End of year is coming up. I'll be visiting Brazil again for family mostly. Back before New Year's.

I ordered my Honda Hybrid yesterday and am expecting it in January at the earliest, February most probably. That'll be the car for the next 4 years.

Monday, November 26, 2007

Lack of Internet

Well, my fast computer is now unhappily very single and isolated, and hasn't had the joy of talking to other computers around the world to exchange information. One can only imagine how it must be feeling at the moment.

Anyway, I haven't sulked like my computer in this new and warm house. The move otherwise went very well, and the living quarters I am in are very spacious and especially friendly. The house is ground-floor only and has 3 bedrooms, one kitchen and a large living room. Shower and toilet too, of course, or things would get quite messy pretty soon.

This month I began a PHP job for a larger organization in Holland. I finished within a week and moved on to *another* PHP job, this one dealing with secure payments and a security audit, the details of which I am not allowed to disclose under NDA :). I'm now working for an organization that grew very quickly and is very popular among GPS device lovers. Yep. That one. And there's a bit of PHP there too, although I am not technically involved this time. I have the joy of bringing clarity into the functional description of the system. Challenging and fast; things change under your fingertips as you're writing them up. It feels very healthy when things change that fast, although it's difficult to keep a full view of what's going on.

I'm still reading cognitive science when I can. I read a bit more of Steven Pinker's book and it's getting really interesting. When you understand more about perception, about how you work and think, it may sound like it takes away some of the magic, but it also creates more mystery, since no one has fully explained how things work (really, it is writers recording their own perceptions... "hey, this is what the brain does too"). Many things in our day-to-day activities are so taken for granted that when you point them out to people, they start noticing how amazing they are. Like how people are now used to getting water from the tap and electricity from their sockets, and how the new generation now grows up with Google to find information. (They can hardly imagine that "historically", books were used to look up information. For them, books have become "introductory" repositories from where you start to learn more about a topic and get "involved".)

Good. I worked more on Dune as well, and I can now generate PDF from HTML. Well, if you know blogspot and XPress and GMail... they basically use an "IFRAME" component that is set to "designMode" using JavaScript. From there onwards, you can manipulate elements within that text control to format it and do other funky stuff. I'm using an existing control called TinyMCE to create the content. Then I process it in six stages into a valid PDF document that looks very slick and nice.

The idea is to build fragments of text and cross-reference them to other things. Then you can start processing things differently and referring from within documents to other parts of, er... whatever it is you're building.

Sorry for the delay in getting new posts up. I really did not have the time, the Internet access, the inspiration or the events to write new things down.

Thursday, November 01, 2007

Application Security Special Interest Group

I'm part of an expertise group at the new company where we attempt to resolve security concerns and develop new awareness of security, to be integrated into the development process from the beginning of a project. The focus is not on specific things like encrypting passwords, but is of a more global nature and may lead to the development of a new service portfolio.

Tonight we have a meeting. My focus is mostly on application architecture, so very high level.

Examples of AS concerns are:
  • Unwanted and unseen information leakage (see recent web2.0 developments)
  • Cross Site Scripting attacks and other browser vulnerabilities
  • Unwanted access
  • Injection vulnerabilities
  • Lack of input validation
  • Insufficient testing on the security of an application
  • Insufficient preparation and evaluation in the architecture and design
A very basic thing that isn't truly considered in many cases is that requirements are written from the perspective of how something should behave, never how something should definitely not behave. Especially in the field of security, this leaves a wide gap that may introduce problems when the developer/writer/architect is not aware of certain vulnerabilities in that area.
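To make that concrete for the injection and input-validation items in the list above, here is a minimal Python sketch (the table, field and function names are invented for the example). The first query trusts its input and is vulnerable; the second states the negative requirement explicitly and lets the driver bind the value:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
    payload = "' OR '1'='1"  # a classic injection input

    # Vulnerable: the input is concatenated into the SQL text, so the
    # payload rewrites the WHERE clause and every row comes back.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + payload + "'").fetchall()
    print("vulnerable query returned:", rows)

    def lookup(name):
        # The negative requirement made explicit: reject anything that
        # is not a plain alphanumeric name before it reaches the database.
        if not name.isalnum():
            raise ValueError("invalid user name")
        # Parameter binding: the driver treats the value as data, not SQL.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    print(lookup("alice"))   # works as intended
    # lookup(payload)        # would now be rejected up front

The point is not this particular check, but that "must not" behaviour is written down and enforced, instead of being left implicit.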

When things develop further, I'll write more on this blog.

Wednesday, October 24, 2007

All gone quiet

It's been very quiet on this blog, since I've needed to arrange a number of things and will be starting a new assignment tomorrow. I only worked very sporadically on the Dune project, but managed to implement some meeting and customer email functionality.

I'm also deciding on a new car for lease purposes. So far I've checked all the brands and I think I've found the car I'm going to take. I'll make one more test drive, then I'll definitely go for it or not. The Netherlands has a very "well" thought-out fiscal system; nobody escapes it. One of the taxed items is the lease car, since it counts as income. So the higher the price, the higher the taxation on it.

My car is probably going to be a Honda Civic Hybrid. It's cheap to lease on a monthly basis, has sufficient power, is very friendly to the environment, doesn't break down, is very quiet so music sounds good, is sufficiently safe, has enough room for my purposes, has a very low "bijtelling" (an amount added to your income for taxation purposes) and the car has many luxury features built in as standard, like climate control, seat heating and so on. Actually, the only accessory extra you can get is leather seats.

Some negative points about the car are that the back seats have a low ceiling because it's a sedan, and the back seat cannot flip forward because of the batteries. Also, the interior design of the controls is a bit tacky, a bit like a cheap hi-fi system with lots of dials and LEDs to make the car look nice. I personally prefer a cleaner design, more like the Volvo.

Friday, September 28, 2007

New machine

I've received the new machine that I ordered and managed to get it installed and working. It has been working great so far. Part of the challenge with this machine was to get Linux and Windows in a dual-boot configuration on a RAID-0 array. Well, after puzzling for a day or two, I managed to get it done.

I installed Windows first. For this, just follow the steps in the manual. You'll need a single floppy disk with the RAID drivers on it, then you allocate a portion of the RAID array to Windows and the rest is similar to what you are used to.

For Linux, the installation needs some more work. I use Ubuntu and booted from the regular Live CD. Then I followed parts of this guide first:

https://help.ubuntu.com/community/FakeRaidHowto

But I did not proceed with the installation of the software. A very important step is running the mkswap / swapon commands, as the regular installation will otherwise halt. I actually continued with the LiveCD installation from this guide:

http://ubuntuforums.org/showthread.php?t=464758

So the use of gparted is totally unnecessary. I partitioned using dmraid and fdisk, then formatted with mkfs and ran mkswap/swapon as in the guides. Then I immediately started the installation process and finished off as in the second guide.
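For reference, a condensed sketch of those steps from the Live CD (the isw_xxxx device name and the partition numbers are placeholders; your array name and layout will differ):

    # Activate the BIOS/fakeraid array so the kernel sees the mapped device
    sudo dmraid -ay

    # Partition the mapped device (create root and swap partitions)
    sudo fdisk /dev/mapper/isw_xxxx

    # Format the root partition, then initialize and enable swap
    sudo mkfs.ext3 /dev/mapper/isw_xxxx3
    sudo mkswap /dev/mapper/isw_xxxx4
    sudo swapon /dev/mapper/isw_xxxx4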

My total system has an Intel E6850 dual core, 2x1GB (667MHz) low-latency memory in a dual-channel setup, and two 10,000rpm WD Raptor drives of 75GB each in RAID-0 configuration.

Sunday, September 23, 2007

How the Mind Works

I bought the book "How the Mind Works" by Steven Pinker. It is a very interesting book about the evolution and operation of the mind. You should of course not expect a book detailing the exact workings, since those are still unknown, but a series of philosophical reflections on the topic.

From my reading so far, I can see how the invention of the computer makes people believe that at some point the mind can be replicated in a machine. But I have serious doubts about this.

I think a couple of items will be very difficult to implement in machines with the technology developed so far (since computers are necessarily "formal" machines that operate on "formal" symbols and need deterministic results):
  • The mind is strongly goal-driven. A computer is not.
  • The mind does not compare formal symbols; its symbols appear to be very fuzzy. We compare and develop rules in our mind that match potential elements with other symbols we perceive or think of. (Is learning the development and extension of those rules?)
  • The mind follows a goal and extracts, from our memory/experience, relevant symbols for further processing. This can even result in a learning exercise (new rules?). The key point is that relevant memories are extracted at an enormously quick pace. So how does a memory extractor know beforehand what is relevant and what is not?
These are already three large problems that a software engineer would have to face and solve before any true intelligence is remotely possible. As a side note on neural networks before we get there: some critics have suggested that only after large amounts of training (100,000 cycles?) does a network show the expected behaviour. The human mind, however, needs a much smaller number of iterations to pick up a new ability or skill.

Hence my point above about rule-based networks. It is as if the memory extractor picks out certain memories (let's say fuzzy mentalese symbols) that match what we are perceiving or comparing, out of which a new rule may be developed and stored in our memory for further processing.
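As a toy illustration of the difference between formal and fuzzy comparison (an analogy only, certainly not a model of mentalese), Python's standard difflib can rank stored "memories" by similarity to a noisy perceived symbol instead of demanding an exact match:

    from difflib import SequenceMatcher

    memories = ["train station", "metro station", "bus stop", "restaurant"]
    perceived = "tran station"  # noisy, non-formal input

    # Formal comparison fails outright on noisy input
    print(perceived in memories)  # False

    # Fuzzy comparison ranks memories by similarity and still
    # extracts the most relevant one despite the noise
    def similarity(a, b):
        return SequenceMatcher(None, a, b).ratio()

    best = max(memories, key=lambda m: similarity(perceived, m))
    print(best)  # 'train station'

A real "memory extractor" would of course need to do this over vast amounts of material without scanning it linearly, which is exactly the open question.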

It would have to be a very intelligent machine that can develop rules and even has the ability to represent (internally!) fuzzy mentalese symbols. We tend to represent items as formal elements, since these are ultimately deterministic. So, in a way, our communication with the machine never gets translated into an "inner" representation in the machine, but always into a formal representation that makes it easier for us to analyze.

Monday, September 17, 2007

Cognitive Science and Artificial Intelligence

In some other article I discussed some of my personal perspectives on how the mind works. I've been reading the book "Introduction to Cognitive Science" whilst in Paris, sitting in one of the brasseries near Gare de l'Est. Not exactly the most picturesque of places, but any other place would probably distract :).

It's a very interesting book with lots of different views, perspectives and theories. It makes clear that current theories consider three different levels of analysis, and these have direct analogies with computers. The lowest level is the hardware level, where the researcher attempts to understand the mind at the level of the synapse and the biology (for a computer, the level of the circuit board, the volts, the current and the silicon components). The middle level looks at components: where and how different components of the mind work together to improve our understanding of the world and to contextualize input. The highest level is the functional level, which describes the representation of meaning and the end results of the overall functions.

All levels are very important. The highest level is where philosophy is most helpful; the lowest level is where biology and technology can measure. One school of thought suggests that the mind is some kind of associative network that is activated by thoughts themselves (or by recollections from long-term memory).

This, to me, somehow suggests that for Artificial Intelligence to really succeed, it must spend time re-implementing the very basics of computers. Actually, to go the route of Haskell, Erlang and stackless Python.

To make a clear distinction... the architecture of a Pentium processor uses a stack by default. This is temporary storage in memory, reserved for the processor, that is used to "track back" into the main line of a program. A program is generally written so that each deeper function becomes more specific. So a generic function calculates discounts for an account in a larger process; a called function retrieves the account; another called function retrieves the applicable discounts.
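As a sketch of that discount example (the function names are mine, purely for illustration), the call hierarchy below is exactly what the processor's stack keeps track of: each call pushes a frame and each return pops one, so execution can unwind back to the caller:

    # Each nested call pushes a stack frame; each return pops it again.
    def get_account(account_id):
        return {"id": account_id, "type": "gold"}       # most specific work

    def get_discounts(account):
        return [0.10] if account["type"] == "gold" else []

    def calculate_discount(account_id, amount):         # generic entry point
        account = get_account(account_id)               # push/pop one frame
        discounts = get_discounts(account)              # push/pop another
        for d in discounts:
            amount -= amount * d
        return amount                                   # unwind to the caller

    print(calculate_discount(42, 100.0))  # 90.0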

Organizing programs this way allows us to get them "in our heads". The complexity of a network is very intensive for us to resolve, compared to hierarchical trees for example. One suggested reason for this is the limited amount of working memory that is dedicated to solving a small problem.

In my imagination, it's as if we have 3-4 CPU registers, a limited L2 cache and a strange kind of memory. This memory does not work through external "locators", but gets "triggered" by input and starts feeding our thought system.

One of the most important things to consider is that AI could benefit from computer programming without stacks: stackless computing. Look up "stackless" Python to see some examples. There are significant differences and possibilities when there is no stack in programming:
  • Programs can run without a pre-determined goal. That is interesting, since programs normally run and act in a deterministic way; we program them to behave systematically and consistently. In the absence of a stack, it is theoretically possible to introduce non-consistent behaviour (which might be a prerequisite for true intelligence).
  • A general batch program architecture organizes a processing loop of some kind that always performs the same hierarchically organized routines. Without a stack, and with different architectures, one can imagine a system that has a certain "memory" of what it did before, possibly allowing for contextual interpretation of certain events.
  • Continuation of a program occurs by passing the address of one function to another (see the sketch after the next paragraph). This can either be a function that complements the called function, or the function to process next.
Stackless computing is significantly harder to architect and program than stack-based computing. The programs more closely resemble a kind of network, and behaviour is no longer (necessarily) deterministic, while determinism is a necessity for resolving a given problem in a consistent manner. Neural networks used in Artificial Intelligence are examples where patterns are identified, but in my imagination it is impossible to build intelligent systems from neural networks alone.
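And a small sketch of the continuation-passing idea from the list above, in plain Python rather than Stackless Python (the function names are invented): instead of calling and returning, each step hands the next function forward, and a small driver loop runs whatever comes next, so nothing ever unwinds a call chain:

    # Each step returns the next function to run (a continuation) plus
    # its argument, instead of returning a value up a call stack.
    def perceive(value):
        print("perceive:", value)
        return categorize, value * 2      # hand off the next step

    def categorize(value):
        print("categorize:", value)
        return respond, value + 1

    def respond(value):
        print("respond:", value)
        return None, value                # no continuation: we are done

    def trampoline(step, value):
        # The driver loop replaces the stack: it simply runs whatever
        # continuation the previous step handed it.
        while step is not None:
            step, value = step(value)
        return value

    print(trampoline(perceive, 3))        # perceive -> categorize -> respond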

I started this story with three distinct levels for analyzing behaviour. The most basic level is the most important, since it's the level where things execute and exchange information. If we attempt to run our functions on incompatible hardware, we're not likely to get good results. Can we redesign the computer not to use stacks, but to require programs that behave as different kinds of networks, are compiled to continue execution forward, never unwind a stack entity, and in the process gather and structure their memory and other functions to develop a sense of context? It might be the key to real intelligence :)

Friday, September 07, 2007

Bye Bye Brasil...

I'm relocating for some time and going back to Holland. The reasons mostly have to do with family and career possibilities/opportunities. Besides that, it's a question of the ability to do a Master's, the working conditions, the violence, and some of the most appalling cases of corruption / abuse of public services / government the world has ever seen :\ (and, in my opinion, a general lack of applied common sense and/or a lack of action). Brasil (the people, the judiciary system, the democracy) will have to throw out a good lot of incompetent or thieving personas that somehow got their positions before it can go forward.

I'm already looking around for opportunities and have some interviews planned. Later on, I'll have to see how things match together. Project Dune is still going forward, though the forums could improve a bit in terms of traffic. I'm reinstalling and moving between computers at the moment, so editing and other activities may be a bit difficult.

Saturday, September 01, 2007

Why quality plans should use wikis

I'm writing up a lot of information in the Project Dune wiki and am starting to realize the potential and importance of the wiki itself. I have been browsing wikis for some time, but this is the first time I am actually editing a lot of pages.

The Project Dune wiki is about software quality and has two main purposes. It documents the project and it documents consolidated knowledge about quality.

As I go through the pages, I experience the difference between a site with static information maintained by a small number of editors and a site with freely editable information and a couple of access constraints. When a reader can become an editor at the touch of a button, it gives a feeling (and real potential) of participation. This is important for human beings, and it helps companies generate a sense of identity.

If a company were to use a wiki to document quality plans, with the discussion and talk extensions, consider the difference in attitude that engineers would have towards the quality policy and plans (compared to companies where only managers own the policy and dictate it 100%). The point is not so much that engineers must actually contribute to the wiki. The difference is the possibility to suggest changes to the policy immediately, and to do so on the record, in public.

But I don't think the wiki by itself is sufficient. I've worked in some companies that have a very archaic view of the quality plan / policy. It is probably comparable to code crush, which is when a developer becomes highly defensive against any proposed changes to the code and may get furious on finding out that someone else messed about in the implementation. Even though the statement is often made that everything is totally open and we're willing to change, it doesn't necessarily hold in practice.

It shouldn't be like that. We should consider the quality plan and policy an approach adopted and documented by all participants (certainly in the case of the wiki). In this view, the quality manager, managers and decision-makers become stewards of this information. Their role isn't to judge content; it's to take care of it and to continuously ensure that the group as a whole steers the quality plan and policy towards better definitions.

Consider opening up your policy and plans to your internal engineers. Trust them to apply common sense (or make sure they receive additional, adequate training to make better decisions). You might also want to back up your wiki with discussion forums. That way, any doubts about the policy can be cleared up by other participants, with the added benefit that the forums also document those conversations and retain that knowledge.

Friday, August 31, 2007

Project Dune developments

I haven't been able to post much recently due to a release being developed at work, some emails from the project team of Dune and the stewardship of a new project infrastructure for Project Dune.

You can check out the forums (phpBB) already. The wiki (mediawiki) is in development and I suspect it will be released very soon.

The project has attracted a couple of new members and is now getting ready to support itself over the following couple of months. We're adding better targets, better planning, better documentation, better milestones, better interaction with the community.

Our hosting is done by SiteGround. It's the first time I've done business with them, but so far things have been fantastic: always up, and their support team responds to requests within minutes. Absolutely awesome service, awesome packages (5000GB/month bandwidth and 500GB disk space per account), and I can certainly recommend them to others. If you sign up, make sure you mention us as a reference; you'll help the project out with a couple of free months of hosting.

So pretty soon the wiki comes out. We'll get all the way back to development and a regular cycle of project documentation when all is done. You should also see a couple of new (active) members on the project.

Wednesday, August 22, 2007

The mind as an activity network

The book I am reading about cognitive science is very interesting. It talks about the mind as a network that is constantly activated by external events. Mostly within the context of a book you read or a conversation you are part of.

Assuming you haven't drunk any alcohol that might impede this network activation... :) When reading a story, certain events become connected and you start to visualize them. They also become intertwined with your past experiences, so the exercise of recalling the actual events that happened at, for example, a restaurant may not be 100% accurate (which, come to think of it, is a problem for justice).

If you read the following story:

"The men are sitting at the table in the diner. The waitress brings the coffee. The coffee spills on the table top. The men exchange documents. The contract is signed".

It sounds like a really boring story, but your past experiences fill in a lot of details here:
  • The men are probably businessmen, because there is a contract and there are documents (but the story doesn't say).
  • The waitress looks like one of those waitress stereotypes you have seen in the movies.
  • The spill on the table is not the whole pot; it's only the size of a coaster, and since you didn't read about any complaint, it might be just a drop.
  • The men are not sitting next to each other, but facing each other.
  • There is perhaps an eerie sense of mystery.
Cognitive science has different explanations for how the mind works. One of them is a network in which activation lights up nodes and, in parallel, spreads to other nodes that are related.
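As a minimal sketch of such a spreading-activation network (the nodes, weights, decay factor and threshold here are all invented for the example, not taken from the book): activating "coffee" partially activates related nodes such as "waitress" and "diner", which is one way to picture how the story above gets its details filled in.

    # Edges carry association strengths between concept nodes.
    edges = {
        "coffee":   {"waitress": 0.8, "diner": 0.6},
        "waitress": {"diner": 0.7, "movies": 0.3},
        "contract": {"businessmen": 0.9, "documents": 0.8},
    }

    def spread(start, decay=0.5, threshold=0.1):
        # Breadth-first spread: each hop multiplies activation by the
        # edge weight and a decay factor, until it falls below threshold.
        activation = {start: 1.0}
        frontier = [start]
        while frontier:
            node = frontier.pop(0)
            for neighbour, weight in edges.get(node, {}).items():
                a = activation[node] * weight * decay
                if a > activation.get(neighbour, 0.0) and a > threshold:
                    activation[neighbour] = a
                    frontier.append(neighbour)
        return activation

    print(spread("coffee"))
    # roughly {'coffee': 1.0, 'waitress': 0.4, 'diner': 0.3}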

Now consider memory... Memory, according to some theorists, is just such an activation of nodes, one that causes you to feel or think something similar to what you have experienced before.

So... when reality is difficult to recall perfectly... it's because memory isn't a perfect retrieval machine. It's an imperfect machine that retrieves the gist of things, plus details that may or may not have occurred, and you can never be sure which.

As you gather more experiences in your network, you'll probably form your opinions and personality as well. This means that you yourself decide which things you find important; those nodes become more visited, and as they become more visited, they grow even stronger and seem more important than anything else. The objectivity of the mind in this sense is a bit of a utopia for sure. Yes, we are able to hide our subjectivity by writing and speaking the right words, but true objectivity won't ever happen! :)

Recalling the events from above, you can think of various experiments, like:
  • Show a couple of sentences and ask people whether or not they actually read each one (this helps to find out how much of the story they imagined and how much was real).
  • Ask them to re-tell the story factually, exactly as it was read.
  • After the story, let people choose words that are relevant and words that are not. This helps to find out more about the ability to associate between events, and about the distance between concepts within the network.
Anyway, I'm just starting to read here... It's very interesting indeed!

Tuesday, August 14, 2007

Is a formal IT development process like ISO and CMM a cognitive substitute?

Not hindered by any lack of knowledge once again, I'm asking myself some questions about what the real factors for IT project success are. These factors are often broken down as planning, skills, communication and formalization of a development process (like ISO, CMM, etc.).

What I miss in the above are other properties that people should have, beyond formalization, communication and skills. It should be easy to defend the claim that project success depends to a very high degree on communication and its quality.

But communication is an expression of our ideas, and my argument would be that the correct ideas must exist before we can communicate efficiently with others to align the team and project with the ideas that would guarantee success. So... what is more important? The efficient communication within the team? Or the formation of the (correct) ideas in the first place?

My thoughts basically revolve around the question: if I were to re-design or re-think software quality as a concept, how would I explore its limitations, shape this area of thought, and come to new conclusions about what, from a cognitive perspective, really goes on in an individual's mind during the development process, and about how strongly this is influenced by communication within the team? I reckon that from this perspective on quality, personality is more important than technical skill.

Some initial thoughts that could start this theory are:
  • Personal traits and attitude that seek out error are far more important than any compliance with rules or regulations.
  • Quality cannot be reliably and efficiently measured without establishing clear criteria with the user or client.
  • Thought pattern development, problem analysis, conflict resolution, behaviour etc. are not generally part of quality theories (unless you accept the very vague terms in ISO/CMM documents that might mean just about anything).
  • The nature and objective of the project should be very clear from the start.
  • Software engineers should understand general errors of thought and learn to voice concerns more readily and harshly.
  • Disposition towards stakeholders may put pressure on engineers to change their response.
A natural reaction when encountering a problem, accident or non-compliance is to establish rules and guidelines that people need to adhere to, in an attempt to prevent similar occurrences. But this can also stimulate a no-thought attitude in which rules and guidelines are simply followed without understanding the actual matter and nature of the job. It can also falsely be used as a means to indicate progress; that can let very serious problems develop until it's too late to recognize them. It may also result in a couple of people who know and a lot of people who follow blindly.

So my focus in this post is to ruminate on identifying the cognitive processes and aspects of the human mind that contribute most to developing quality software. Rather than thinking of process as a set of rules and actions, I regard process as a set of traits, attitudes and motivations that someone needs to develop in order to produce high-quality software.

Traits, attitudes and motivations can together be called company culture. If we understand how to influence this culture from within, we should have the capability to improve the quality that a given person is able (and willing) to deliver.

But there is another problem here. Without a framework for measuring the level of quality produced, the entity has no means of knowing whether its actions are effective. This probably requires frequent peer reviews and other means of measurement, in an attempt to adjust traits, attitudes and motivations, become more effective over time, and eventually contribute more significantly to this process of self-enlightenment.

The problem here is the same as the one indicated at the start. There are (as yet) no true absolute criteria for measuring software quality. Any attempt to establish such criteria has so far resulted in a total mess, since the entities impacted by them attempt to maximize the goals given to them individually. This is because these criteria are often tied to promotion factors or budget allocations.

There have been good successes, precisely because the factors that indicate quality can differ from one project to another. Now, given that this is true, how can we ever expect to develop a framework or "standard" of quality that encompasses each and every situation or project? Standard in the sense of rules, regulations and processes as actions.

Monday, August 06, 2007

Knowledge has arrived!

I ordered books from Amazon: on cognitive science and on distributed intelligent multi-agent systems, plus "Corporation, Be Good!", "The Market for Virtue", "The Language Instinct" and "How the Mind Works"...

The purpose is not just to consume the knowledge within the books. Interestingly, I've found that through these explorations of cognitive science, I've started to think in a more focused way about very philosophical issues, like the meaning of meaning. When making life decisions and so on, it's a good idea to know why you are doing things, and what you are doing them for.

So... if I don't post here for a while... that's the reason!

Saturday, August 04, 2007

And we thought we were clever... :(

Some recent discussions and readings of mine are about cognitive science and behaviour. Behavioural science and anthropology are very interesting areas of study, where some people like to draw similarities between the behaviour of animals and humans.

When we're young, we learn that humans are the superior intelligent species, given that we are able to apply rationality and reason. The very idea of intelligence is a rather loose concept in my opinion; I find it difficult to truly define what intelligence means, even in words.

To give an example of the daftness of human beings, just look at war. When you really think about it, war is a very foolish concept; you could draw analogies with males of animal groups like moose or elephants struggling to come out as leader of the group. A very simple and basic animal activity. I liked the book Humanity by Jonathan Glover. It shows how far we humans still have to go to really exceed ourselves and become the truly intelligent species on this planet.

Actually, somewhere in this book I read about an occurrence in the First World War where soldiers from different sides got together for a Christmas mass, and the next day buried their dead and played soccer on the battlefield. The film "Joyeux Noël" depicts this event and really shows how close to basic animalistic behaviour (and instinct?) we still are. As soon as the soldiers found out that the people on the other side were their equals, they discovered it didn't make any sense whatsoever to shoot one another. This created quite a difficult situation for the officers, because it meant a total impasse in the war! How else to resolve a conflict than through violence?

Recent technological improvements in warfare have mostly been aimed at increasing the distance between the killer and the victim, the objective being to better guarantee the killer's life. From a psychological perspective, killing from a distance is much easier and more comfortable. You don't have to drive a knife into the other guy's belly, the guy doesn't scream in your ear, so it's easier to be done with it. And the objective of it all...? It's all about depriving the other side of resources, whether human or material. There are warbots on the market now that can be armed with guns and have the ability to fire. Will we see largely mechanized armies in the future, where robots do the fighting for us? If so, that really tells us something new about ourselves: how boy-ish we actually are in resolving conflicts (or starting them). Yes, humans fight on both sides, although their political and cultural development may differ significantly. The same existential rules apply, however.

Jonathan Glover talks about moral resources. Think of them as your capability to apply rationality to a given discussion or event. Tribalism (the sense of belonging to a group) can overwhelm these resources, which can lead to very severe levels of violence that on other occasions would be deemed totally inappropriate (read: when applying common sense in regular situations). Some psychological experiments have put very good friends into different groups to analyze their behaviour. In no time they turned into enemies once their motives could no longer be aligned (motives in opposition to the group to which they belonged).

Belief is also a factor. Belief. It's a very dangerous thing that can have enormous consequences, especially when certain people are not willing to consider alternatives to whatever they believe. Belief is highly influenced by propaganda, and as such, propaganda is a strong tool for (knowingly) influencing people's disposition towards others. The trick is to discredit the enemy or the other side and reduce their status or value. Dehumanize them. Once they are dehumanized and reduced to second class, the killing becomes a lot easier.

So, seeing how easy it is for our minds to fall into the trap of extremely simplistic and animalistic behaviour... how much of our reasoning is really governed by reason and intelligence? I reckon that a lot of arguments in discussions are raised in defense of the continuation of our instincts. Even though they sound intelligent, they do not necessarily take (sufficient) account of all the really important factors.

Our beliefs and our emotional disposition towards a subject and towards morality change due to world events. One could argue that the countries that historically had more problems (war) to deal with are also the countries that are now in the first world. Maybe this is also caused by behaviour in general. In Brazil, for example, people absolutely hate conflict and do everything in their power to avoid having to tell someone what they think, or that changes in jobs, for example, need to take place. In Holland and the UK, by contrast, anybody who steps out of line with generally accepted practice will very quickly be called out, or appropriate steps will be taken to make them conform. Maybe this kind of behaviour leads to more wars (to force other countries to behave similarly, or "in line with common sense"), whereas other countries seek to avoid conflict and "live with it".

All in all, the whole point of this post is to reconsider our assumptions about rationality and intelligence. We cannot assume that we are 100% rational thinkers guided by reason, morals and so on. There are definitely animalistic factors involved that influence our beliefs (and beliefs influence our decision-making process). After all, how strongly you feel about something being true determines how strongly you react to a certain event. I'm not sure whether we humans will ever be able to deal with this in full. We'd probably need to emotionally detach ourselves and think like Data from Star Trek. :)

Maybe it's all part of being human.... :)