Monday, December 21, 2009

The brain in a vat

Daniel C. Dennett opens "Consciousness Explained" with a description of a demon trying to trick a brain inside a vat into thinking it's actually inside its real body and world, with real worldly experiences. You can easily compare this account to the hooked-up people inside the Matrix, where each individual's brain is fooled into having a body, relatives, material things and the day-to-day worries of their lives, while actually being hooked up to a large computer, the Matrix, which conjures up these illusions for every brain. Dr. Dennett gives this a bit more thought and considers what you would need to do for the brain to get tricked like this. Now, let us assume that, contrary to the story of the Matrix, the people in this thought experiment actually have had real-world experiences to compare these senses to. First of all, you'd need to be able to simulate the senses: vision, smell, hearing, taste and so forth. And here comes the difficult part. The demon or the computer needs to be able to detect the chosen action and react to it appropriately, feeding back what the physical world would feed back for such actions, in a way that this brain is used to.

In his example, he refers to lying on the beach with your eyes closed and running your fingers through the sand, and the feeling the coarse grains of sand give your fingers. You could also think of the action of jumping, the feeling of being in the air for just a little while and the thump when you land back on your feet. Or whistling in some echoey tunnel made mostly of metal and the sound it returns to your ears, or the joint coordination of some sports game, etc...

So, the difficulty of tricking a brain is in the first place hooking it up in the right places, sending it the right kinds of signals and reading the brain at the right places. A more difficult thing, however, is that outside the brain there's a physical world that the brain expects to get feedback from in very specific ways. Especially once it has experience in this world, anything that deviates from that will feel strange. The illusion will wear off very quickly because of these little differences (although you could argue that if the illusion is near-perfect, the brain will probably start doubting itself instead?).

The point here is that "Artificial Intelligence" isn't as close as some people may try to make you think. Computers live in some kind of "model" of reality; it's a transduction of what is really there to be perceived. Some other transduction is very likely affecting us as well (and trying to tell someone else what you experience, or what a bat experiences, is very difficult to achieve. We simply cannot easily imagine how it'd be to experience something else entirely). This model is an electronic or otherwise suitable representation of the world around a robot or AI, and therefore subject to our imagination and experience, but not necessarily the most ideal one.

Worse yet, the biological entities around us evolved in the physical world, presumably along the lines of Darwin's Origin of Species, where evolution is largely a function of natural selection among creatures constantly (needing to be) adapting to their environment. The same ideas can be found in genetic algorithms, an engineering method where algorithms or their parameters are encoded as genetic material and then mutated and crossed over just like biological material. This is sometimes useful in cases where the actual functions (very long ones!) are very difficult to discover by hand.
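To make that concrete, here is a minimal genetic-algorithm sketch in Python. The task (maximize the number of 1-bits in a bitstring), the population size and the mutation rate are all arbitrary illustrative choices; a real use would encode actual algorithm parameters in the genome instead.

```python
import random

# Toy fitness function: count of 1-bits in the genome (the "OneMax" problem).
# In a real engineering setting this would score a candidate set of parameters.
def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.01):
    # Flip each bit with a small probability, mimicking biological mutation.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice the front of one parent onto the back of the other.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def evolve(genome_length=50, population_size=30, generations=100):
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Natural selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 2]
        # Refill the population with mutated offspring of random parent pairs.
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(population_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best))
```

The whole trick is in the loop: score every candidate, let the fitter half survive, and refill the population with mutated crossovers of the survivors, much as biological material is mutated and recombined.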

In supervised learning, where the desired outputs are known exactly, artificial neural networks can be taught these patterns by feeding in the inputs, checking the outcome and recalibrating the parameters of the network until its performance no longer improves. There are difficulties in this kind of training: a network that is too large overfits the training data and performs badly on unknown cases it may later have to predict, while a network that is too small is generally not capable of holding the information needed to capture the actual function, so it underperforms.
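As a small, hedged example of what that training loop looks like in practice, here is a tiny network learning XOR with plain gradient descent in Python/numpy; the hidden-layer size, learning rate and step count are arbitrary illustrative choices.

```python
import numpy as np

# Toy supervised-learning setup: learn XOR from four labelled examples.
# The hidden-layer size is the "capacity" knob discussed above:
# hidden = 1 cannot represent XOR at all (underfitting), while a huge
# hidden layer on a handful of noisy samples would happily overfit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
hidden = 8
W1, b1 = rng.normal(size=(2, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: feed the inputs through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Compare with the known answers and propagate the error backwards,
    # recalibrating every parameter a little (plain gradient descent).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should end up close to [[0], [1], [1], [0]]
```

Shrinking `hidden` to 1 gives a network that simply cannot hold the function, the underfitting case; the overfitting case only shows up once the data is larger and noisier than this toy example.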

So, for neural networks, one either ends up remodelling the entire world and calculating elements of it so that a network can be trained to recognize or predict them, in which case you're better off sticking to that representation of the world instead. The alternative is not pretty either: one must find a method to evolve a network so that it has the correct architecture for the job at hand, and train it without knowing the elements in advance. More so because these more complex networks generally don't have classification outputs like most ANNs do; instead, I think, they probably thrive on the state within the network itself, induced on their neural surroundings.
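This is roughly the territory that goes by the name of neuroevolution (NEAT and its relatives are the well-known examples). Purely as a sketch of the idea, and not a description of any particular system, the loop below evolves both the weights and the number of hidden units of a toy network against an arbitrary stand-in task (approximating a scaled sine curve); every detail here is an illustrative assumption.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(net, x):
    # net = (hidden_weights, output_weights): a single-input, single-output toy.
    hidden = [sigmoid(w * x) for w in net[0]]
    return sigmoid(sum(v * h for v, h in zip(net[1], hidden)))

def fitness(net):
    # Arbitrary stand-in task: approximate sin(x) rescaled into [0, 1].
    xs = [i / 10 for i in range(-30, 31)]
    return -sum((forward(net, x) - (math.sin(x) + 1) / 2) ** 2 for x in xs)

def mutate(net):
    hidden_w = [w + random.gauss(0, 0.3) for w in net[0]]
    output_w = [v + random.gauss(0, 0.3) for v in net[1]]
    # Occasionally change the architecture itself by adding a hidden unit.
    if random.random() < 0.1:
        hidden_w.append(random.gauss(0, 1))
        output_w.append(random.gauss(0, 1))
    return (hidden_w, output_w)

population = [([random.gauss(0, 1)], [random.gauss(0, 1)]) for _ in range(40)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(30)]

best = max(population, key=fitness)
print("hidden units evolved:", len(best[0]), "fitness:", round(fitness(best), 3))
```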

Monday, December 14, 2009

Combinatorial explosions

I'm reading, next to "Consciousness Explained", a book by one of my favourite writers, Steven Pinker. This one is called "The Blank Slate", otherwise known as the tabula rasa, where the blank slate refers to the idea of the brain being a kind of blackboard that is written to by experience, as opposed to certain capabilities and intricacies being preprogrammed, otherwise called innate. Pinker's writing is everything from an exploration to various explanations, and not least a (strong) argument about how we should interpret this "blank slate" in the search for an explanation of intelligence and consciousness. It's probably best to see his presentation over at TED talks. But be aware that this presentation doesn't cover the topic of the book entirely and that there are many more little facts and explorations that the presentation certainly doesn't touch on. Other than that, it provides a good insight into his way of thinking and his scientific approaches and validations.

The combinatorial explosion comes into effect when the number of variables increases by such an amount that a true understanding of every state, and of how states influence each other, becomes an impossible task. In the examples given, there's the case where the human genome was analyzed and in some reports counted at about 30,000 genes. The total DNA is quite a bit more than that, but only about 1.5% of it codes for these 30,000 genes; the rest is considered "junk DNA" (apparently and allegedly). Looking up the numbers in different sources, you get different figures here as well. Some sources say 35,000, others 30,000, and Wikipedia claims 23,000. Which one is the true number, we'll never know :).

There's a kind of worm, though, that has roughly 18,000 genes (or 20,000, according to other claims?). This worm has 302 neurons, more or less exactly, avoids certain smells and crawls around looking for food. How come such a worm with 18,000 genes is so strikingly different in behaviour and intelligence from us humans, who have "only" 35,000 genes? Are we really that much like worms?

If you consider these gene counts on a linear scale, then this must indeed be shocking. But genes interact amongst themselves and may inhibit or activate other genes that create different proteins, which in turn influences the way an organism develops and grows, and on which time-scale in its life.

Here we go... The number of ways 18,000 genes can interact amongst themselves is already staggering, but with 35,000 genes it is a factor 2^17,000 larger. Whoops! That has the potential to be substantially more complex than anything we've ever seen before :). This doesn't mean that all these genes do interact, but the amount of information cannot easily be captured by just listing the number of genes. The actual information is contextually determined by the genes that actually do interact together, and by whether there are also combinations of 3, 4 or 5 genes that form some kind of composition together. In those cases the complexities go up to 3^x or even 4^x (before, we only assumed two options per gene: taking part in an interaction or not).
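A quick back-of-the-envelope calculation makes the point; the gene counts are just the figures quoted above, nothing more precise than that.

```python
from math import comb

for n in (18_000, 35_000):
    # Pairwise interactions alone: "n choose 2".
    pairs = comb(n, 2)
    # Allowing groups of 3 or 4 genes to act together grows much faster still.
    triples, quads = comb(n, 3), comb(n, 4)
    print(f"{n} genes: {pairs:.2e} pairs, {triples:.2e} triples, {quads:.2e} quads")

# Every possible subset of genes that could be "on" together: 2**n states.
# The ratio between the two genomes' state counts is 2**(35_000 - 18_000),
# i.e. the factor 2**17,000 mentioned above.
ratio_digits = len(str(2 ** (35_000 - 18_000)))
print(f"2**17,000 has about {ratio_digits} decimal digits")
```

Even sticking to pairs, 35,000 genes already allow over half a billion possible pairings, and letting every subset of genes be "on" or "off" together gives the 2^n state counts whose ratio is that factor of 2^17,000.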

Now, it is interesting to find out how these genes interact and how much influence they really have on our thinking and behaviour. We generally consider the environment a very important element in the analysis of behaviour and of growing up, sometimes to such an extent that the entire evaluation is attributed to how some environment determined a person's actions and behaviour. But if we find out that general behaviour, personality twists, general motives and so on aren't necessarily determined by the environment but are hardcoded in genes, and that it's just our experience that allows us to exhibit this behaviour or not, then the picture changes severely. (I'm starting here with my own thoughts, by the way; this is not necessarily what was written in the book.)

In that case, experience is more of a method to determine probabilities, elements of chance and other things that either inhibit our motives or stimulate them. In this view, our personality and the things we do are genetically determined, while our dynamic interaction and behavioural choices are mostly governed by experiences from the environment. The general observation one can draw from this is that personality twists, likes and dislikes probably come out as characteristic features of a person, but they are genetic, whereas specific choices not to do something, or to go for it all the way, are potentially given by variables in the environment. This sheds a totally new light on behaviour and how we perceive it in general.

Another thing that I found interesting is the way this interlinks with computer science books about modularity and the composition of specific system parts, each with a particular purpose. Rather than thinking of a computer as a single thing, you can divide it into multiple elements like the CPU, the hard drive, the graphics card, etc. But if you look closer, the CPU is a large number of transistors with a number of pins plugging into the motherboard somewhere, such that it is linked with the memory and buses on the motherboard that give it the power it has. The CPU is often called the central part and the other parts ancillary to it (well, you might also argue that the motherboard is the main part, because that's what everything is slotted into).

In this way, you can look at the entire computer from totally different views, each explaining very different purposes and levels of abstraction. Looking at the transistors of the CPU, there is no point in discussing why a word processor does all the things it is told to do; the level of detail is too fine to consider it there. A more appropriate level is to consider the functions of the computer as a whole, and then to explain how people interact with it, why a computer reacts and acts the way it does (it's been told to do that by its designers), and so on. There is also the level of how devices interact, which is interesting. To handle keystrokes, for example, you could consider yourself part of the system: the input provider. The keyboard is a transducer that converts a small burst of electricity into a scancode, which is read from the USB port of the computer. This scancode, a byte, is then processed by the CPU and handed to the OS and the program, which determine whether the scancode should be discarded or accepted. The program may then decide to attach the result to some array of bytes it has in memory, completing a long line of character strings. For feedback into this entire cycle, the graphics card gets a pointer to this array and repaints the screen when needed.... PHEW!
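To make the layering tangible, here's a deliberately toy-sized model of that keystroke path in Python; the names and the little scancode table are illustrative stand-ins, not a description of a real driver stack.

```python
# A heavily simplified model of the keystroke path sketched above.
# Real keyboards, USB drivers and window systems involve many more layers.

SCANCODE_TO_CHAR = {0x04: "a", 0x05: "b", 0x06: "c", 0x2C: " "}  # tiny subset

def keyboard(pressed_key):
    """Transducer layer: a physical key press becomes a scancode (one byte)."""
    reverse = {char: code for code, char in SCANCODE_TO_CHAR.items()}
    return reverse[pressed_key]

def operating_system(scancode):
    """Driver/OS layer: accept known scancodes, discard anything else."""
    return SCANCODE_TO_CHAR.get(scancode)

class WordProcessor:
    """Program layer: attach accepted characters to an array in memory."""
    def __init__(self):
        self.buffer = []

    def handle(self, char):
        self.buffer.append(char)
        repaint(self.buffer)  # feedback into the cycle: the screen gets updated

def repaint(buffer):
    """Graphics layer: redraw the 'screen' from the program's buffer."""
    print("screen:", "".join(buffer))

app = WordProcessor()
for key in "abc ab":              # the user presses a few keys
    scancode = keyboard(key)
    char = operating_system(scancode)
    if char is not None:          # discarded scancodes never reach the program
        app.handle(char)
```

Each function only knows about its own level, which is exactly the point: the word processor never sees electricity or scancodes, and the keyboard never knows what a text buffer is.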

For each of these things, you can go down even to the signal level, and further still to the physics of electrons. Explaining this process through electrons is going to be a long sit-in, so let's not do that here. At the highest level, it just seems to make sense: you press a key on the keyboard and that makes the character appear on screen where the cursor is... Is that so hard to understand? :).

Similarly, in the understanding of our thought processes, there surely seems to be good room for finding out how networks interact and process or store information. I don't think the science of neural networks can be called complete, in the sense that we know everything about them and what they do. One idea, for example, is that neural networks are very good at taking in one signal and outputting another, basically responding directly to signals. But the rather amateurish ways in which we compose neural networks don't yet provide handholds for doing more with them. Biological networks may have a lot of "failover cells" in them that are not strictly necessary to make something function. Also, the human brain consists of roughly 100 billion neurons, but some 60 billion of those are needed for direct muscular responses, instinct and movement (the reptilian brain). That leaves "only" 40 billion cells for human reasoning, visual perception, auditory perception, speech and other functions. Hmmm... that does shed a different light on things.

Basically, numbers by themselves don't give you information about actual complexity. It's the interactions between these components, what they can do together in unison, that yields the real power.