Wednesday, November 04, 2009

Abstract thought

Thought... what is it? I've posted before on the topic of consciousness and thought, without any final conclusion; the topic is discussed in philosophy with differing opinions on the matter. Some say that thought has mystic properties and is only reproducible in biological matter; some in that camp go as far as to state that thought is purely human and is what separates us from animals. Could be, since there surely are specific differences in the way we learn, reason and behave, even compared to monkeys. See "Ape Genius" here for example. The last video talks about a very important difference, namely pointing. The question posed in that research is whether apes actually think like us, or merely share specific traits that make them behave like us. Looking at the videos, and specifically the parts about cooperation and learning, I have personally come to the conclusion that there is not that much in common. The problem is that apes look more like us than, say, horses do, so we're inclined to believe they think like us for that reason alone. But do they really, once you completely ignore the look-alike?

Back to the question at hand... there are other streams in philosophy that believe thought is computational, and there are once again subdivisions within that camp. Some say that the mind is partly computational, but has other traits that are incredibly hard to model and execute on a computer.

Scientists now believe that they can recreate thought by replicating neural networks. The idea is to think of a common task and then prove that this task can be satisfactorily executed by an artificial neural network running on a computer. The problem is that the neural network is trained for one very particular task, and no reasoning takes place other than the execution of that task. The network expects a certain range of inputs and will calculate the correct output for those. If the inputs fall outside that range, the output is not guaranteed to be useful. You also only get a meaningful output for that specific purpose, not an output that is meaningful across different scenarios.
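To make this concrete, here is a minimal sketch in Python (the task, the architecture and all numbers are my own illustration, not anything from the research mentioned above). A tiny network is trained to approximate sin(x) on [0, pi]; inside that range it answers well, outside it the output is essentially arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: the network only ever sees inputs between 0 and pi.
    X = rng.uniform(0.0, np.pi, size=(200, 1))
    y = np.sin(X)

    # One hidden layer of 16 tanh units, trained with plain gradient descent.
    W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
    lr = 0.05

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)        # hidden activations
        pred = h @ W2 + b2              # linear output
        err = pred - y
        dW2 = h.T @ err / len(X); db2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)  # backpropagate through tanh
        dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    def predict(x):
        return (np.tanh(np.array([[x]]) @ W1 + b1) @ W2 + b2).item()

    print(predict(1.0))   # close to sin(1.0) = 0.84: inside the trained range
    print(predict(10.0))  # nowhere near sin(10.0) = -0.54: out of range, the
                          # output is not guaranteed to mean anything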

The biggest problem here is that we can't think in sufficiently abstract terms, or about the relations between those terms. Because we cannot imagine 'pure thought', what it looks like and how it could be represented differently, we keep pushing buttons in the hope of finding an interesting response somewhere, one that indicates some kind of causal link between the external world and internal processing.

In order to simulate thought in a computer, one must assume that thought is purely computational; otherwise the motivation and the execution of the research are contradictory. Purely computational thought requires us to think differently about representations and find other ways to represent parts of the meta-model of the outside world. The world out there has a form, and when we see a cat, we don't pick it up, put it in our head and reproduce it later in thought. Thought requires us to model those things differently in our minds, such that they can be reproduced later. Whether this is a word or a number is not truly relevant; what is relevant is how the relations between these concepts can be maintained. So reasoning about things isn't exactly about representing the concepts, but about representing the relations between concepts: what things do to one another, or how they are typically related.
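A toy illustration of that point (the triple format and the example facts are assumptions of mine, nothing more): a concept like 'cat' is not stored as a thing, it is just the point where a bundle of relations meet.

    # Relations between concepts, not reproductions of the concepts themselves.
    facts = [
        ("cat", "is-a", "animal"),
        ("cat", "chases", "mouse"),
        ("cat", "has", "fur"),
        ("mouse", "is-a", "animal"),
    ]

    def related(concept):
        # Everything we 'know' about a concept is what it relates to.
        return [(rel, obj) for (subj, rel, obj) in facts if subj == concept]

    print(related("cat"))
    # [('is-a', 'animal'), ('chases', 'mouse'), ('has', 'fur')]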

Singular Value Decomposition, often discussed on this blog in the context of collaborative filtering, has the ability to express patterns of co-occurrence, relations between numbers or items. And here lies the rub: for SVD to be useful, the designer / modeler needs to determine the correct way to put the numbers into the matrix before the calculation is started. The model dictates, for example, that users go into columns and movies go into rows. Then for each combination a number is inserted, yielding a huge matrix of interrelations between instances. The interesting thing is that one movie relates to many users and one user relates to many movies. So, in essence, the preference of a user is modeled within this matrix and related to the type and characteristics of a movie; preference is modeled against characteristics. We don't have any data available about movie characteristics or user preferences directly, but by generating this matrix we can suddenly start reasoning with them, even though the exact meaning of those preferences and characteristics, appearing as numbers, may not be derivable.
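For illustration, here is a tiny version of this with numpy (the ratings are made up, and the layout follows the text: movies in rows, users in columns):

    import numpy as np

    # A small movies-by-users rating matrix; the numbers are invented.
    R = np.array([
        [5, 4, 0, 1],   # movie A
        [4, 5, 1, 0],   # movie B
        [0, 1, 5, 4],   # movie C
        [1, 0, 4, 5],   # movie D
    ], dtype=float)

    U, s, Vt = np.linalg.svd(R, full_matrices=False)

    k = 2  # keep only the two strongest patterns of co-occurrence
    movie_factors = U[:, :k] * s[:k]   # each movie as k latent 'characteristics'
    user_factors = Vt[:k, :].T         # each user as k latent 'preferences'

    # The individual numbers have no obvious meaning, but their product
    # approximately reconstructs the observed ratings:
    print(np.round(movie_factors @ user_factors.T, 1))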

And here it goes... in order to make those preferences and characteristics meaningful, one should have a classification system at hand that can compare classes with one another. Classification means comparing two things and trying to find the most important characteristics that make them match or differ. That operation is different from the calculation performed earlier.
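One assumed way of carrying out that comparison (nothing in the above prescribes a particular one) is cosine similarity between the latent vectors that came out of the decomposition:

    import numpy as np

    # Hypothetical latent vectors, e.g. two rows of the movie_factors
    # matrix from the SVD sketch above.
    movie_a = np.array([ 2.1,  1.9])
    movie_c = np.array([-1.8,  2.0])

    def cosine(a, b):
        # +1 means the same direction, -1 the opposite, 0 unrelated.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cosine(movie_a, movie_c))  # near 0: these two share very little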

So this goes back to our incapacity to think in truly abstract terms. We can get a feeling for something, but if it is abstract, we can't describe it, even though we are certain about incompatibility, incongruence or similarity, for example. A computer model in which these abstractions can be manipulated, translated into something meaningful, classified, and converted back again is going to be a very important step.

I think of the brain not as a huge number of interconnected neurons, but of each neuronal group as some kind of classifier or work item. In that sense, one can question whether it makes sense to simulate 100 billion neurons if the total effect of those biological computations can be simulated more effectively using stricter and cheaper mathematical operations, or by simulating groups of neurons in a 'neural group network' instead, severely reducing the number of dimensions thought to be necessary.
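As a very rough sketch of that dimensionality argument (the group size, the choice of aggregate function and everything else here are my assumptions, not a claim about how the brain works), the reduced network simulates one unit per group instead of one per neuron:

    import numpy as np

    rng = np.random.default_rng(1)

    n_neurons = 100_000                 # stand-in for a biological population
    group_size = 1_000
    n_groups = n_neurons // group_size  # 100 units instead of 100,000

    # Each group is summarised by a single weight vector over 8 inputs.
    weights = rng.normal(size=(n_groups, 8))

    def group_network(inputs):
        # One step of the 'neural group network': n_groups activations,
        # rather than n_neurons of them.
        return np.tanh(weights @ inputs)

    print(group_network(rng.normal(size=8)).shape)  # (100,)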

This is a great question for research: can a machine that is constructed from bits, which are 0s and 1s and therefore have no intermediate state, work with numbers and symbols in such a way that it starts to approximate fluid thought?
