
Wednesday, November 04, 2009

Abstract thought

Thought... what is it? I've posted before on the topic of consciousness and thought. Without any final conclusion, thought is discussed in philosophy with differing opinions on the matter. Some say that thought has mystic properties and is only reproducible in biological matter; some in that camp go as far as to state that thought is purely human and is what separates us from animals. Could be, since there surely are specific differences in the way we learn, reason and behave, even compared to monkeys. See "Ape Genius" here for example. The last video talks about a very important difference, namely pointing. The question posed in the research is whether apes think more or less like us, or merely share specific traits that make them behave like us. Looking at the videos, and specifically at the parts about cooperation and learning, I have personally come to the conclusion that there is not that much in common (the problem is that apes look more like us than, say, horses, so we're inclined to believe they think like us for that reason alone. But are they really the same once you completely ignore the 'look-alike'?). Back to the question at hand... there are other streams in philosophy that believe thought is computational. Then there are once again subdivisions within that camp. Some say that the mind is partly computational, but has other traits that are incredibly hard to model and execute on a computer, for example.

Scientists now believe that they can recreate thought by replicating neural networks. The idea is to pick a common task and then prove that this task can be satisfactorily executed by an artificial neural network running in a computer. The problem is that the neural network is trained for one very particular task, and no reasoning takes place other than the execution of that task. The network expects a certain range of inputs and will calculate the correct output based on those. If the inputs fall outside that range, the output is not guaranteed to be useful. You also only get a meaningful output for that specific purpose, not an output that is meaningful in different scenarios.
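
To make this concrete, here is a minimal sketch (my own illustration, with a toy task and made-up settings, not taken from any particular research) of a tiny network trained to approximate sin(x) on the interval [0, π] only. Inside that range the prediction is sensible; far outside it, the output has no reason to mean anything:

```python
import numpy as np

rng = np.random.default_rng(0)

# Train on one narrow task only: approximate sin(x) for x in [0, pi].
x = rng.uniform(0.0, np.pi, size=(200, 1))
y = np.sin(x)

# One hidden layer with tanh activation, trained with plain gradient descent.
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = (pred - y) / len(x)             # squared-error gradient
    gW2 = h.T @ err; gb2 = err.sum(axis=0)
    gh = err @ W2.T * (1.0 - h ** 2)
    gW1 = x.T @ gh; gb1 = gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def predict(v):
    h = np.tanh(np.array([[v]]) @ W1 + b1)
    return (h @ W2 + b2).item()

print(predict(1.5))    # inside the training range: roughly sin(1.5) ≈ 0.997
print(predict(10.0))   # far outside the range: no reason to trust this number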

The biggest problem here is that we can't think in sufficiently abstract terms, nor about the relations between those terms. Because we cannot imagine 'pure thought', what it looks like and how it could be represented differently, we keep pushing buttons in the hope of finding an interesting response somewhere that indicates some kind of causality between the external world and internal processing.

In order to simulate thought in a computer, one must assume that thought is purely computational; otherwise the motivation and the execution of the research are contradictory. Purely computational thought requires us to think differently about representations and find other ways to represent parts of the meta-model of the outside world. The world out there has a form, and when we see a cat, we don't pick it up, put it in our head and reproduce it later in thought. So thought requires us to model those things differently in our minds, such that they can be reproduced later. Whether this is a word or a number is not truly relevant. What matters is how the relations between these concepts can be maintained. So reasoning about things isn't exactly about representing the concepts, but about representing the relations between concepts: what things do to one another, or how they are typically related.

Singular Value Decomposition, often discussed on this blog in the context of collaborative filtering, has the ability to express patterns of co-occurrence, or relations between numbers or items. And here lies the rub. For SVD to be useful, the designer / modeler needs to determine the correct way to put the numbers into the matrix before the calculation is started. The model dictates, for example, that users go into columns and movies go into rows. Then for each combination a number is inserted, yielding a huge matrix of interrelations between instances. The interesting thing is that one movie relates to many users and one user relates to many movies. So, in essence, the preference of a user is modeled within this matrix and related to the type and characteristics of a movie; preference is modeled against characteristics. We don't have any data available about movie characteristics or user preferences directly, but by generating this matrix we can suddenly start reasoning with them, even though the exact meaning of those preferences and characteristics, appearing as numbers, may not be derivable.
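
As a small illustration (mine, with made-up ratings, and with users in rows rather than columns, which only swaps the roles of the two factor matrices), this is roughly what that decomposition looks like in practice:

```python
import numpy as np

# Made-up ratings; rows are users, columns are movies.
ratings = np.array([
    [5, 4, 1, 0],
    [4, 5, 0, 1],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)

k = 2                              # keep only the two strongest latent factors
user_prefs  = U[:, :k] * s[:k]     # each row: a user's position in "taste" space
movie_chars = Vt[:k, :].T          # each row: a movie's position in the same space

# The individual numbers have no obvious meaning, as noted above, but users
# (and movies) with similar rating patterns end up close together in this space.
print(np.round(user_prefs, 2))
print(np.round(movie_chars, 2))
```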

And here goes... in order to make those preferences and characteristics meaningful, one needs a classification system at hand that can compare classes with one another. Classification means comparing two things and trying to find the most important characteristics that make them match or differ. That operation is different from the calculation performed earlier.
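
Continuing the sketch above, that second operation can be as simple as a similarity measure over the latent vectors, something the decomposition itself never computes:

```python
def cosine_similarity(a, b):
    """Compare two latent vectors; the SVD step above never does this by itself."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Movies 0 and 1 were rated alike, movies 0 and 2 were not
# (using the made-up movie_chars from the previous sketch).
print(cosine_similarity(movie_chars[0], movie_chars[1]))   # close to 1
print(cosine_similarity(movie_chars[0], movie_chars[2]))   # much lower
```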

So this goes back to our incapacity to think in truly abstract terms. We can get a feeling for something, but if it is abstract, we can't describe it, even though we are certain about its incompatibility, incongruence or similarity, for example. A computer model in which these abstracts can be manipulated, translated into something meaningful, classified, and then mapped back again is going to be a very important step.

I think of the brain not as a huge number of interconnected neurons, but of each neuronal group as some kind of classifier or work item. In that sense, one can question whether it makes sense to simulate 100 billion neurons if the total effect of those biological computations can be simulated more effectively using stricter and cheaper mathematical operations, or by simulating neuron groups in a neural group network instead, severely reducing the number of dimensions that are thought to be necessary.
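
A purely speculative sketch of what I mean: treat each neuronal group as one cheap unit (here a single logistic "classifier" with random made-up weights) and wire a handful of groups together, instead of simulating every neuron inside each group:

```python
import numpy as np

rng = np.random.default_rng(1)

def group_response(inputs, weights, bias=0.0):
    """One whole neuronal group collapsed into a single logistic unit."""
    return 1.0 / (1.0 + np.exp(-(inputs @ weights + bias)))

# A tiny "group network": three groups respond to a stimulus,
# and a fourth downstream group classifies their combined output.
stimulus = rng.normal(size=8)
first_stage = np.array([group_response(stimulus, rng.normal(size=8))
                        for _ in range(3)])
decision = group_response(first_stage, rng.normal(size=3))
print(decision)
```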

This is a great question for research. Can a machine constructed from bits, which are 0's and 1's and therefore have no intermediate state, work with numbers and symbols in such a way that it starts to approximate fluid thought?

Monday, August 10, 2009

Philosophy of mind and innovation

In this post I'm going to talk about a possible measure of success for creating new innovations in software. To start off, I'll give some links to put this into context: here's a semi-graphical timeline of the development of the GUI, and here's one of my posts from 2007, where I talk about how working with computers is becoming more of a conversation than it used to be. Mainframes and the like just accepted batch jobs: you'd type the command, hit enter and that was it. In web 2.0, the computer is more or less looking over your shoulder at what you type, point at and do on the screen, and if it thinks it can help you out with something, it pops up a helpful hint next to your focal point. This is related to what we know about, and how we think about, our bodies and minds.

And all of this started with philosophy, thus with the Greeks: Aristotle, Plato and Socrates. The first thoughts there, however, were metaphysical. What is the world made of? What does it mean to be alive? In this context, the most important thought is the separation of body and soul. For many years after that, most thinking was based on the philosophy of the Greeks. The Middle Ages turned this around a bit with the introduction of religious thought into philosophy itself; many people tried to explain things through the use of religious ideas. And then came the others who founded modern philosophy, among them René Descartes.

Descartes is one of the founders of modern philosophy. In his discourse about the philosophy of mind, he introduced the concept of dualism. He also considered ideas about the working of the human body. The interesting thing here is that his explanation used mostly analogies and was inspired by the progress of machinery, tools and things that existed in real life at the time. In those days, the functioning of nerves and blood vessels wasn't entirely clear; for 1,500 years, people had thought that "animal spirits" governed the human body. During the time of Descartes, however, certain developments were underway, like the beginnings of hydraulics, and basic mechanical machines could be built. Descartes knew about those and imagined the body as a kind of automaton as well. To the best of his knowledge, he tried to explain how the body worked, using the concepts he had available to him. After Descartes, thinking about psychology, the mind, the brain and how it all works together really took off. In a sense, you could say that the way we think about these things simply notched down one level.

At this time, the thoughts about the mind didn't really go further than: "there are mechanics at work that bring sensation to a central point somewhere in the brain". This central point was imagined to be a mystic piece of the puzzle [the soul?], where thoughts occur and which can be said to be the 'I, self' when you think. People observing the machines of those days clearly understood that such a machine was not alive, so they didn't (perhaps not all of them) imagine that a machine could actually produce human thought. But some people who did not understand its function did say that the machine could be made to do anything we ask of it.

The function and design of the machines were still used as analogies to explain how other processes could work in the human body. Research in physics and mechanics has thus certainly helped developments in medical science, for example in understanding the function of the heart as a pump and the fluid dynamics of blood in the vessels.

This is just a tiny glimpse of everything that's happened in thinking about the mind, surely, but there's not a lot of space left in this post to continue :). It's best to read it from this excellent source I found here: courtesy of Dr. C. George Boeree.

So, philosophy started with the Greeks, seeding a new science called psychology along the way, and at the start of the 20th century we began building computers.

Why is all this "philosophy stuff" relevant?

Well, if you look at the developments in philosophical thinking and the developments in machinery, they seem to go pretty well in step. This is not because they are directly tied together, but because developments in neuroscience, psychology, electron microscopy and imaging techniques are needed in order to... start asking the right questions. And philosophy, psychology, design, artificial intelligence and the sciences are the most important research sources for finding those questions and, hopefully, the answers.

Analogous to comparing the mind to the technology of every age, we also see that we (re)construct our ancient tools in whatever newer technology is available. But hold on there... shouldn't this be new tools in new technology?

Consider the desktop, for example. The GUI timeline at the top shows how we introduced the computer to the general public. The GUI works great, and I'm not saying it should never have been developed, but there are very old concepts still present in it. Take the desktop and its applications, for example:
  • The trash bin is literally there and even called the same.
  • MS Powerpoint is basically a digital slideshow projector with edit functions.
  • MS Word is clearly inspired by writing on paper, thus the typewriter.
  • The "file manager" looks like a filing cabinet in the older GUI's, but this is slowly being replaced by "explorers". The explorer still has a hierarchical view of files however.
I think one of the reasons why this was done is to make it easier for users to familiarize themselves with the environment in which they were 'operating things'. But now that everyone is accustomed to the use of a computer, more or less, there doesn't seem to be a reason to maintain this ancient set of tools in this new environment.

So, one of the problems in software innovation is related to imagination; it's certainly not about technology. A good example of starting to get rid of Powerpoint is Prezi. Prezi is reminiscent of the concept of mindmapping, and that is exactly what makes it great. What is the most important difference between Prezi and Powerpoint?

Powerpoint is a digitalization of the slideshow projector. Mindmapping is an attempt to convey thoughts and the relations between them. The first puts an old thing into a new jacket with more features. The other starts from understanding the mind and cognition, knowing how the mind works, how we consume information, what allows us to make proper distinctions and better judgements, how we communicate and how we derive the correct context, and then developing a method to enable that process.

So, to finish off, I'll return to my 2007 post "Conversation with the machine" and its question:
"How can/will/should user-machine conversations evolve from this point onwards?"
I think innovation should focus on following and enabling the natural flow of cognitive processes, not on the reconstruction of ancient tools of communication and processing, like mail becoming e-mail. Before you feel the urge to send out a mail to someone, there is a reason, a motivation. What if we started from there instead, and just considered the computer the ultimate tool in visualization and computation? Oh yeah, and it has connectivity as well.

(Footnote: there are certainly other tools, like Keynote + iMovie and Adobe Flash, that can be used to produce a Prezi-like presentation. Prezi is just mentioned because I think it is a good example of how to think "out of the box".)