Sunday, September 23, 2007

How the Mind Works

I bought the book "How the Mind Works" by Steven Pinker. It is a very interesting book on the evolution and operation of the mind. You should of course not expect a book detailing the exact workings, since those are still unknown, but rather a series of philosophical reflections on the topic.

From what I have read so far, I can see how the invention of the computer makes people believe that at some point the mind can be replicated in a machine. But I have serious doubts about this.

I think a few things will be very difficult to implement in machines with current technology (since computers are necessarily "formal" machines that operate on "formal" symbols and need deterministic results):
  • The mind is strongly goal-driven. A computer is not.
  • The mind does not compare formal symbols; its symbols appear to be very fuzzy. We compare and develop rules in our mind that match potential elements against other symbols we perceive or think of. (Is learning the development and extension of those rules?)
  • The mind follows a goal and extracts, from our memory and experience, relevant symbols for further processing. This can even result in a learning exercise (new rules?). The key point here is that only relevant memories are extracted, and at an enormous pace. So how does a memory extractor know beforehand what is relevant and what is not?
These are already three large problems that a software engineer would have to face and solve before any true intelligence is remotely possible. As a side note on neural networks: some critics have suggested that a network only shows the expected behaviour after large amounts of training (100,000 cycles?). The human mind, however, needs a much smaller number of iterations to pick up a new ability or skill.
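To make that training-cycle point concrete, here is a toy sketch (my own illustration, not from the book): a tiny two-layer network learning XOR from only four examples, counting how many full passes over the data it needs. The network size, learning rate, and error threshold are all my own arbitrary choices.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: four examples, not linearly separable, so a hidden layer is needed
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# 2-2-1 network; each weight row is [w_input1, w_input2, bias]
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

epochs = 0
while True:
    epochs += 1
    err = 0.0
    for x, t in data:
        h, o = forward(x)
        err += (t - o) ** 2
        # plain online backpropagation
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_o[i] -= lr * d_o * h[i]
        w_o[2] -= lr * d_o
        for i in range(2):
            w_h[i][0] -= lr * d_h[i] * x[0]
            w_h[i][1] -= lr * d_h[i] * x[1]
            w_h[i][2] -= lr * d_h[i]
    if err < 0.01 or epochs >= 100000:
        break

print("epochs needed:", epochs)
```

Even for this trivial task the loop runs through the four examples thousands of times; a person shown four labelled examples would not need anything like that many repetitions.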

Hence my point above about rule-based networks. It is as if the memory extractor picks out certain memories (let's say mentalese fuzzy symbols) that match what we are perceiving or comparing, out of which a new rule may be developed and stored in our memory for further processing.
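A very crude analogy of that fuzzy matching can be sketched in code. This is purely illustrative and my own invention: the "memories" are just strings, the similarity measure is Python's `difflib.SequenceMatcher`, and the relevance threshold is arbitrary — real mentalese symbols would be nothing so formal.

```python
from difflib import SequenceMatcher

# A hypothetical "memory store" of past experiences, encoded as strings
# (a crude stand-in for fuzzy mentalese symbols -- purely illustrative)
memories = [
    "dog barking at the mailman",
    "cat sleeping on the couch",
    "dog chasing a ball in the park",
    "rain drumming on the window",
]

def recall(cue, store, threshold=0.3):
    """Return memories ranked by fuzzy similarity to the cue,
    keeping only those above a relevance threshold."""
    scored = [(SequenceMatcher(None, cue, m).ratio(), m) for m in store]
    return [m for score, m in sorted(scored, reverse=True) if score > threshold]

print(recall("a dog barking in the park", memories))
```

Note how the interesting part — deciding what counts as relevant — is hidden in a fixed similarity function and a fixed threshold, which is exactly the part the mind somehow gets right without an exhaustive scan.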

It would have to be a very intelligent machine that can develop rules and even has the ability to represent fuzzy mentalese symbols internally. We tend to always represent items as formal elements, since these are ultimately deterministic. So, in a way, our communication with the machine never gets translated into an "inner" representation in the machine, but always into a formal representation that makes it easier for us to analyze.
