Reading the book so far, I can see how the invention of the computer makes people believe that at some point the mind can be replicated in a machine. But I have some serious doubts about this.
I think a couple of things will be very difficult to implement in machines with current technology (since computers are necessarily "formal" machines that operate on "formal" symbols and need deterministic results):
- The mind is strongly goal-driven. A computer is not.
- The mind does not compare formal symbols; what it works with seems much fuzzier. We compare and develop rules in our mind that match potential elements against other symbols we perceive or think of. (Is learning the development and extension of those rules?)
- The mind follows a goal and extracts, from our memory and experience, relevant symbols for further processing. This can even result in a learning exercise (new rules?). The key point is that only the relevant memories are extracted, at an enormously quick pace. So how does a memory extractor know beforehand what is relevant and what is not?
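To make the second point concrete, here is a toy sketch of my own (not from the book): a "formal" comparison is all-or-nothing, while a "fuzzy" comparison yields a degree of match. The feature sets and the Jaccard similarity are just stand-ins for whatever the mind actually does.

```python
def formal_match(a, b):
    # A formal symbol either is or is not the other symbol.
    return a == b

def fuzzy_match(a, b):
    # Degree of overlap between two feature sets, from 0.0 to 1.0
    # (Jaccard similarity as a placeholder for a fuzzy comparison).
    union = a | b
    return len(a & b) / len(union) if union else 1.0

cat = {"furry", "four-legged", "whiskers", "meows"}
lynx = {"furry", "four-legged", "whiskers", "tufted-ears"}

print(formal_match(cat, lynx))           # False: formally, distinct symbols
print(round(fuzzy_match(cat, lynx), 2))  # 0.6: yet they largely overlap
```

The formal comparison throws away exactly the partial similarity that our minds seem to exploit effortlessly.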
Hence my point above about rule-based networks. It is as if the memory extractor picks out certain memories (let's say fuzzy mentalese symbols) that match what we are perceiving or comparing, out of which a new rule may be developed and stored in memory for further processing.
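A hypothetical sketch of this "memory extractor" idea, purely my own illustration: score every stored memory against the current percept and keep only those above some relevance threshold. All names and the similarity measure are assumptions, not anything the book proposes.

```python
def similarity(a, b):
    # Overlap between a percept and a memory, both taken as feature sets.
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def extract_relevant(percept, memories, threshold=0.3):
    # Return memories ordered by relevance, dropping those below threshold.
    scored = [(similarity(percept, m), m) for m in memories]
    scored.sort(key=lambda sm: sm[0], reverse=True)
    return [m for score, m in scored if score >= threshold]

memories = [
    frozenset({"fire", "hot", "pain"}),
    frozenset({"snow", "cold", "white"}),
    frozenset({"stove", "hot", "kitchen"}),
]
percept = {"hot", "red", "stove"}
print(extract_relevant(percept, memories))  # only the stove memory survives
```

Notice that this sketch dodges the very question I raised: it still scans *every* memory to decide what is relevant, whereas the mind seems to surface relevant memories without anything like an exhaustive search.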
It would have to be a very intelligent machine that can develop rules and even has the ability to represent fuzzy mentalese symbols internally. We tend to represent items as formal elements, since these are ultimately deterministic. So, in a way, our communication with the machine never gets translated into an "inner" representation in the machine, but always into a formal representation that is easier for us to analyze.