Merry Christmas everybody. I'm just writing up some recent thoughts.
Some books I was browsing on Amazon consider the mind as something with a modular composition: a module for language, another for reasoning, and so on. Logically it may be possible to dissect it this way, but I don't think that should be confused so quickly with physical modularity.
The previous post considered pattern recognition as the core of reasoning. The more I thought about it, the more I felt that something else is missing. Pattern recognition is all around us and necessary, but it doesn't feel like AI and neural networks are the meat of what our minds are about. What I miss is something that constitutes logic. Even if we have the ability to recognize words in a stream of noise, visual patterns, objects and so on, that does not yet allow us to manipulate those things and combine them with other items.
Or, in other words... In my meanderings I missed the element of "consciousness": what it is about and how it is intertwined with our ability to recognize patterns. I also think of consciousness as the ability to learn, identify and establish new patterns. For, in order for an artificial network to learn things, something must exist that compares actual output with desired output and recalibrates the network. What is that thing inside our mind?
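The "recalibrating element" can be made concrete with a minimal sketch of a training loop. Everything here is illustrative, not from the post: a single weight, a handful of made-up samples, and the simplest possible update rule. The point is only that some outside mechanism must compare the network's output with the desired output and adjust accordingly.

```python
# A minimal sketch of the "recalibrating" element in an artificial
# network: a loop that compares actual output with desired output
# and nudges the weight to shrink the error. All names and data
# are invented for illustration.

def train(samples, lr=0.1, epochs=100):
    """Fit a single weight w so that w * x approximates y."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x       # the network's output
            error = pred - y   # compare output with the target
            w -= lr * error * x  # recalibrate the weight
    return w

# Learn the pattern y = 2x from a few examples.
weight = train([(1, 2), (2, 4), (3, 6)])
print(round(weight, 3))  # converges close to 2.0
```

The loop itself is not part of the network; it stands outside it, watching and adjusting, which is roughly the role the post is asking about.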
An easier way to think about this is skill acquisition. When we learn to drive a car or ride a bike, we combine certain inputs (balance, sight, motor control, accuracy, action/consequence patterns, danger recognition) and eventually patterns are created which allow us to 'automatically' perform the task. Before it gets there, however, we consciously accompany each action and consciously make adjustments until we finally get it. So it feels as if, besides pattern recognition acting as a kind of auto-pilot, we consciously need to evaluate the world around us to learn from it. And even then, we keep applying consciousness throughout a journey, for example when entering new territory, when certain elements have changed, or when traffic is significantly dense. (It is next to impossible to execute other tasks at those moments.)
So I am basically concluding that pattern recognition by itself is not sufficient for the human mind. But I would not go as far as saying that the mind can be thought of as a physically modular kind of thing. I'd rather think of it as a richer kind of neural network than current AI constitutes, probably something that still contains other elements for reasoning, logic and learning that we are as yet unable to perceive. Memory (and retrieval) is another thing that I had started to neglect.
The symbols that flow through the network may not be numbers. But if they are not numbers, what are they? If I consider an AI network in computers that does not use numbers, but keys or some gibberish that I make equivalent to some kind of symbol, will the output still be sensible after the network has manipulated and processed it? It sounds too random for that to be true, unless the output is somehow matched to something else. Maybe these outputs are basically nonsensical symbols that are keyed to some kind of knowledge. Whereas knowledge in AI networks is embedded into the weights, I think of knowledge slightly differently when applied to neuroscience.
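The "gibberish keyed to knowledge" idea can be sketched with a lookup table. The symbols and the tiny knowledge base below are entirely invented assumptions: opaque keys are meaningless in themselves, yet still retrieve sensible content, because the knowledge lives outside the symbols rather than inside the weights.

```python
# A sketch of the "keys instead of numbers" idea: opaque symbols can
# flow through a system, as long as a table keys each symbol back to
# some piece of knowledge. The words and facts are illustrative.

import hashlib

def symbol_for(word):
    """Turn a word into an opaque, gibberish-looking key."""
    return hashlib.md5(word.encode()).hexdigest()[:8]

# Knowledge is stored *outside* the symbols, keyed by them --
# unlike an AI network, where knowledge is embedded in the weights.
knowledge = {
    symbol_for("bike"): "two wheels, needs balance",
    symbol_for("car"): "four wheels, needs steering",
}

key = symbol_for("bike")          # an opaque hex key, meaningless by itself
print(key, "->", knowledge[key])  # yet it retrieves sensible knowledge
```

This is only a toy contrast: the key carries no meaning, so all the "sensibleness" of the output comes from the matching step at the end, which is exactly the move the paragraph speculates about.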