Previous articles discussed issues of pattern recognition. Patterns are all around us: you could say that the whole visual world is composed of patterns, that audio has its own patterns, and that there are probably patterns in smell as well. You could even speak of touch patterns, which are essentially the structures of surfaces.
Patterns are mostly used for recognition within certain margins of error. But besides material patterns for material objects, we can also talk about motivational patterns and contextual patterns. A contextual pattern, for example, would determine within a scene where you are and at what time you are there, and then it might form an expectation, or induce a goal to gather the information needed to develop such expectations. For example, if you are at a train station you would find certain events quite surprising, whilst in a different context the same events make absolute sense. The emotion of surprise is then a measure of how strongly an event violates the expectations you have formed; expectations are not always borne out, due to random events.
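One way to make this notion of surprise concrete, purely as an illustration, is to score an event by its negative log-probability under a context-conditioned expectation. A minimal sketch, where the contexts, events and probabilities are all invented for the example:

```python
import math

# Hypothetical context-conditioned expectations P(event | context); the
# contexts, events and probabilities are invented for this illustration.
expectations = {
    "train station": {"announcement": 0.30, "delayed train": 0.20, "campfire": 0.001},
    "campsite":      {"announcement": 0.01, "delayed train": 0.0001, "campfire": 0.40},
}

def surprise(event: str, context: str, floor: float = 1e-6) -> float:
    """Surprise as the negative log-probability of an event given the context."""
    p = expectations.get(context, {}).get(event, floor)
    return -math.log(p)

print(surprise("campfire", "train station"))  # high: violates the expectation
print(surprise("campfire", "campsite"))       # low: fits the context
```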
Is consciousness analogous to pattern recognition? Consciousness could probably be re-defined in this context as: "knowingly(?) manipulating the symbols of input into the biological neural networks in your brain until you measure the correct (expected?) output". Well, the first question then is whether consciousness IS the output of a neural network, or whether consciousness is an encompassing ability to monitor the network from afar, the latter possibility being uncomfortably close to the idea of a homunculus.
Therefore, I take it that the output of the neural network is consciousness, but that we have the ability to back-propagate the output onto the input and thereby steer the 'thought' process in our brains. Certainly, there are other forces at work that also have an effect on the output. In this view, consciousness is the output itself, while the production of the output within the neural network might be described as the subconscious.
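The "back-propagate the output onto the input" idea has a rough analogue in artificial networks: freeze the weights and use the gradient of the output error to adjust the input itself. A minimal sketch using PyTorch, where the network, its sizes and the target output are arbitrary placeholders:

```python
import torch
import torch.nn as nn

# A stand-in "thought" network with frozen weights; the architecture is arbitrary.
net = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 4))
for p in net.parameters():
    p.requires_grad_(False)

expected = torch.tensor([1.0, 0.0, 0.0, 0.0])   # the output we "want" to see
x = torch.randn(8, requires_grad=True)          # the input we are allowed to steer

opt = torch.optim.SGD([x], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = ((net(x) - expected) ** 2).mean()    # mismatch between output and expectation
    loss.backward()                             # gradients flow back to the input, not the weights
    opt.step()                                  # nudge the input toward producing the expected output
```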
In order to steer output, one must have an idea of the expected output. Therefore, another (separate?) network should exist that recognizes the output and compares it to the expected result. What we know about learning suggests that a neural network can modify its weights to comply with the expectation. Therefore, given a certain situation, and understanding that certain outputs occur, we have the ability to train our network to form a certain expectation (generate an output) given a set of input factors. The generalization of these "ideas" (inputs) is very important. For example, you'd expect a phone in a train station. But one should therefore also expect a phone in a metro station, since both fall under the category of transportation. Once you think about how you derive the reasoning that a phone should exist in both stations, you notice that it relies heavily on persisted knowledge and on (unconscious?) categorization of what you experience in your environment.
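That generalization step, expecting a phone in a metro station because you expect one in a train station, can be pictured as expectations stored against categories rather than individual places. A toy sketch; all the entries are invented:

```python
# Persisted knowledge: concrete places mapped to broader categories, and
# expectations attached to categories rather than to individual places.
category_of = {"train station": "transportation", "metro station": "transportation",
               "forest": "nature"}
expected_in_category = {"transportation": {"phone", "ticket machine", "clock"},
                        "nature": {"trees", "birds"}}

def expect(item: str, place: str) -> bool:
    """An item is expected at a place if the place's category carries that expectation."""
    return item in expected_in_category.get(category_of.get(place, ""), set())

print(expect("phone", "train station"))  # True
print(expect("phone", "metro station"))  # True: generalizes through the shared category
print(expect("phone", "forest"))         # False
```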
In this model of consciousness, you might say that there is a certain truth in saying that the network IS your consciousness. But it is a dangerous argument in that it invites an infinite regress (one network monitoring another, but who monitors the final network?).
First off, any system of reasoning has to start with a final goal, which may be broken down into subgoals. Subgoals are necessary when there is a lack of information to complete the goal directly. Our mind may have a limit on how many subgoals it can hold in memory (beyond which a problem becomes so difficult that it is effectively impossible to solve). A computer need not have this limit.
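A minimal sketch of such depth-limited goal decomposition, where the depth limit stands in for our limited memory for subgoals; the goal names and the plan table are invented for illustration:

```python
# Toy goal decomposition: a goal is either completed directly or broken into
# subgoals; the depth limit plays the role of limited memory for subgoals.

def solve(goal, can_do, decompose, depth_limit=4):
    if can_do(goal):
        return True
    if depth_limit == 0:
        return False                      # "too difficult": ran out of room for subgoals
    subgoals = decompose(goal)
    if not subgoals:
        return False
    return all(solve(g, can_do, decompose, depth_limit - 1) for g in subgoals)

# Example: "notify recipient" decomposes into finding and operating a phone.
plans = {"notify recipient": ["find phone", "operate phone"],
         "find phone": ["scan station"]}
primitive = {"scan station", "operate phone"}

print(solve("notify recipient", lambda g: g in primitive, lambda g: plans.get(g, [])))
```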
Let's say that you are at a train station and the train towards your destination is delayed. Previous experience has shown us that the people who expect you at a certain time may get concerned if you do not notify them of the change. Our brain now has a goal to achieve, and we start to assess our options to complete it. First, do we have a mobile phone? Is it switched on and functioning? Is there coverage in the area? We can find out by trial and error. How much time is left to make the phone call? If we make a phone call from a fixed phone elsewhere, will we return in time to catch the train? All of these questions are assessed and estimated. Let's say there are 20 minutes. We would expect a phone to be somewhere in the vicinity of, or on, the station (that is a reasonable expectation). So we scan for a phone (color? shape? brand?) within the station. Notice here that as soon as you have set your mind on recognizing these shapes, it is easy to overlook phones that look entirely different than those you'd expect (one may also wonder whether young children, who do not yet have such goals, are in a different 'state' of mind in which the objective is to consume as much information as possible about the environment, sometimes to the irritation of the parents).
Then, having found the phone, we need to find out how it works. We expect to have to pay. We need to see whether it accepts credit cards, coins, or special phone cards. Once we have found that out, we'll need to ascertain whether it actually works (that is, whether the phone behaves as we expect it to). Notice that in this case we mostly use auditory information to discriminate between busy signals and other kinds of signals that are not known in advance (a defect?).
Then there's a regular conversation on the phone; the initial goal is met by communicating the change to the recipient (the goal is retrieved from memory), and we return to the next goal: catching the delayed train.
Whereas pattern recognition surely abounds in the above story, one cannot overlook certain capabilities that seem to surpass pattern recognition. Knowledge, definitions and so on are continuously retrieved from memory and used in the network to make correct assumptions and expectations. Not finding the typical "UK" model of a phone (red booth) around, one might decide to scan for a more general shape of a phone, a scan which generally takes longer to complete, as more potential matches have to be considered.
Eventually, memory also needs to keep track of the final goal, notifying the other person. There is thus some kind of stack that keeps track of the subgoals. People sometimes ask: "What was I doing again?". This shows me that losing track of goals is easier than we think, especially when other goals interfere with our current processing.
But what/which/who detects when a certain goal has been achieved? Is that what consciousness really is? How does it know for sure that it has been done?
The objective of this article is to reason about the scope of consciousness: how much of consciousness is pattern-recognition processing, and how much is not?
Perhaps we should consider the possibility that our brain can switch between modes instantly: a learning mode, which uses exploratory senses and allows certain networks and memories to take in new information, and another mode that is used to form expectations based on certain inputs. It is probably not useful to run both modes of operation at the same time.
This way, if we find that there are no "sensible" expectations we can form, we know that we need to take in information and learn about the environment. We might learn it incorrectly, but eventually we'll find out about the incongruity and correct it (you'll find that your friends have incomplete ideas and beliefs about things, just as you do).
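A rough sketch of this mode switch, where a situation is just a set of observed features, and the similarity measure and threshold are placeholders invented for the example:

```python
# `knowledge` is a list of (situation, expectation) pairs.

def similarity(a: set, b: set) -> float:
    return len(a & b) / max(len(a | b), 1)

def step(observed: set, knowledge: list, threshold: float = 0.5):
    best = max(knowledge, key=lambda kv: similarity(observed, kv[0]), default=None)
    if best is not None and similarity(observed, best[0]) >= threshold:
        return "expect", best[1]          # expectation mode: reuse what we already know
    knowledge.append((observed, None))    # learning mode: take in the new situation
    return "learn", None

knowledge = [({"platform", "tracks", "timetable"}, "trains arrive and depart")]
print(step({"platform", "tracks", "kiosk"}, knowledge))   # close enough: form an expectation
print(step({"trees", "birds", "path"}, knowledge))        # nothing fits: switch to learning
```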
Within this model, pattern recognition can be very efficient, provided there are methods to limit the analysis of context. This means that the analysis of your environment, and what draws your attention, should be limited to what serves the completion of your current goal; otherwise your brain gets overloaded with processing or learning, or it makes incorrect assumptions. Perhaps categorizing items helps enormously here to eliminate those things that are not important at the time. But this requires careful analysis of the goal and of the expected path of resolution.
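Categorization as a way of limiting what gets processed could look roughly like this; the items, categories and relevance table are invented:

```python
# Goal-directed attention: only items whose category is relevant to the
# current goal are passed on for further processing.
category = {"payphone": "communication", "mobile shop": "communication",
            "bakery": "food", "bench": "furniture", "timetable": "travel info"}
relevant_to = {"notify recipient": {"communication"},
               "catch train": {"travel info"}}

def attend(seen, goal):
    wanted = relevant_to.get(goal, set())
    return [item for item in seen if category.get(item) in wanted]

print(attend(["bakery", "bench", "payphone", "timetable"], "notify recipient"))
# ['payphone'] -- everything else is ignored for now
```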
Is it possible, therefore, that we create these paths (with further analogies to other similar paths) in our minds, which can thereafter be "reused" as paths of resolution? Basically... learning? If we analyze the environment in which things happen and take in everything that we think is important, then, when we find ourselves in an environment with similar attributes, can we reuse those remembered events to form expectations of what is about to happen? Can we "simulate" (think through, develop future expectations of) what is going to happen if we apply a certain action?
The above suggests that real intelligence is not the ability to manipulate symbols, but the ability to simulate hypothetical scenarios and their expected outcomes, and to adjust your goals according to the expected outcome of each. The richness and precision of these simulated scenarios is then what matters most.
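As a sketch of that idea, one could choose among actions by simulating each one and scoring the predicted outcome against the goal. The actions, the toy forward model and the scoring below are all invented for the delayed-train example:

```python
def choose_action(state, actions, simulate, score):
    predicted = {a: simulate(state, a) for a in actions}       # imagine each outcome
    return max(predicted, key=lambda a: score(predicted[a]))   # pick the most promising one

# Toy forward model for the delayed-train scenario.
def simulate(state, action):
    outcomes = {"call from payphone": {"recipient informed": True,  "minutes spent": 10},
                "walk to fixed phone": {"recipient informed": True,  "minutes spent": 25},
                "do nothing":          {"recipient informed": False, "minutes spent": 0}}
    return outcomes[action]

def score(outcome, time_left=20):
    # Reward informing the recipient, penalize missing the train.
    return int(outcome["recipient informed"]) - int(outcome["minutes spent"] > time_left)

print(choose_action({}, ["call from payphone", "walk to fixed phone", "do nothing"],
                    simulate, score))   # 'call from payphone'
```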
Dogs can learn things as well, but is it the same kind of learning? Why are we able to use language while dogs can't? Our awareness is better, as are our methods for forming assumptions and expectations about the environment. Is this only because we have larger brains? How can we explain the behaviour of a guide dog for the blind? It has learned to behave a certain way based on input signals, and surely, therefore, it must interpret the environment somehow. Does it act instinctively, or does it possess a kind of mind that is much simpler than, but similar to, ours? Is it fully reactive to impulses, or does it have expectations as well? Surely it must be able to deal with goals, which in the case of the guide dog are received from its handler.
3 comments:
Whatever we figure consciousness to be, I came across this picture of the "basket work":
"Here's a nice video demonstrating a technique called serial block face-scanning electron microscopy, which can be used to reconstruct complex neural networks from sequential slices of tissue. The neurons in this case are from the retina of a rabbit."
http://link.brightcove.com/services/player/bcpid263777539?bctid=1313685700
And this is only a part of it. Imagine the complexity for a full human brain!
Another thing I considered this morning in the metro... Biological NNs have cells, and these cells burn energy, fueled by biological processes (blood). Artificial NNs, however, can only "service" one neuron at a time, because they depend on a CPU. In your previous post you said you read somewhere that the frequency of the brain is around 40Hz (allegedly).
I'm wondering if it's possible to think of the human brain as an artificial model where each neuron has its own "CPU", which only has to run at 40Hz or so. We could also scale this up to 80Hz CPUs and then load each with 2 neurons.
A theoretical CPU of 3GHz could then serve on the order of 75 million neurons (3 GHz / 40 Hz) if each neuron update took a single clock cycle. Since processing a single neuron takes more than one clock cycle, the real number would be lower, and it would be very interesting research to balance some energy-consumption equation... I don't believe that the brain consumes more energy than a computer to achieve the same thing (it may well consume many times less). So insight into how much energy the brain burns should provide more insight into the limitations, or the design challenge, for an artificial smart computer.
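For what it's worth, a back-of-envelope version of that calculation; the clock and update rates are the figures assumed above, and the cycles-per-update values are pure guesses:

```python
clock_hz = 3e9    # assumed 3 GHz CPU
update_hz = 40    # assumed 40 Hz per-neuron update rate
for cycles_per_update in (1, 10, 100):   # guesses; the real cost depends on the neuron model
    neurons = clock_hz / (update_hz * cycles_per_update)
    print(f"{cycles_per_update:>3} cycles/update -> {neurons:,.0f} neurons")
```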
“Artificial NNs, however, can only "service" one neuron at a time, because they depend on a CPU. In your previous post you said you read somewhere that the frequency of the brain is around 40Hz (allegedly).”
I didn’t realize that about artificial NNs. What I recall is that there is a 40Hz pattern associated with memory “refresh”. I believe there are a number of other brain waves associated with other activities (like problem solving).
I know the brain accounts for about one fifth of the body’s energy consumption. I find I am not drawn to the actual engineering/construction of NNs, being more of a generalist.
With regard to consciousness I came across this little piece that suggests the fundamental architecture is at least partially culturally driven.
Cultural Influences on Neural Substrates of Attentional Control
The abstract from Hedden et al. of the article with the title above (Psychological Science, vol. 19, pp. 12-17, 2008). I thought it was interesting enough to mention, though I don't have access to the full text, so I can't determine exactly what is meant by culturally preferred and non-preferred judgements:
Behavioral research has shown that people from Western cultural contexts perform better on tasks emphasizing independent (absolute) dimensions than on tasks emphasizing interdependent (relative) dimensions, whereas the reverse is true for people from East Asian contexts. We assessed functional magnetic resonance imaging responses during performance of simple visuospatial tasks in which participants made absolute judgments (ignoring visual context) or relative judgments (taking visual context into account). In each group, activation in frontal and parietal brain regions known to be associated with attentional control was greater during culturally nonpreferred judgments than during culturally preferred judgments. Also, within each group, activation differences in these regions correlated strongly with scores on questionnaires measuring individual differences in culture-typical identity. Thus, the cultural background of an individual and the degree to which the individual endorses cultural values moderate activation in brain networks engaged during even simple visual and attentional tasks.
http://mindblog.dericbownds.net/2008/01/cultural-influences-on-neural.html