The one thing that really struck me when I was watching this video is how important the "fantasizing step" is for consciousness. Fantasizing == imagination == abstract thought and the (abstract) manipulation of things seen before in order to construct something new out of past experiences.
So far, neural networks have mostly been viewed from the perspective of recognition, not so much from the perspective of reproducing something. Also, most neural network activity is one-way. It's true that the learning process requires backward propagation for weight adjustment, but the execution phase runs from input -> output, never backwards.
But RBMs have the property that they can be used for recognition as well as production. The production phase is useful for things like prediction. Basically, prediction is all about recognizing patterns or important forces that strongly suggest that reaching a certain state or value is more probable than any other. This can be done by reasoning, calculation or whatever (see from 21:38 in the video mentioned above).
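To make that two-way property a bit more concrete, here is a minimal sketch of how the same RBM weights support both an "up" recognition pass and a "down" reproduction pass. It assumes binary units and untrained, random weights; the sizes and helper names are purely illustrative.

```python
# Minimal sketch of the two-way nature of an RBM (untrained, random weights).
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 784, 128           # e.g. a 28x28 image and 128 features
W   = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)                # visible bias
b_h = np.zeros(n_hidden)                 # hidden bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recognize(v):
    """Up pass: infer hidden feature probabilities from a visible pattern."""
    return sigmoid(v @ W + b_h)

def reproduce(h):
    """Down pass: generate a visible pattern back from hidden features."""
    return sigmoid(h @ W.T + b_v)

def gibbs_step(v):
    """One up-down cycle: recognize, sample features, then reproduce."""
    h = (recognize(v) > rng.random(n_hidden)).astype(float)
    return reproduce(h)

v0 = rng.integers(0, 2, n_visible).astype(float)   # some input pattern
v1 = gibbs_step(v0)                                # the network's "reconstruction"
```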
Now, here comes the fun part. One could consider imagination (+innovation) to be:
- Constructing coarse solutions through reasoning (it needs to have A + B, but not C).
- Filling in the blanks of the generated coarse solutions (see the sketch just after this list).
- Backtracking over complete blanks, defining the unknown as a subproblem to resolve prior to resolving the bigger picture.
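The "filling in the blanks" step maps quite naturally onto conditional sampling in an RBM: clamp the visible units you already know and let Gibbs sampling propose the rest. A rough sketch, again with untrained placeholder weights and made-up sizes:

```python
# Sketch of "filling in the blanks" by clamped Gibbs sampling (untrained RBM).
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 100, 32
W   = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def complete(v_partial, known_mask, steps=50):
    """Fill in the unknown visible units while keeping the known ones clamped."""
    v = v_partial.copy()
    for _ in range(steps):
        h     = (sigmoid(v @ W + b_h) > rng.random(n_hidden)).astype(float)    # up
        v_new = (sigmoid(h @ W.T + b_v) > rng.random(n_visible)).astype(float) # down
        v = np.where(known_mask, v_partial, v_new)    # clamp the known part
    return v

v_partial  = rng.integers(0, 2, n_visible).astype(float)
known_mask = rng.random(n_visible) < 0.7     # pretend 70% of the pattern is known
v_complete = complete(v_partial, known_mask)
```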
Think of the mind as a huge network of connections, like an RBM with different stacks, where different types of processing occur. At the neurons near the eyes, the light gets interpreted, and that interpretation already involves some pre-processing like edge detection. The next step is to start recognizing shapes between the edges and blotches of color. What does it all mean? I strongly suspect that we don't store nearly as much detail as we think we do for visual processing. And 100 billion neurons isn't really that much when you consider the amount of information we're really storing, especially when parts of those neurons are dedicated to specific tasks like speech production, visual recognition, speech recognition, recognition of audible signals, pre-frontal cortex processing (high-level abstract thought), emotional suppression / understanding, and so forth.
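As a loose illustration of that stacked view, a bottom-up pass through several layers could look like the following, with each layer re-encoding the output of the one below (edges -> shapes -> more abstract codes, roughly). Layer sizes and weights are made up; nothing here is trained.

```python
# Sketch of a stack of layers, each recognizing patterns in the layer below.
import numpy as np

rng = np.random.default_rng(2)
layer_sizes = [784, 512, 256, 64]     # narrowing towards more abstract codes

weights = [rng.normal(0, 0.01, (a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]
biases  = [np.zeros(b) for b in layer_sizes[1:]]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(v):
    """Bottom-up pass through the whole stack: raw input -> abstract code."""
    activations = [v]
    for W, b in zip(weights, biases):
        activations.append(sigmoid(activations[-1] @ W + b))
    return activations            # one activation vector per layer

image = rng.random(784)           # stand-in for a pre-processed retinal input
codes = encode(image)             # codes[-1] is the most abstract representation
```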
Now, with consciousness... what if what we're seeing really is the induction of a high-level abstract thought in the pre-frontal cortex down towards the lower hierarchical layers of this huge network? Then consciousness is more or less like reliving past experiences in sound, vision, emotion and the like. It still raises questions about where this induction starts (the ghost in the machine), but it may also be explained by the (random?) inverse of that operation: the occurrence of a certain emotion, the observation of a visual appearance, the hearing of some sound or the smell of something.
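That top-down "induction" would then simply be the generative direction of the same stack: start from an abstract code at the top and run it back down. A purely speculative sketch, with the same untrained-weights caveat as above:

```python
# Sketch of a top-down pass: abstract code -> progressively more concrete layers.
import numpy as np

rng = np.random.default_rng(3)
layer_sizes = [784, 512, 256, 64]
weights     = [rng.normal(0, 0.01, (a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]
down_biases = [np.zeros(a) for a in layer_sizes[:-1]]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relive(abstract_code):
    """Top-down pass: abstract thought -> something sensory-like at the bottom."""
    h = abstract_code
    for W, b in zip(reversed(weights), reversed(down_biases)):
        h = sigmoid(h @ W.T + b)
    return h                      # a 784-dim "imagined" sensory pattern

thought  = (rng.random(64) > 0.5).astype(float)   # some high-level abstract state
imagined = relive(thought)
```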
Now, the latter, olfactory memory, is especially interesting. By smelling freshly cut grass or other specific smells, we sometimes immediately relive experiences from our youth. This is not consciously driven, yet it happens all the same. And it is not relived just as a smell; it's a visual, audible and emotional thing as well. The interesting part is that we seem able to steer our thoughts towards (concentrate on) certain parts. Steering away from certain thoughts is much more difficult (don't think of a black cat! Oops, I asked you not to think of one! :).
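One way to caricature this in RBM terms is a joint visible layer split into modalities (smell, vision, emotion): clamping the smell units and sampling the rest stands in for a smell dragging a whole memory back. All sizes, names and weights below are invented for illustration, and nothing is trained.

```python
# Sketch of cue-driven recall: clamp the "smell" units, sample vision and emotion.
import numpy as np

rng = np.random.default_rng(4)
n_smell, n_vision, n_emotion, n_hidden = 20, 100, 10, 64
n_visible = n_smell + n_vision + n_emotion

W   = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recall_from_smell(smell_cue, steps=100):
    """Keep the smell fixed as the cue; Gibbs-sample vision and emotion into place."""
    v = rng.integers(0, 2, n_visible).astype(float)
    v[:n_smell] = smell_cue
    for _ in range(steps):
        h = (sigmoid(v @ W + b_h) > rng.random(n_hidden)).astype(float)
        v = (sigmoid(h @ W.T + b_v) > rng.random(n_visible)).astype(float)
        v[:n_smell] = smell_cue                   # the smell stays clamped
    return v[n_smell:n_smell + n_vision], v[n_smell + n_vision:]

freshly_cut_grass = rng.integers(0, 2, n_smell).astype(float)
recalled_vision, recalled_emotion = recall_from_smell(freshly_cut_grass)
```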
So... here goes... can thought be seen as a loop between abstract ideas and reproductions of those abstract ideas made from previous experiences? Someone without many experiences won't have a lot of imagination in that sense, while others with loads of experience and knowledge may be able to infer more about certain qualities and consider a wider range of possibilities and options.
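If that loop is taken literally, it might look like alternating passes: decode an abstract code into a concrete reproduction, re-encode it (with a bit of noise) into the next abstract code, and repeat. A speculative sketch with untrained weights, so the "thoughts" here are meaningless; the point is only the alternating structure.

```python
# Sketch of a "thought loop": abstract idea -> concrete reproduction -> new idea.
import numpy as np

rng = np.random.default_rng(5)
n_concrete, n_abstract = 784, 64
W = rng.normal(0, 0.01, (n_concrete, n_abstract))
b_up, b_down = np.zeros(n_abstract), np.zeros(n_concrete)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def think(abstract, steps=10, noise=0.1):
    """Alternate down (imagine) and up (re-abstract) passes, wandering slightly."""
    trajectory = [abstract]
    for _ in range(steps):
        concrete = sigmoid(abstract @ W.T + b_down)                 # imagine / relive
        abstract = sigmoid(concrete @ W + b_up)                     # re-abstract it
        abstract = abstract + noise * rng.normal(size=n_abstract)   # drift a little
        trajectory.append(abstract)
    return trajectory

thoughts = think(rng.random(n_abstract))
```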
And the other question... can this be used as a stepping stone to model thought in machines? And if so, could we call that consciousness?
1 comment:
CritterDroid Recurrent Temporal Restricted Boltzmann Machine Perceptual Hierarchy
http://blog.automenta.com/2011/12/critterdroid-recurrent-temporal.html