The psychology course I'm taking requires reading through a pretty large book (albeit in not-too-small type and with loads of pictures). It also explains the sensory system, so at times it reads more like a biology book. It basically states that rods handle vision in dimly lit environments while cones handle brighter ones. Cones can discern color but have lower sensitivity.
Researchers have determined that right after light is transduced by the cones and rods, nerve cells already start pre-processing the information. You could compare this pre-processing to running an image filter in Photoshop: it applies edge detection, improves contrast here and there, and then sends the result on to the primary visual cortex for further analysis.
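As a toy illustration of what such an edge-detection filter does, here is a minimal sketch in plain Python. The "image" is a made-up row of grayscale intensities; real filters work in 2D with more elaborate kernels, but the principle is the same: edges show up as large differences between neighbouring values.

```python
# Minimal edge detection by differencing neighbouring pixels.
# The input row is a hypothetical strip of grayscale intensities.

def edge_strength(row):
    """Absolute difference of each pixel and its neighbour:
    large values mark edges, flat regions give zeros."""
    return [abs(b - a) for a, b in zip(row, row[1:])]

row = [10, 10, 10, 200, 200, 200]  # dark region meets bright region
print(edge_strength(row))  # [0, 0, 190, 0, 0] -- peak at the boundary
```

Notice how almost all the information in the flat regions collapses to zeros; only the boundary survives, which is exactly the kind of compression pre-processing buys.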
I've been doing some personal experiments: looking at a scene for 1-2 seconds, then closing my eyes and attempting to reconstruct it. Looking for longer or more often makes the reconstruction more complete, but the actual image with eyes open holds a lot more information than what I can reliably reconstruct. Or rather... I can reconstruct details of A tree or road, but not necessarily THE tree or road out there. My belief system of what a tree or a road is starts to interfere. The interesting thing is that the image I can reconstruct is mostly based on edges and swaths of color.
The mind thus deconstructs the scene first by edge detection, finding lines, but at the same time depends heavily on the ability to identify complete objects. Very small children, for example, already show surprise or pay extra attention when objects they thought belonged together suddenly turn out to be separate.
It does take some time to identify something we've never seen before, but pretty quickly we're able to recognize similar things, even if we don't know the technical name for them.
By deconstructing the scene, you could say it also becomes a sort of "3D world" that we can personally manipulate and visualize further (the mind's eye). So I don't think we're continuously re-rasterizing heavy, complex objects; rather, we can treat an object as a whole by its edges and outlines, then rotate it, translate it, or do with it as we please.
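The economy of that idea can be sketched in a few lines: once a shape is reduced to its edge points, transforming the whole object is just transforming those few points. A hypothetical example in Python, rotating a square's corners (the shape and angle are made up for illustration):

```python
# Manipulating an object via its edge points: rotating four corners
# is enough to rotate the whole square.
import math

def rotate(points, degrees):
    """Rotate 2D points about the origin by the given angle."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(round(x * c - y * s, 6), round(x * s + y * c, 6))
            for x, y in points]

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
print(rotate(square, 90))  # same square, corners relabelled
```

Four points stand in for the entire object, which is far cheaper than re-rendering every pixel of it.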
In that sense, the fields that deal with signal processing should lean on these techniques heavily. It is possible to recognize objects from raw pixels, but by running filters first, the features become easier to detect and the pattern-recognition mechanism might work significantly better. Thus... the way in which signals are presented probably always calls for pre-processors before they are sent to some neural network for further processing. Seen that way, the entire body thinks, not just the brain.