One interesting book by Pinker on cognitive science discusses the three levels of analysis; the idea is also on Wikipedia, since it belongs to the field of cognitive science. These three levels can be compared to layers in a computing stack: the physical layer, the algorithmic layer, and the computational layer. The algorithmic layer would compare to the operating system, and the computational layer to the behaviour (the applications making use of the infrastructure).
Of course, when these levels are mapped to their biological counterparts, the physical layer maps to neurons and their wiring, and so on.
The interesting claim is that it is not enough to understand the workings of one layer in isolation; to work out how something actually functions, you also need to understand the interactions between the levels of analysis.
There are some potentially large pitfalls there. We assume that a neuron serves one particular purpose and can never serve more than one at the same time. I have no information at this point to make any stronger statement on that, though.
One of the questions in the book is what intelligence really is, and it is then portrayed as some kind of computational machine. Computation is often associated with algorithms, formulas and mathematics.
I am happy to accept that mathematics and algorithms can describe the world (or can be a good means to communicate difficult issues or describe some kind of event), but I don't think it's a good idea to turn this upside down and claim that the world is mathematical in the first place. The world is far more dynamic than that. It is mathematics that describes the world, and even then only somewhat poorly, as a kind of approximation.
Although very complex things can be described by sequencing and combining many algorithms, this presents enormous problems. First, it renders the world deterministic (although incredibly complex, with algorithms connected to and influencing one another). Second, to arrive at a full implementation you would need to understand absolutely everything and model it in some kind of algorithm. That sounds pretty much impossible, a dead end. I think AI needs something more dynamic than that.
There was a lot of enthusiasm about neural networks. I've used them too, and I like how they operate, but in the end a network with its trained weights is only useful for one single task. It cannot handle another task unless it is retrained. So they are very limited as well, and by comparison I find the speed and method with which the human brain learns immense. Another limitation I see in neural networks is that they require fixed inputs and outputs and have a fixed composition for the problem at hand: a set number of layers, and a fixed number of neurons and synapses in between.
So neural networks are also deterministic and limited to their purpose. How, then, should something be designed so that it can learn from experience and also start reasoning with its knowledge? Reasoning here should be interpreted very broadly; I deliberately did not use the word "compute", to mark the difference. A design is often very limited: limited to its area of effectiveness and to the knowledge at hand at the time. The design is limited by our inability to design indeterminate things, systems that may produce inconsistent and unexpected results.
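The fixed-topology point above can be made concrete with a tiny sketch. This is a minimal hand-built feedforward network (not from the original post; the architecture, weights and XOR task are all illustrative assumptions): 2 inputs, 2 hidden neurons, 1 output, with every structural choice frozen at design time. Only the weight values can change, and only through retraining, so this exact structure computes one function and nothing else.

```python
import math

def sigmoid(x):
    """Standard logistic activation."""
    return 1.0 / (1.0 + math.exp(-x))

# Fixed topology: 2 inputs -> 2 hidden neurons -> 1 output.
# The weights below are hand-picked so the network approximates XOR.
# To make it compute anything else (say, AND), the same frozen structure
# would have to be retrained -- the limitation described above.
W_hidden = [[20.0, 20.0],     # hidden neuron 1 (acts like OR)
            [-20.0, -20.0]]   # hidden neuron 2 (acts like NAND)
b_hidden = [-10.0, 30.0]
W_out = [20.0, 20.0]          # output neuron (acts like AND of the two)
b_out = -30.0

def forward(x1, x2):
    """One forward pass through the fixed network."""
    h = [sigmoid(W_hidden[i][0] * x1 + W_hidden[i][1] * x2 + b_hidden[i])
         for i in range(2)]
    return sigmoid(W_out[0] * h[0] + W_out[1] * h[1] + b_out)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", round(forward(a, b)))
```

Running it prints the XOR truth table; the determinism is plain to see, since the same inputs always yield the same output.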
When we design something, we do it with a purpose in mind: if this is the input, then the result should be that. But this also limits the design, making it incapable of doing anything more than, or different from, what it was designed to do.
How difficult is it, then, with our minds trained on algorithmic constructs and consistent results, to work out a design for something that may produce inconsistent and indeterminate results? It's the toughest job ever!