Daniel C. Dennett opens "Consciousness Explained" with a description of a demon trying to trick a brain in a vat into thinking it's actually inside its real body and world, having real worldly experiences. You can easily compare this account to the people hooked up inside the Matrix, where each individual's brain is fooled into having a body, relatives, material things and the day-to-day worries of their lives, while actually hooked up to a large computer, the Matrix, which conjures up these illusions for every brain. Dennett gives this a bit more thought and considers what you'd need to do for a brain to be tricked like this. Now, let us assume that, contrary to the story of the Matrix, the people in this thought experiment have actually had real-world experiences to compare these sensations to. First of all, you'd need to be able to simulate the senses: vision, smell, hearing, taste and so forth. And here comes the difficult part. The demon or the computer needs to be able to detect the chosen action and react to it appropriately, feeding back sensations just as the physical world would, in a way that the brain is used to.
In the example he refers, among other things, to lying on the beach with your eyes closed, running your fingers through the sand and the feeling the coarse grains give your fingertips. You could also think of the action of jumping: the feeling of being in the air for just a little while and the thump when you land back on your feet. Or whistling in some echoey tunnel made mostly of metal and the sound it returns to your ears, or the joint coordination of some sports game, and so on.
So, the difficulty of tricking a brain lies first of all in hooking it up in the right places, sending it the right kinds of signals and reading the brain at the right places. A more difficult problem, however, is that outside the brain there's a physical world that the brain expects to get feedback from in very specific ways. Especially once it has experience of this world, anything that deviates from it will seem strange. The illusion will wear off very quickly because of these little differences (although you could argue that if the illusion is near-perfect, the brain will probably start doubting itself instead?).
The point here is that "Artificial Intelligence" isn't as close as some people may try to make you think. Computers live in some kind of "model" of reality: a transduction of what is really there to be perceived. Some other transduction is very likely affecting us as well (and trying to tell someone else what you experience, or what a bat experiences, is very difficult to achieve; we simply cannot easily imagine what it would be like to experience something else entirely). This model is an electronic or otherwise suitable representation of the world around a robot or AI, and therefore subject to our imagination and experience, but not necessarily the most ideal one.
Worse yet, the biological entities around us have evolved in the physical world, presumably along the lines of Darwin's Origin of Species, where evolution is largely a function of natural selection: creatures constantly (needing to) adapt to their environment. The same ideas can be found in Genetic Algorithms, an engineering method where algorithms, or parameters thereof, are encoded as genetic material and then mutated and crossed over just like biological material. This is sometimes useful in cases where the actual (very long!) functions are very difficult to discover.
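As a rough sketch of how that encoding works (the bit-string target, population size and rates below are illustrative assumptions of mine, not taken from any particular library):

```python
import random

# Toy genetic algorithm: evolve a bit string towards an arbitrary target
# "environment". Target, population size and rates are illustrative only.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    # How well this genome is adapted: count of bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability, like a point mutation.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover, loosely mimicking biological recombination.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # natural selection: the fittest survive
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(10)
    ]

print(generation, population[0])
```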
In supervised learning, where the desired output is perfectly known, artificial neural networks can be taught these patterns by feeding in the input, checking the outcome and recalibrating the parameters of the network until its performance no longer improves. There are difficulties in this kind of training: a network that is too large overfits the training data and generally performs badly on unknown cases it may later have to predict, while a network that is too small is generally not capable of holding the information needed to capture the actual function, and so underperforms.
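A minimal sketch of that feed-and-recalibrate loop, assuming a toy XOR task and a tiny two-layer network (the task, layer sizes and learning rate are my own illustrative choices):

```python
import numpy as np

# Tiny two-layer network trained on XOR with plain gradient descent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Feed the input through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Recalibrate the parameters against the known answer (backpropagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [0, 1, 1, 0]
```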
So, for neural networks, one either ends up remodelling the entire world and calculating elements of it so that a network can be trained to recognize or predict them, in which case you're better off sticking with that representation of the world instead. The alternative is not pretty either: one must find a method to evolve a network so that it has the correct architecture for the job at hand, and train it without knowing the elements in advance. More so because these more complex networks generally don't have classification outputs like most ANNs do; instead, I think, they probably thrive on the state within the network itself, induced by their neural surroundings.
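One way to read "evolving a network" is something in the spirit of neuroevolution. Here is a toy sketch that mutates both the weights and, occasionally, the hidden-layer size, scored on the same XOR task as above; all the specifics (population size, mutation scale, growth probability) are assumptions of mine:

```python
import random
import numpy as np

# Toy neuroevolution: evolve the weights and the architecture (hidden size)
# of a tiny network, with fitness measured on XOR. Illustrative only.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def make_net(hidden):
    return {"W1": np.random.normal(size=(2, hidden)),
            "W2": np.random.normal(size=(hidden,))}

def forward(net, X):
    return np.tanh(X @ net["W1"]) @ net["W2"]

def fitness(net):
    # Negative squared error: higher is better adapted.
    return -np.sum((forward(net, X) - y) ** 2)

def mutate(net):
    # Jitter all weights; occasionally grow the architecture itself.
    child = {k: v + np.random.normal(scale=0.3, size=v.shape)
             for k, v in net.items()}
    if random.random() < 0.2:
        child["W1"] = np.hstack([child["W1"], np.random.normal(size=(2, 1))])
        child["W2"] = np.append(child["W2"], np.random.normal())
    return child

population = [make_net(hidden=2) for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

print(forward(population[0], X).round(2))  # should drift towards [0, 1, 1, 0]
```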