On Saturday, 7 February, I'll be giving a presentation on Artificial Intelligence at Sogeti Engineering World 2009. I'll be talking about Singular Value Decomposition, ID3 decision trees and Bayes' theorem, and I'll dispel a couple of myths about neural networks. You'll need to register to attend (aanmelden).
I've finished reading "Shadows of the Mind" by Roger Penrose, a very interesting book that approaches the subject from physics. Dr. Penrose develops a long, carefully argued case for his theories, one of the most interesting points being the role of microtubules in consciousness. At the end of the book, he asserts that consciousness cannot arise in machines, devices or biological entities that are composed only of computational algorithms and actions. That is, he asserts that something non-computational needs to become part of the equation in order for consciousness to exist.
The book's arguments are compelling. If consciousness is evoked not by the neurons themselves, but by the much smaller microtubules inside neuronal cells, then the computational power of the mind exceeds that of computers even further, by a factor of 100,000 or so.
At the same time, I'm not so sure how well this theory holds up. The book explains quantum theory and mechanics at length, along with a number of puzzles and examples from quantum theory. One of the key questions it poses is the state vector reduction problem (the collapse of the wave function): the process that sits between the quantum world and the classical world as we experience it.
Another thing I have not yet seen anywhere is the concept from the previous post: the possibility of algorithms that influence one another. Rather than a single algorithm executed by a single thread or CPU, could consciousness actually be a collection of calculations running in different threads / CPUs at the same time?
The really interesting consequence, if consciousness is indeed evoked by microtubules, is that neurons then become clusters of calculations which influence other clusters: macro-signals built from countless tiny calculations, sent on to other locations where the information serves as input for further calculations. It may also have some relevance to memory.
In other posts, I have written about consciousness and reasoning as a kind of fluid algorithm, where possibilities and concepts are tied loosely together like some kind of oil, with the thread of thought passing through it and guiding the selected items. Items that are connected to others on the thread may surface in thought, given certain changes in context.
We could also make the point that, if Penrose's ideas are true, microtubules are able to evoke any thought whatsoever, with the choice of the exact thought or idea that comes up made through some sort of calculation or determination. Just as in quantum theory, the thought is initially indeterminate and could be any thought; only after passing through a range of filters or possibilities is the final thought evoked by the final filter.
Matrices are mathematical tables used to record elements of data about the world around us. They are widely used, for example, to represent rotation and translation operations in 3D computations for games or simulations (alongside interpolation techniques such as "SLERP"). Matrices are also at the heart of the Singular Value Decomposition and have many other uses. After recording data in (possibly huge) matrices, one can perform various operations on that data, often producing a destination matrix that conveys a certain meaning.
Matrices are thus very interesting for Artificial Intelligence: they let us operate on large datasets with the objective of processing that information into something new, which can then be used as a shortcut for making predictions, for example.
A limitation of matrices is that all the information for a single timepoint or a range of timepoints needs to be available. This is often very difficult to achieve, or the resulting matrix becomes so large that an ordinary PC lacks the memory to perform the computations.
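As a small illustration of the kind of operation I mean (a sketch in Python with NumPy, not code from the talk; the matrix values are made up), the SVD factors a data matrix into three components, and keeping only the largest singular values yields a compressed approximation that captures the dominant structure in the data:

```python
import numpy as np

# A small made-up "dataset": rows could be users, columns items.
A = np.array([
    [5.0, 5.0, 0.0, 1.0],
    [4.0, 5.0, 0.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
    [0.0, 0.0, 4.0, 5.0],
])

# Factor A into U * diag(s) * Vt; s holds the singular values,
# sorted from largest to smallest.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the top k singular values: a rank-k destination matrix
# that approximates A while discarding the weakest structure.
k = 2
A_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(A_approx, 1))
```

The truncated reconstruction is the "shortcut" mentioned above: predictions can be read from the approximation instead of the full original data.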
Many academic texts on consciousness and artificial intelligence are written from the perspective of the computational mind, and also from the perspective of an algorithm. Since most (if not all?) algorithms are serial, this suggests that the mind or the brain is serial too. It certainly is not: each neuron can fire independently in time and need not be given any CPU time in order to fire and influence other neurons.
This suggests a degree of parallelism as large as the number of neurons in the human brain. So not only does the brain hold more neurons than a common computer can represent by itself (not even counting the memory needed to maintain the connections), each neuron also operates as if it were a CPU in its own right.
It's certainly the case that some algorithms can be parallelized, allowing them to run on different devices and then have their results combined to find the answer. This is what is meant by parallel algorithms in the field of computer science.
Here though, we should also consider parallel algorithms in a stronger sense: algorithms that are truly parallel in nature, running on many different processors and operating on the same data.
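The first, weaker kind of parallelism can be sketched in a few lines of Python (the chunk size and worker count here are arbitrary choices for illustration): the data is split across workers, each computes a partial result, and the partials are combined at the end.

```python
from concurrent.futures import ThreadPoolExecutor

def chunked_sum(data, workers=4):
    """Sum a list in parallel: split the data into chunks, sum each
    chunk in its own worker, then combine the partial results."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, chunks)
    return sum(partials)

print(chunked_sum(list(range(1, 101))))  # → 5050
```

The stronger sense, many processors continuously operating on the same shared data, is much closer to what neurons do and much harder to express in this split-and-combine style.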
Just recently, I wondered what would happen if some sort of chemical concept were introduced into ANNs. An ANN would then not just compute new values from neurons, thresholds and biases; one could also introduce chemicals that change how the neurons in the ANN fire. The applications of this aren't really clear yet, though.
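A rough sketch of what that might look like (everything here is an invented toy, not an established technique: a single sigmoid neuron, with the "chemical" modelled as a global gain on the activation):

```python
import math

def neuron_output(inputs, weights, bias, chemical=1.0):
    """A single sigmoid neuron whose response is modulated by a
    global 'chemical' level: a higher level steepens the activation
    curve, so the neuron fires more decisively for the same inputs."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-chemical * activation))

inputs = [0.5, 0.8]
weights = [1.2, -0.4]
# The same stimulus produces different firing strengths depending
# on the simulated chemical concentration.
calm = neuron_output(inputs, weights, 0.1, chemical=0.5)
excited = neuron_output(inputs, weights, 0.1, chemical=2.0)
print(calm, excited)
```

In a full network, such a level could rise and fall over time and affect whole groups of neurons at once, rather than each connection individually.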