In another article I discussed some of my personal perspectives on how the mind works. I've been reading the book "Introduction to Cognitive Science" whilst in Paris, sitting in one of the brasseries near Gare de l'Est. Not exactly the most picturesque of places, but anywhere else would probably be too distracting :).
It's a very interesting book with lots of different views, perspectives and theories. It makes clear that current theories distinguish three levels of analysis, and these have direct analogies with computers. The lowest is the hardware level, where the researcher attempts to understand the mind at the level of the synapse and the biology (the level of the circuit board, the volts, the current and the silicon components). The middle is the component level: where and how different components of the mind work together to improve our understanding of the world and contextualize input. The highest is the functional level, which describes the representation of meaning and the end results of the overall functions.
All levels are important. The highest level is where philosophy is most helpful; the lowest is where biology and technology can actually measure. One school of thought suggests that the mind is a kind of associative network that is activated by thoughts themselves (or by recollections from long-term memory).
This, to me, suggests that for Artificial Intelligence to really succeed, it must spend time re-implementing the very basics of computers. In effect, to go the route of Haskell, Erlang and Stackless Python.
To make a clear distinction... the architecture of a Pentium processor uses a stack by default. This is temporary storage in memory, reserved for the processor, that is used to "track back" into the main line of a program. A program is generally written so that each deeper function call becomes more specific: a generic function calculates discounts for an account as part of a larger process, a called function retrieves the account, another called function retrieves the applicable discounts.
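As a concrete (and entirely made-up) illustration of that hierarchy, here is roughly what such a program looks like in Python; all the names are hypothetical stubs, not code from any real system:

```python
# A minimal sketch of the hierarchical, stack-based style described above.
# Every nested call pushes a frame; the stack remembers how to "track back".

def retrieve_account(account_id):
    # In a real system this would hit a database; here it's a stub.
    return {"id": account_id, "segment": "retail"}

def retrieve_applicable_discounts(account):
    # Another stub: look up discount percentages for the account's segment.
    return [5, 2] if account["segment"] == "retail" else [1]

def calculate_discount(account_id):
    # The generic function: each call below descends one level of detail.
    account = retrieve_account(account_id)
    discounts = retrieve_applicable_discounts(account)
    return sum(discounts)

print(calculate_discount(42), "%")  # 7 %
```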
Organizing programs this way allows us to get them "in our heads". A network is far harder for us to reason about than, for example, a hierarchical tree. One suggested reason for this is the limited amount of working memory we can dedicate to solving a small problem.
In my imagination, it's as if we have 3-4 CPU registers, a limited L2 cache and a strange kind of memory. This memory is not addressed through external "locators", but gets "triggered" by input and starts feeding our thought system.
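To make that picture a bit more tangible, here is a toy sketch (my own illustration, not from the book) of a memory that is never read by address but only fires when the input overlaps with stored cues:

```python
# Toy associative memory: no locators, only cue overlap triggers recall.

class AssociativeMemory:
    def __init__(self):
        self.traces = []  # each trace: (set of cue words, stored thought)

    def store(self, cues, thought):
        self.traces.append((set(cues), thought))

    def trigger(self, input_cues):
        # Every trace that shares a cue with the input fires; there is
        # no address, just overlap between input and stored associations.
        return [thought for cues, thought in self.traces
                if cues & set(input_cues)]

memory = AssociativeMemory()
memory.store(["paris", "brasserie"], "reading near Gare de l'Est")
memory.store(["stack", "pentium"], "how processors track back")

print(memory.trigger(["paris", "coffee"]))
# ["reading near Gare de l'Est"] -- the input feeds the thought system
```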
One of the most important things to consider is that AI could benefit from computer programming without stacks: stackless computing. Look up Stackless Python for some examples. There are significant differences and possibilities when there is no stack in programming:
- Programs can run without a pre-determined goal. That is interesting, since today's programs run and act deterministically: we program them to behave systematically and consistently. In the absence of a stack it becomes theoretically possible to introduce non-consistent behaviour (which might be a pre-requisite for true intelligence).
- A typical batch program is organized around a processing loop of some kind that always performs the same hierarchically organized routines. Without a stack, and with a different architecture, one can imagine a system that keeps a certain "memory" of what it did before, possibly allowing it to interpret events in context (the trampoline sketch at the end of this post hints at this).
- A program continues by passing the address of one function to another. The function passed in can either complement the called function or be the next function to process, as sketched just after this list.
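Here is a minimal continuation-passing sketch of that last point (my own illustration, with hypothetical names; in Python we pass the function object itself rather than a raw address):

```python
# Continuation-passing: instead of returning a value and letting the
# stack unwind, each function receives the next step ("then") and hands
# control forward to it. Note that CPython still grows its own stack
# underneath these nested calls; truly stackless execution needs
# tasklets (Stackless Python) or a trampoline loop.

def retrieve_account(account_id, then):
    account = {"id": account_id, "segment": "retail"}  # stub lookup
    then(account)  # continue forward; nothing to return to

def retrieve_discounts(account, then):
    discounts = [5, 2] if account["segment"] == "retail" else [1]
    then(discounts)

def show_total(discounts):
    print("total discount:", sum(discounts), "%")

# Wire the steps together: execution only ever moves forward.
retrieve_account(42, lambda account: retrieve_discounts(account, show_total))
```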
Stackless computing is significantly harder to architect and program than stack-based computing. The programs more closely resemble a kind of network, and behaviour is no longer (necessarily) deterministic, while determinism is exactly what you need to solve a given problem in a consistent manner. The neural networks used in Artificial Intelligence are examples where patterns are identified, but in my imagination it is impossible to build intelligent systems from neural networks alone.
I started this story with three distinct levels for analyzing behaviour. The most basic level is the most important, since it's the level where things actually execute and exchange information. If we attempt to run our functions on incompatible hardware, we're not likely to get good results. Can we redesign the computer not to use stacks, but to require programs that behave as different kinds of networks, are compiled to continue execution forward, never unwinding a stack, and in the process gather and structure their memory and other functions to develop a sense of context? It might be the key to real intelligence :)
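As a closing sketch of what "continue forward, never unwind" could mean in practice, here is a small trampoline in plain Python (again my own illustration, with hypothetical step names): a loop that repeatedly runs the next step, so no call stack ever builds up and the program's context lives in explicit state rather than in stack frames:

```python
# A trampoline: the closest plain-Python analogue I know of to stackless
# execution. Each step does some work and returns the next step plus the
# updated state; the loop below replaces the call stack entirely.

def trampoline(step, state):
    while step is not None:
        step, state = step(state)
    return state

def observe(state):
    state["history"].append(state["input"].pop(0))
    # Choose the next step based on what was just seen: context, not a
    # fixed call hierarchy, decides where control flows next.
    return (respond if not state["input"] else observe), state

def respond(state):
    print("seen so far:", state["history"])
    return None, state  # no next step: the loop simply stops

trampoline(observe, {"input": ["a", "b", "c"], "history": []})
# seen so far: ['a', 'b', 'c']
```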