In previous posts, I touched on a couple of requirements that are part of this design:
- The energy equation must hold: the time and energy it takes a biological brain should more or less equal the time and energy of a technical silicon implementation
- It should be parallel in nature, similar to neuron firings (thread initiations) that propagate along dendrites and synapses
- It should be stackless, not needing immense amounts of stack space or function unwinds
Some new requirements:
- The frequency of introspection is undetermined at this point, or better: "unknown". I don't know how often to check results in the network in order to reach any kind of conclusion, but I reckon the frequency is tied to the clock cycle of the main CPU, or whatever compares to that. Someone noted on my blog that the frequency of the brain seems to be about 40 Hz. That would mean inspecting 40 times a second (and cleaning up old entries, leaving room for new ones?). The idea is not to push too much onto the heap for analysis, but to clean up results regularly and continuously work forward, storing previous results in different rings of memory
- There should not be any "output dendrite" or "return object".
- The state of the network at any point in time == the output.
- Previous results should eventually be stored in different rings of memory, which have lower prominence the further they are from the source of processing. Most likely, results in more remote rings of memory will require re-processing in the brain to become highly prominent again.
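The 40 Hz introspection idea above could be sketched as a small loop. Everything here is an assumption of mine (the names, the use of a deque as the "hot" heap, the drain-per-tick behaviour); it only illustrates the cadence of inspecting and cleaning up roughly 40 times a second:

```python
import time
from collections import deque

INSPECT_HZ = 40                     # assumed brain-like frequency from the post
PERIOD = 1.0 / INSPECT_HZ

def inspect(active, outer_ring, ticks):
    """Run a few inspection cycles; names are illustrative, not a real API."""
    for _ in range(ticks):
        start = time.monotonic()
        while active:               # flush current results out of the hot heap
            outer_ring.append(active.popleft())
        # ... analysis of the drained results would happen here ...
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, PERIOD - elapsed))

active = deque(["result-a", "result-b"])
outer_ring = deque(maxlen=1024)     # bounded: the oldest entries fall away
inspect(active, outer_ring, ticks=2)
```

The bounded deque gives the "leaving room for new entries" behaviour for free: once full, old results simply drop off the far end.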
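The rings-of-memory idea could look something like this. Again a sketch under my own assumptions: ring 0 is closest to processing, aging demotes results one ring outward, and recalling a remote result "re-processes" it back into ring 0:

```python
from collections import deque

class MemoryRings:
    """Illustrative rings of memory with decaying prominence."""

    def __init__(self, n_rings=3, ring_size=100):
        self.rings = [deque(maxlen=ring_size) for _ in range(n_rings)]

    def store(self, result):
        self.rings[0].appendleft(result)          # fresh results are most prominent

    def age(self):
        # Each cycle, demote results one ring outward (lower prominence).
        for i in range(len(self.rings) - 1, 0, -1):
            while self.rings[i - 1]:
                self.rings[i].appendleft(self.rings[i - 1].pop())

    def recall(self, result):
        # Recalling a remote result requires "re-processing":
        # promote it back to the innermost ring.
        for ring in self.rings[1:]:
            if result in ring:
                ring.remove(result)
                self.store(result)
                return True
        return result in self.rings[0]

rings = MemoryRings()
rings.store("cat->animal")
rings.age()                                       # demoted to ring 1
rings.recall("cat->animal")                       # re-processed back to ring 0
```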
I'm looking at "Stackless Python". It's a modified version of Python that allows little tasklets to run that do work. Basically it's similar to calling a C function that passes in another function address to execute: the calling function unwinds and the CPU starts executing from the new address.
Stackless hides the tasklets (which run in the same thread) behind a kind of "green" thread or micro-thread, since it has its own scheduler (which is not pre-emptive, but cooperative).
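To make the cooperative-scheduling idea concrete without installing Stackless itself, here is a tiny round-robin scheduler built on plain generators. It only mimics what tasklets do; the names and structure are mine, not the Stackless API:

```python
from collections import deque

def scheduler(tasklets):
    """Round-robin a set of generator-based 'tasklets' in one thread."""
    ready = deque(tasklets)
    while ready:
        task = ready.popleft()
        try:
            next(task)              # run until the tasklet yields (cooperates)
            ready.append(task)      # still alive: reschedule at the back
        except StopIteration:
            pass                    # tasklet finished; drop it

def neuron(name, fires, log):
    for i in range(fires):
        log.append(f"{name} fires {i}")
        yield                       # cooperative: give other tasklets a turn

log = []
scheduler([neuron("A", 2, log), neuron("B", 2, log)])
# Firings interleave: A, B, A, B
```

Because the scheduler is cooperative, a tasklet that never yields would starve the others; that is the trade-off against pre-emptive threads.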
Check it out here:
What is the objective?
The objective for now is to load word lists and process text. It's quite a basic process I'm simulating at this point, but that doesn't matter: I'm mostly interested in seeing whether these methods display any kind of emergent intelligent behaviour:
1. Load word lists into memory
2. Enter 'learning mode' for my symbolic network
3. Process 'stories' that I downloaded from the web
4. Establish 'connections' between symbols
5. Verify connection results
6. ... modify algorithm ... modify implementation ... back to 1
7. Post results
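The learning steps in that loop could be sketched minimally as follows. This assumes a representation the post doesn't specify: symbols are words, and a 'connection' is a co-occurrence count between adjacent words in a story. All function and variable names are illustrative:

```python
import re
from collections import defaultdict

def learn(story_text, connections):
    """Strengthen connections between adjacent word symbols in a story."""
    words = re.findall(r"[a-z']+", story_text.lower())
    for a, b in zip(words, words[1:]):
        connections[a][b] += 1      # strengthen the a -> b connection

connections = defaultdict(lambda: defaultdict(int))
learn("the cat sat on the mat", connections)
learn("the cat ran", connections)

# Verify connection results: "the" -> "cat" was seen twice
print(connections["the"]["cat"])    # 2
```

Repeatedly verifying counts like this, tweaking the algorithm, and re-running is exactly the "back to 1" loop above.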