In programming languages it's somewhat similar. We think in terms of state, and the more explicitly that state is defined, the better we can convey our ideas to others. Verifying correct behaviour likewise relies on checking states as the program moves from one point to another.
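As a toy illustration of what that explicit state verification might look like (the state names below are invented, not taken from any real system):

```python
# A minimal sketch of explicit state verification: every transition is
# checked against a table of allowed moves, so incorrect behaviour is
# caught the moment the state changes. State names are illustrative.
ALLOWED = {
    "idle":       {"connecting"},
    "connecting": {"connected", "idle"},
    "connected":  {"idle"},
}

class Connection:
    def __init__(self):
        self.state = "idle"

    def transition(self, new_state):
        # Verification from one point to another: fail loudly on an
        # illegal move instead of drifting into an undefined state.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

conn = Connection()
conn.transition("connecting")
conn.transition("connected")
conn.transition("idle")
```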
This may indicate that we're not very apt at designing systems that are in constant motion. Put differently: we're not very good at perceiving and describing complex motions or actions without explicitly picking out individual states within that motion and then inferring the forces that push things from one state to the next.
The human brain grows very quickly after the embryonic stage; at certain stages neurons are created at a rate of 225,000 per minute. The build-up of the body is regulated by the genes: the genotype determines the blueprint, the schedule (in time), and the ways in which the body can develop. The phenotype is the actual result, the causal product of genotype and environment.
People often reason about computers in a state A -> state B kind of way, always referring to certain states, or from one verified behaviour to the next. When I think of true artificial intelligence, I don't mean merely changing the factors or data (the analogue of neuronal connections, neuron strengths, interconnections, inhibitions and their relationships), but the ability to grow a new network from a blueprint.
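A rough sketch of that distinction, using an entirely hypothetical encoding: instead of mutating values inside a fixed graph, the genotype below is a small program of growth instructions (an L-system-like simplification of my own) that is expanded into a network, so mutating a rule changes the grown structure itself.

```python
import random

# A toy "developmental" encoding: the genotype is not the network but a
# program of growth instructions. Expanding the same genotype always
# yields the same network; mutating it yields a different *structure*.
def grow(genotype):
    edges = []
    stack = [0]          # nodes we can still grow from
    next_id = 1
    for op in genotype:
        if not stack:
            break
        parent = stack[-1]
        if op == "B":    # branch: add a child node and descend into it
            edges.append((parent, next_id))
            stack.append(next_id)
            next_id += 1
        elif op == "U":  # up: return to the parent's context
            stack.pop()
    return edges

def mutate(genotype, rate=0.2):
    return [random.choice("BU") if random.random() < rate else g
            for g in genotype]

genotype = list("BBUBUB")
print(grow(genotype))            # the grown structure
print(grow(mutate(genotype)))    # a structurally different offspring
```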
Turning to evolutionary computing, the question isn't so much how to develop a new program as how to design a contextual algorithm devoid of data, which is then used as a model into which factors are loaded. Assuming the model itself functions correctly, the data is modified until it approximates the desired result. A heuristic function guides this, allowing "generations of data" to become consistently better.
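A minimal sketch of that generational loop, assuming the "model" is just a function with free parameters and the heuristic is distance to a target (both chosen here purely for illustration):

```python
import random

# The algorithm (the model) stays fixed; only the data (candidate
# parameter vectors) is modified, generation after generation, until it
# approximates the desired result. Target and fitness are illustrative.
TARGET = [3.0, -1.5, 0.5]

def fitness(candidate):
    # Heuristic: negative squared distance to the desired result.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, sigma=0.1):
    return [c + random.gauss(0, sigma) for c in candidate]

population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                        # selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(15)]     # variation

best = max(population, key=fitness)
print(best, fitness(best))
```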
Fluid intelligence is a term from psychology: a measure of the ability to derive order from chaos and solve new problems. Crystallized intelligence is the ability to use skills, knowledge and experience. Although the analogy isn't one-to-one, a Genetic Algorithm hints at crystallized intelligence. Neural networks, for example, do not generally reconstruct themselves into a new order; they keep their structure the same and only modify their weights. This eventually limits the system if that initial structure was not appropriately chosen.
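To make the contrast concrete, a deliberately tiny, hypothetical setup: the topology below (2 inputs, 2 hidden units, 1 output) is frozen, and evolution may only perturb the weights, so no amount of mutation will ever give the network a shape it didn't start with.

```python
import math, random

# The structure is fixed up front; variation touches only the weights.
# If this shape is a poor fit for the problem, weight mutation alone
# cannot repair that initial choice.
def forward(weights, x):
    w_ih, w_ho = weights
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_ih]
    return sum(w * h for w, h in zip(w_ho, hidden))

def random_weights():
    return ([[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)],
            [random.uniform(-1, 1) for _ in range(2)])

def mutate(weights, sigma=0.05):
    w_ih, w_ho = weights
    return ([[w + random.gauss(0, sigma) for w in row] for row in w_ih],
            [w + random.gauss(0, sigma) for w in w_ho])

w = random_weights()
print(forward(w, [1.0, -1.0]))
print(forward(mutate(w), [1.0, -1.0]))   # same shape, nudged weights
```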
The human body works differently. It doesn't start from an existing structure; it builds that structure using the genes. Those genes mutate or differ between individuals, causing differences in our phenotype (appearance). The brain creates more neurons than it needs and eventually sweeps away connections and neurons that go unused (and are thus not needed). Techniques in A.I. that do something similar exist, such as network pruning, but I haven't yet come across techniques to fuse structures together using a range of "program specifiers".
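Magnitude-based pruning is one such existing technique, loosely analogous to synaptic pruning: connections that carry almost no signal are swept away. A sketch (the keep-fraction is an arbitrary choice):

```python
# Magnitude pruning, loosely analogous to the brain's synaptic pruning:
# overproduce connections, then zero out the ones with the smallest
# absolute weight. The keep_fraction threshold is arbitrary.
def prune(weights, keep_fraction=0.9):
    flat = sorted(abs(w) for row in weights for w in row)
    cutoff = flat[int(len(flat) * (1 - keep_fraction))]
    return [[w if abs(w) >= cutoff else 0.0 for w in row] for row in weights]

weights = [[0.8, -0.01, 0.3], [0.02, -0.9, 0.5]]
print(prune(weights, keep_fraction=0.5))
# -> [[0.8, 0.0, 0.0], [0.0, -0.9, 0.5]]
```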
The genes also seem to have some concept of timing. They know exactly when to activate and when to turn off. There's a great deal of chemical interaction between different entities: a system of signals that causes other systems to react.
You could compare the signalling system to an operating system (a toy sketch follows this list):
- Messages are amino acids, hormones and other chemicals.
- The organs would then be the specific substructures of the kernel, each having a specific task, making sense of the environment by interfacing with their 'hardware'.
- By growing new cells according to the structure laid out in the genes (reacting to messages), the kernel could then start constructing new 'software modules' (higher-level capabilities like vision and hearing), device drivers (transducers and muscles), and so on, possibly layered on top of one another.
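A toy version of that signalling idea (all message names and handlers below are invented for illustration): 'organs' subscribe to chemical messages, and emitting one signal causes the subscribed subsystems to react.

```python
# A toy signalling system in the OS analogy: subsystems register for
# chemical 'messages', and one system's signal triggers other systems'
# reactions. All names here are invented for illustration.
handlers = {}

def subscribe(message, handler):
    handlers.setdefault(message, []).append(handler)

def emit(message, payload):
    for handler in handlers.get(message, []):
        handler(payload)

subscribe("growth_hormone", lambda level: print(f"muscle: grow (level {level})"))
subscribe("growth_hormone", lambda level: print(f"bone: lengthen (level {level})"))
subscribe("adrenaline", lambda level: print(f"heart: rate up (level {level})"))

emit("growth_hormone", 0.7)   # one signal, several reacting subsystems
```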
It'd be good to find out whether there are techniques to prescribe the development of a certain piece of software. Is a computer, given its hardware architecture, in any way capable of supporting such an evolutionary model? Kernels are pretty clever nowadays; together with the hardware, they can detect memory violations and replace or remove processes and drivers when needed. They cannot, however, regulate the internal state of hardware once it's in an incorrect state, unless specific functions exist for that.
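A stripped-down sketch of that process-level half (the flaky worker is a stand-in): the supervisor can detect a fault and replace the worker, but it has no view into the internal state of any device behind it.

```python
import time

# A stripped-down supervisor: detect that a 'process' misbehaved and
# replace it with a fresh instance. This works at the process boundary;
# it cannot repair the internal state of hardware behind a driver.
def flaky_worker(step):
    if step == 3:
        raise RuntimeError("memory violation")   # simulated fault
    print(f"worker ok at step {step}")

step = 0
restarts = 0
while step < 6 and restarts < 3:
    try:
        flaky_worker(step)
    except RuntimeError as err:
        restarts += 1
        print(f"supervisor: {err}; restarting worker ({restarts})")
    step += 1   # move past the bad step, as a fresh restart would
    time.sleep(0.01)
```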
The other very important observation is that there's no better heuristic for evolutionary evaluation than nature. It's there, it seems to promote balance, and it both nurtures species and threatens them. Without a comparably diverse system (heuristic) for an evolutionary computer, any hope of developing a smarter machine seems almost hopeless. If we assume that nature itself is 'external' to (but also part of) an organism, then we could likewise perceive such a computer as a black box and assess its functioning that way. This would allow us to withdraw from the internal complexities (the state->state verification) and assess its functioning differently.
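In code terms, black-box assessment means scoring only what the system does, never inspecting how it does it. The environment and agents below are placeholders:

```python
import random

# Black-box evaluation: the agent is an opaque callable; we score only
# its observable behaviour in an environment, never its internal state.
# The environment's hidden rule is a placeholder task.
def evaluate(agent, episodes=100):
    reward = 0
    for _ in range(episodes):
        observation = random.randint(0, 9)
        action = agent(observation)           # no peeking inside the agent
        if action == (observation % 2):       # environment's hidden rule
            reward += 1
    return reward / episodes

# Two opaque agents; we compare only their behaviour, not their internals.
print(evaluate(lambda obs: obs % 2))   # 1.0
print(evaluate(lambda obs: 0))         # ~0.5
```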
Emergent, efficient behaviours that promote survivability in the situation should be stimulated. The problem with this approach is that the system needed to evaluate the performance of the agent could well be more complex than the agent it's meant to evolve. Status quo?