- The energy equation between a biological network and a mechanical network should more or less hold within a certain range; some research should be done into whether this equation needs a correction factor.
- There shouldn't be an unroll of the functions that are called as part of the symbolic network
- There should not be an expectation of an output state/measurement (the network *is* the state and is always in modification)
You can see that some areas of the network are totally unused, whilst others display high states of activity. Of course, it is very important to assess the brilliance factor and brilliance degradation/fallout (the time it takes for the brilliance to decrease) within the context of this picture.
The brilliance is basically activation of neighboring nodes. So thinking about one concept can also easily trigger other concepts. The "thread" of a certain context would basically guide the correct activation path.
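One way to make the brilliance idea concrete is a minimal sketch, assuming brilliance decays exponentially over time and that a firing concept passes a fraction of its brilliance on to its neighbors. The decay rate, spillover fraction and function names below are all my own invented placeholders, not anything fixed by the text:

```python
import math

# Hypothetical sketch: brilliance as an activation level that decays
# over time (the "fallout") and partially spills over to neighbors.
DECAY_RATE = 0.5   # assumed exponential fallout rate per time step
SPILLOVER = 0.3    # assumed fraction of brilliance passed to a neighbor

def brilliance_after(initial, steps):
    """Brilliance left after `steps` time steps of exponential decay."""
    return initial * math.exp(-DECAY_RATE * steps)

def neighbor_trigger(brilliance, association_strength):
    """Activation a neighboring concept receives when this one fires."""
    return brilliance * SPILLOVER * association_strength

print(round(brilliance_after(1.0, 2), 3))
print(round(neighbor_trigger(1.0, 0.8), 3))
```

The "thread" of a context would then act as a bias on which neighbors actually receive that spillover.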
I imagine a kind of network of symbols that are interconnected as the following picture:
The "kind-of" association is not shown here, because I'm not sure it really matters at this point. The "kind-of" association can itself also be associated with a concept; that is, the "kind-of" can be an ellipse itself. So there is some loss of information in the above diagram, but that loss is not being considered at this time.
You can see that concepts are shared between other concepts to form a very complicated mesh network. It's no longer ordered in layers. If you consider the strength of an association (how strongly you associate something with something else) as the line that is between them, then I could ask you: "What do you think about when I mention exhaust gas?" Your response could then be car or bus. The lines thus represent associations between concepts.
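The mesh can be sketched as a weighted graph, with edge weights standing in for association strengths. The concepts and numbers below are illustrative placeholders, and `strongest_associations` is a hypothetical helper answering the "exhaust gas" question from the weights:

```python
# A minimal sketch of the concept mesh as a weighted graph.
# Edge weights are assumed association strengths in [0, 1];
# the values here are invented for illustration.
associations = {
    "exhaust gas": {"car": 0.9, "bus": 0.7},
    "car": {"exhaust gas": 0.9, "wheels": 0.8, "bus": 0.4},
    "bus": {"exhaust gas": 0.7, "wheels": 0.8, "car": 0.4},
    "wheels": {"car": 0.8, "bus": 0.8},
}

def strongest_associations(concept, top=2):
    """Answer 'What do you think about when I mention <concept>?'"""
    neighbors = associations.get(concept, {})
    return sorted(neighbors, key=neighbors.get, reverse=True)[:top]

print(strongest_associations("exhaust gas"))  # ['car', 'bus']
```

Note that the graph is not layered: "wheels" is reachable from both "car" and "bus", exactly the sharing described above.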
Wheels are known to both the concept car and the concept bus. Also notice that this network is very simple. As soon as you gain expert knowledge in a topic, this network will eventually split up into sub-topics, with expert knowledge about specific kinds of wheels, specific kinds of buses and specific kinds of cars and how they relate to one another. Generally, we distinguish my car from other cars, which is one example of a topic split. This statement about expert knowledge is derived from watching my little nephew look at his book. For him, a motorbike, a bus, a cabriolet, a VW and things that look similar are all cars at this point in time. Later on, he'll recognize the differences and store them in memory (which is an interesting statement to make, as it indicates that this network is both a logical representation and association, but also memory).
The connections in this kind of symbol network can still be compared to dendrites and synapses. The strength of an association of one concept with another is exactly that.
Now, if you consider that you are reading a story and have certain associations, you can also imagine that these concepts "fire" and are added to a list of recently activated symbols. Those symbols together form part of the story, and the strength of their activation (through the synapse strength, their associations with other topics and a host of other factors, basically what the network has learned) will in different contexts slightly change how the gist of that story is remembered.
If you store the gist of this list (produced by a certain paragraph or document), it should be possible to compare it with other gists through some clever mathematical functions, so that the gist of one document can be compared with that of others. Gists are also a way of reducing storage detail, keeping it in a much more compressed form.
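As a sketch of what such a "clever mathematical function" might look like, a gist could be stored as a mapping from concepts to activation strengths and compared with cosine similarity. This is just one candidate function, and the documents and numbers are invented for illustration:

```python
import math

# Hedged sketch: a "gist" as a concept -> activation mapping, compared
# with cosine similarity. All values below are illustrative.
def cosine_similarity(gist_a, gist_b):
    concepts = set(gist_a) | set(gist_b)
    dot = sum(gist_a.get(c, 0.0) * gist_b.get(c, 0.0) for c in concepts)
    norm_a = math.sqrt(sum(v * v for v in gist_a.values()))
    norm_b = math.sqrt(sum(v * v for v in gist_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

doc1 = {"car": 0.9, "exhaust gas": 0.6, "wheels": 0.3}
doc2 = {"bus": 0.8, "exhaust gas": 0.5, "wheels": 0.4}
print(round(cosine_similarity(doc1, doc2), 3))
```

The compression aspect falls out naturally: only the handful of strongly activated concepts need to be stored, not the full text.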
Consider the final picture in this post:
It shows a simple diagram of, for example, what could be a very short children's story (well, we're basically discussing this text at that level). Dad goes home in his car and enters the house. He sits on the couch and watches the telly. If you remove the verbs from these statements, you'll end up with a small network of symbols that have some relation to one another. I feel hesitant to jot down the relationships between them in this network of symbols. I'd rather add some layer on top of these symbols that manipulates the path that a certain story or context takes. So the concepts are always somehow related, but the thread of a story eventually determines how the concepts really relate to one another. The thread therefore manipulates the symbolic network in different ways.
So... what about a design for an implementation? In game design, even when it was still 2D, designers already started with large lists of events and lists of nodes, for path finding for example. Between frames, these lists were rebuilt and re-used to update the AI or action. Those design patterns should be reusable in this context:
- Start by reducing the text to its nouns only
- Process the nouns one by one
- For each noun:
  - Reduce the activation factor of the concepts currently in the activation list
  - Apply the synapse factor to the current noun
  - Add the concept to the activation list
  - Add the related concepts that are connected to the currently processed concept to the list as well, with their activation reduced by the synapse factor
- Take an inventory of the most highly activated concepts in the list
- Store the gist list to describe the text
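The steps above can be sketched roughly as follows. It assumes the nouns have already been extracted, and the network, decay and spread factors are all invented placeholders, not a definitive implementation:

```python
# A sketch of the activation-list algorithm above. Nouns are assumed
# to be extracted already; the network and all factors are invented
# for illustration.
DECAY = 0.8    # assumed per-noun decay of existing activations
SPREAD = 0.5   # assumed reduction applied to related concepts

# concept -> {related concept: synapse factor}
network = {
    "dad": {"house": 0.6, "car": 0.5},
    "car": {"wheels": 0.7, "house": 0.3},
    "house": {"couch": 0.6},
    "couch": {"tele": 0.5},
    "tele": {},
    "wheels": {},
}

def gist(nouns, top=3):
    activation = {}
    for noun in nouns:
        # Reduce the activation factor of concepts already in the list.
        for concept in activation:
            activation[concept] *= DECAY
        # Apply the synapse factor and add the concept itself.
        activation[noun] = activation.get(noun, 0.0) + 1.0
        # Add related concepts with a reduced activation factor.
        for related, synapse in network.get(noun, {}).items():
            activation[related] = activation.get(related, 0.0) + SPREAD * synapse
    # An inventory of the highest activated concepts describes the text.
    return sorted(activation, key=activation.get, reverse=True)[:top]

print(gist(["dad", "car", "house", "couch", "tele"]))
```

With these particular factors, recent concepts dominate the gist, which matches the frame-by-frame rebuild pattern borrowed from game design: the list is cheap to update incrementally as each noun arrives.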
So the motivation and thread of a story are something entirely different from its concepts. Should this be part of the network? In a way, I think it should be possible to think of it as a layered network above the symbolic network: a different kind of representation, with links to the other network, that describes actions and the objects that are acted upon.