One of my main interests is understanding the dynamics within and between neurons that allow them to tune the behaviour of an organism in ways that serve that organism at a higher level. In previous posts, I discussed a mechanism by which these neurons are excited by neurotransmitters released by a pre-synaptic neuron, mostly through the regulation of ionic fluxes (currents) through ion channels in the membrane.
The complication with these channels is that they are not linear in nature: the opening of some channels drives the opening (or inhibition) of others, so the response grows exponentially rather than proportionally.
Although these sound like great mechanisms as a basis for intelligence, they are only the basis for transmitting a signal from one end to the other. Moreover, the channels do not all behave the same way, because each one stays open for a different amount of time. The voltage being measured is therefore the result of the sum of many channels opening and generating some current. Each channel by itself can be associated with a certain probability of opening or closing, and of staying open for a certain amount of time. Only at a macro level, when you assume, say, 1,000 channels, does the behaviour start to follow a binomial or similar probability distribution (which is also what generates the nonlinear behaviour at the macro level).
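To make that concrete, here is a minimal sketch (in Python, with made-up channel counts, open probabilities and unitary currents) of how individually random channels add up to a macroscopic current that follows a binomial distribution and looks smoother as the number of channels grows:

# Minimal sketch (not from the post): each channel opens independently with
# probability p_open, so the number of open channels out of N follows a
# binomial distribution. The macroscopic current is the sum of many tiny,
# noisy unitary currents; with N large it looks smooth and deterministic.
import numpy as np

rng = np.random.default_rng(0)

def macroscopic_current(n_channels, p_open, unitary_current=1.5e-12, n_trials=10000):
    """Simulate the summed current (amperes) of n_channels stochastic channels."""
    open_counts = rng.binomial(n_channels, p_open, size=n_trials)
    return open_counts * unitary_current

for n in (10, 100, 1000):
    i = macroscopic_current(n, p_open=0.3)
    # The relative fluctuation shrinks roughly as 1/sqrt(n): the macro-level
    # signal looks deterministic even though each channel is a coin flip.
    print(n, i.mean(), i.std() / i.mean())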
So the fundamentals of neuroscience provide a slightly more complete picture of the dynamics at work. While the ion channels provide a mechanism for propagation, summation and integration of signals, that does not yet explain how such integration and propagation become useful. The synapse is still a very mysterious field of research, with mostly speculation. It is still very much an open question whether the famous Hebbian rule of "fire together, wire together" is a reality, or more of a probabilistic phenomenon observed most of the time (where the real dynamics are slightly more complicated).
Interesting discussions of the integrative power of neurons can be found in how the axons of several pre-synaptic neurons connect onto the dendritic tree of a post-synaptic one. Most neural models connect the axon more or less directly to the cell body, thereby only allowing summation and integration of signals at the level of the soma, where all signals come together. The branches in the dendritic tree, however, may already perform a couple of integrations before the voltage reaches (or fails to reach) the ion channels near the soma, where the action potential is presumably actually triggered and propagated.
The result is that some signals, due to their proximity, can lead to very different outcomes when they are added together at a local level rather than at a global level. The underlying reason is that an inhibitory signal can entirely cancel out another signal of a different magnitude, so the result is not necessarily what a plain sum would give. A good way to see these effects is with a simple formula. Integration at the level of the soma would equate to something like:
Y = A - B + C + D - E
But integration at branching points in the dendritic tree can cancel out any prior summation of positive signals just by applying a signal of similar magnitude at a point further up the branch, so you could get:
( C + D - F - G + H + K + L - M - ....... whatever .... ) - E = 0
Y = A - B
The difference is that the placement of connections in the dendritic tree matters, and that only a certain range is available for excitation or inhibition. So besides 'just' integrating signals, this suggests that certain more or less 'logical' operations may also exist, which can be very useful for more complex separations of data and signals. It must be said that this tree is not necessarily strictly "logical": effects have been noted where activations at the remotest end of the dendritic tree increase or decrease the action potential some 200-300 µm into the axon. So the real effects of how neurons connect to one another's trees are still food for thought.
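As an illustration only, here is a rough sketch of the difference between global summation at the soma and branch-wise summation. The rectifying nonlinearity at each branch point is my own stand-in for the local cancellation effect, not a biophysical model, and all the numbers are made up:

# Hypothetical sketch: compare summing every input at the soma with summing
# per branch, where a large inhibitory input at a branch point can wipe out
# the whole subtree's contribution before it ever reaches the soma.

def global_sum(inputs):
    """Classic point-neuron view: everything is simply added at the soma."""
    return sum(inputs)

def branch_sum(tree):
    """Each branch integrates its children first; a branch whose local sum is
    driven to zero or below contributes nothing further up the tree."""
    total = 0.0
    for node in tree:
        value = branch_sum(node) if isinstance(node, (list, tuple)) else node
        total += value
    return max(total, 0.0)  # assumed rectification standing in for local cancellation

# Inputs echoing the post's example: A - B plus a branch (C + D) that is
# cancelled locally by a strong inhibitory input -E attached to that branch.
A, B, C, D, E = 2.0, 1.0, 3.0, 4.0, 10.0

print(global_sum([A, -B, C, D, -E]))    # -2.0: globally, E just subtracts its magnitude
print(branch_sum([A, -B, [C, D, -E]]))  #  1.0: the branch is silenced locally, leaving A - B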
All this is speculation of course. The most useful thing a neural algorithm needs is a better learning algorithm. Most learning algorithms in classical neural nets are back-propagating ones: you measure the output, calculate the error, then back-propagate that error through the network to follow some kind of gradient. You need the derivatives of the neuron activation functions for this to work.
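For reference, here is a bare-bones sketch of that back-propagation step, with a single hidden layer, sigmoid activations and made-up data (all sizes and learning rates are arbitrary):

# Forward pass, error at the output, then the error pushed back through the
# derivative of the activation function to get weight gradients.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = rng.normal(size=(4, 3))          # 4 samples, 3 input features
t = rng.normal(size=(4, 1))          # target outputs
W1 = rng.normal(size=(3, 5)) * 0.1   # input -> hidden weights
W2 = rng.normal(size=(5, 1)) * 0.1   # hidden -> output weights
lr = 0.1

for _ in range(100):
    h = sigmoid(x @ W1)              # forward pass
    y = h @ W2
    err = y - t                      # error measured at the output...
    grad_W2 = h.T @ err              # ...then propagated backwards,
    grad_h = err @ W2.T              # layer by layer,
    grad_W1 = x.T @ (grad_h * h * (1.0 - h))   # using sigmoid'(z) = h * (1 - h)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2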
If you wanted this to work in nature, you would need a second brain to calculate the error and then apply it inside the network. Hardly plausible. Somewhere a mechanism is at work that has only local knowledge and adjusts the synaptic efficacy accordingly. These adjustments take a certain amount of time and are heavily regulated by long-term potentiation (a persistent strengthening of synapses that makes a neuron more responsive over a longer period of time). It seems that, all in all, there are certain time windows in which a neuron starts to synthesize proteins that eventually modify its behaviour, change its structure or otherwise influence the synaptic gaps (by producing more synaptic vesicles, for example).
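By way of contrast with back-propagation, here is a minimal sketch of a purely local rule (Oja's variant of Hebb's rule); the inputs are made up, and it is only meant to show that the update uses nothing but quantities available at the synapse itself:

# The weight change depends only on the pre-synaptic activity, the
# post-synaptic activity and the current weight: no global error signal
# has to be routed back into the network.
import numpy as np

rng = np.random.default_rng(2)

w = rng.normal(size=3) * 0.1   # synaptic weights onto one post-synaptic neuron
lr = 0.01

for _ in range(1000):
    pre = rng.normal(size=3)   # pre-synaptic firing (made-up input)
    post = float(w @ pre)      # post-synaptic response
    # "Fire together, wire together", with a decay term that keeps the
    # weights bounded instead of letting them grow without limit.
    w += lr * post * (pre - post * w)

print(w)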