Thursday, October 30, 2008

Gene expression as the process for building programs

Gene expression is the process that eventually leads to the production of a complicated protein molecule. Each protein looks slightly different and has a different role in the human body. When and where proteins are produced is encoded in the genes. Basically, transcripts (RNAs) are created from the DNA, which could be viewed as part of the blueprint in reverse form, and from these transcripts the proteins are built in combination with other processes (regulators). Eventually, the protein is assembled and starts executing its 'designed' function. Some biologists are now working on reverse-engineering this process (reverse-engineering the construction of biological processes, as you could call it) back to the programming contained in the DNA.
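
As a toy illustration of that transcription/translation pipeline (with a deliberately tiny, made-up codon table rather than the real 64-codon one):

```python
# Toy central-dogma pipeline: DNA -> RNA transcript -> protein.
# The codon table below is a tiny invented subset, for illustration only.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna):
    """DNA template strand -> RNA transcript (complement, T replaced by U)."""
    complement = {"A": "U", "T": "A", "C": "G", "G": "C"}
    return "".join(complement[base] for base in dna)

def translate(rna):
    """RNA transcript -> chain of amino acids, read codon by codon until STOP."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

dna = "TACAAACCGATT"             # template strand
rna = transcribe(dna)            # -> "AUGUUUGGCUAA"
print(rna, translate(rna))       # -> ['Met', 'Phe', 'Gly']
```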

To call DNA the 'building blocks' of life is thus a bit of a misnomer. It's a very large blueprint, or rather an information store. I then think of proteins as agents, constructed through the translation of instructions (their purpose) from the RNA transcript. Whereas DNA is just a big information store, the proteins actively carry out the duties laid out in that instruction set. These duties vary significantly. Some proteins help in cell construction; others act as information carriers, carrying messages from one organ or part of the body to another, where they meet other proteins (called receptors), causing a biochemical response in that cell, which in turn causes another biochemical reaction that can change our behaviour.

The timing of the construction of certain cells (body development) is contained in the DNA. The DNA ensures that certain parts of the blueprint are released at the desired time for correct development and functioning. It's difficult not to be in awe of the entire design of life, and of how the relatively simple function of one element, in combination with other not-so-complex functions, eventually leads to emergent intelligent behaviour, or rather a biologically balanced system.

One of the challenges in biology is to discover where a certain protein, having a certain function, was effectively coded in the DNA. Thus... what did the information look like in the DNA structure that caused a certain protein to have its shape, size and function? Reverse-engineering that process will eventually lead to a much greater understanding of DNA itself. At the moment, this reverse-engineering is mostly done by comparing DNA strands of individuals that have slightly different features, and then guessing where those differences are 'kept' in the blueprint. Although this is useful, it only gives indications of what the sequence should be to produce that particular feature; it cannot yet be used to induce a feature that is different from both features observed.

For computer programs built through gene expression, the challenge is even greater. There is no DNA yet for programs, from which programs can be written. I really doubt whether it should lead to programs 'as we know them' (thus, a single DNA feature mapping to one specific 'rule' or bytecode).

Imagine an execution environment in which a neural network could be executed. If the DNA contains instructions to form neurons and synapses, then the resulting network is going to be radically different from any NN design we know today. If proteins govern the construction of the network and its size, then the execution environment itself can monitor available memory and take appropriate steps to regulate the proteins and the network in such a way that it gives the best accuracy and yield (function?). The environment would thus implement part of the natural-selection algorithm itself.
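
A minimal sketch of that idea, assuming an invented genome format (genes that either grow a neuron or wire a synapse) and an arbitrary memory budget standing in for what the execution environment would monitor:

```python
import random

# Hypothetical genome: a list of genes, each either growing a neuron or
# connecting two existing neurons with a weighted synapse. This is an
# illustrative sketch, not a real developmental model.
MEMORY_BUDGET = 50            # arbitrary cap standing in for available memory
NEURON_COST, SYNAPSE_COST = 3, 1

def express(genome):
    neurons, synapses, used = [], [], 0
    for gene in genome:
        kind = gene[0]
        cost = NEURON_COST if kind == "neuron" else SYNAPSE_COST
        if used + cost > MEMORY_BUDGET:
            break                       # the environment halts further expression
        if kind == "neuron":
            neurons.append(len(neurons))
        elif kind == "synapse" and len(neurons) >= 2:
            src, dst = random.sample(neurons, 2)
            synapses.append((src, dst, gene[1]))
        used += cost
    return neurons, synapses

genome = [("neuron",)] * 10 + [("synapse", random.uniform(-1, 1)) for _ in range(40)]
neurons, synapses = express(genome)
print(len(neurons), "neurons,", len(synapses), "synapses grown within budget")
```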

The problem always remains the construction of code, or 'function'. The function contained in a neural network will generally be constrained by the pre-programming of the environment itself. That is, the execution environment will be programmed to carry out certain functions, but the environment itself cannot 'self-innovate' and evolve new functions over time. In other words, the functions a program could ever develop are those that are emergent from the simple functions defined in the execution environment.

Nevertheless, can such an environment with only a couple of pre-programmed capabilities lead to new insights and interesting scientific results?

Consider the following analogy: "nature" is the execution environment of the world around us. We are complex biological life-forms that rely on this 'execution environment' to act. In this sense, 'nature' is an abstract form, all around us, not represented in concrete form. Our biological processes allow us to perceive events from material elements around us (other humans, cars, houses, etc.). We can see the representation, hear it, touch it or otherwise interact with it.

Similarly, in the execution environment in the computer, we can give a program developed by a gene expression a "world" or a "material representation". It'll be entirely abstract as in bits and bytes, but that doesn't necessarily matter.

We believe the world itself is real because we experience it as consistent and always there. But if you've seen "The Matrix" (which, by the way, I don't believe is real :), then you can easily understand my point that experiencing something, or being conscious of something, doesn't necessarily mean that it has to be real 'as we know it'.

Back to the point: if the program doesn't know any better, it'll just be aware of its own world. That world consists of the inputs from the mouse, the microphone, the internet, the keyboard and so on. The outputs available to it are the video card (thus the screen), the speakers and, again, the internet. Following that pattern, the program could theoretically interface with us directly, reacting to real-world inputs, but always indirectly through a proxy, and providing its feedback just as indirectly. It's as if the program always wears VR goggles and doesn't know any better, while we can see the effects of its reasoning on screen or through other outputs.
  • Enormous simplification of nature (biological processes) == execution environment
  • Material objects == "modified" input/output hardware channels
Of course... one needs to start with the design for the execution environment in the first place :).
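
To make the 'VR-goggles' picture a bit more concrete, here is a toy sketch of a program that only ever sees proxied input and output streams; the byte-reversing 'brain' is just a placeholder, not a proposal for the evolved logic itself:

```python
import sys

def agent(perceive, act):
    """A program that knows nothing beyond its proxied senses.

    `perceive` yields chunks of bytes (keyboard, microphone, network...),
    `act` consumes bytes (screen, speakers, network...). The agent never
    touches the real devices directly; the environment mediates everything.
    """
    for observation in perceive():
        reaction = bytes(reversed(observation))    # placeholder 'reasoning'
        act(reaction)

def perceive():
    # Proxy for the outside world: here, simply lines typed on stdin.
    for line in sys.stdin.buffer:
        yield line.strip()

def act(data):
    # Proxy for the program's only window back to us: stdout.
    sys.stdout.buffer.write(data + b"\n")
    sys.stdout.flush()

if __name__ == "__main__":
    agent(perceive, act)
```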

Wednesday, October 22, 2008

Genetic programming

The main philosophy behind the previous article was that genetic algorithms do not modify the structure of a computer program, but only the contents that the program uses. A program in this case is a specific design, for a neural network among other things.

The same article hinted at the assumption that we're inclined to think in states, not in dynamics, and that we're only able to reason "perfectly" using explicitly defined states with clear boundaries and attributes. The idea of evolving a program's structure, as in the previous post, has been suggested before, but not researched to a great extent. The interesting line of thought in those articles is that the program itself is something that evolves, not just the parameters that control it.

Possibly, it's just a philosophical discussion of where to draw the boundaries. The computer itself, as hardware, doesn't evolve from nothingness, and the next thing engineers will claim is that it should be built 'organically' in order to come up with 'organs' (hardware elements and devices) more suited to their task.

So, having said that, is there any use in laying this boundary closer to the hardware? We'll need to define a correct boundary, representing the boundary between the individual or organism and its surroundings. In this comparison, the program becomes the evolutionary organism and the hardware is the environment. The hardware then becomes responsible for evaluating the program as it starts to move about the space. Programs that misbehave or are inept in their capacity to perform their intended function should be removed from the space and replaced by a new program taken from the population.

One complexity here is that nature itself is analogue and doesn't necessarily prevent or prohibit invalid combinations. That is, its design is very permissive. Since a computer has been designed by humans based on explicitly defined theories and models, it is not very difficult for an individual to reach an invalid state, thereby halting that individual's progress (it dies) or, in worse cases, halting the environment altogether, requiring a reset. The latter, in analogy with the world around us, would mean that specific mutations in a specific individual on this earth might lead to the immediate termination of us all and a "restart" of planet earth.

So regarding the computer as a suitable environment for running evolutionary programs is a bit far-fetched, given what we have so far required of a computer: to behave consistently and explicitly, according to the rules that govern the execution of a certain program or operating system.

Another problem is that certain hardware is already in place and has been designed according to these explicit hardware designs (not organically grown). For example, a computer keyboard is attached to a computer and requires a device driver to read information from that device. Thus, a keyboard is input to an individual, but it's an organ that is already grown and needs to be attached to the program with all the limitations its design may have. On the other hand, we could also regard the keyboard as some element in nature, for example light or auditory information, the signals of which need to be interpreted by the program in order to process them in the expected way.

Because the computer is not so permissive, it may be difficult to converge on a driver or program that starts to approximate that behaviour. Only a small set of instruction sequences in the gene could lead to a valid program. In nature, by comparison, it is more likely that the organism wouldn't be invalid, just that it would have features that are less advantageous in its environment (unless the end result is the same... invalid == "dead for sure"?).

As complexity progresses, small mutations should eventually converge on an organism that is better suited to deal with its environment, due to natural selection. Since a computer is so explicit about well-behaved programs, and any invalid instruction anywhere might kill the program, it's worth thinking about a design that embeds some knowledge of the environment in which the program operates. For example, insert the program in between inputs and outputs and let it loose within those constraints, rather than allowing it to evolve naturally in between all kinds of input/output ports, hopefully evolving into something useful.

Thus, the biggest challenge here is to find a specific, suitable grammar which can be used to form the program itself, and a way to represent the elements of that grammar as lines of genetic instructions, such that any manipulation of the genetic instructions produces valid processing instructions, never invalid ones. My preference definitely goes out to runtime environments for handling this kind of information, both because the complexity of dealing with the hardware itself is greatly reduced and because the program is able to run in a more controlled environment.
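
This is close in spirit to grammatical evolution: a genome of integers is mapped onto a small grammar, so that any genome, however mutated, decodes to a syntactically valid program. A minimal sketch, with a made-up toy grammar for arithmetic expressions:

```python
import random

# Toy grammar: every production yields a valid arithmetic expression,
# so any integer genome decodes to something the runtime can evaluate.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["-"], ["*"]],
    "<var>":  [["x"], ["1"], ["2"]],
}

def decode(genome, symbol="<expr>", depth=0, cursor=None):
    """Map a genome of integers onto grammar rules (choice = codon mod #rules)."""
    if cursor is None:
        cursor = [0]
    if symbol not in GRAMMAR:
        return symbol                       # terminal symbol
    rules = GRAMMAR[symbol]
    if depth > 6:                           # force termination: shortest rule
        rule = min(rules, key=len)
    else:
        codon = genome[cursor[0] % len(genome)]
        cursor[0] += 1
        rule = rules[codon % len(rules)]
    return "".join(decode(genome, s, depth + 1, cursor) for s in rule)

genome = [random.randrange(256) for _ in range(20)]
expr = decode(genome)
print(expr, "=", eval(expr, {"x": 3}))      # always a valid expression
```

Any mutation of the integer genome still decodes to a well-formed expression; only the phenotype (the expression) changes.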

Another question is how that grammar is compiled. Graph theory? Can we express symbolic reasoning in a way that comes closer to the design of the computer, but which is not as explicit as a rule-based system?

It'd be cool to consider a program running in a JVM which receives a stream of bits from the keyboard and is then evaluated on its ability to send the appropriate letter to an output stream, which directs it to the screen. The challenge here is to find a correct method of representing the 'construction instructions', and in what language and what manner these should be translated into valid code that is able to run on a picky CPU.
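
As a sketch of how such an evaluation loop might score candidates (written in Python rather than for the JVM, with an invented four-key 'scancode' table standing in for real keyboard input):

```python
import random

# Hypothetical scancode table: the 'environment' knows which byte maps to
# which letter; candidate programs have to discover the mapping themselves.
SCANCODES = {0x1E: "a", 0x30: "b", 0x2E: "c", 0x20: "d"}

def fitness(candidate, trials=100):
    """Fraction of keyboard events for which the candidate emits the right letter."""
    correct = 0
    for _ in range(trials):
        code, letter = random.choice(list(SCANCODES.items()))
        if candidate(code) == letter:
            correct += 1
    return correct / trials

# Two toy candidates: one guessing at random, one that happens to know the table.
random_candidate = lambda code: random.choice("abcd")
perfect_candidate = lambda code: SCANCODES[code]

print("random :", fitness(random_candidate))
print("perfect:", fitness(perfect_candidate))
```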

Monday, October 20, 2008

Fluid intelligence & redesigning software engineering

To the left is a little diagram I created, which shows how I believe we make sense of things. The black line is a continuous line representing continuous, physical motion or action. The red lines are stops in between, where we perceive the change. Relevance: a discussion of how we're limited to thinking in terms of state descriptions rather than actual motion or change. We have little trouble deducing A->B->C relationships and then calling that motion, but the motion underlying the path that is taken cannot be properly described (since we describe it as from A to B to C).

In programming languages, it's somewhat similar. We think in terms of state, and the more explicitly that state is determined, the better, considering our ability to convey ideas to others. Also, controlling proper function relies on the verification of states from one point to another.

This may indicate that we're not very apt at designing systems that are in constant motion. Or you could rephrase that as saying that we're not very good at perceiving and describing very complex motions or actions without resorting to explicitly recognizing individual states within that motion and then inferring the forces that push objects or issues from one state to another.

The human brain grows very quickly during embryonic development. At certain stages, neurons are created at a rate of 225,000 per minute. The build-up of the body is regulated by the genes. The genotype determines the blueprint, the schedule (in time) and the way your body can develop. The phenotype is the actual result, arising from the interaction between genotype and environment.

The way people often reason about computers is in a state A -> state B kind of way. It's always making references to certain states, or from verified behaviours to next behaviours. When I think of true artificial intelligence, it doesn't mean just changing the factors or data (compare neuronal connections, neuron strengths, interconnections, inhibitions and their relationships), but the ability to grow a new network from a blueprint.

Turning to evolutionary computing, the question isn't so much how to develop a new program; it's about designing a contextual algorithm, void of data, which is then used as a model into which factors are loaded. Assuming the model's functioning is correct, the data is modified until it approximates the desired result. This could be driven by a heuristic function, allowing "generations of data" to become consistently better.

Fluid intelligence is a term from psychology: a measure of the ability to derive order from chaos and solve new problems. Crystallized intelligence is the ability to use skills, knowledge and experience. Although this cannot be compared one-to-one with genetic algorithms, there's a hint of similarity between a GA and crystallized intelligence. Neural networks, for example, do not generally reconstruct themselves into a new order; they keep their structure the same but modify their weights. This eventually limits the system if that initial structure is not appropriately chosen.
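
A minimal sketch of that 'crystallized' set-up: the network structure below is fixed by hand and the GA only ever touches the weight values; the tiny XOR-like task and all parameter choices are assumptions for illustration.

```python
import math
import random

# Fixed structure: 2 inputs -> 2 hidden -> 1 output (9 weights incl. biases).
# The GA evolves only these weight values, never the structure itself.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Negative squared error: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Offspring are mutated copies of parents; the topology never changes.
    population = parents + [
        [g + random.gauss(0, 0.2) for g in random.choice(parents)]
        for _ in range(40)
    ]
print("best fitness:", round(fitness(max(population, key=fitness)), 3))
```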

The human body works differently. It doesn't start from an existing structure; it builds that structure using the genes. Those genes mutate or differ, causing differences in our phenotype (appearance). The brain creates more neurons than it needs and eventually sweeps away connections and neurons that are not actually used (thus not needed). Techniques in A.I. to do something similar exist, but I haven't yet come across techniques to fuse structures together using a range of "program specifiers".

The genes also seem to have some concept of timing. They know exactly when to activate and when to turn off. There's a great deal of chemical interaction between different entities. It's a system of signals that cause other systems to react.

You could compare the signalling system to an operating system:
  • Messages are amino-acids, hormones and chemicals.
  • The organs would then be the specific substructures of the kernel, each having a specific task, making sense of the environment by interfacing with their 'hardware'.
  • By growing new cells according to the structure laid out in the genes (reaction to messages), the kernel could then start constructing new 'software modules' (higher-level capabilities like vision, hearing), device drivers (transducers & muscles), and so on, possibly layered on top of another.
Thus, function becomes separate from data (messages), and function itself is able to evolve; through the evolution of function, data interchange and data production will change as well. Possibly, the variability of data (types of messages) and its interpretation could change automatically, further regulating this data flow. Would it become more chaotic?
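
A toy sketch of the signalling analogy, assuming an invented publish/subscribe 'kernel' in which modules react to chemical-like messages and can even grow new modules in response:

```python
from collections import defaultdict

class Kernel:
    """Toy signalling system: modules subscribe to message types ('chemicals')."""
    def __init__(self):
        self.receptors = defaultdict(list)

    def subscribe(self, message_type, handler):
        self.receptors[message_type].append(handler)

    def emit(self, message_type, payload):
        for handler in self.receptors[message_type]:
            handler(self, payload)

def liver(kernel, level):
    # Low nutrient level -> release a 'hunger hormone' message.
    if level < 0.3:
        kernel.emit("hunger_hormone", level)

def grow_vision_module(kernel, _payload):
    # Reaction to a growth signal: the kernel gains a new capability.
    kernel.subscribe("light", lambda k, signal: print("seeing:", signal))
    print("vision module grown")

kernel = Kernel()
kernel.subscribe("nutrient_level", liver)
kernel.subscribe("hunger_hormone", lambda k, lvl: print("hungry! level =", lvl))
kernel.subscribe("growth_signal", grow_vision_module)

kernel.emit("nutrient_level", 0.2)   # triggers the hunger hormone
kernel.emit("growth_signal", None)   # grows a new 'software module'
kernel.emit("light", "photons")      # the new module now reacts
```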

It'd be good to find out whether there are techniques to prescribe the development of a certain piece of software. Is a computer, given its hardware architecture, in any way capable of supporting such an evolutionary model? Kernels are pretty clever nowadays and, together with the hardware, they can detect memory violations and replace or remove processes or drivers when needed. They cannot, however, regulate the internal state of hardware once it's in an incorrect state, unless specific functions exist for that.

The other very important observation is that there's no better heuristic for evolutionary evaluation than nature. It's there, it (seems to?) promote balance, and it both nurtures species and threatens them. If there's no comparably diverse system (heuristic) for an evolutionary computer, then any hope of developing a smarter machine seems almost futile. If we assume that nature itself is 'external' to (but also part of) an organism, then we could also perceive such a computer as a black box and assess its functioning in that way. This would allow us to withdraw from the internal complexities within (verification of state->state) and assess its functioning differently.

Emergent, efficient behaviours that promote survivability in the situation should be stimulated. The problem with this approach is that the system needed to evaluate the performance of the agent could well be more complex than the agent that needs to evolve. Status quo?

Saturday, October 18, 2008

The invisible weight of being

Preparing for the exams, I'm taking some time off to get away from first-order predicate logic, psychology, exam training and so on. I am getting some interesting thoughts and combinations from reading through the book.

First-order logic introduces the idea of quantification to propositional logic. The latter deals with atomic propositions that can be combined by logical connectives, forming statements of truth. It's the lowest level you can go to in order to make a statement about something. First-order logic extends this with quantification and predication. The difference is that propositional logic can only relate whole propositions to one another. You could compare this with "if A, then B". So the truth of one statement can be tied to the truth of another, but nothing more.

In FOPL, you can make statements like "if everybody can dance, then Fred is a good dance instructor". The difference from the previous kind of statement is that there are verbs included, which are predicates expressing a capability or property of an element, and the elements are quantified through "everybody", "Fred" or "there exists at least one".
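
Formalized, the example might look as follows (the predicate names are my own choice):

```latex
% "If everybody can dance, then Fred is a good dance instructor."
\big( \forall x\, \mathit{CanDance}(x) \big) \rightarrow \mathit{GoodDanceInstructor}(\mathit{Fred})

% Propositional logic, by contrast, can only relate whole propositions:
A \rightarrow B
```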

Now... trying to apply FOPL to our own methods of reasoning about the world, I recognize that we tend to make errors. That is, we don't generally develop a full, exact model of a certain knowledge domain (that is, having each relationship between objects in that world represented by a statement in formal logic), but rather have loose associations between those objects, which are used to reason about them.

The deviations in our ability to reason exactly about things (in more complicated situations) may be due to the inability to measure exactly or with great certainty, but other, more common reasons include cognitive bias.

If you remain in the FOPL world, this would mean that we could develop incorrect predicates on the functioning of the world around us. Consider the following story:

"The teacher says:'People not studying may fail their exams. People that do study, welcome to the class'".

Does the above mean, by exact definition, that people who do not study are not welcome? We could easily infer that from the sentence. If the teacher meant that students who don't study are not welcome, he should have said "People not studying are not welcome here", which he did not. We thus tend to infer additional (incorrect?) knowledge from a statement that only addressed part of the student group, not all of it. We assumed that students who don't study are not welcome, because students who do study were explicitly mentioned and explicitly welcomed.
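
In FOPL terms (with predicate names of my own choosing), the teacher asserted only the first two statements; the third, which we are tempted to infer, is not entailed by them:

```latex
\forall x\,\big(\neg\mathit{Studies}(x) \rightarrow \mathit{MayFail}(x)\big), \qquad
\forall x\,\big(\mathit{Studies}(x) \rightarrow \mathit{Welcome}(x)\big)
\;\not\models\;
\forall x\,\big(\neg\mathit{Studies}(x) \rightarrow \neg\mathit{Welcome}(x)\big)
```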

So we're not consistently reasoning with explicitly declared knowledge. We infer lots of different relationships from experiences around us, which may or may not be correct.

Learning is about making inferences: inferring information by looking at the world and attempting to test assumptions. The question is then not so much how we can test assumptions to be true, but how we develop the assumptions in the first place. Cognitive bias shows that we're not necessarily correct in developing our assumptions, but also that we're not necessarily correct in the execution of our logic, such that we may reach incorrect conclusions even though our underlying knowledge does not change.

The interesting thing about FOPL is that the symbols used for expressing the relationships are simple: negation, implication, quantification, and that's about it. When we use language, it feels as if the verb belongs to the object itself, but in FOPL the action or capability is another associated element, linked through an implication. Since FOPL expresses Boolean relationships, reasoning with uncertainty makes FOPL not immediately useful, unless implications include a measure of certainty. But then we can no longer reason using plain FOPL.

We could also ask how far a computer has the ability to develop a hypothesis, and what techniques exist for hypothesis development. Looking at humans, we have different ways of testing our hypotheses. If hypothesis testing is taken as a goal, then we need to introduce some new logic, which may or may not be true, and test it against our existing knowledge. We need to be sensitive to evidence that refutes the hypothesis as well as evidence that supports it. If the hypothesis is definitely incorrect, there's no need to look further. If the hypothesis is somewhere in between, then we're probably lacking some information or missing an intermediate level that includes other dependencies. Thus... a hypothesis may be successful in that it indicates an indirect relationship between two elements, which can only be investigated further by researching the true relationships that lie between them. A true scientific approach would then set the goal of proving that the relationship exists and, in doing so, attempt to find other relationships with potentially unrelated elements, bring them into the equation and establish sub-goals to verify the truth of each sub-relationship.

It would be very difficult for a computer to find other elements that are contextually related. If first-order predicate logic is a means to describe and lay down predicates about the functioning of the world around us, what tools could we use to reverse-engineer the underlying logic of those rules? Imagine a person who has never received formal education. How different is their perception of the world from that of a person who has? Do they use the same formal knowledge and reasoning methods?

Wednesday, October 08, 2008

A.I.: Modeling reality through supposed (uncertain) associations

If you have an account at Amazon, you may have noticed that somewhere on the screen after you log in, the system produces a list of recommendations of things you may find interesting. This is a little project started by Greg Linden. You could consider this some kind of A.I. engine. The basis of the idea is the assumption/claim that an association exists: when customer A buys books X and Y and customer B buys book X only, customer B may also be interested in book Y. This model can be extended further by logging what has been in the shopping cart at some point in time, since that is probably of interest to a person, whether or not they end up buying it.
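
A minimal sketch of that co-purchase idea (a toy version, not Amazon's actual algorithm): count how often pairs of books appear in the same purchase history, then recommend the strongest co-occurring titles the customer doesn't yet own.

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase histories; a real system would also use carts, wish lists, etc.
purchases = {
    "customer_A": {"book_X", "book_Y", "book_Z"},
    "customer_B": {"book_X"},
    "customer_C": {"book_X", "book_Y"},
}

# Count how often two books are bought by the same customer.
co_occurrence = defaultdict(int)
for owned in purchases.values():
    for a, b in combinations(sorted(owned), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(customer, top_n=3):
    owned = purchases[customer]
    scores = defaultdict(int)
    for book in owned:
        for (a, b), count in co_occurrence.items():
            if a == book and b not in owned:
                scores[b] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("customer_B"))   # -> ['book_Y', 'book_Z']
```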

Does the relationship really exist? Probably in some percentage of cases the relationship is real, but I have bought books for my wife, for example, and since then the engine keeps recommending me books on corporate social responsibility. Although I do find the topic interesting, I'd rather read summaries about it than dive into a 400-page bible describing it :).

But such is life. A computer has very sparse information about online customers to reason with. And once you develop such technology, it's a good thing to shout about it, since it's good marketing. However, the point of this story is not to evaluate the effectiveness of the algorithm or engine behind Amazon's recommendations; it's to show that these A.I. systems are not necessarily that complicated.

The first thing to understand is that modeling this space is all about finding sensible relationships/associations. In Amazon's case, one side is the customer, whom you may be able to profile further. Do you know their age? What is their profession? Are they reading fiction/novels? Are they reading professional books? Where is their IP from? Can you find out if they're behind the firewall of a large company or university? When you send them some material to try them out, did they click your links? Did they then also buy the book? And why? Of course, you wouldn't start by finding out as much as possible; you need to think about which properties of a customer are important and figure out a way to determine them.

At the other end of the spectrum are books, waiting for readers. A book has a category, a total number of pages, a target age group and customer reviews with stars describing its popularity; some are paperbacks, some are always bought once put in the shopping cart, others are removed from it later, and some are clicked on when you send small campaigns to a select customer group. Very soon, the two domains are married together somewhere in the middle, but in many different ways, some of which cannot be analyzed with great certainty.

A little bit of data mining helps here to test the certainty of your hypothesis. The next step is to think of a model in which you put these things together. You could consider using a neural network... but why? That'll work well for data that is more or less similar, but can consumer behaviour really be considered that way?

Other approaches consider production rules. These are not much different from IF-THEN rules, except that you're not processing them in the order in which they are declared in the program. The problem here lies in the fact that you have millions of books that you may be able to match to millions of customers, and testing every possible combination would cost a lot of processing cycles for nothing. So you need some more intelligence to pre-select sets wisely.

The ideal thing would be to develop a system that is perfectly informed. That is, it knows exactly what your interests are at a certain time and tries to match products against those interests. There are two problems here. First, consumer behaviour tells us nobody is going to stop at a website to enter their interests. Second, a customer may not know they're really looking for something until they see it. The second reason is the more interesting one, since it's "impulse" buying to a high degree. Exactly what you'd need.

Well, in case you were expecting a finale where I give you the secret to life, the universe and everything... :)... this is where it ends. There is no final conclusion other than to understand that a server in 2008 cannot have perfect information about you, especially not when you choose to be anonymous and known at the same time.

So... reasoning and dealing with uncertainty it remains. The effectiveness of recommendations is highly dependent (100% dependent, actually) on the relationships and associations that you assume in the model. In Amazon's case, they started with "what customer A bought, customer B might also find interesting", and developed the concept further with wish lists and other information mixed in. That still does not capture interests that arise suddenly, which is generally what happens when changes occur in your life. You may, for example, start buying a house, start a new course in cooking, start a business, or have a colleague who talked about DNA and made it sound really interesting.

Also, chances are that once you've bought books about a subject, and let's say it's technical, you're saturated by that knowledge (or author), and thus your interest wanes. The recommendations you'll see are very likely bound to the same domain, so they are not nearly as effective (except for those who are totally consumed by the subject :).

As a change, you can also attack this from a totally different angle. The information you can build up about your products can be very deep. You could theoretically use consumer behaviour to find out more about your products, rather than applying it to understand your customers better. The idea is to generate intricate networks of associations between your products, then link those associations back to anonymous users later on. The more you know about your products and the hidden associations they may have, the more quickly you can react to anonymous demand. You could also use it not to search for books with a certain term in the title or text, but to find books that are ontologically related to the term.

For example, a customer types "artificial intelligence". It's tempting to show books about A.I., but is that really what the customer is looking for? You could make this into a kind of game. Start with a very generic entry point and quickly zoom in on an "area of interest", which is interconnected with a host of products, books and other types. Then show 5 options that allow the user to browse your space differently, and always show 20 sample products after that. When a user clicks a specific product, a score binds that product to the terms selected (the path) towards it, and the product is shown again to other users following a similar path. The higher a product scores (the more popular it is), the more often it will automatically pop up.
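
A toy sketch of that path-scoring game (the data structures and the simple click-count score are my own assumptions): clicks reinforce the association between a browse path and a product, so later visitors on a similar path see the popular products first.

```python
from collections import defaultdict

# Score table: (browse path, product) -> accumulated clicks.
path_scores = defaultdict(int)

def record_click(path, product):
    """Reinforce the association between the chosen path and the product."""
    path_scores[(tuple(path), product)] += 1

def ranked_products(path, candidates, top_n=20):
    """Show candidates for this path, most-clicked first."""
    key = tuple(path)
    return sorted(candidates,
                  key=lambda p: path_scores[(key, p)],
                  reverse=True)[:top_n]

# Simulated sessions: users on the same path keep clicking the same book.
path = ["artificial intelligence", "machine learning"]
for _ in range(5):
    record_click(path, "book_on_neural_networks")
record_click(path, "book_on_prolog")

print(ranked_products(path, ["book_on_prolog",
                             "book_on_neural_networks",
                             "book_on_statistics"]))
```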

The above model could then be expanded. The idea is that you're not just seeing products that are easily related to the domain of interest, but also ones with less obvious relationships. That allows customers to see things they wouldn't have looked for themselves, and it can pique interest. It's a bit like entering a store without knowing exactly what you want. It's also a bit like searching on the internet. Who knows what you generate if you don't allow the most obvious associations, but only the less obvious ones (which could be part of the heuristics/score).

Just be careful not to get this scenario. :)

Sunday, October 05, 2008

The relationship between rationality and intelligence

Every day, we make decisions on a continuous basis. We've come across many situations before, and thus can reliably estimate a path of best resolution for situations we have already experienced. In other cases, we haven't seen much of a similar situation, but we still develop an opinion or gut feeling and most likely embark on a path towards resolution.

We can call our thoughts and actions rational or not rational. The word rational refers to an ability to fully explain an action, and most likely we'll agree that assumptions are not an acceptable basis for forming one, unless we have data or information to back up those claims. Thus, rationality involves an act or decision that is developed from a calculation over existing experience. Irrational thoughts and actions are the products of assumptions, incomplete data or little experience. You could couple rationality closely with logic, although rationality may be a little broader than logic. Logic requires predicates, and through logic and the knowledge represented in its rules, one can "reason" about the validity of claims, thoughts and actions. However, since logic follows those rules only, whenever knowledge is not embodied within the rules, the system cannot appropriately confirm or deny a specific claim, thought or action.

Intelligence could therefore be seen as the ability to act outside the realms of logic and rationality, on the premise of uncertainty. Intelligent reasoning is the ability to infer new relationships through trial and error, or through 'logical reasoning' with analogous material, developing gut-feel probabilities that another situation will behave in similar ways, or slightly differently, with expectations of how it will differ (although we could be really wrong there).

Induction is the ability to estimate the outcome of a situation based on a set of assumptions, initial states, goals and effects. Deduction is the ability to find out under which conditions a situation came to be. Both are intelligent actions.

A computer is a purely rational machine. It acts within the knowledge it was given, and so far we haven't agreed that computers are really intelligent. Although some systems exist that can perform very specific tasks in a very efficient way, those systems are entirely rational and cannot deduce/induce new knowledge from their surroundings (enrich themselves with new programming).

Rational is also sometimes defined as "void of emotion and bias". This bias is caused by how easily you recall memories of similar situations. Emotionally stronger situations are generally easier to remember (and this is generally for the good). Many times, we over-compensate for risks related to explosions, accidents or attacks, more than is needed to appropriately reduce the risk. Some academic research is highly biased because the author wanted to find evidence that his claims are true, rather than remaining open to contradictory results. Rational reasoning thus requires us to eliminate the bias, not be guided by opinion, but rely on facts and computation to come to a conclusion.

The following text is related to power and rationality:

http://flyvbjerg.plan.aau.dk/whatispower.php

The interesting question that you can derive from the text is: "How can people in important governmental positions correctly apply the power that is given to them and make rational decisions in the interest of the people they serve?".

In order to make rational decisions, we must not be biased by irrational opinion. That is... the thoughts and arguments we come up with must be fully explainable and not be tainted by the leader's personal expectations. We can choose to trust the leader on those claims, but without any explanation given, there is little reason to grant that trust.

Artificial Intelligence in this sense can be applied to some of these problems, although it should probably not be leading. There are, for example, some AI research programs that can be used by the justice system to analyze historical cases. A current case can then be evaluated against historical punishments, giving the judge an extra tool to ensure the punishment given is fair and sufficient, considering the situation and previous cases. Certainly, each case is to be considered individually, but the programs give an indication of similarity. It's thus a tool for the judge to check his own bias, if any exists.

Saturday, October 04, 2008

1+1+1=5

The title refers to the fact that in emergence, the total result of the interactions of smaller elements is more than their sum. Or rather, when many small elements or organisms perform actions according to simple rules, the overall result of following these rules yields a new kind of behaviour of the system itself, which may far surpass the expected sum of the individual results.

Where some people consider a neuron the simplest building block available for constructing a network (it either fires or it doesn't, and it can be influenced with chemicals), each cell in our body is actually an agent in itself, which at a lower level has very intricate capabilities and behaviours. A cell could in a way be considered an organism of its own, even though a human has many of them and they are interrelated.

In order to follow my drift, you should look up the article on the definition of a cell in biology: Wikipedia link
Each cell is at least somewhat self-contained and self-maintaining: it can take in nutrients, convert these nutrients into energy, carry out specialized functions, and reproduce as necessary. Each cell stores its own set of instructions for carrying out each of these activities.
Cell surface membranes also contain receptor proteins that allow cells to detect external signalling molecules such as hormones.
Or... each cell has the ability to sustain itself, has its own behaviour and purpose, and follows a set of simpler rules than the entire organism. Cells can generally multiply, although this depends on the type of cell. The more complicated a cell is, the lower its capability to multiply. Some cells are said not to be able to multiply at all (neurons), although other research has indicated that this is not entirely the case.
Cells are capable of synthesizing new proteins, which are essential for the modulation and maintenance of cellular activities. This process involves the formation of new protein molecules from amino acid building blocks based on information encoded in DNA/RNA. Protein synthesis generally consists of two major steps: transcription and translation.
Proteins have very complicated structures and may contain specific receptors, such that certain proteins may react to chemicals in the environment. This reaction may trigger a certain kind of behavior, thereby serving a particular purpose. For example, liver cells may give off chemicals to indicate to the body that there's a falling level of nutrients, thereby causing a desire to eat:

http://en.wikipedia.org/wiki/Hunger

The chemical is released by receptors (protein molecules):
In biochemistry, a receptor is a protein molecule, embedded in either the plasma membrane or cytoplasm of a cell, to which a mobile signaling (or "signal") molecule may attach.
So, each cell in this system has a very specific function. It monitors levels of hormones or sugars (types of molecules), and the entire functioning of the organism is basically the recognition of signatures of certain complex molecules, generally proteins.

The proteins are specified by the DNA. The DNA is a large blueprint which, when its strands are split apart, serves as a template: for RNA during transcription, and for a fresh copy of the DNA itself when a cell divides. Unfortunately, during this copying process it is possible that certain "errors" occur, which are basically mutations of the original DNA. You started with a DNA signature that is the result of the merger of two cells from your father and mother. Through that set and during your lifetime, the cells in different parts of your body reuse that DNA to renew and recreate other cells. The older you get, the more likely it becomes that one cell division leads to the kind of error that gives a cell a potentially fatal behaviour: cancer. The cell basically goes rogue in that it starts to multiply quickly, thereby breaking some rules of the aggregate system. It develops into a lump of some sort. When the cell also develops the ability to move (which some cells do and others don't), things become dangerous, since the cells bearing DNA that makes them multiply at a very high rate move to other parts of the body.

Thus... in short... considering a neuron merely as a cell that fires, and as the lowest important building block of a neural network, is a grave mistake. Each cell itself has very, very complicated workings, reactions and behaviours, each of which is in itself very important, as these define the simple rules of the cell's behaviour. If the cell has behaviour that may change over time or be heavily influenced by changes in the environment, we cannot assume that ignoring that effect in a 100-billion-neuron network will not make any difference, as opposed to the view where it's considered of the utmost importance to understand and model it.

In previous posts (important numbers and statements), I've done some calculations on the memory requirements for a human brain. The result is that, assuming 4 bytes per neuron and per connection, you'd need 400 terabytes (400,000 gigabytes) of memory in order to store all the connections and neuronal information.
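
One set of assumptions that reproduces the 400-terabyte figure (roughly 10^11 neurons with on the order of 10^3 connections each, at 4 bytes per item):

```latex
10^{11}\ \text{neurons} \times 10^{3}\ \tfrac{\text{connections}}{\text{neuron}}
  \times 4\ \tfrac{\text{bytes}}{\text{connection}}
  = 4 \times 10^{14}\ \text{bytes} \approx 400\ \text{TB}
```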

Now... each neuron is a complicated cell which, through changes in its immediate environment or differing levels of chemicals, could slightly modify its behaviour. It could start to fire more often or less often. Thus, in the simplest form of a model, each neuron would need a threshold modifier, influenced by another system, to regulate its individual responsiveness.

If we take into account that, besides the processing of signals, the brain also responds strongly to chemical changes brought about by external factors such as fear or emotions, then one could say that "emotive neurons" are those neurons that give off proteins of a certain type on the recognition of danger, causing other neurons to become much more responsive in their processing of signals. The exact level of chemicals produced depends on the number of cells that produce a certain chemical and on how strongly they produce it. Since this also depends on learning, the question remains whether the producer learns to produce less or whether the signal processors inhibit the signal more as soon as it is observed.
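
A tiny sketch of that 'threshold modifier' idea, with a made-up global modulator standing in for chemical levels; all numbers are illustrative assumptions:

```python
import random

class Neuron:
    """Toy neuron whose effective threshold is scaled by a chemical modulator."""
    def __init__(self, base_threshold=1.0):
        self.base_threshold = base_threshold

    def fires(self, input_signal, modulator=1.0):
        # modulator < 1.0 (e.g. a 'danger' chemical) lowers the effective
        # threshold, making the neuron more responsive to the same input.
        return input_signal >= self.base_threshold * modulator

neuron = Neuron()
signals = [random.uniform(0, 1.5) for _ in range(1000)]

calm = sum(neuron.fires(s, modulator=1.0) for s in signals)
alarmed = sum(neuron.fires(s, modulator=0.6) for s in signals)
print("spikes when calm   :", calm)
print("spikes when alarmed:", alarmed)
```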

Thus... there may be three very complicated effects at work in the human brain, relating to consciousness and our efficiency of acting in our environment. We have the neuron cells, which I see as pattern recognizers, which also learn, and where sub-assemblies of neurons work together to create a learning experience (process a signal, recognize the situation, provide a stimulus to react to the situation, verify the effectiveness of the reaction, reduce/increase the stimulus).

And there are the glial cells, which outnumber neurons by a factor of 10 but which have so far not been researched in great detail. Could it be that there's a secret in the interaction of neurons, chemicals and glia, and that together, as three complicated systems, they produce that which we call "consciousness"?