Monday, February 02, 2009

The world explicitly in "math vision"

Computers don't have a sense of anything. They're basically only good at processing input according to a set of defined instructions (the program) and not much else. A computer is just a processor, in the same way a blender processes food because it was designed to do so.

Artificial Intelligence is so interesting because it comes up with novel ways to reason about input. In a very narrow (or is it broad?) definition, any computer program is artificially intelligent, because any program uses "if-then" rules. However, engineers typically do not accept that definition, because the intelligence conveyed by such programs is not surprising and does not supersede our own capacity for reasoning. Popularly speaking, being intelligent means that someone, some animal or something behaves in a way that surprises us.

A (regular) program cannot execute any rules other than the "if-then" rules it has been designed to handle. That generally makes its behaviour very explicit and consistent (unless there are bugs in the program). Let's assume that any program I'm describing here has been 100% tested and is guaranteed 100% bug-free.
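To make that concrete, here's a minimal sketch of such a fixed rule program (the rules and thresholds are invented purely for illustration). However many inputs it sees, it will never react outside the three rules written into it, let alone invent a new one:

    # A minimal, hypothetical fixed "if-then" rule program.
    # Its entire behaviour is the three rules below; no input
    # will ever cause it to acquire a fourth.
    def respond(temperature_celsius):
        if temperature_celsius > 30:
            return "turn on the fan"
        elif temperature_celsius < 10:
            return "turn on the heater"
        else:
            return "do nothing"

    print(respond(35))  # -> turn on the fan
    print(respond(20))  # -> do nothing

Assuming it's bug-free, its behaviour is completely explicit and consistent: the same input always produces the same output.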

To physicists and mathematicians, the world looks different in many ways. There's a constant awareness that behaviours are approximated through formulas, and that some problems which look ridiculously simple are astoundingly hard to solve or describe mathematically.

For A.I. to progress as its own field, it will be an ongoing battle to get the computer to reason about and "understand" (this latter term should be used very carefully) its environment better. Roger Penrose highlighted four different viewpoints on the mind (mind != brain), where at one extreme the mind is 100% mystical and unexplainable, and at the other extreme it's 100% computational. Penrose is a physicist and doesn't seem inclined to believe that the mind is 100% computational (viewpoint A); instead he points to a strange missing link that would allow computational processing machines to become aware (although one could also argue that awareness actually means the ability to introspect): the state vector reduction that sits between quantum physics and classical physics.

When one says "make the computer smarter", one generally assumes the computer should become more like us... but thinking about it, there's no reason why it must or should. I've argued before that humanity is pretty arrogant when it comes to talk about intelligence, basking in the light of its own narcissistic tendencies. Consciousness is not truly a prerequisite for life-like intelligent action. And although consciousness itself is very likely not achievable in non-biological machines... can some sort of consciousness be simulated or modeled?

Some posts back I wrote about rule mining. For a computer to simulate consciousness, it should be able to deduce new rules and descriptions by analysing its perceptions. However, in order to even start doing that, it must see the importance of doing so in the first place. And in order to see the importance, it must understand its context and environment. So this seems like a circular argument? Well, we certainly aren't born with objectives and understanding from day one, so there's a learning element involved, plus impulses that determine our goals. Babies do have certain goals, albeit simple ones ("eat, poop, sleep"), and they'll keep crying until those goals are satisfied. Babies are human, but popularly speaking we don't consider them conscious yet. Possibly we start considering little children conscious when they start talking...?
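As a rough illustration of what deducing new rules from perceptions could look like, here's a toy sketch that mines if-then rules from observed (situation, outcome) pairs. The observations and the confidence threshold are entirely made up for the example:

    from collections import Counter

    # Toy "rule mining": observe (situation, outcome) pairs and promote
    # sufficiently frequent associations into new if-then rules.
    observations = [
        ("dark clouds", "rain"),
        ("dark clouds", "rain"),
        ("dark clouds", "no rain"),
        ("clear sky", "no rain"),
        ("clear sky", "no rain"),
    ]

    pair_counts = Counter(observations)
    situation_totals = Counter(s for s, _ in observations)

    learned_rules = {}
    for (situation, outcome), n in pair_counts.items():
        confidence = n / situation_totals[situation]
        if confidence >= 0.6:  # arbitrary confidence threshold
            learned_rules[situation] = (outcome, confidence)

    for situation, (outcome, conf) in learned_rules.items():
        print(f"if {situation} then expect {outcome} ({conf:.0%} confidence)")

Note that even this "rule-deducing" program only mines rules because it was told to; it has no sense of why mining rules matters, which is exactly the circularity above.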

Here's an account of a professor who has autism. It's a very interesting read, especially the final summation, where four different levels of consciousness are given:
  1. Consciousness within one sense.
  2. Consciousness where all the sensory systems are integrated.
  3. Consciousness where all the sensory systems are integrated with emotions.
  4. Consciousness where sensory systems and emotions are integrated and thoughts are in symbolic language.
Besides neural signals, we're also responding to chemical changes in the brain. In fact, just as the ears receive auditory information and the eyes receive visual information, we could think of chemicals and proteins as producing chemical information for our brains to process. Those proteins and chemicals indicate our own state to ourselves, alongside the faster processes like pain (if pain were transmitted through chemicals, it'd probably take at least 20 seconds before a response occurred; not efficient!).

Quite some time ago, I mentioned that emotions are the driving forces behind humanity. What I really meant was that without emotions or feelings, we wouldn't feel any urge to start doing anything. It's like a computer sitting on a desk at 0% CPU usage and 0% disk I/O. Only when the proper impulse is given does a goal get generated, and only then will we start to find ways to achieve it.

So it naturally follows that a 'conscious' computer should have goal-generating abilities in order to function more like an animal. The problem is that one doesn't just code a "goal-generating algorithm". Different people pursue different goals, depending on experience, outside stimuli, upbringing, different chemical compositions, talent, preference... So it's something that more or less 'grows on you'. How can the same thing be grown in an A.I.?
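The point above is precisely that goal generation can't just be coded up, but a deliberately naive sketch shows what the crudest stand-in might look like: internal drives that build up from stimuli and spill over into goals. All drives, build-up rates and goals here are invented for illustration, not a claim about how real minds work:

    import random

    # Toy "goal generation": drives accumulate from (random) stimuli,
    # and the strongest drive above a threshold produces the current goal.
    drives = {"hunger": 0.2, "curiosity": 0.5, "fatigue": 0.1}
    goal_for_drive = {
        "hunger": "find food",
        "curiosity": "explore surroundings",
        "fatigue": "rest",
    }

    for step in range(5):
        # Random stimuli stand in for experience and environment.
        for name in drives:
            drives[name] = min(1.0, drives[name] + random.uniform(0.0, 0.3))

        name, level = max(drives.items(), key=lambda kv: kv[1])
        if level > 0.6:
            print(f"step {step}: '{name}' at {level:.2f} -> goal: {goal_for_drive[name]}")
            drives[name] = 0.0  # pursuing the goal satisfies the drive
        else:
            print(f"step {step}: idle, no drive strong enough")

Everything interesting (which drives exist, how stimuli feed them, what goals they map to) is hard-coded here, whereas in us it apparently grows; that gap is the whole problem.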

At a lower level... what are the building blocks that make us tick and develop those preferences in the first place? I mean, what actually developed consciousness or shaped it into being? If we assume babies are not conscious, then something at a lower level is developing and placing something there.

This suggests a lower level of being that uses consciousness as a tool for achieving its ultimate goal(s). That goal may be very simple (survival? procreation? an endless aim for better well-being?), but through our conscious "processing" layer it gets translated into several different sub-goals.
