
Friday, July 20, 2007

When malevolent factors collide...

It's generally a combination of factors all colliding together (like Murphy's law) that shapes a serious incident like this. I take it as now confirmed that the right engine's thrust reverser had been disabled. Having said that, the effect of a thrust reverser at high speed is not necessarily large.

The design of the Airbus's automatic braking has been criticized before, as it does not guarantee in all situations that the plane will actually brake in time. The decision to brake depends on several sensed conditions. Rather than explain it all myself, here are different sources and other accounts where similar events developed:

http://www.rvs.uni-bielefeld.de/publications/Incidents/DOCS/ComAndRep/Warsaw/leyman/analysis-leyman.html

http://www.msnbc.msn.com/id/13773633/

http://www.kls2.com/cgi-bin/arcfetch?db=sci.aeronautics.airliners&id=%3Cairliners.1993.670@ohare.chicago.com%3E

http://answers.yahoo.com/question/index?qid=20070718063700AA4OCc5

Well, since all I can do is speculate, here is the course of events I consider most likely:
  • The thrust reversers were not in operation, but their effect during landing is limited. I'm not sure how much of a difference they would have made on this account.
  • The speed of the aircraft at landing was higher than normal. This may have caused hydroplaning, and when automatic braking is in use, the Airbus requires the wheels to spin at a minimum of 45 knots before the braking system kicks in.
  • The runway was too short to recover in any way possible. When the braking apparently started to work, the remaining runway was too short to bring the plane to a full stop. The pilot reversed his decision and attempted an emergency take-off.
Thus, the combination of all undesirable factors together:
  • A mechanical fault that made a difference (thrust reversers disabled)
  • Probable failure of the plane to recognize it was on the ground, causing the braking not to kick in in time
  • Failure of the pilot to recognize this and apply braking manually (if possible)
  • Too short a runway to give more leeway in recovering from these emergency situations
  • Rain puddles on the runway (see the rain spray) that caused the hydroplaning in the first place
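The interaction of these factors can be sketched as a simple condition. This is only my own toy illustration of the logic described above; the 45-knot figure comes from the post, not from any Airbus manual, and the function name and parameters are invented:

```python
def autobrake_engages(weight_on_wheels, wheel_speed_knots,
                      min_spinup_knots=45):
    """Toy sketch: automatic braking only engages once the aircraft
    senses it is on the ground AND the wheels have spun up past a
    minimum speed. Hydroplaning can keep the wheels below that speed,
    so braking never kicks in even though the plane is on the runway."""
    return weight_on_wheels and wheel_speed_knots >= min_spinup_knots

normal = autobrake_engages(weight_on_wheels=True, wheel_speed_knots=120)
hydroplaning = autobrake_engages(weight_on_wheels=True, wheel_speed_knots=30)
```

In the hydroplaning case the condition stays false, which would match the speculation above that the braking never kicked in on time.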
The actual course of events can only be determined for sure when the report comes out. Some of these findings can only be truly confirmed with the data from the black box.

Friday, July 13, 2007

Semantic Intelligence

I'm reading up as much as I can about semantic search. What I find on the Internet so far is mostly marketing material, which shows that the concept of semantics is still very new. The direction taken in these materials is generally the analysis of language and linguistics, attempting to re-create common sense in a computer, as if it were possible to make it reason.

I'm very skeptical about these approaches at the moment, but I don't discard them entirely. The problem with a computer is that it is a fairly linear device. Most programs today run by means of a stack, onto which information about the current execution context is pushed. Basically, it stores the contexts of previous actions temporarily, so that the CPU can either go deeper into other tasks or revert to previous contexts and continue from there.
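This push-and-pop pattern can be made concrete with a small sketch. The task names and the breakdown are invented purely for the demo:

```python
def run(task, subtasks, trace=None, stack=None):
    """Simulate linear, stack-based execution: push the current
    context, descend into more specific subtasks, then pop and
    continue where the previous context left off."""
    if trace is None:
        trace = []
    if stack is None:
        stack = []
    stack.append(task)                    # push current context
    trace.append(("enter", task, len(stack)))
    for sub in subtasks.get(task, []):
        run(sub, subtasks, trace, stack)  # go deeper
    stack.pop()                           # revert to previous context
    trace.append(("leave", task, len(stack)))
    return trace

# Hypothetical task breakdown, used only for the demo.
subtasks = {
    "handle request": ["parse input", "compute result"],
    "compute result": ["fetch data"],
}
trace = run("handle request", subtasks)
```

The trace shows exactly one path of execution at any moment, which is the linearity I mean: the machine is always either going deeper or winding back, never doing both at once.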

I'm not sure whether in the future we're looking to change this computing concept significantly. A program is basically something that starts up and then, in general, proceeds deeper to process more specific actions, winds back, then processes more specific actions of a different nature.

This concept also more or less holds for distributed computing, in many of the ways it is implemented today. Google's MapReduce, for example, reads input, processes that input and converts it to another representation, then stores the output of the process on a more persistent medium, for example GFS.
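A toy word-count sketch of that read-convert-store shape, not Google's actual implementation, might look like this (the real system distributes the phases across machines and writes the result to GFS; here everything runs in one process and the "store" is just a dict):

```python
from collections import defaultdict

def map_phase(documents):
    """Map: read the input and convert each record into
    intermediate (key, value) pairs."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: group the intermediate pairs by key and combine
    the values into the final representation."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

docs = ["the plane landed", "the plane braked"]
result = reduce_phase(map_phase(docs))
```

Even distributed across thousands of machines, each worker is still doing the same linear read-process-write loop, which is the point of the paragraph above.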

In the next paragraphs I imagine a certain model, which is not an exact representation of the brain or how it works, but it serves the purpose of understanding things better. Perhaps analogies can be made to specific parts of the brain later to explain this model.

I imagine that the brain and its different kinds of processing work by signalling many nodes of a network at the same time, rather than choosing one path of execution. There are exceptionally complex rules for event routing and management, and not all events will necessarily arrive, but each event may induce another node, which may become part of the storm of events until the brain reaches a more or less steady state.

In this model, the events fire at the same time and very quickly resolve to a certain state that induces a certain thought (or memory?). Even though this sounds very random, there is one thing that gives these states meaning (in this model): the process of learning. It is the process by which we remember what a certain state means, because we pull a similar state from memory, and that state, in another time or context, induced a certain meaning. Analogy is then pulling a more or less similar state from memory, analyzing its meaning again, and comparing that with the actual context we are in at the moment. The final conclusion may be wrong, but in that case we have one more experience (or state) to store, which allows us to better define the differences in the future.
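The storm-of-events idea resembles what the literature calls spreading activation, and a crude sketch can show the flavour. The network, the weights, and the update rule below are all invented for illustration; this is not a model of any real brain mechanism:

```python
def settle(edges, activation, steps=10, threshold=0.5, decay=0.5):
    """Toy spreading-activation sketch: every sufficiently active node
    signals all of its neighbours simultaneously each step, activations
    decay, and the network is iterated for a fixed number of steps or
    until the activations stop changing."""
    for _ in range(steps):
        incoming = {node: 0.0 for node in activation}
        for (src, dst), weight in edges.items():
            if activation[src] >= threshold:       # node fires
                incoming[dst] += weight * activation[src]
        new = {n: min(1.0, decay * activation[n] + incoming[n])
               for n in activation}
        if new == activation:                      # steady state reached
            break
        activation = new
    return activation

# Hypothetical associative network, invented for the demo.
edges = {("plane", "airport"): 0.8,
         ("airport", "travel"): 0.7,
         ("plane", "geometry"): 0.1}
activation = {"plane": 1.0, "airport": 0.0,
              "travel": 0.0, "geometry": 0.0}
state = settle(edges, activation)
```

Strongly associated nodes end up more active than weakly associated ones, so the state the network settles into depends on which associations were learned, which is the role learning plays in the model above.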

So, in this model, I see that rather than processing many linear functions for a result, it's as if networks of different purposes interact together to give us the context or semantics of a certain situation. I am not entirely sure yet whether this amounts to thought, or to the combination of thought and feeling. Let's see if I can analyze the different components of this model:
  • Analysis
  • Interpretation
  • Memory
  • Instinct, feeling, emotion, fear, etc.
That is interesting.

Well, the difference this model shows is that semantic analysis generally talks about commonly accepted meaning rather than individual meaning. The commonly accepted meaning could be resolved by voting, allowing people to indicate their association when a word appears on screen. This seems totally wrong. If, for example, a recent event like 9/11 occurs and the screen shows "plane", most people would type "airplane", and the dominance of that association would very quickly distort other possible meanings: a flat surface, an "astral" plane, a geometric plane, a compass plane, etc. Meaning by itself doesn't seem to bear any relationship to frequency.
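A trivial sketch makes the objection concrete. The vote counts are invented; the point is only that majority voting structurally discards every minority sense, no matter how valid:

```python
from collections import Counter

# Hypothetical association votes for the word "plane", collected
# shortly after a high-profile aviation event. Numbers are made up.
votes = (["airplane"] * 950
         + ["geometric plane"] * 30
         + ["compass plane"] * 15
         + ["astral plane"] * 5)

counts = Counter(votes)
dominant, _ = counts.most_common(1)[0]
# The vote picks a single sense and suppresses the rest, even though
# "geometric plane" is no less a meaning of the word than "airplane".
```

Any frequency-weighted scheme has this shape: current events shift the counts, and the counts then masquerade as the meaning.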

If this holds true, then any model that shapes semantic analysis in computers and has a dependency on frequency is, by that dependency, flawed in its model or its implementation.