Tuesday, February 22, 2011

Hauppauge USB Live 2 on Linux

I've used the USB Live 2 stick for displaying analog (TV) video on Linux back when I was still on Ubuntu Lucid. Things worked OK then, so I kept the card. Then, in a flash of non-inspiration, the "update-manager" appeared and I upgraded to the most recent version. The drivers immediately stopped working, and these were pretty special at the time, because I had compiled them myself to get the card working.

I use this stick in combination with a GoPro HD camera which, around the same time I did the Ubuntu upgrade, was upgraded to new firmware that allowed it to stream its TV-out while recording video. Great feature! Unfortunately, since the new firmware also added configuration settings for PAL, I decided to switch to PAL along with it. This turned out to be the real cause of the driver problems.

On Windows the driver was getting its output and all the lights worked, so I figured it had to be a Linux driver problem. It turns out that when I configure the GoPro to use the NTSC standard instead, I do get output on Ubuntu Maverick, and a decent one at that. For some reason, the combination of this driver, PAL and GoPro output appears to be incompatible.

So, if you have a GoPro and are trying to use it together with the USB Live 2, try changing the settings to NTSC and see if you get output that way. By the way, I'm also using the stick with an analog video receiver, and yes, the same problems apply there!

Sunday, February 20, 2011

Philosophy of Mathematics

In my line of work, I'm often confronted with people who face problems and want to have them resolved. For some of these problems, mathematics is an essential part of the resolution. In a private bit of research, I'm trying to find the real origins of intelligence, and I find myself going back and forth in time, space and mathematics, trying to come up with answers. One of the questions that keeps popping up is whether there may be something incomplete about the language of mathematics itself, rather than a failure to find the right set or sequence of equations and formulas to apply. Research in Artificial Intelligence over the past decades has produced an enormous number of very important and interesting applications, but none of these, I find, exhibits a strong sense of generality: the ability for the same technique to be used over and over again in different situations. Most AI applications require hard-wired components of machinery in order to provide any solution.

This causes one to go back in time to find the origins of mathematics, in search of an answer to whether maths by itself is (eventually) inherently limited. Is there a bound for reality, a bound for mathematics, or will both of these worlds run parallel forever (remaining complementary)? Or will the abstract thought being developed in mathematics eventually diverge from reality by so much that we end up dabbling in the abstract model itself, finding both problems and solutions within that model, even if there's no physical counterpart subject to the abstract problem?

If you look at civilizations as they develop language, at some point they start to associate a "count" of something with a body part. Some civilizations evolve this further and start using more abstract tokens like sticks to count beyond the maximum number of body parts you have. In simple societies, it is unlikely you need more than the number of parts on your body to explain some concept (you could also modify how you refer to something). Those that do evolve further eventually use abstract representations to refer to the abstract notion of a "count". This "count" has no meaning other than our perception of it as some number of things.

The numbers 0-9 as we know them now evolved over a rather long period of time and came to us from India and Arabia. The number system is base-10, which allows for relatively easy manipulation of the numbers during calculation. For this reason, they were eventually adopted over the Roman numerals that dominated, for example, in Italy at the time.

The reason numbers became useful is related to trade. The problem with trade is that you need to figure out how much of this to give for how much of that. So the practical problem required some way to refer to some 'count' of this and some 'count' of that, and eventually some notion that 'x' of this equals 'y' of that. Hence, bartering and trading very quickly gave birth to the notion of equality, and thus the equation.

Geometry evolved after that and made it possible to perform rather precise calculations about areas of land, as well as how to carve and build appealing feats of engineering: houses, bridges, etc. Even though not all forms and shapes could be accurately described at that point, there were some basic rules that could already help out in the engineering effort. For these purposes it was already necessary to think in terms of half objects or fractional objects, like a third of a pie or two-thirds of the way down a bridge. Engineering also required the use of unknowns: quantities that are not known in advance but must be solved for.

As you may have noticed so far, the roots of mathematics lie in the manipulation of the 'counts' of things... how many meters, how many pears, how many of that for ... .

Then Newton came along and decided to use equations not just for static problems, but for dynamic ones, like apples falling from a tree. Here we also see the introduction of, for example, the differential equation. What the differential actually does is chop up some event over a larger period of time into many smaller parts, analyze the behavior in each of these smaller parts, and develop a new equation that describes how the system changes over time, assuming there is no significant deviation within that system. For a singular system, i.e. one that does not interact with any other system in the abstract model it is given, this kind of mathematics is very well suited to solving problems.
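As a minimal sketch of that chopping-up idea, here is the differential approach done numerically (Euler's method) for an apple falling under constant gravity; the step size, starting height and the value of g are arbitrary choices for illustration:

```python
# Chop time into many small slices and update the state piece by piece
# (Euler's method) for an apple falling under constant gravity.
g = 9.81                    # m/s^2, gravitational acceleration
dt = 0.001                  # size of each small time slice
t, h, v = 0.0, 10.0, 0.0    # start 10 m up, at rest

while h > 0:
    v += g * dt             # change in velocity over one slice
    h -= v * dt             # change in height over one slice
    t += dt

# With small enough slices this lands close to the closed-form
# answer sqrt(2 * h0 / g) ~ 1.43 s.
print(round(t, 2))
```

Note how shrinking dt buys accuracy at the cost of more intermediate steps to compute, which is exactly the trade-off that becomes painful for the systems discussed further down.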

After Newton, a lot of new discoveries were made, primarily on the physics side. We not only know how to count cows, trade land and figure out how far away something is, we can also describe movement and how things move through space over time (albeit with important assumptions). With Newton and the mathematics that came after, people started to feed more abstract ideas into the language. Keep in mind that every addition to this language has to be tested against the axioms of the language itself in order to preserve consistency.

The problem with more abstract ideas is that some notions may have no counterpart in reality, or the elements they describe in theory cannot be measured because they are either too small or too big (infinity is one such example). Just thinking about infinity and whether it exists has driven people mad (literally!).

Newtonian equations work very well for situations in which you assume a single disturbance, the rest of the system stays free of distortions for a certain length of time, and the system has consistent, homogeneous properties (friction, etc.). But other systems are harder. Even a very simple pendulum, a single oscillating system without a second interacting pendulum, can only be computed in practice to some degree of accuracy: the real exact solution is the elaboration of a power series, depending mostly on the amplitude of the system.

So there already exists a rather simple dynamic system for which no exact solution is practically possible, because the power series extends towards infinity. Even if we used a supercomputer to compute the exact result, we'd never finish the calculation before the point in time at which we need it. And yet... looking at the real world, there's the pendulum, swinging away. What causes this inherent problem in mathematics, where it cannot be applied with 100% accuracy to a pendulum (given some assumed mass), but can be used very precisely for the exchange of goods on a market?
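To make the power-series point concrete, here is a sketch that sums the first terms of the pendulum's exact period, T = 2π·sqrt(L/g) · Σ [ (2n)! / (2^(2n)·(n!)²) ]² · sin^(2n)(θ₀/2); truncating this series after a few terms is exactly the practical compromise described above (the values of L, g and the number of terms are illustrative):

```python
import math

def pendulum_period(theta0, L=1.0, g=9.81, terms=10):
    """Pendulum period via the (truncated) amplitude power series.

    The exact period is an infinite series in sin(theta0/2)^2; we cut
    it off after `terms` terms, so the answer is approximate by design.
    """
    k2 = math.sin(theta0 / 2.0) ** 2      # series variable sin^2(theta0/2)
    total = 0.0
    for n in range(terms):
        c = math.factorial(2 * n) / (4 ** n * math.factorial(n) ** 2)
        total += (c ** 2) * k2 ** n       # n-th amplitude correction
    return 2 * math.pi * math.sqrt(L / g) * total

# Small-angle formula vs. the series at a wide 90-degree swing:
print(pendulum_period(0.0), pendulum_period(math.radians(90)))
```

The bigger the amplitude, the more terms you need for a given accuracy, and the series never terminates: you only ever get an approximation, however good.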

There's something about mathematics that's still horribly incomplete, and it has to do with recursion: we need the ability to compute the outcome of recursive definitions very, very quickly. The above demonstrates that mathematics' model of the real world is really just a model, and it breaks down for certain practical uses, depending on the complexity of the situation.

Tuesday, February 15, 2011

Chaos Theory

Chaos. The word itself evokes feelings of disorder: things that are not orderly arranged, a jumbled-up room full of stuff, stripes of paint seemingly without reason on a canvas, the results of the actions of Satan, uninterpretable perceptions, everything that cannot be described with a simple description or looks untidy. The scientific meaning of chaos, however, is slightly different. It's not so much about tidiness, but about losing predictability and periodicity. The interesting thing is that from a scientific perspective most, if not all, things around us have chaotic properties and are in one sense or another chaotically interfering with one another. Chaos theory studies sensitivity to initial conditions: a very slight error in a volume, speed or other characteristic may lead to profound differences in the outcome over a longer period of time. Lorenz first discovered that certain systems are highly sensitive to initial conditions when he tried to predict the weather. He ran a simulation once and printed the results. At some point he wanted to verify his findings by running the algorithm again and, to his astonishment, even after verifying that the input numbers were the same, the outcomes were significantly different. The only difference was that the numbers the computer worked with had been slightly truncated somewhere far down the decimals.

Normal periodic and linear systems do not typically amplify such errors; they just show a similar, linear difference in the outcome. Basically, your result is slightly off. What Lorenz found was that after some point in time, the system started behaving completely differently from the initial run of the process. Sensitivity to initial conditions is what he discovered, and he came up with a strong analogy for the phenomenon: the "Butterfly Effect". The analogy is that sensitivity to initial conditions could mean that a butterfly flapping its wings in Brazil could, in theory, set off a tornado in Texas.
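The effect is easy to reproduce without a weather model. Here is a sketch using the logistic map, a standard chaotic toy system (not Lorenz's actual equations): perturb the starting value by 10⁻¹⁰, roughly the size of a printout truncation, and watch the two trajectories part ways.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# which is fully chaotic at r = 4. A tiny perturbation of the starting
# value is amplified step by step until the trajectories decorrelate.
def max_divergence(x0, eps=1e-10, r=4.0, steps=100):
    a, b = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        worst = max(worst, abs(a - b))   # track the widest gap seen
    return worst

# The 1e-10 error grows to order 1, the full range of the system.
print(max_divergence(0.4))
```

A linear system would carry that 10⁻¹⁰ along essentially unchanged; here it doubles (on average) every step, which is the whole point of the butterfly analogy.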

Other interesting discoveries were made by the Russian chemist Belousov, who mixed a couple of chemicals together and discovered that the mixture changed color to yellow, but then changed back again. Not only that, it actually kept oscillating between clear and yellow. This phenomenon had never been witnessed and at the time was considered impossible. For that reason, the paper he submitted to a journal was rejected outright. Even after a revision, nobody wanted to publish the results, on the basis of a lack of evidence. It was only years later, after informal circulation in Moscow, that the results were picked up by Western scientists, who improved the experiments further and demonstrated that a petri dish with a certain solution of chemicals may eventually exhibit autonomous oscillation, autonomous meaning without the induction of external disturbances. Thus, a system which switches between states in a temporal manner. The actual patterns that occur in such dishes *may* look like the following. The interesting bit is that the pattern depends on..... the exact initial conditions!

As for the pattern itself... there's another great scientist called Benoit Mandelbrot, who was not a typical mathematician in the sense that he knew his algebra very well :). He studied in Paris during the Second World War, so naturally his studies were frequently interrupted. He was also never that interested in doing math tables and all that; instead he had great visual attention to detail. This made him look at coastlines and mountains, discover recurrences of smaller details within larger ones, and come up with the idea of a very simple formula describing a hugely complex overall shape. He called that a fractal:

The idea is that a very simple formula, z → z² + c, gives rise to the picture above (calculated in the complex plane, of course, keeping the points for which the result does not escape to infinity). The figure is self-similar in the sense that you can zoom in on the image and discover the same shape in many other, smaller locations at a fraction of the size, but otherwise equal to the first one.
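Here is a sketch of that escape computation: iterate z → z² + c from z = 0 and keep the points c whose orbit stays bounded, using the usual bail-out test |z| > 2 (the grid resolution and iteration cap are arbitrary choices):

```python
# Escape-time sketch of the Mandelbrot set: a point c belongs to the set
# if the orbit of z -> z^2 + c, starting from z = 0, never escapes.
def in_set(c, max_iter=50):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| > 2 the orbit is guaranteed to escape
            return False
    return True                  # still bounded after max_iter steps

# Crude ASCII rendering over the rectangle [-2, 1] x [-1.2, 1.2].
for row in range(11):
    y = 1.2 - 0.24 * row
    line = "".join("#" if in_set(complex(-2 + 0.05 * col, y)) else "."
                   for col in range(61))
    print(line)
```

That's the whole program: one line of actual mathematics, and the familiar bug-shaped figure appears, which is exactly the simple-rule/complex-outcome point being made here.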

The interesting idea that emerges here is that very simple rules of interaction between elements can produce hugely complex systems at a larger scale. The complexity of the figure against the simplicity of the equation should give you some idea of that power. The relationship between the two has never been obvious from an intuitive perspective, but reviewing these mathematical details suddenly changes that.

Chaos theory has turned the world of Newtonian physics upside down. It undermines the idea that we are in control of particular phenomena or occurrences just because we are able to predict them (to some extent).

The notions of chaos and order are not necessarily exclusive. In the majority of cases, when scientists mention chaos they do not mean "100% randomness", but rather: "some chaotic elements are involved that deny a straightforward linear solution to the problem". This is because 100% randomness in a system yields no patterns whatsoever, just white noise. So there is a grey area between the notions of order and chaos. In many cases, when you feed energy into a system that behaves periodically, at some point you'll push it into chaos, where it behaves unpredictably, but it may eventually return to predictability and periodicity again, although that pattern of order may differ from the one you had before. Many systems, given a certain feed of energy, swing back and forth between the two forever. This is what the Lorenz attractor at the top demonstrates, as well as how highly dependent the system is on initial conditions (here, read that as infinitesimally small differences in the initial condition, the reciprocal of infinitely large).

What is different in mathematics when you compare Newtonian physics with Chaos Theory?
  • The expressions in chaos are very simple, but recursive.
  • Chaos math usually deals with interactions between systems or elements.
  • Newtonian physics requires orderly systems to be able to predict what happens.
  • Chaos has its own cycles and may skip from apparent order to chaos and flip between the edge of chaos and back without warning.
  • When you put too much energy into chaotic systems, they become totally unstable and generate totally unpredictable results, leaning towards randomness the more energy you put in.
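The last point, the energy-to-chaos progression, can be sketched with the logistic map again, reading its parameter r loosely as the energy fed into the system (the specific r values are just illustrative picks along the period-doubling route):

```python
# Period-doubling route to chaos in the logistic map x -> r*x*(1-x):
# as r grows, the long-run behaviour goes from a fixed point, to a
# period-2 cycle, to period 4, and finally to chaos near r = 4.
def attractor(r, x=0.2, transient=500, sample=16):
    for _ in range(transient):      # let the start-up transient die out
        x = r * x * (1 - x)
    seen = set()
    for _ in range(sample):         # then sample the settled behaviour
        x = r * x * (1 - x)
        seen.add(round(x, 6))       # distinct values visited = cycle length
    return sorted(seen)

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, "->", len(attractor(r)), "distinct state(s)")
```

One distinct state is a steady system, two or four is a periodic one, and at r = 3.9 the samples are all different: pumping the parameter up has pushed the system past the edge of order.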

Saturday, February 12, 2011

New kind of science

I'm reading a book by Stephen Wolfram called "A New Kind of Science". I picked up the title after viewing a number of very interesting lectures on YouTube by Robert Sapolsky at Stanford University about "Human Behavioral Biology". It is a privilege to be able to peek into his classes this way. One of the lectures is dedicated to cellular automata, and he explains their relevance to biology. Stephen Wolfram's book is mentioned there, and that's how I got to it.

Anyway, there are very mathematical ways to explain how CAs work, but here's Wikipedia's one. One way to look at a CA is as a kind of state machine with many different states at very short intervals from one another, where these states are actually macro-states: the global sum of the internal states of each cell. Because rather small changes in internal states can significantly affect the global outcome, the horizon over which one can make calculations to derive future states is rather limited. I.e., one needs to calculate every state in between in order to find the final answer.
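A minimal sketch of such an automaton, using Wolfram's elementary Rule 30 (the ring of 31 cells and 15 steps are arbitrary choices): each cell's next state depends only on itself and its two neighbours, yet the macro-state quickly becomes hard to predict without computing every intermediate step.

```python
# Elementary cellular automaton: Wolfram's Rule 30 on a ring of cells.
# The rule number's bits encode the next state for each of the eight
# possible (left, self, right) neighbourhoods.
RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1                       # single live cell in the middle
for _ in range(15):                 # print successive macro-states
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The printout shows the characteristic jagged Rule 30 triangle: purely local rules, globally irregular pattern, and no shortcut to step 15 other than computing steps 1 through 14 first.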

Some three centuries ago we started discovering/inventing physical laws and formulas to make our lives easier. These laws and formulas were later used to construct airplanes, and we went to the moon with them. Most of these laws come with rather large assumptions. Most of the time it is: "Assuming nothing happens that introduces a significant error, we can derive our future position/velocity/acceleration by multiplying x with y over a time period z." We're just lucky that macro-objects like our vehicles behave that way in a consistent manner.

But looking at smaller interactions, or at larger systems like the weather, we can't use those laws as directly. The number of collisions and forces between objects makes the whole thing so complex that you can no longer work with laws that require those assumptions. The complication is that you now have to represent the many other bodies interacting with your system and calculate the state of this "universe" or "world" at each intermediate step, until you reach the goal state you want. Luckily, the interactions are usually not very complex once you get to an appropriate level. Unfortunately, knowing these interactions exactly remains difficult in many cases, and very slight differences in the "rule" can eventually produce very large deviations in the overall pattern.

It is the expectation that this kind of thinking will produce more understanding about the world around us, as there are so many processes that function according to these principles:
  • the billowing of smoke and vapour
  • pressure of gas
  • the way vortexes are produced by wings
  • interactions between neurons?
  • the structure of snowflakes
  • the ways how cells react to other agents?
Also really interesting is the way such cellular automata can be used in combination with stochastic processes, the idea being that knowledge may not be complete for each "cell", but given their observations so far, cells may assume certain facts about the overall structure and modify their behavior accordingly.