tag:blogger.com,1999:blog-175581672024-03-13T08:32:37.115-07:00A Radial MindNot hindered by any lack of knowledge, this blog aims to provide challenging thoughts on various topics.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.comBlogger312125tag:blogger.com,1999:blog-17558167.post-54479165836644211642014-06-16T04:24:00.002-07:002014-06-16T04:24:44.162-07:00Getting ceph and rados runningI finally managed to get a rados gw going (it's a service exposing an S3-like API), so that you can access files on the huge cluster through a webserver. There were some issues where radosgw couldn't connect to ceph, but these were eventually resolved.<br />
<br />
radosgw essentially creates new pools in ceph, and at startup it does this incrementally. If such a pool isn't healthy, updates to it apparently stall, and processes that use it seem to halt. What I did to remedy this was to set the replication level to 1 for all pools. Here's my output:<br />
<blockquote class="tr_bq">
<i><br /></i>
<i># ceph osd lspools<br />0 data,1 metadata,2 rbd,3 .rgw.root,4 .rgw.control,5 .rgw,6 .rgw.gc,7 .users.uid,8 .users,</i></blockquote>
When you use this command:<br />
<br />
<i># radosgw-admin user create --uid=johndoe --display-name="John Doe"</i><br />
<br />
It stalls when the .users pool doesn't exist yet; the pool does get created, so the command succeeds when you rerun it.<br />
<br />
So a remedy here is to start ceph and rados, keep looking for new pools as you continue through the tutorial, and set the replication level of each new pool to 1. There should be another way to create new pools with the level set correctly from the start.<br />
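One way to apply that across every pool is a small helper that turns each pool name into the corresponding command. This is only a sketch: `emit_size_commands` is a hypothetical name, and you should review the generated commands before feeding them to a shell.

```shell
# Hypothetical helper (not part of ceph): print a "size 1" command for every
# pool name read on stdin. Review the output before executing anything.
emit_size_commands() {
    while IFS= read -r pool; do
        [ -n "$pool" ] && printf 'ceph osd pool set %s size 1\n' "$pool"
    done
}
```

With a running cluster this could be used as `rados lspools | emit_size_commands | sh`, once you have checked the output.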
<br />
<i># ceph osd pool set &lt;poolname&gt; size 1</i>Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com1tag:blogger.com,1999:blog-17558167.post-10870502854651748592014-06-15T17:54:00.001-07:002014-06-16T04:32:24.435-07:00Running Ceph on standard Ubuntu 14.04I'm looking at how to configure and run a simple Ceph cluster on a single machine, only for development and integration of some other services. Ceph has grown since I last checked it, and so have its complexity and the amount of outdated documentation.<br />
<br />
The project is moving towards the use of "ceph-deploy", but on the current version of Ubuntu this gave me issues with host resolution, even though the hosts file and hostname were correct.<br />
<br />
There's another page that uses the older method of creating a cluster, but this also creates problems when the OSD (the daemon that stores your files) is to be started.<br />
<br />
It did get me further. The link is here:<br />
<br />
<a href="http://ceph.com/docs/dumpling/start/quick-start/">http://ceph.com/docs/dumpling/start/quick-start/</a><br />
<br />
So I just followed that guide. When you make an error, you can't just remove the OSD directories, because the keyring is copied along and you'll run into authentication issues. So on an error, also remove the mds and mon directories and rerun the mkcephfs command.<br />
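A sketch of that cleanup. The directory names here are assumptions based on the default single-node layout from the quick-start guide (e.g. /var/lib/ceph/osd/ceph-0); adjust them to whatever your ceph.conf actually uses before running anything destructive.

```shell
# Sketch: wipe the osd, mon and mds data dirs under a given root so mkcephfs
# can be rerun with freshly generated keyrings. The ceph-0/ceph-a names are
# assumed defaults from the quick-start guide; adjust to your ceph.conf.
reset_ceph_dirs() {
    root="$1"
    for d in "osd/ceph-0" "mon/ceph-a" "mds/ceph-a"; do
        rm -rf "$root/$d"     # remove stale data and keyring
        mkdir -p "$root/$d"   # recreate empty dir for mkcephfs
    done
}
```

Usage would be something like `reset_ceph_dirs /var/lib/ceph` followed by rerunning mkcephfs.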
<br />
I don't have a special partition available to use for ceph, so I just have files in /var/lib/ceph for now. When the service is restarted however, it complains about this:<br />
<pre style="margin: 0em;">Error ENOENT: osd.0 does not exist. create it before updating the crush map</pre>
<br />
One solution for this is to start the OSDs yourself:<br />
<br />
<tt>ceph-osd -i 0 -c /etc/ceph/ceph.conf</tt><br />
<br />
That gets you halfway there. You only need to do this once; afterwards the automated start script from ceph will work. The next thing is that ceph health shows issues, because the default replication level is 3. This means you need a minimum of 3 servers to get items replicated, and we just configured 2.<br />
<br />
On my machine, I don't activate replication, so I ran:<br />
<br />
# ceph osd pool set data size 1<br />
# ceph osd pool set metadata size 1<br />
# ceph osd pool set rbd size 1<br />
<br />
You can query all pools configured:<br />
<br />
# ceph osd lspools<br />
<br />
The other step is to configure a rados gateway so that it's possible to access files Amazon S3 style. There are some sites that claim to explain how to do this, but I found this one here:<br />
<br />
<a href="http://ceph.com/docs/dumpling/start/quick-rgw/">http://ceph.com/docs/dumpling/start/quick-rgw/</a><br />
<br />
There should be a better way to do this for simple setups. For real clusters, I expect fewer issues, as not everything is running on the same machine; I think that's what causes some things to break here or there.<br />
<br />
<br />Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-14282451821674318752013-03-16T08:27:00.001-07:002013-03-16T08:27:29.523-07:00Transparency = f(distance) with OpenGL shaders I'm working on a product at the moment where some OpenGL code is involved. I've worked on OpenGL before, so it's fun to work on that again. In this work I'm rendering video on a set of quads and these cover the entire screen. There is a virtual reality overlay that renders a virtual tunnel over this image.<div>
<br /></div>
<div>
I don't want this tunnel to extend indefinitely into the distance, and in OpenGL you'd typically use the fog functionality to give a sense of depth. With a black background and black fog, you can indeed create the illusion that the tunnel disappears into the background. From there, you'd expect to be able to apply an alpha value to fog, so that it doesn't just recolor the pixels but also makes them transparent. </div>
<div>
<br /></div>
<div>
Unfortunately, fog doesn't use transparency. So if you have a transparent cube painted in fog, chances are you're going to see outlines in the transparency.</div>
<div>
<br /></div>
<div>
Instead I turned to vertex shaders to make objects fade away into the background. This saves a lot of work in the pre-processing pipeline, where you'd otherwise have to set alpha values on each vertex. Since I use vertex objects (already uploaded to the GPU) to paint my tunnels very quickly and without overhead, that'd mean I'd need to re-upload them every time the position changes.</div>
<div>
<br /></div>
<div>
Although it's initially a bit challenging to work with vertex shaders, as soon as you have a function that loads and compiles them with error detection, they're pretty straightforward from there. Here's the vertex shader, which runs first:</div>
<div>
<br /></div>
<div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;">varying float fogFactor; </span></div>
<div style="font-size: 11px; min-height: 13px;">
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div style="font-size: 12px;">
<span style="font-family: Courier New, Courier, monospace;">void main()</span></div>
<div style="font-size: 12px;">
<span style="font-family: Courier New, Courier, monospace;">{</span></div>
<div style="font-size: 12px;">
<span style="font-family: Courier New, Courier, monospace;"> //Compute the final vertex position in clip space. </span></div>
<div style="font-size: 12px;">
<span style="font-family: Courier New, Courier, monospace;"> <span style="font-size: 11px;"> </span>gl_Position = ftransform(); </span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"> // Pass through the front color (textures require something different)</span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"> gl_FrontColor = gl_Color;</span></div>
<div style="font-size: 11px; min-height: 13px;">
<span class="Apple-tab-span" style="white-space: pre;"><span style="font-family: Courier New, Courier, monospace;"> </span></span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"> // calculate the fog factor based on EXP2 type fog.</span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"> const float LOG2 = 1.442695;</span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span>gl_FogFragCoord = gl_Position.z;</span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span>fogFactor = exp2( -gl_Fog.density * </span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> gl_Fog.density * </span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> gl_FogFragCoord * </span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> gl_FogFragCoord * </span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> LOG2 );</span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span>fogFactor = clamp(fogFactor, 0.0, 1.0);</span></div>
<div style="font-size: 12px;">
<span style="font-family: Courier New, Courier, monospace;">}</span></div>
<div style="font-family: Helvetica; font-size: 12px;">
<br /></div>
<div style="font-family: Helvetica; font-size: 12px;">
Here's the fragment shader, which gets called later:</div>
<div style="font-family: Helvetica; font-size: 12px;">
<br /></div>
<div style="font-size: 12px;">
</div>
<span style="font-family: Courier New, Courier, monospace;">varying float fogFactor; </span><br />
<div style="font-size: 11px; min-height: 13px;">
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<span style="font-family: Courier New, Courier, monospace;">void main(void) </span><br />
<span style="font-family: Courier New, Courier, monospace;">{ </span><br />
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"> // Actual fragment color is the fogFactor as alpha multiplied by alpha setting</span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"> // in gl_Color.</span></div>
<div style="font-size: 11px;">
<span style="font-family: Courier New, Courier, monospace;"> gl_FragColor = vec4(vec3(gl_Color), gl_Color.a * fogFactor );</span></div>
<span style="font-family: Courier New, Courier, monospace;">}</span><br />
<div style="font-family: Helvetica;">
<br /></div>
<div style="font-family: Helvetica;">
So in the end it's very simple. This code doesn't work for textures, where you need to work a little bit differently and you may need to implement your own lighting to get things to work correctly. Those examples are easy to find online.</div>
<div style="font-family: Helvetica;">
<br /></div>
<div style="font-family: Helvetica;">
The OpenGL fog system still needs to be enabled for this bit to work. It uses the configuration set in those parameters to achieve the effect. That also makes this a parametrizable program at runtime, which is good!</div>
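When picking a fog density for a given scene depth, it helps to evaluate the shader's curve on the CPU first: exp2(-density&sup2; &middot; z&sup2; &middot; LOG2) is mathematically the same as e<sup>-(density&middot;z)&sup2;</sup>. Here's a small shell sketch using awk (the density and depth values in the usage note are just examples):

```shell
# Evaluate the shader's EXP2 fog factor on the CPU:
# exp2(-d*d*z*z*LOG2) == e^(-(d*z)^2).
fog_factor() {
    awk -v d="$1" -v z="$2" 'BEGIN {
        f = exp(-(d * z) * (d * z));   # same curve the shader computes
        if (f < 0.0) f = 0.0;          # mirrors clamp(fogFactor, 0.0, 1.0)
        if (f > 1.0) f = 1.0;
        printf "%.4f\n", f;
    }'
}
```

For example, `fog_factor 0.05 20` prints 0.3679 (that is, e<sup>-1</sup>): a fragment 20 units away at density 0.05 keeps about 37% of its alpha.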
<div style="font-family: Helvetica;">
<br /></div>
</div>
Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-449038950064500262012-08-11T00:49:00.002-07:002012-08-11T01:09:10.845-07:00Throwing it all away and starting new<span style="font-family:Georgia, serif;"><span style="font-size: 100%;">In the previous post I attempted to recover the partitioning of the drive and it looked like I had done it, but there were still issues afterwards apparently, mostly due to rEfit screwing things up (it has its own utility in the startup screen). </span></span><span style="font-family: Georgia, serif; font-size: 100%; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal; ">I went back into the Disk Utility, which attempted a repair and then into gdisk, which caused more issues even then. The disk looked like this:</span><div style="font-family: Georgia, serif; font-size: 100%; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal; ">1 EFI</div><div style="font-family: Georgia, serif; font-size: 100%; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal; ">2 Mac OSX </div><div style="font-family: Georgia, serif; font-size: 100%; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal; ">3 Recovery</div><div style="font-family: Georgia, serif; font-size: 100%; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal; ">4 Recovery </div><div style="font-family: Georgia, serif; font-size: 100%; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal; ">5 Linux boot</div><div style="font-family: Georgia, serif; font-size: 100%; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal; ">6 Linux home</div><div style="font-family: Georgia, serif; font-size: 100%; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal; ">7 swap</div><div 
style="font-family: Georgia, serif; font-size: 100%; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal; "><br /></div><div style="font-family: Georgia, serif; font-size: 100%; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal; ">It was impossible to zap any recovery and the MBR was a complete mess. I've basically done the following:</div><div><ol><li><span style="font-family:Georgia, serif;"><span style="font-size: 16px; ">Recreated my external disk. 20G allocated to the mountain lion installer and the rest as a new partition for Time Machine. I just used Disk Utility and set up 2 partitions, pretty straight-forward. Make sure the partition type in Options is set to GUID partition table.</span></span></li><li><span style="font-family: Georgia, serif; ">Created the Mac OSX installer partition onto the external drive. That's basically using the "Show package contents", finding the installerESD.img file and copying that into the partition using "Restore" in Disk Utility.</span></li><li><span style="font-family:Georgia, serif;"><span style="font-size: 16px; ">Saved my Mac OSX partition into Time Machine. Takes a while. </span>Waited for Time Machine to complete. </span></li><li><span style="font-family:Georgia, serif;">Restarted the Mac and booting into the external drive with the installer. Press the Option key when rebooting the machine.</span></li><li><span style="font-family:Georgia, serif;">Completely repartitioned the main drive using the disk utility in two partitions. The first being the main Mac partition that I intend to use mostly as Mac OSX extended (Journaled). The second as free space. Note that there's no specific partition for the Recovery HD here yet. EFI is automatically created.</span></li><li><span style="font-family:Georgia, serif;">Then I restored the copy from Time Machine into the main partition. 
Waited for this to finish.</span></li><li><span style="font-family:Georgia, serif;">Rebooted into the Mac using the main drive (the partition just created and loaded up). </span></li><li><span style="font-family:Georgia, serif;">Now you can recreate the Recovery partition at the end of this partition using this page: <a href="http://musings.silvertooth.us/2012/03/restoring-a-lost-recovery-partition-in-lion/">http://musings.silvertooth.us/2012/03/restoring-a-lost-recovery-partition-in-lion/</a> . Some more background info: </span><a href="https://plus.google.com/108724035107725322855/posts/Y33cF3cJR9o">https://plus.google.com/108724035107725322855/posts/Y33cF3cJR9o</a></li><li><span style="font-family:Georgia, serif;">Then I installed Ubuntu Linux using a special Mac Linux amd64 installer. The 11.10 version is available here: </span><a href="http://releases.ubuntu.com/oneiric/">http://releases.ubuntu.com/oneiric/</a>.</li></ol></div>Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com1tag:blogger.com,1999:blog-17558167.post-35752742895829210462012-08-09T14:12:00.006-07:002012-08-11T00:49:27.815-07:00Mountain Lion installed!Just found out why things were failing. There's an EFI partition on every disk that you're not necessarily seeing. First, you'd probably want to enable debug mode for the "Disk Utility" application:<br /><br /><span style="font-family:courier new;">defaults write com.apple.DiskUtility DUDebugMenuEnabled 1</span><br /><br />Then, start Disk Utility, select the Debug menu and tick "Show all partitions".<br />This allows you to see EFI partitions there.<br /><br />The best indication of whether something is wrong comes from "diskutil". From the terminal, use:<br /><br /><span style="font-family:courier new;">diskutil list</span><br /><br />This lists the partitions for your disks. My EFI was borked, as it showed "Microsoft Basic Data" there, but it had the right size of 209.7 MB.
This is the culprit that didn't allow my install of Mountain Lion to go ahead, plus some other issues with refit and those sorts. I've fixed this in a roundabout way, but you need an 8G or so USB disk or a spare HD (one of those USB types).<br /><br />First, format the spare disk (disk1, where disk0 is your main startup disk) using the Disk Utility. Just select the spare disk, hit "partition" and make one large partition there. Then in Options, make sure "GUID partition table" is selected and that the type is "Mac OSX Extended (Journaled)" . Click OK to partition the drive. You can then "zero" the main partition as well by erasing it.<br /><br />If you've downloaded the Mountain Lion Installer from the AppStore, this should be in your Applications folder. Don't open it to run it, but rightclick and "Show Package Contents". Then find the InstallESD.img file that you need in the "Contents/SharedSupport" folder there. This is the file you want to restore to the drive.<br /><br />In Disk Utility, select the entire spare disk and select "Restore". Then drag the installESD.img file from the finder window into the source and the spare disk into the Destination. Hit Restore to transfer the image. What you need on the spare disk is the image, but also a prepended EFI partition on the spare too. This will be needed to copy the contents of the EFI to the main startup disk after we recreate it.<br /><br />Now you can reboot the Mac. Keep the "Option" key pressed to reboot into this spare drive. What this does is that it frees up your main drive for some editing you need to perform.<br /><br />Once you're finished booting into the spare mountain lion setup, open a terminal from the Utilities menu. In this terminal, verify your main startup disk is disk0, I'm assuming that for the following commands:<br /><br /><span style="font-family:courier new;">diskutil list</span><br /><br />Verify that your main startup disk partitions are there. 
Verify that your spare disk has both the normal volume and the EFI partition too for copying that later. Do not continue if that isn't there. There's probably another mounted disk which is the install image contents.<br /><span style="font-family:courier new;"><br />gpt -r show disk0<br />diskutil unmountDisk disk0<br />gpt remove -i 1 disk0<br />diskutil unmountDisk disk0<br />gpt add -b 40 -i 1 -s 409600 -t C12A7328-F81F-11D2-BA4B-00A0C93EC93B<br />diskutil unmountDisk disk0<br />gpt -r show disk0<br />diskutil list</span><br /><br />You should now see your EFI partition back on the main drive.<div>The final step is to copy the contents of the spare drive EFI into the new EFI, assuming disk1 is the spare and disk0 is the main drive:</div><div><br /></div><div>dd if=/dev/disk1s1 of=/dev/disk0s1</div><div><br /></div><div>Let's reboot back onto the main startup disk to fix up some more issues.<br /><br />On my drive, the MBR or partition scheme (whatever that is) was out of sync with the GPT table or whatever. Now you'd like to run "Verify Disk" on the main drive to verify it's all ok. This probably calls for a repair to sync things back up, set some other things back to what they should be. Once that's done, you should finally be able to install Mountain Lion on your main startup disk.<br /><br />Note that I do have an Ubuntu & rEfit setup that caused this issue probably. I didn't care about Ubuntu being corrupted, so didn't pay any attention there. Later on I'll probably reinstall refit and verify things a bit better before continuing.<br /><br />Hope that helps. If anything else is wrong, please don't write, because I'm far from a Mac expert.<br /><br />My main issue with the install was that the installer complained about "Mountain Lion cannot boot from this disk".</div><div><br /></div><div>Edit: After careful review I noticed there still were issues and Ubuntu still didn't like booting up. 
I've fixed that in a new post.</div>Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-31448494491034781522012-08-03T11:07:00.003-07:002012-08-03T11:17:21.542-07:00<a href="http://4.bp.blogspot.com/-XEgrYaAoVOw/UBwTt1GWufI/AAAAAAAAAko/aSVCYUjhQ3U/s1600/mountain-lion.jpg" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 297px; height: 304px;" src="http://4.bp.blogspot.com/-XEgrYaAoVOw/UBwTt1GWufI/AAAAAAAAAko/aSVCYUjhQ3U/s400/mountain-lion.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5772510500450908658" /></a>I'm one of those victims unable to upgrade from Apple Lion (10.7) to Mountain Lion (10.8). I'm working with a Mac Mini and have a dual-boot configuration. At first, the installer wouldn't let me upgrade at all because the RecoveryHD was missing. Through the help of a blog, I could download the Recovery installer from Apple (1.0 version) and then with a couple of terminal commands, I could recreate this in the current partition. In my setup, I moved my entire disk from what I had to a new SSD, so somewhere along the way the RecoveryHD got lost probably. Anyway, after that was created, the installer seems happy to prepare the install process and reboot. That's where the problems start. When the system reboots, I see the big "X" install process starting up and then it suddenly comes up with a dialog saying that mountain lion cannot be installed to the target location. That's when you find yourself in the purgatory between the installation process and nothing else, because not even EFI works at that point.<div><br /></div><div>I've tried to slightly reduce the size of my "MacintoshHD" partition by 256MB, no luck. I've tried using the terminal to remove the EFI partition, because in my setup it shows as "Microsoft Basic" of some kind, which should actually be EFI. 
No luck due to "Resource busy".</div><div><br /></div><div>In the end, at the top left in the Apple command, you can select the "disk" to use to restart the system. Select your regular, trustworthy MacintoshHD there and you're back into your regular EFI startup screen, where you can select either Apple or Ubuntu.</div><div><br /></div><div>Some things are seriously wrong in the update process and I'm not sure I want to use the current software to do my updates. I'm going to try to write a report about this experience to see if Apple can redo some of this process or "auto-fix" some of the issues. I don't have anything valuable in the Ubuntu partitions that I care about (it's re-creatable in 1.5 hours or so), but the Apple philosophy is that things should just "work", and they don't without a proper indication of what failed.</div><div><br /></div><div>So yeah... not a standard Apple as it's a dual-boot, but certainly using their hardware and some of the indications could be a bit more informative as to why things failed.</div>Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-78131292072225105032012-07-04T09:08:00.004-07:002012-07-04T09:13:26.038-07:00<a href="http://1.bp.blogspot.com/-Yyl9hMRMhhc/T_Rqhg8geNI/AAAAAAAAAkY/s-pmuaEMqIw/s1600/asus-gtx-580-directcuii-3.jpg"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 400px; height: 320px;" src="http://1.bp.blogspot.com/-Yyl9hMRMhhc/T_Rqhg8geNI/AAAAAAAAAkY/s-pmuaEMqIw/s400/asus-gtx-580-directcuii-3.jpg" alt="" id="BLOGGER_PHOTO_ID_5761346947325655250" border="0" /></a>I've just upgraded my graphics card on the 2 year old beast and switched in a 1-yr old GTX 580 from Asus. I'm on a PCI-E 16x 2.x, so figured that the 3.x would yet be a bit of a waste of money and my primary concern is CUDA anyway and the number of cores is enough. This card will take me 2 years in the future anyway. 
The biggest issue when installing was figuring out the power connections. After the card was plugged in, the monitor showed "no signal". Fearing I was dealt a wrong deck of "cards", I tried switching the supply, but nothing. Then I figured out that the 6-pin PCI-E connectors also had a floppy 2-pin on the side. I stuck that in on both ends and the card magically worked. I don't know why the 8-pin to 2x 6-pin is supplied, but I'm not using that at all.<br /><br />So my first test here on Linux was to see if the drivers work. OpenGL is active and it's all dandy.<br /><br />Started up Blender and made a Cycles render with the CPU. This took 22 seconds for a complicated scene on the CPU. Then activated "GPU render" mode and the same window rendered in like 3 seconds. Fantastic difference when you consider the amount of time it may take for an animation, as this will cut down the rendering time a factor 8 or so.<br /><br />Nice card, I'm happy. Now to wait to see if the card manages to stay on and perform well for the next couple of days. I saw on forums people sometimes get issues when playing particular games after a couple of days.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-88177966106850369692012-02-09T20:52:00.001-08:002012-02-09T21:14:31.362-08:00Cogl or OpenGL for 3D clutter scenes?<a href="http://4.bp.blogspot.com/-YGfmniZ9NPY/TzSirf7MvnI/AAAAAAAAAkE/2IOJpFBlpBQ/s1600/Cogl-logo.png"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 192px; height: 138px;" src="http://4.bp.blogspot.com/-YGfmniZ9NPY/TzSirf7MvnI/AAAAAAAAAkE/2IOJpFBlpBQ/s400/Cogl-logo.png" alt="" id="BLOGGER_PHOTO_ID_5707365495973133938" border="0" /></a>I'm using Cogl for a project, where I create an intuitive user interface for pilots. The starting point is a 2D background, actually a video through gstreamer, which shows the actual world. 
An overlay on that video, also called "OnScreenDisplay" or OSD or HUD provides some interesting numbers on plane speed, altitude and whatever else is of interest. In this different app though, I'm also painting in some 3D objects over the real image, which is intended to provide additional information on where things are located. Basically augmented reality. Since this is intended for video piloting, this seems like a very good combination.<br /><br />In my first implementation I was using OpenGL directly. After some issues related to how to set this up, I had that working and things were showing up quite nicely and were spot on referenced. I then switched to gstreamer + gl extensions (glupload + glimagesink), but found that these were somewhat difficult to get absolutely right. The texture of the video was not as good as it could be due to some mipmapping issues. The gstreamer extension of clutter turned out better and clutter provides some interesting capabilities for painting text and a variety of other things, like bitmaps with animation and cairo for custom drawing.<br /><br />Unfortunately, clutter uses cogl in the backend (apparently) and this means that the original code for augmented reality objects no longer worked properly. I tried to get this to work by saving opengl states directly and then paint it over the image, but that didn't work out. Although the position seemed correct, the material turned a solid or transparent grey. Due to the complexity of getting this to work properly, I decided to just bite the bullet and do this in cogl entirely.<br /><br />What I started out with was to define a custom actor that can be plugged into the main render loop. The idea is then that this actor gets access to the state of the application at the right time, so that this decomplicates finding alternative ways to do your painting. If you don't do that, then you'll be painting direct opengl anywhere the application decides you should paint. 
Usually that's after buffers were flushed already, so you have issues with blending and things just look weird. So when you want to use 3D in clutter, you should use cogl to save yourself the pain. There are absolutely no guarantees if you use plain old OpenGL instead. I had unexplainable issues related to color, but the rest looked ok to me. That's why I went the cogl way to get more predictable results.<br /><br />Once you have a custom actor, then override the paint method with this:<br /><pre class="code">static void scene_actor_paint (ClutterActor *actor) {<br />CoglMatrix mvMatrix, pMatrix, identity;<br /><br />cogl_get_modelview_matrix( &mvMatrix );<br />cogl_get_projection_matrix( &pMatrix );<br /><br />// Start from an identity modelview before setting up our own camera.<br />cogl_matrix_init_identity( &identity );<br />cogl_set_modelview_matrix( &identity );<br /><br />cogl_perspective( fov,<br /> (float)stage_width / (float)stage_height,<br /> zNear,<br /> zFar );<br /><br />// Let pilot know its position and attitude<br />pilot_setPosition( telemetry.lat, telemetry.lon, telemetry.alt );<br />pilot_setAttitude( telemetry.pitch, telemetry.roll, telemetry.yaw );<br /><br />// This rotates the world around the pilot...<br />pilot_display();<br /><br />// show some objects<br />yyyyyyyy_display();<br />......<br /><br />cogl_set_modelview_matrix( &mvMatrix );<br />cogl_set_projection_matrix( &pMatrix );<br />}<br /></pre>And then in order to render some other object in this scene:<br /><pre class="code">void yyyyyyyy_display() {<br /> cogl_push_source( material );<br /><br /> cogl_push_matrix ();<br /> cogl_translate( home_e, home_n, home_d );<br /> cogl_rotate( home_hdg, 0.0, 0.0, 1.0 );<br /> cogl_rotate( home_elev, 1.0, 0.0, 0.0 );<br /><br /> cogl_polygon( vertices, 12, FALSE ); <br /> cogl_pop_matrix ();<br /><br /> cogl_pop_source();<br />}<br /></pre>So this is how you jump out of the clutter loop:<br /><ol><li>Define a custom actor. I did one in C, another example uses the C++ version. 
See also <a href="http://docs.clutter-project.org/docs/clutter/1.8/clutter-subclassing-ClutterActor.html">here</a>.<br /></li><li>Define some properties that modify how things are rendered and some other general behavior.</li><li>Override the paint loop. Save the matrices, define your own matrices, call your custom drawing code in 3D (it has to be cogl!) and then put the matrices back as you found them.</li></ol><p><br /></p>Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-17121816920897505872012-01-15T01:28:00.000-08:002012-01-15T01:31:44.879-08:00W5100 chip garbage outputI've got my hands on an Arduino + W5100 Ethernet chip. This allows an Arduino to communicate with other clients over an Ethernet connection.<br /><br />When I compiled the example in Arduino, however, there was one noticeable bug, which is resolved in the newest Ethernet library distribution:<br /><br /><a href="http://code.google.com/p/arduino/issues/detail?id=605&start=200">http://code.google.com/p/arduino/issues/detail?id=605&start=200</a><br /><br />The other thing is that the example code contains <span style="font-style: italic;">if ( client == true )</span>, which can be replaced by a simpler <span style="font-style: italic;">if ( client )</span>.<br /><br />After these fixes, the W5100 behaves properly. On to the next challenge!Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com1tag:blogger.com,1999:blog-17558167.post-30055924868852751602011-10-14T00:27:00.000-07:002011-10-14T00:32:03.160-07:00Ocelot installI've just installed Ocelot and ran into loads of problems. The install process crashed on me right when it was finalizing some updates, so I had to rerun a couple of dpkg configures. 
Ubuntu didn't even boot into X, nor the normal shell, but I managed to boot into a previous Linux kernel of natty.<br /><br />The most important thing that could go wrong is probably the '/var/run' to '/run' relocation. The install script is supposed to copy contents of /var/run to /run, contents of /var/lock to '/run/lock' and then delete /var/run and /var/lock altogether and replace them by symlinks.<br /><br />(i) create directories /run and /run/lock,<br />(ii) move contents of /var/run into /run and /var/lock into /run/lock,<br />(iii) delete directories /var/run and /var/lock<br />(iv) create replacement symlinks; e.g. 'ln -s /run /var/run' and 'ln -s /run/lock /var/lock'<br /><br />Not doing this gives you problems like 'waiting 60 seconds for network...', longer boot times and X never starting. Before you think it's an issue with the graphics driver (which is also likely), make sure the above is correct and sorted first.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-25817419230374594322011-10-05T11:18:00.000-07:002011-10-05T12:04:52.101-07:00Charmed by pythons<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-lp0087xxrHI/ToyiM0FRt9I/AAAAAAAAAj8/Pp9EwJbsZTA/s1600/python-logo.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 280px; height: 280px;" src="http://3.bp.blogspot.com/-lp0087xxrHI/ToyiM0FRt9I/AAAAAAAAAj8/Pp9EwJbsZTA/s400/python-logo.png" alt="" id="BLOGGER_PHOTO_ID_5660077172720777170" border="0" /></a>In my latest project I'm using Python to construct the basis for a GUI application. Because one of the main design goals is to make this as modular as possible, it is also used to construct an application messaging bus and another object to keep application data in one single place. My experiences so far are extremely positive. 
I'm used to statically typed languages where there is no possibility to become confused about the meaning of a parameter or what its type is. This makes it slightly easier at times to understand what a parameter is doing, but at the same time removes some of the flexibility that these programming languages offer. Python seems to be the ultimate mix between form and function, although it takes some time to get used to the idiosyncrasies of this particular language. Once you stop worrying that your application isn't going to be used after three years anyway and that nobody wants to extend your particular piece of code, Python becomes something that you can start to embrace.<br /><br />What I had to get used to at first:<br /><ul><li>How Python expects you to indent your code. I set my editors to 4 spaces instead of tabs to make my life easier. Still, you download a snippet of code from the Internet and you end up rewriting tabs as spaces and vice versa.<br /></li><li>Indentation is also how scope is managed, whereas C and Java use braces for scope.</li><li>The ability to simply assign a variable some value and how it persists over time. There is still a gotcha or something to remember here, because sometimes variable assignments are persisted in the instance and not the class. But usually this turns up soon enough.</li><li>Some short-hand notations for iterators over collections, sets, lists, deques and dictionaries. 
It takes some time to get used to how braces differ from parentheses and from square brackets (they mean different things), but when you know Java and the differences between sets, maps and lists, these notations become rather natural.</li><li>How some declarations or references of C libraries eventually must be interpreted to understand which classes must be instantiated and where C-enumerated types are declared in the python bindings (at least it's consistent!)<br /></li></ul>The awesome thing in python is that it's not just something you run on the command line anymore. We're using this together with the Gtk 2/3 libraries, Clutter and libchamplain. These are highly graphical libraries written in C or C++, and Python with the GObject bindings gives you access to all the functionality in those classes.<br /><br />One of the coolest things in python is that we now have access to a very clean and empty user interface application that we can enrich using a set of plugins. If you know what a model/view/controller (MVC) separation of concerns is, then python definitely knows how to support that. For our data and for our messaging bus, we've created a singleton object in the VM which every object can get to in a very simple way. Any plugin can declare data items that it wants to store and it can itself use the message bus to declare new kinds of signals that other plugins can react to, or it uses the messaging bus to declare interest in messages of other plugins.<br /><br />This way, the application is 100% modular, but there's still a sense of control over what kind of data is stored, where it is stored and it warns developers when a plugin wants to get access to data that hasn't been put there in the first place.<br /><br />The plugins we've defined are all of a specific type and have their specific pre-determined uses. Communication plugins usually run in a separate thread and they're responsible for opening their own sockets. 
They then receive or send information from/to the system. Using the messaging bus notification signals, they extract information from the model and send this on, or they receive new information from the environment and add this to the model.<br /><br />At some point though, one needs to be aware that any application can only do so much. The multi-threadedness is highly governed by the ability of the main thread to keep up with whatever is going on in the environment. That is... in a graphical environment like clutter or gtk you can't update or manage components from just any thread, but you can only do that from the main thread that is running Clutter.main() or Gtk.main(). This usually means you add notifications to the message queue of the main thread, which is only emptied when the main thread becomes idle.<br /><br />Thus... if you are in an environment where lots of user interactions happen and the UI is never truly idle, the communication message handling may start to lag by quite a bit and you may notice 'halts' in the UI updates from these systems. Because of that, this is not necessarily the way to go for everyone. But this is the best of both worlds really... you can't have blocking sockets in UI thread code, you can't/shouldn't obstruct the general UI thread with system messages (making user interaction choppy) and other considerations like that.<br /><br />So, the main design concepts of this system are:<br /><ul><li>Keep data in one place wherever possible (if multiple plugins use that data, don't copy it for every plugin).</li><li>Allow data to be private to plugins when no other plugin or code uses it.</li><li>Pass in required references to objects that make sense to be externally referenced. Because the use of each plugin is clear, you can also separate these.</li><li>Communication plugins probably need a separate thread for communication handling. Be careful with blocking sockets, because UDP sockets may continue blocking forever. 
Therefore, TCP sockets may be blocking (as long as you shut down and close them). UDP sockets should not be blocking.</li><li>Use a special singleton instance for a messaging bus, on which messages are declared and where hooks can be inserted. This allows you to manage mbus code in one place and you have a nice intermediate class that passes signals around.</li><li>Do not pass large amounts of data on this message bus. If large, hierarchical pieces of data are manipulated, store them in one place in a model and allow plugins to query them, if they are interested, on the receipt of these signals.</li><li>Define what your UI should look like. That is probably the only thing that ends up being code highly specific to the application. But if you have your mbus+model objects defined already (and these are generic), you'll find the main application window is nothing but a 'shell' from which plugin code is run and the logic is defined by what plugins do and which kind of clutter/gtk classes are contained in your widgets.</li></ul>So yes... I've been slightly charmed by the elegance of python in certain expressions. It's a rather mathematical way of seeing things, but it quickly starts to make sense. 
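The singleton message bus described in the points above can be sketched in a few lines of Python. This is a hypothetical illustration, not the project's actual code: the names MessageBus, declare, subscribe and publish are all made up for the example.

```python
class MessageBus:
    """Minimal singleton message bus: plugins declare signals,
    others subscribe to them, and publishing fans a message out."""
    _instance = None

    def __new__(cls):
        # Every caller gets the same instance, so any plugin can reach the bus.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._signals = {}  # signal name -> list of handlers
        return cls._instance

    def declare(self, signal):
        self._signals.setdefault(signal, [])

    def subscribe(self, signal, handler):
        # Warn developers about signals nobody declared in the first place.
        if signal not in self._signals:
            raise KeyError("unknown signal: %s" % signal)
        self._signals[signal].append(handler)

    def publish(self, signal, payload):
        for handler in self._signals.get(signal, []):
            handler(payload)

# Any plugin can reach the same bus instance:
bus = MessageBus()
bus.declare("telemetry-updated")
received = []
bus.subscribe("telemetry-updated", received.append)
bus.publish("telemetry-updated", {"alt": 120})
```

Note that, per the design above, the payload should stay small: a handler that needs the full data queries the model on receipt of the signal.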
The abundance of libraries and extensions, most especially the support for Gnome bindings for all sorts of purposes, makes this a very attractive language to program in.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-16661965326022862232011-08-18T11:47:00.000-07:002011-08-18T12:42:01.975-07:00Ritewing Zephyr<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-ilGnWz8jgLM/Tk1fJbo3GsI/AAAAAAAAAjs/YYzgUdgLZNU/s1600/5724798171_6e4f69ebc4.jpg"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://4.bp.blogspot.com/-ilGnWz8jgLM/Tk1fJbo3GsI/AAAAAAAAAjs/YYzgUdgLZNU/s400/5724798171_6e4f69ebc4.jpg" alt="" id="BLOGGER_PHOTO_ID_5642270523807701698" border="0" /></a>So to the right is an example <a href="http://www.ritewingrc.com/">Ritewing Zephyr</a>. I'm working on building my own Zephyr and the build log is on my website: <a href="http://www.radialmind.org/projects/zephyrbuild">http://www.radialmind.org/projects/zephyrbuild</a>. This plane will be a joy to fly. I'm looking forward to having everything done. There are quite a number of videos on them already. Check below for some examples.
<br />
<br /><iframe width="560" height="345" src="http://www.youtube.com/embed/L_K0-RvC4cg" frameborder="0" allowfullscreen></iframe>
<br />Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-45582030096430384732011-07-04T14:07:00.000-07:002011-07-04T15:02:49.877-07:00Digital Video Broadcasting... how it works...<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-ZXy6z4rDQec/ThIr1bxmMkI/AAAAAAAAAjg/8pSKlHRqrns/s1600/dvb-s-worldmap-big.gif"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 400px; height: 212px;" src="http://4.bp.blogspot.com/-ZXy6z4rDQec/ThIr1bxmMkI/AAAAAAAAAjg/8pSKlHRqrns/s400/dvb-s-worldmap-big.gif" alt="" id="BLOGGER_PHOTO_ID_5625607081528013378" border="0" /></a>I'm reading up on Digital Video Broadcasting standards. The DVB standard is for receiving digital video back in your home. There are a couple of subtypes in this main category which distinguish themselves by their error correcting facilities (based on what's needed for the medium over which they are transmitted), bandwidth, etc. The DVB-S standard is one of the things I'm most interested in. Digital Video Broadcasting is usually done with MPEG-2 transport streams. Suppose you have a video on your computer and a bit of music, along with some information on what else is available on your channel. The transport stream is composed by multiplexing all the information together (rather fast) and creating one larger bitstream that can be transported using either DVB-T (terrestrial), DVB-S (satellite) or DVB-C (cable).<br /><br />I had thought that analog TV would be more resilient to noise created in the atmosphere, but this is not necessarily the case. If you send a file over the ether composed of 0's and 1's, then any noise or interference in the stream may cause a bit to be misread or misinterpreted or missed. Since the playback of a file is usually dependent on all the bits being read correctly, this is where you may get huge problems already. 
One or slightly more bits falling over may already render the entire stream unusable.<br /><br />Unless.... you add error correction. But this increases the size of the entire stream... How then...? Well, the MPEG-TS doesn't carry nearly as much information as an analog video stream, because analog stuff is not compressed, although in the analog world you can remove some information without significantly reducing the quality of your experience (one example here is mp3). In analog video, this means you can easily reduce a bit of the color in an image, although luminance (that which you'd see as black and white) is far more important for a person's perception of an image.<br /><br />Back to MPEG-2 however... digital compression standards rely on encoding those things that matter only once, where 'motion' in the video would typically require you to encode a bit more about some spatial event for example. So a green screen that doesn't change a pixel will be very easy to transmit and extremely cheap, whereas a fast-paced action movie may temporarily reduce in quality a bit, because all the parts on screen are in motion all the time.<br /><br />Let's assume that we have some digitally encoded video+audio and that it is ready for transmission. For transmission in DVB-S and all the error correction abilities we need to have at the receiver side, the huge file is packetized into 187 bytes and then a sync byte attached to the start of this "packet". The interesting thing here is that this file may be rather regular in terms of how one byte and its neighbor relate to one another. One interesting finding is that long runs of equal bytes may cause more reception problems at the rx side than a noisy-looking, varied transmission will, because the receiver has less variation to lock on to.<br /><br />For this reason, each byte in the packet, excluding the sync byte, is 'XOR-ed' with the output of a pseudo random number generator (a simple one that is). 
This means that some bits now turn on, others turn off and this machine has a certain period over which it operates. This PRNG is reset after every 8 packets of transmission.<br /><br />What we're getting now is already an interesting stream of information that's nicely packetized, more resistant to some errors. Each packet is fed through a "Reed Solomon" encoder. This is an error-correcting encoder that gives the rx side the ability to correct up to 8 bytes of information in this packet. So this is the first stage where we're adding additional information to the stream that is going to help us later on. Reed-Solomon (RS) is also used frequently in other mechanisms, like storage and data transfer for other applications (CD, HD, etc.). Sometimes it's getting replaced by other algorithms like turbo codes (space missions, etc.) and so on. Just think... the images you're seeing from Space sent by those satellites also use these schemes to ensure no bits get inverted/changed during this transfer.<br /><br />The next step after the RS is some interleaving. Interleaving is a process where you shuffle parts of one packet with parts of another packet. The reason for doing this is that errors typically occur in bursts, not like 'hit and run' errors. By shuffling the original position of bytes in one packet with another, the deinterleaver relocates the parts to their original position later. If any error burst occurred, the spread of the damage caused by the error burst is much lower (it didn't zero out 3 bytes in a row, but perhaps one per packet). Thus, it makes the signal again more robust against interference and errors.<br /><br />After the interleaving, another forward error correction scheme is used called "Viterbi encoding" (strictly speaking convolutional encoding; Viterbi is the decoding algorithm). This may in the worst case double the number of bits in the transmission stream. More bits mean higher bandwidth. 
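The scrambling and interleaving steps described above are easy to demonstrate. Below is a toy Python sketch; it does not use the real DVB PRBS polynomial or the real interleaver parameters, it only shows the two principles: XOR scrambling is self-inverse, and interleaving spreads an error burst over several packets.

```python
import random

def scramble(packet, seed):
    """XOR each byte with a pseudo-random keystream. Applying the same
    keystream twice restores the original, so the rx side descrambles
    by running the same generator from the same seed."""
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in packet)

def interleave(packets):
    """Toy block interleaver: transmit column-wise instead of row-wise,
    so consecutive bytes on the wire come from different packets."""
    return [bytes(p[i] for p in packets) for i in range(len(packets[0]))]

payload = bytes(8)                       # a dull run of equal (zero) bytes
scrambled = scramble(payload, seed=42)   # now looks like noise on the wire
assert scramble(scrambled, seed=42) == payload   # ...and descrambles cleanly

pkts = [bytes([i] * 4) for i in range(4)]
columns = interleave(pkts)
# A 3-byte burst hitting one transmitted column damages at most one byte
# in each original packet, which the Reed-Solomon stage can then repair.
```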
The challenge is to fit the entire MPEG-2 stream within around 6MHz of RF bandwidth, so both the original MPEG-2 stream bitrate as well as what happens after that is very important. If Viterbi encoding can be less aggressive, slightly more MPEG-2 data can be sent in the channel before it uses all the allocated (planned) bandwidth.<br /><br />The steps after this are 'baseband shaping' and 'I/Q modulation'. This means that the digital signal is mapped to an analog signal for transmission. Words you'll see here are "constellation". The kind used in DVB-S is quadrature phase shift keying. This means that a sequence of 2 bits is taken together and mapped to one of four phase vectors, 90 degrees apart, in a constellation space. Being this far apart helps prevent errors and you'd typically choose that based on the amount of expected noise in the channel. DVB-C, the cable kind, has so little expected noise that it uses 64 positions in this constellation instead. This means that in theory, it carries three times as many bits per symbol (6 instead of 2).<br /><br />Different analog video signals occupy 5-8 MHz in bandwidth. Expressed in Mbits/sec, uncompressed digitized PAL video equates to 216 Mbits/sec, whereas MPEG-2 compressed PAL reduces that to 2.5-6 Mbit/sec. The high value is the bitrate when there's lots of motion, the other when there's little. Compressed HDTV is higher than that, in the order of 12-20 Mbits/sec. However, this is measured against MPEG-2. H.264 encoding is up to three times more efficient, so this gets HD video back into reach for actual transmissions. The alternative would be to use different modulation techniques or to occupy a larger portion than 6 MHz in the transmission region.<br /><br />Some issues still arise... DVB-S was specifically created for LOS conditions away from reflective buildings and other interference sources. As soon as DVB-S is used for terrestrial transmissions this may have a huge impact on video quality. 
Tests so far indicate this is not necessarily the case and I reckon that with the circularly polarized antennas that for example FPV fliers are using for their analog video, the multipathing issues that threaten DVB-S may well be reduced to a minimum.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-78147999281997108242011-06-28T11:36:00.000-07:002011-06-28T12:14:59.690-07:00Quad tuning<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-ddUnwYb95Lk/TgohrmhSYrI/AAAAAAAAAjY/_TcLbIHVjVs/s1600/path5451.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 60px;" src="http://1.bp.blogspot.com/-ddUnwYb95Lk/TgohrmhSYrI/AAAAAAAAAjY/_TcLbIHVjVs/s400/path5451.png" alt="" id="BLOGGER_PHOTO_ID_5623344117683741362" border="0" /></a>I made <a href="http://www.diydrones.com/profiles/blogs/arducopter-code-mods">a post to the diydrones website</a> some days ago, explaining some modifications I made to the Arducopter source code. A colleague of mine explained a couple of things to me about issues related to aircraft control. The standard Arducopter code has the attitude controller in place and the GPS Hold. I added the velocity controller. As you can see from the diagram, this means that the GPS Hold controller immediately changes the attitude of the quad on the basis of some difference in position. The attitude controller itself just maintains a certain angle setting.<br /><br />The attitude of the quad may induce a certain acceleration into some direction. When you steer a quad to some location by hand, you don't maintain the angle until you get there, but steer towards the other direction for a brief moment to zero the velocity, with the intention of having zero velocity at the intended position.<br /><br />My experiences with the GPS Hold code in the arducopter are poor. 
Others have had more success, but I could never find the right settings that made the quad behave correctly in all circumstances.<br /><br />Because the GPS Hold controller controls the angles directly, but doesn't look at the velocity, it will only zero the velocity after it has passed through the setpoint. This means that with some larger drifts around a setpoint, overshoot cannot be avoided. Aggressive settings then cause overshoot into one direction; the quad then slows down, reverses direction and overshoots the other direction. Thus it oscillates around a position. Higher D-gains help in this regard, but I couldn't get this to calibrate correctly. The I-term does more evil than good and should be used very sparingly.<br /><br />With the velocity controller in the middle I had more success. The velocity controller is also a better means to control where one is going. Letting go of the sticks means that the quad already attempts to hover around doing nothing. With little wind you'll see that this leads to a near-perfect GPS Hold operation. The GPS Hold code that you do put in than only removes the little offset that does take place due to small disturbances and other inaccuracies due to some dampening filter on the GPS course/speed readings.<br /><br />In order to calibrate things correctly, start with the last controller going backwards. The attitude of the quad must be maintained with near perfection. Indoors in a large enough area, it should not travel significantly in any direction. If the quad does that, it may indicate:<br /><ul><li>most likely cause: too many vibrations in the quad causing the IMU to get slightly confused at times or over time. 
It may then tilt somewhat into any direction causing speed to build up.<br /></li><li>motors not pointing straight up, so that propellers have thrust in the xy-plane.</li><li>Incorrect response of ESC / motor due to incorrect ESC calibration, a defective motor, etc.</li></ul>I cannot stress enough how important it is to remove vibrations as best as you can, because you get much better results that way. In my case, I've flown with the quad in a situation where it was moving about quite a bit in outdoor environments and making a sound like a lawn mower (you know, where the mower blades cut reeds, those kinds of grating sounds). I found out eventually that my bolts appeared to be tightened, but with a proper spanner I could still tighten them further by 1/4 turn. This improvement for about 5 bolts resolved the grating sounds entirely and on the next launch, it hummed perfectly. That is how much bolts and nuts may impact stability and the way motors are balanced, and in turn the vibrations on the IMU. So make sure that works ok.<br /><br />To calibrate the attitude controller, set the tx into attitude control mode of course. Then zero I and D and start with the P setting. You're looking for a P-setting that is just high enough to cause the quad to just about oscillate. Then lower the P setting a little notch (this is a relative operation) and work on the I and D terms next. The I-gain has two purposes:<br /><ul><li>Increase the speed at which you find your setpoint.</li><li>Resolve any bias that may accumulate in your system.</li></ul>The bias for example is wind. If your quad already remains very level, in outdoor environments a level quad will drift away slowly on the wind. Suppose that you're activating the GPS Hold controller. Without the I-gain, you'll drift downwind until, proportionally speaking, your quad has such an angle that it finds an equilibrium with the wind conditions. 
The I-gain will start to kick in, increase the angle and the idea is that the build-up of the I-gain over time and the decrease of the P-gain eventually establish a new equilibrium on the exact setpoint. That would be perfect.<br /><br />The D-gain is there to reduce the speed of approach towards some setpoint, such that it reduces overshoot of the setpoint (similar to how the GPS Hold working on angles should work).<br /><br />For the attitude controller, I'm using some suggested values that are not special at all: P=3.4, I=0.015, D=1.2. These are the values for my quad and mine is custom built with relatively large distances between props. It's likely that if you have a smaller quad, you can sustain some more aggressive values.<br /><br />As soon as the attitude controller is stable, work on the velocity controller. This only has two variables to adjust: P and I. At some point, especially with systems that have a low read frequency, there's no point in using D-terms anymore. The P-gain for the velocity controller should not be too high, to prevent instability. The velocity controller depends on the GPS information and this is basically some complicated piece of hardware nowadays with its own filters, dampeners and other algorithms. It's likely that a high frequency GPS (10Hz) together with Doppler shift readings for speed gives the best results. I set the P-gain to 0.04, which equates to a 4 degree angle when speed is 1 meter per second. If this is set more aggressively, it's possible that you see a circling motion occur due to the way ground course is calculated in some GPS's. 
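The cascade described in this post — a position error producing a velocity setpoint, and a velocity error producing a bank angle — can be sketched as follows. This is an illustrative Python sketch, not the actual Arducopter code; the gains, function names and sample step are made up for the example.

```python
def position_controller(pos_error_m, p_gain=0.3):
    """Outer loop (P only): position error (m) -> desired velocity (m/s)."""
    return p_gain * pos_error_m

def velocity_controller(vel_error_ms, integrator,
                        p_gain=4.0, i_gain=0.1, dt=0.1):
    """Middle loop (PI): velocity error (m/s) -> desired bank angle (deg).
    The I-term slowly soaks up a steady bias such as wind."""
    integrator += vel_error_ms * dt
    angle = p_gain * vel_error_ms + i_gain * integrator
    angle = max(-20.0, min(20.0, angle))   # +/-20 degree bank angle limiter
    return angle, integrator

# One step of the cascade: 2 m away from the setpoint, currently at rest.
desired_vel = position_controller(2.0)                     # gentle velocity setpoint
bank, integ = velocity_controller(desired_vel - 0.0, 0.0)  # small corrective bank
```

The attitude controller would then sit below this, holding the commanded bank angle; letting the outer loop command velocity rather than angle is what avoids the overshoot described above.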
The I-term is basically determined on the basis of how much 'angle' one would need to compensate for windy conditions (in order to still develop a certain velocity).<br /><br />Since the velocity controller is already very effective in keeping the quad fixed in place, the GPS Hold controller is just there to resolve any difference in position that still does occur over time, in the 20-30 second range. It slowly develops a certain velocity that the quad should have towards the setpoint and slowly retargets the quad towards a certain position. My GPS hold controller only uses a P-setting. An I-term could be added to make it slightly more aggressive, but I never felt a need to do that.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-75530363211884712472011-06-08T13:10:00.001-07:002011-06-08T13:25:12.154-07:00Declaring multiple variables in one line of code<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-gDlCnUM1UBg/Te_XnD5fpoI/AAAAAAAAAjQ/W2ofwceRMjY/s1600/quiz.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 343px; height: 400px;" src="http://1.bp.blogspot.com/-gDlCnUM1UBg/Te_XnD5fpoI/AAAAAAAAAjQ/W2ofwceRMjY/s400/quiz.jpg" alt="" id="BLOGGER_PHOTO_ID_5615944326415623810" border="0" /></a>Quiz time! Consider the following variable declaration in a C program:<br /><br /><span style="font-family:courier new;">float x, y = 0.0f;</span><br /><br />What is the value of x?<br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><span style="font-weight: bold;">Answer</span>: <span style="font-style: italic;">indeterminate</span> (for a local variable)<br /><br />This issue hit me, after so many years of ultra-explicit programming. A colleague way back in the UK taught me to prefer the explicit style of programming, since it's less likely to fall into traps like these. 
So coding programs in that style looks like:<br /><br /><span style="font-family:courier new;">char temp[ 512 ] = {"\0"};</span><br /><span style="font-family:courier new;">int x = 0;</span><br /><span style="font-family:courier new;">float y = 1.0f;</span><br /><br />Everything gets initialized immediately after it is declared, so there is much less of a probability of picking up rogue / uninitialized values that way. This style also caused me to declare one variable per line.<br /><br />I decided to take a shortcut after so many years for a quick experiment. Not just that... I decided to do this within a piece of embedded code running on a <a href="http://en.wikipedia.org/wiki/Quadrotor">quadrotor</a>.<br /><br />The above shortcut led to the uninitialized value picking up the negative maximum value for an Arduino float: -2,147,483,648. Subsequently, this value was used in a calculation to add this particular value to an existing position. The result was a negative max float for latitude and longitude. This led to a quadrotor immediately hitting the limiter of the control system (-20 degree bank angle) and taking off to some undetermined location fractions after it was told to go into a position hold mode (where it stays in the same location in the xy plane at least).<br /><br />After this line was changed to:<br /><br /><span style="font-family:courier new;">float x = 0.0f;</span><br /><span style="font-family:courier new;">float y = 0.0f;</span><br /><br />Things started working again. Since debugging on embedded systems is a huge pain in the *&(@#$, it took me some time to find and slap my head in disbelief.<br /><br />This kind of thing is really easy to read over when you review code and definitely has the potential to have immense consequences... 
Another thing to seriously look out for.<br />( the assumption is that reviewers assume x = 0.0f as well, since it's part of the same line).<br /><br />Proper code for multiple declarations in the same line looks like this:<br /><br /><span style="font-family:courier new;">float x = 0.0f, y = 1.0f;</span><br /><br />Note that <span style="font-family:courier new;">float x = y = 0.0f;</span> is not a valid alternative: y has not been declared at that point, so it won't compile. If both variables should start with the same value, declare first and assign afterwards:<br /><br /><span style="font-family:courier new;">float x, y;</span><br /><span style="font-family:courier new;">x = y = 0.0f;</span>Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com1tag:blogger.com,1999:blog-17558167.post-28868492530722207952011-05-31T10:24:00.000-07:002011-05-31T10:46:58.727-07:00Video annotations made easy<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-Qe32LB7fXos/TeUkzdYnJbI/AAAAAAAAAi0/w7xGCYvoZGE/s1600/PostIt_16.jpg"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://4.bp.blogspot.com/-Qe32LB7fXos/TeUkzdYnJbI/AAAAAAAAAi0/w7xGCYvoZGE/s400/PostIt_16.jpg" alt="" id="BLOGGER_PHOTO_ID_5612932977066976690" border="0" /></a>There's a project in the lab that looks at the use of some new technology and how this technology is best applied within a certain context (also perhaps, how people should change their behavior to improve the outcome). Anyway, what we will end up doing is observing a number of people and making video recordings. Throughout the experiment, these people will be talking to exchange ideas and point all noses in the same direction. At the same time, some interaction will occur through this technology, which is not easily captured on screen. However, because we deal with the tech directly, we can send events or information on a different channel, such that it can be superimposed back on video or at least related in time with certain points in the interaction.<br /><br />I considered that for the purposes of analysis, it would be handy to make annotations on the video stream itself and refer to it later in time. 
The idea is that you can call attention to or apply markers on the stream, such that particular events are easier to recognize and navigate to later. In essence, it's the same as what YouTube provides, except that we don't want these videos to be put on there yet, also because the duration of the video may well be about an hour or so.<br /><br /><a href="http://www.lat-mpi.eu/tools/elan/">ELAN</a> is a nice tool that I found that has all the characteristics that we intend to use. It allows you to import an audio or video stream, which then can be annotated over the entire timeline for different events. As far as the technology events go, I've proposed to overlay those on the original video using a library called <a href="http://opencv.willowgarage.com/wiki/">opencv</a>. What you get is a static image that has all the events of the interaction between people, their audio, the things they did using the technology, with annotations (in the form of subtitles) added by the experimenters. That way, the output video is a comprehensive record of the entire experiment, which can be replayed in good quality video players that can use subtitles in the SRT format.<br /><br />Anyway, ELAN can also export to other text formats, including HTML, which will also allow you to translate or transcribe entire videos and post a log of what happened somewhere. 
The only thing that it doesn't allow you to do yet is to output the whole thing to some directory, where you get a flash video file of some sorts, together with an HTML file + Javascript to be able to jump to each annotation from there.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-27986170770280445382011-05-15T10:38:00.000-07:002011-05-15T11:13:15.212-07:00Arduino MAX7456 OSD & APRSI've just finished up the hardware and software on an Arduino Duemilanove, connected to a MAX7456 OSD chip and implementing an APRS / AX.25 datastream over the audio channel. Video from a PAL or NTSC cam is fed into the OSD chip, which overlays the image with some relevant data variables. These variables then become visible to a human pilot in the form of some kind of HUD, allowing the pilot to make better decisions on throttle settings, landing, coming back home and so on.<br /><br />The APRS / AX.25 link borrows most of the code for the signal sending from the Trackuino project. With the right filter and decoupling behind it and the right Fast PWM implementation, the signal quality is very impressive indeed (with quality meaning how perfectly the signal approximates sine waves at different frequencies).<br /><br />The OSD used to be hooked up by a simple loop, where the OSD was temporarily turned off to refresh the video buffer and then turned on again. Needless to say, this results in flicker occurring at times and also characters sometimes appearing in wrong locations (due to the internal generation of VSYNC signals and the write operations being carried out at the same time).<br /><br />The current hardware implementation uses INT0 on the arduino (Pin 2 on Duemilanove), which is connected through a 1K Ohm resistor to +5V and with a wire to the VSYNC pin on the OSD chip. This allows the chip to work already. 
Interesting points here:<br /><ul><li>I used to refresh the buffer on every VSYNC trigger, resulting in no image whatsoever. The OSD now writes new information every x cycles or whenever anything has changed.</li><li>After every change to the buffer, you should re-enable the display by writing 0x0C to VM0.</li></ul>The APRS / AX.25 link on audio was already seemingly working, but I couldn't get the data parsed for some reason. I suspected that the tools I was using ( multimon / soundmodem ) couldn't deal with the data or were expecting different formats. By closely inspecting the incoming audio signal however, I noticed some strange plateaus in the signal, as if the Arduino stopped writing in the y-direction for a brief moment. It turned out that the VSYNC interrupt was interfering with the AX.25 modem interrupts, so I made sure that only one of these interrupts is active at any given time, each waiting for the other to finish before starting its own work. This shouldn't cause a huge performance problem for receivers downstream.<br /><br />The RC circuit I use to clean the signal is in the config.h file of the trackuino sources:<br /><br /><span style="font-family: courier new;">// 8k2 10uF</span><br /><span style="font-family: courier new;">// Arduino out o--/\/\/\---+---||---o</span><br /><span style="font-family: courier new;">// R | Cc</span><br /><span style="font-family: courier new;">// ===</span><br /><span style="font-family: courier new;">// 0.1uF | C</span><br /><span style="font-family: courier new;">// v</span><br /><br />This reduces the 5V pin-out signal to 500mV peak-to-peak and cleans up the output considerably. Together with the <a href="http://arduino.cc/en/Tutorial/SecretsOfArduinoPWM">FastPWM</a> implementation, this generates a very nice sine wave indeed.<br /><br />It is very important that this signal is clean and sine-wave like. 
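To illustrate why the sine quality matters: AFSK1200 (Bell 202) shifts between a 1200 Hz "mark" tone and a 2200 Hz "space" tone while keeping the phase continuous across bit boundaries, which is what the PWM lookup plus RC filter approximates in hardware. A floating-point sketch of the idea (the sample rate and bit mapping here are illustrative, not Trackuino's actual code):

```python
import math

SAMPLE_RATE = 9600        # samples/s; illustrative, not the real PWM rate
BAUD = 1200               # AFSK1200 / Bell 202
MARK, SPACE = 1200, 2200  # Hz for a '1' bit and a '0' bit

def afsk_samples(bits):
    """Generate continuous-phase AFSK samples in [-1, 1] for a bit string."""
    phase = 0.0
    samples = []
    per_bit = SAMPLE_RATE // BAUD
    for bit in bits:
        freq = MARK if bit == "1" else SPACE
        step = 2 * math.pi * freq / SAMPLE_RATE
        for _ in range(per_bit):
            samples.append(math.sin(phase))
            phase += step  # phase accumulates across bit boundaries,
                           # so the waveform never jumps
    return samples
```

If the phase were reset at each bit boundary, the waveform would jump, producing exactly the kind of wideband clicks that make the RX-side decoder fail.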
The slight delay caused by the VSYNC meant that, due to CRC checking at the RX end, the signal didn't validate. I caught on to this by being able to, once in a while, decipher a single slash '/', but longer strings couldn't be parsed at all.<br /><br />The output of this signal goes to the mono audio-in of the A/V transmitter on the craft. The audio signal is received by the receiver and converted into a line-out, which is then sampled by the on-board ADC within my USB Hauppauge stick. The laptop can query the digital audio samples from the stick directly and analyze the signal to determine the frequencies. The frequency modulation is converted into a bitstream of 0's and 1's and eventually, the complete string rematerializes at the receiver end.<br /><br />As said, there are some utilities for doing this on an Ubuntu computer. I've tried out soundmodem, which gives you a KISS / MKISS interface, but it's probably too complex for the simple purpose I need it for (which is to parse the string out of the data and hand it to some other process). I also found 'multimon', which in AFSK1200 mode does the job very nicely. 'multimon' was written in 1997 or so and works using the OSS interface on Linux (the old /dev/dsp interface), which is now deprecated.<br /><br />You can however load a set of ALSA OSS tools to simulate OSS devices and convert things on the CPU if needed. Here's what I use to run multimon on an ALSA system without having to modify any of its internal code:<br /><br /><span style="font-family: courier new;">> aoss multimon -a AFSK1200</span><br /><br />This then outputs the data strings to the console.<br /><br />So there you have it. One single, heavily used Arduino board to generate the OSD video stream and periodically (300ms?) send more telemetry (to your liking) to the ground station using APRS/AX.25 on the audio channel of the A/V transmitter. 
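The decoding side can be sketched in the same spirit: estimate the energy at the mark and space frequencies over each bit period and pick the stronger one. This is only a toy illustration of the principle using the Goertzel algorithm; multimon's real decoder adds correlators and bit-clock recovery, which this skips:

```python
import math

SAMPLE_RATE = 9600        # must match the encoding side
BAUD = 1200
MARK, SPACE = 1200, 2200  # Hz

def goertzel_power(samples, freq):
    """Power of `samples` at `freq` (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def demodulate(samples):
    """Decide each bit by comparing mark vs space energy per bit period."""
    per_bit = SAMPLE_RATE // BAUD
    bits = ""
    for i in range(0, len(samples) - per_bit + 1, per_bit):
        chunk = samples[i:i + per_bit]
        bits += "1" if goertzel_power(chunk, MARK) > goertzel_power(chunk, SPACE) else "0"
    return bits
```

A real decoder also has to find where bit periods start; here the chunks are assumed to be perfectly aligned.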
It is not a weight-effective means of doing this, because it adds one full Arduino board to the weight, but it does handle all the processing quite nicely. You do need at least a 328P processor, due to the size of the execution image to be loaded and the RAM the code uses for internal buffers and so on.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-73507269437839473422011-05-09T11:30:00.000-07:002011-05-09T11:39:38.576-07:00HAM license<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-mwam1IdilU0/TcgzGTGnhlI/AAAAAAAAAio/RHdRcb0WZzI/s1600/11954241121641654833johnny_automatic_whole_ham.svg.hi.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 400px; height: 303px;" src="http://3.bp.blogspot.com/-mwam1IdilU0/TcgzGTGnhlI/AAAAAAAAAio/RHdRcb0WZzI/s400/11954241121641654833johnny_automatic_whole_ham.svg.hi.png" alt="" id="BLOGGER_PHOTO_ID_5604785919562843730" border="0" /></a>Well, nothing to do with the picture at the left actually, but I got my HAM license. This basically means that I can, as an amateur and non-commercially, use some otherwise restricted frequency bands to perform research and other experimentation. One of the reasons to look into this relates to my work/hobby of dealing with UAVs. These require stable control links, where reception or processing delays of over one second can mean the loss of the craft; it also relates to getting direct video feeds from these aircraft using transmission equipment and sophisticated antennas.<br /><br />Interestingly, the UAV hobby seems to keep growing, especially recently, now that there are more affordable kits around and one can get a craft in the air for under $200. 
There are also more self-built models in the sky, and people are fooling around with new and old antennas, finding ways of making them easier or less expensive to build.<br /><br />I don't have my callsign yet. I may at some point acquire some tx/rx equipment, start listening on some frequencies and explore this world a bit further. About the exam: 2 questions wrong out of 40, which is not a bad score at all. One was about the use of capacitors in a feed line to a loudspeaker; the other was, I think, something about legislation.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-86709082365321405182011-03-07T12:08:00.000-08:002011-03-07T12:15:51.153-08:00Faster than windHow fast can you go in a vehicle going downwind in relation to the wind, using only the same wind for propulsion? Can you actually go <span style="font-style: italic;">faster</span> than this wind, overtake it?<br /><br />This is an interesting Wired story about someone who proves that you can achieve up to 3.5x or 2.8x the velocity of the wind. The real speed is governed by a number of factors, including friction and the strength of the vehicle itself:<br /><a href="http://www.wired.com/magazine/2011/02/ff_fasterthanwind/all/1"><br />http://www.wired.com/magazine/2011/02/ff_fasterthanwind/all/1</a>Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-20486900875006216252011-02-22T12:15:00.000-08:002011-02-22T12:22:18.247-08:00Hauppauge USB Live 2 on LinuxI've used the USB Live2 stick for displaying analog (tv) video on Linux when I was still on Ubuntu Lucid. Things worked ok back then, so I kept the card. Then in a flash of non-inspiration, the "update-manager" appeared and I upgraded to the most recent version. 
The drivers immediately stopped working, and these were pretty special at the time, because I had compiled them myself to get the stick to work.<br /><br />I use this stick in combination with a GoPro HD camera, which around the time of the Ubuntu upgrade was itself upgraded to new firmware that allowed it to stream TV-out at the same time as recording video. Great feature! Unfortunately, since the new firmware allowed configuration settings for PAL, I decided to change that along with it. This turned out to be the real cause of the driver problems.<br /><br />On Windows the driver gets its output and all the lights work, so I figured it must have been a driver problem. Turns out that when I configure the GoPro camera to use the NTSC standard instead, I get output on Ubuntu Maverick, and a decent one at that. For some reason, the combination of the driver with PAL and GoPro output appears to be incompatible.<br /><br />So, if you have a GoPro and attempt to use it together with the USB Live2, try changing the settings to NTSC and see if you get output that way. By the way, I'm using this in combination with an analog video receiver and yes, the same problems apply!Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com2tag:blogger.com,1999:blog-17558167.post-65053965321266301662011-02-20T12:21:00.000-08:002011-02-20T13:44:47.578-08:00Philosophy of Mathematics<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-BzZHr0Nm4pA/TWF8bYAFXcI/AAAAAAAAAhs/ep5SWp6g45k/s1600/401067121_3a1667933b_z.jpg"><img style="float: right; margin: 0pt 0pt 10px 10px; cursor: pointer; width: 400px; height: 267px;" src="http://2.bp.blogspot.com/-BzZHr0Nm4pA/TWF8bYAFXcI/AAAAAAAAAhs/ep5SWp6g45k/s400/401067121_3a1667933b_z.jpg" alt="" id="BLOGGER_PHOTO_ID_5575874623402499522" border="0" /></a>In my line of work, I'm often confronted with people who face problems and want them resolved. 
Mathematics is an essential part of resolving some of these problems. In some private research, I'm trying to find the real origins of intelligence, and I find myself going way back and forward in time, space and mathematics, trying to come up with the answers. One of the questions that keeps popping up is whether there may be things incomplete about the language of mathematics itself, rather than a failure to find the right set or sequence of equations / formulas to apply. A lot of research over the past decades in Artificial Intelligence has produced enormous amounts of very important and interesting applications, but none of these, I find, exhibits a strong sense of generality: the ability to use the same technique over and over again in different situations. Most AI applications require hard-wired components of machinery in order to provide any solution.<br /><br />This causes one to go back in time to find the origins of mathematics, in search of an answer: is maths by itself (eventually) inherently limited? Is there a bound for reality and a bound for mathematics? Will both of these worlds run parallel forever (remain complementary forever), or will the abstract thought being developed in mathematics eventually diverge from reality by so much that we end up dabbling in the abstract model itself to find both problems and solutions within that model, even when there's no physical counterpart subject to the abstract problem?<br /><br />If you look at civilizations as they develop language, at some point in their language they start to associate a "count" of something with a body part. Some civilizations evolve this further and start using more abstract tokens, like sticks, to count beyond the maximum number of body parts you have. 
In simple societies, it is unlikely you need more than the number of parts on your body to explain some concept (you could also modify the definition of how you refer to something). Those which do evolve eventually use abstract representations to refer to some abstract notion of a "count". This "count" has no meaning other than our perception of some number of things.<br /><br />The numbers 0-9 as we know them now evolved over a rather long period of time and came to us from India and Arabia. The number system is base-10, which allows for relatively easy manipulation of the numbers during calculation. For this reason, they were eventually adopted and used over the Roman glyphs that dominated, for example, in Italy at the time.<br /><br />The reason why numbers became useful is related to trade. The problem with trade is that you need to figure out <span style="font-style: italic; font-weight: bold;">how much</span> to give of this for <span style="font-weight: bold; font-style: italic;">how much</span> of that. So the practical problem required some way to refer to some 'count' of this and some 'count' of that, and eventually some notion that 'x' of this equals 'y' of that. Hence, bartering and trading very quickly gave birth to the notion of equality and thus the equation.<br /><br />Geometry evolved after that and served to make rather precise calculations about areas of land, as well as to carve and build appealing feats of engineering, houses, bridges, etc. Even though not all forms and shapes could be accurately described at that point, there were some basic rules that could already be used to help out in the engineering effort. For these purposes it is already necessary to think in terms of half or fractional objects, like a third of a pie or two-thirds of the way down a bridge. 
Engineering also required the use of unknowns.<br /><br />As you may notice up to now, the roots of mathematics lie in the manipulation of the 'counts' of things... how many meters, how many pears, how many of that for ... <some>.<br /><br />Then Newton came along and decided to use equations not just for static problems, but for dynamic problems like apples falling from a tree. Here we also see the introduction of, for example, the differential equation. What the differential actually does is chop up some event over a larger period of time into many smaller parts, analyze the behavior in these smaller parts, and develop a new equation that exhibits how the system changes over time, assuming that there is no significant deviation within that system. For a singular system, i.e. one that, in the abstract model it is given, does not interact with any other system, this kind of mathematics is very well suited to solving problems.<br /><br />After Newton, a lot of new discoveries were made, primarily on the side of physics. We do not only know how to count cows, trade land and figure out how far something is, but we can also describe movement and how things move in space over time (with important assumptions, however). With Newton and the mathematics thereafter, people started to feed more abstract ideas into the language. Take into account that every addition to this language has to be tested against the axioms of the language itself in order to preserve consistency.<br /><br />The problem with more abstract ideas is that some notions may have no counterparts in reality, or that the elements they describe in theory cannot be measured because they are either too small or too big (infinity is one such example). 
Just thinking about infinity and whether it exists or not has driven people mad (<a href="http://en.wikipedia.org/wiki/Georg_Cantor">literally</a>!).<br /><br />Newtonian equations work very well for situations in which you assume a disturbance while the rest of the system is free of distortions for a certain length of time, and the system has consistent and homogeneous properties (friction, etc.). But take a different system, even a very simple pendulum, where you deal with oscillations: even this single system, without a second interacting pendulum, can only be practically computed to some degree of accuracy. That is, the real exact solution is the elaboration of a power series, depending mostly on the amplitude of the system.<br /><br />So there already exists a rather simple dynamic system for which no real exact solution is possible, because the power series extends towards infinity. Even if we used a supercomputer to compute the exact result, we'd never be able to calculate the solution before the point in time we need it. And yet... looking at the real world and at a pendulum swinging, there's the thing doing it. What causes this inherent problem in mathematics, where it cannot be used with 100% accuracy on a pendulum (given some assumed mass), but can be used very precisely on the exchange of goods on a market?<br /><br />There's something about mathematics that's still horribly incomplete, and it has something to do with recursion in mathematics. We need an ability to compute the outcomes of recursive computations very, very quickly. 
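For the pendulum example, the amplitude-dependent power series for the period is well known; a partial-sum sketch shows how each extra term refines the answer without the series ever terminating (only the first four coefficients are included here; the length, amplitude and g value are illustrative):

```python
import math

def pendulum_period(length_m, theta0, terms, g=9.81):
    """Partial sum of the power series for a pendulum's period.

    T = 2*pi*sqrt(L/g) * (1 + theta0^2/16 + 11*theta0^4/3072 + ...)
    Only the first few coefficients are listed; the exact period
    would need infinitely many.
    """
    coeffs = [1.0, 1 / 16, 11 / 3072, 173 / 737280]
    t0 = 2 * math.pi * math.sqrt(length_m / g)  # small-angle period
    return t0 * sum(c * theta0 ** (2 * i) for i, c in enumerate(coeffs[:terms]))

# A 1 m pendulum released from 60 degrees: each extra term nudges the period.
for n in range(1, 5):
    print(n, pendulum_period(1.0, math.radians(60), n))
```

At small amplitudes the first term dominates (the familiar small-angle formula); at large amplitudes the corrections never stop mattering.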
The above demonstrates that mathematics' model of the real world is really just a model, and that it breaks down for certain practical uses, depending on the complexity of the situation.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com1tag:blogger.com,1999:blog-17558167.post-65371008806140156512011-02-15T12:56:00.000-08:002011-02-15T14:17:17.016-08:00Chaos Theory<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-tU493TDbXG8/TVrqNkXixmI/AAAAAAAAAhM/xVU9-yS4D6U/s1600/z209761761.jpg"><img style="float: left; margin: 0pt 10px 10px 0pt; cursor: pointer; width: 370px; height: 400px;" src="http://1.bp.blogspot.com/-tU493TDbXG8/TVrqNkXixmI/AAAAAAAAAhM/xVU9-yS4D6U/s400/z209761761.jpg" alt="" id="BLOGGER_PHOTO_ID_5574025007646033506" border="0" /></a>Chaos. The word itself evokes feelings of disorder: things that are not <span style="font-style: italic;">orderly arranged</span>, a jumbled-up room full of stuff, stripes of paint seemingly without reason on a canvas, the results of the actions of satan, uninterpretable perceptions, everything that cannot be captured in a simple description or looks untidy. The scientific meaning of chaos however is slightly different. It's not so much about being tidy, but about losing predictability and periodicity. The interesting thing is that, from a scientific perspective, most, if not all, things around us have chaotic properties and are in one sense or another chaotically interfering with one another. 
<a href="http://en.wikipedia.org/wiki/Chaos_theory">Chaos theory</a> researches the effect of sensitivity to initial conditions: a very slight error in a volume, speed or other characteristic may lead to profound differences in the outcome over a longer period of time. Lorenz first discovered that certain systems are highly sensitive to initial conditions when he tried to predict the weather. He ran the simulation once and then printed the results. At some point he wanted to verify his findings by running the algorithm again and, to his astonishment, even after he verified that the numbers were the same, the outcomes were significantly different. The only difference was that the numbers had been slightly truncated by the computer after a few decimal places.<br /><br />Normal periodic and linear systems do not typically amplify these errors, but just show a similar, linear difference in the outcome. Basically, your result is <span style="font-style: italic;">slightly off</span>. What Lorenz found here is that after some point in time, the system started behaving completely differently from the initial run of the process. <span style="font-weight: bold; font-style: italic;">Sensitivity to initial conditions</span> is what he discovered, and he came up with a strong analogy for the phenomenon: the "Butterfly Effect". The analogy is that sensitivity to initial conditions could mean that a butterfly flapping its wings in Brazil could in theory cause a tornado in Texas.<br /><br />Other interesting discoveries were made by the Russian chemist Belousov, who mixed a couple of chemicals together and discovered that the mixture changed color to yellow, but then back again. Not only that, it was actually oscillating between clear and yellow. This phenomenon had never been witnessed and at that time was considered impossible. For that reason, the paper he submitted to a journal was rejected outright. 
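Lorenz's truncation effect can be reproduced in miniature with the logistic map, a standard toy example of sensitivity to initial conditions (the starting value and perturbation here are illustrative):

```python
# Iterate the logistic map x -> r*x*(1-x) from two initial conditions
# that differ by only 1e-10, and watch the orbits diverge.

def logistic_orbit(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)   # "the same" number, truncated slightly
for n in (0, 10, 30, 50):
    print(n, abs(a[n] - b[n]))
```

The gap grows roughly exponentially from 1e-10 up to order one, after which the two runs are effectively unrelated: exactly what Lorenz saw in his weather model.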
Even after a revision, nobody wanted to publish the results, on the basis of lack of evidence. It was only years later, after informal circulation in Moscow, that the results were picked up by Western scientists, who improved the experiments further and demonstrated that a petri dish with a certain solution of chemicals may eventually demonstrate autonomous oscillation, autonomous meaning without induction of external disturbances. Thus, a system which switches between states in a temporal manner. The actual patterns that occur in such dishes *may* look like the following. The interesting bit is that this is dependent on..... the exact initial conditions!<br /><br /><a href="http://1.bp.blogspot.com/-c-iygEKmBTU/TVrx6VezteI/AAAAAAAAAhU/S4baSlIst5Y/s1600/bz.jpg"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 182px; height: 184px;" src="http://1.bp.blogspot.com/-c-iygEKmBTU/TVrx6VezteI/AAAAAAAAAhU/S4baSlIst5Y/s400/bz.jpg" alt="" id="BLOGGER_PHOTO_ID_5574033473325479394" border="0" /></a>As for the pattern itself... there's another great scientist called <a href="http://en.wikipedia.org/wiki/Benoit_Mandelbrot"><span style="font-style: italic;">Benoit Mandelbrot</span></a>, who's not a typical mathematician in the sense that he knew algebra very well :). He studied in Paris during the Second World War, so naturally his studies were frequently interrupted. Also, he wasn't always that interested in doing math tables and all that; instead he had great visual attention to detail. This made him look at coastlines and mountains, discover recurrences of smaller details in larger ones, and come up with the idea of a very simple formula describing a hugely complex shape overall. 
He called that <span>a</span><span style="font-style: italic;"> <span style="font-weight: bold;">fractal</span></span>:<br /><br /><a href="http://4.bp.blogspot.com/-6hLK_YQnHiM/TVr0Mx58tlI/AAAAAAAAAhc/e91y6gfYI8I/s1600/mandelbrot_large.png"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 300px;" src="http://4.bp.blogspot.com/-6hLK_YQnHiM/TVr0Mx58tlI/AAAAAAAAAhc/e91y6gfYI8I/s400/mandelbrot_large.png" alt="" id="BLOGGER_PHOTO_ID_5574035989216409170" border="0" /></a>The idea is that a very simple formula, z → z^2 + c, gives rise to the picture above (calculated in the complex plane, of course, for the points where the result does not <span style="font-style: italic;">escape to infinity</span>). The figure is self-similar in the sense that one can <span style="font-style: italic;">zoom in</span> on the image and discover the same shape in many other, smaller locations at a fraction of the size, but otherwise equal to the first one.<br /><br />The interesting idea that emerges here is that <span style="font-weight: bold;">very simple rules</span> of interaction between elements <span style="font-weight: bold;">can produce</span> <span style="font-weight: bold;">hugely complex systems</span> at a larger scale. The complexity of the figure and the simplicity of the equation should give you some idea of that power. The relationship between the two has always been quite clear from an intuitive perspective, but reviewing these mathematical details makes it concrete.<br /><br />Chaos theory has turned the world of Newtonian physics upside down, along with the idea of being in control of particular phenomena or occurrences just because we are able to predict them (to some extent).<br /><br />The notions of <span style="font-style: italic;">chaos</span> and <span style="font-style: italic;">order</span> are not necessarily exclusive. 
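The computation behind such a picture can be sketched in a few lines: iterate z → z^2 + c from z = 0 and check whether the orbit escapes (once |z| > 2 it always does):

```python
def escape_time(c, max_iter=100):
    """Iterate z -> z^2 + c from z=0; return how many steps it takes
    before |z| > 2, or max_iter if it never escapes (c is then taken
    to belong to the set)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# c = 0 stays bounded forever; c = 1 escapes after a few steps.
print(escape_time(0))   # prints 100 (in the set)
print(escape_time(1))   # prints 3 (escapes quickly)
```

Coloring each pixel of the complex plane by its escape time is what produces the familiar rendering above.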
In the majority of cases, when scientists mention chaos they do not mean "100% randomness" in their discourse, but rather: "<span style="font-style: italic;">some chaotic elements involved that deny a straightforward linear solution to the problem</span>". This is because 100% randomness in systems yields no patterns whatsoever, just white noise. There is therefore a grey area between the notions of order and chaos, and in many cases, when you feed energy into a system that behaves periodically, at some point you'll push it into chaos, where it'll behave unpredictably, but may eventually return to predictability and periodicity again, although that pattern of order may be different from the one you had before. Many systems, given a certain feed of energy, swing between the two forever. This is what the Lorenz attractor at the top demonstrates, as well as how the system is highly dependent on initial conditions (here, interpret this as infinitesimally small differences in the initial condition, the reciprocal of <span style="font-style: italic;">infinitely large</span>).<br /><br />What is different in mathematics when you compare <span style="font-style: italic;">Newtonian physics</span> with<span style="font-style: italic;"> Chaos Theory</span>?<br /><ul><li>The expressions in chaos are very simple, but <span style="font-weight: bold;">recursive</span>.</li><li>Chaos math usually deals with interactions between systems or elements.<br /></li><li>Newtonian physics requires orderly systems to be able to predict what happens.<br /></li><li>Chaos has its own cycles and may skip from apparent order to chaos and flip between the edge of chaos and back without warning. 
</li><li>When you put too much energy into chaotic systems, they become unstable and generate totally unpredictable results, leaning towards randomness the more energy you put in.<br /></li></ul>Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-79552121573151970492011-02-12T11:32:00.000-08:002011-02-12T11:58:17.794-08:00New kind of science<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-hXPlZTk1Lps/TVbh3HD0hdI/AAAAAAAAAhE/KQORhcMnoMw/s1600/CA_rule30s.png"><img style="float: right; margin: 0pt 0pt 10px 10px; cursor: pointer; width: 400px; height: 200px;" src="http://1.bp.blogspot.com/-hXPlZTk1Lps/TVbh3HD0hdI/AAAAAAAAAhE/KQORhcMnoMw/s400/CA_rule30s.png" alt="" id="BLOGGER_PHOTO_ID_5572889925821695442" border="0" /></a>I'm reading a book by Stephen Wolfram called "<a href="http://www.wolframscience.com/">A new kind of science</a>". I picked up the title after viewing a number of very interesting lectures on YouTube from <a href="http://en.wikipedia.org/wiki/Robert_Sapolsky">Robert Sapolsky</a> at <a href="http://www.youtube.com/user/StanfordUniversity">Stanford University</a> about "Human Behavioral Biology". It is a privilege to be able to peek into his classes this way. One of the lectures is dedicated to cellular automata, and he explains their relevance to biology. A book by Stephen Wolfram was mentioned there, so that's how I got to it.<br /><br />Anyway, there are very mathematical ways to explain how CA's work, but here's <a href="http://en.wikipedia.org/wiki/Cellular_automaton">Wikipedia's</a>. One way to look at a CA is as a kind of state machine with many different states at very short intervals from one another, where these states are actually macro-states: the global sum of the internal states of each cell. 
Because rather small changes in internal states can significantly affect the global outcome, the horizon over which one can make calculations to derive future states is rather limited. I.e., one needs to calculate every state in between in order to find the final answer.<br /><br />Some three centuries ago we started discovering/inventing physics laws and formulas to make our lives easier. These laws and formulas were later used to construct airplanes, and we went to the moon with them. Most of these laws come with rather large assumptions. Most of the time it is: "Assuming nothing happens that introduces a significant error, we can derive our future position/velocity/acceleration by multiplying x with y over a time period z". We're just lucky that macro-objects like our vehicles behave that way in a consistent manner.<br /><br />But looking at smaller interactions or larger systems like the weather, we can't use those laws as directly. The number of collisions and forces between objects makes the entire thing so complex that you can no longer work with laws that require these assumptions. So the complication is that you now have to represent the many other bodies interacting with your system and calculate the state of this "universe" or "world" at each intermediate step, until you get to the goal state you want. Luckily the interactions are usually not really complex when you get to an appropriate level. 
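The state-by-state calculation described above is easy to make concrete with an elementary CA such as rule 30, the pattern pictured at the top of this post. A minimal sketch (row width and step count are arbitrary):

```python
# Rule 30: each cell's next state depends only on itself and its two
# neighbours. The rule number's bits encode the output for each of the
# 8 possible neighbourhood patterns.

RULE = 30

def step(cells):
    """One update of the whole row (zero-padded at the edges)."""
    padded = [0] + cells + [0]
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0] * 15 + [1] + [0] * 15     # a single live cell in the middle
for _ in range(12):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Note that there is no shortcut: to know row 12 you must compute rows 1 through 11, which is exactly the limited prediction horizon mentioned above.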
Unfortunately, knowing these interactions exactly remains difficult in many cases, and very slight differences in the "rule" can eventually produce very large deviations from the overall pattern.<br /><br />The expectation is that this kind of thinking will produce more understanding of the world around us, as there are so many processes that function according to these principles:<br /><ul><li>the billowing of smoke and vapour</li><li>the pressure of gas<br /></li><li>the way vortexes are produced by wings<br /></li><li>interactions between neurons?</li><li>the structure of snowflakes</li><li>the way cells react to other agents?<br /></li></ul>Also really interesting is the way such cellular automata can be used in combination with stochastic processes, the idea being that knowledge may not be complete for each "cell", but given their observations so far, they may assume certain facts about the overall structure and modify their behavior accordingly.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-67268999611846691242011-01-29T05:02:00.000-08:002011-01-29T07:05:54.912-08:00TryCopterHere's a video of a tricopter I was building:<br /><br /><a href="http://www.youtube.com/watch?v=wfkaXEuCcUc"><iframe title="YouTube video player" class="youtube-player" type="text/html" width="640" height="390" src="http://www.youtube.com/embed/wfkaXEuCcUc" frameborder="0"></iframe></a><br /><br />There were some issues to resolve, but nothing much out of the ordinary. The biggest problem was ensuring that the right firmware is loaded on the controller board and that this firmware functions properly. I bought the blue controller board from Korea (<a href="http://www.kkmulticopter.com/">www.kkmulticopter.com</a>), but in my case the "blue board only" firmware created problems with yaw compensation. 
Most notably, the correction for pitch was the wrong way around, so if I were to fly this thing for real, it would have flipped head over heels, so to speak.<br /><br />Other than that, there are loads of upgrades possible for this thing. My first goal was to ensure I could get this thing to fly. Further attempts will focus on more precision in the frame itself, possibly attaching my ArduPilot Mega controller board for GPS hold / altitude hold and those kinds of applications; it would be very nice if I could use that board for more precision when flying.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0tag:blogger.com,1999:blog-17558167.post-34736574898574857352011-01-03T13:30:00.000-08:002011-01-03T14:06:57.939-08:00Brazil's progress<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_uxas7ckP_0A/TSJBDHEMUII/AAAAAAAAAgc/-xghq2mBfJs/s1600/69762199-dilma-rousseff.jpg"><img style="float: left; margin: 0pt 10px 10px 0pt; cursor: pointer; width: 400px; height: 325px;" src="http://1.bp.blogspot.com/_uxas7ckP_0A/TSJBDHEMUII/AAAAAAAAAgc/-xghq2mBfJs/s400/69762199-dilma-rousseff.jpg" alt="" id="BLOGGER_PHOTO_ID_5558076411820658818" border="0" /></a>Taking a bit of time off in Brasil, I'm really surprised by the changes that have taken place over the last 2-3 years. The first thing one notices after disembarking the airplane is the large number of cars circling the roads nowadays. The cars themselves are no longer the usual about-to-break-down 9-15 year old Fiat Unos. Brasilians nowadays drive Honda Civics with the same options and luxury as in Europe (although the Flex fuel choice here is probably unique), the Chinese are exporting their cars over here, and you see large Hyundai 4x4's, Range Rovers, Mercedes, BMWs and a lot more normal cars that are more reliable than 3 years ago. 
These are all indications that the global crisis that hit Europe and the US has barely touched Brasil. There are now small funds available here and there for starting companies, business itself is becoming less this-and-that, and the people at the beach that used to walk by now have handcarts or Puchs, at least those who have made a name for themselves and/or sell reliable products.<br /><br />In fact, Brasilian analysts are commenting that yet another middle class has surfaced in Brasil, which I had somehow guessed from the disappearance of the garbage and bottle collectors that used to roam the area around our apartment. It seems the poorest of Brasil no longer need to go as far into the city in order to survive. My guess is that they now roam the outskirts of the city, closer to the poorer suburbs.<br /><br />A disadvantage of this development is that the changes are taking place so fast that it is impossible for infrastructure to keep up. Building hospitals, roads, trains and metros takes time, and there are likely not enough people to provide the construction capacity required even if the planning were ready. Besides that, any type of construction requires engineers, and engineers are exactly the kind of people the world is short of.<br /><br />The city I am in has also been built on the assumption that there would be no serious changes in Brasil's economy, such as considerable increases in salary or in the demands of people's roles and abilities. The high-rise buildings are spread around the city, and you could assume that each of those apartments owns a car. The problem starts when those people purchase a second car for the wife, or even a third or fourth car for their children. One family, four cars. It is a real possibility in this city if the wages allow it, because many people still do not feel safe enough to walk the streets or take public transport. 
The car is a status symbol; even a run-down six-year-old Fiat Uno is still considered better than taking a taxi. Probably, given the amount of traffic in the city, it is even the cheaper option.<br /><br />The concern here is not so much the traffic on the roads or the traffic jams themselves, but the result of a never-ending traffic jam in the city and what this does to people. Three years ago there were times when you could just pass through the city without being bothered much by any other car. Now people are easily delayed by 10-30 minutes per trip, and this is costly both to the time available for family and business and to their health (nerves).<br /><br />A small trip to the beaches in the south is a clear demonstration of how, in three years' time, people have become reckless road warriors, competing for their own piece of the road. Even though the objective is to go to a tranquil place near the beach to relax after a long weekend, it seems the haste of Europe has been implanted in their brains, and they seek to pass the cars in front of them by any means possible, resulting in dangerous situations everywhere you go.<br /><br />About every 10-20 minutes there is a reckless driver behind, beside or in front of you, taking another opportunity to move one car ahead in the long line of cars that everyone is part of. Family of mine driving in PE counted the road deaths and very dangerous situations they encountered: six such situations in a total of six hours on the road.<br /><br />One of the problems with such growth is that only a few people drive defensively. Most drivers on these roads are offensive: they try to pass on every side, thinking only about how they left home 30 minutes late, then lost another 20 minutes in traffic, and how to make up for that lost time. My advice to Brasilian drivers would be to take traffic into account, plan their days ahead and take safety seriously. 
From here on, the traffic in the cities will only get worse, as there is still no end in sight to the increase in car sales. The government has been very slow to expand the road infrastructure, and reliable public transport is nonexistent.<br /><br />So... even though the economic situation in all of Brasil, and especially in Pernambuco, is very favourable, there are serious challenges ahead for the president and the governors to ensure that this growth continues. One of those challenges is good, reliable public transport that middle-class people will also want to use. Another is to invest in more road policing and to make it work smarter. A lot of drivers are so badly mannered that they pass in convoys on the right and the left in order to get ahead. They do so because there is no police stopping them. A good measure for people using the hard shoulder, which is reserved for ambulances and police, would be to have them wait at a large lot at the end, where every individual can be fined in all tranquility. Who cares if they have to wait three hours before they can continue their journey?<br /><br />Possibly a number of new campaigns are necessary, for awareness of road safety, education in driving behavior and so on, for the road situation to become more bearable for those drivers who are more serious and responsible.Gerard Toonstrahttp://www.blogger.com/profile/17067969645449987498noreply@blogger.com0