Saturday, February 27, 2010

AI Challenge



Simon Funk put an AI challenge online in which you need to control two mechanical hands and bat a ball from right to left. The simulation itself is written entirely in Python, and getting started is really easy.

My approach extends some ideas I have been looking at with a colleague. The video above shows some initial results of my implementation. The best performance seems to be at the start, I reckon, although you can definitely see that the left hand is replaying its strategy once it gets into the same situation (the ball slowly rolling towards it).

The research focuses on how intention, (intrinsic) reward and perceptions from the external environment all work together to hopefully do something useful in the end. It's a great challenge to work on; this physical simulation has great dynamics.

I'm using an ATI GPU for the computation, which I hope to exploit further in the future to get better results. Notice in the video how the arms don't always move very purposefully. A large amount of the time they are fighting gravity and seem to have trouble generating enough energy to overcome the counterforces acting on their 'muscles'. They don't really need to know much about the state of the entire world in order to execute some "good" actions (once again, 'good' is in the eye of the beholder and depends on how much fun something is, social interaction, the benefits the agent "feels" it gets, etc.).

Anyway, these are the results after one day so far. Now let's see if it can be made better.

Monday, February 15, 2010

FlightGear hacking

On the right is an image of FlightGear, a free flight simulator. It is used a lot for research purposes, but certainly also just for fun, and it is licensed under the GPL. You can fly anywhere in the world with the planes provided, and the entire framework is reasonably extensible. You can even choose which Flight Dynamics Model (FDM) you wish to use. Each FDM has different strengths and weaknesses: one focuses on airflow around the body parts and weight, another more on the steering, and so forth. The objective is realism: simulating how things behave prior to real-life development and tests, which cuts development costs significantly. The goal is not to compete with consumer-level flight simulators on flashy graphical details; the focus is on the correctness of the flight model and everything around it.

For example, the airports and runway placement are correct, including approach lights; the sky and environment model accurately place the stars, sun and moon given the time of day; the instruments behave correctly (the magnetic compass lags behind a bit, just as in a real airplane); and some planes may break when put under too much stress (although the pilot is likely to break before that!). The scenery uses the SRTM data that I also used for the hillshading in mapnik (see previous post).

There are a lot of customization capabilities in FlightGear too, making it very suitable for research. The way it works is pretty simple: you specify an option parameter that states a direction of input or output (for input, the simulator accepts an external file or socket with instructions to control the ailerons, rudder, etc.). More details can be found in /usr/share/games/FlightGear/Docs/README.IO if interested. The option looks roughly like this:
--generic=socket,out,10,localhost,16661,udp,myprotocol
So this chooses a socket transport, outgoing data, 10 times per second, localhost, port 16661, the UDP protocol, using the format specified in the myprotocol.xml file. Easy!
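The protocol file itself is a small property list describing which values go on each line and how they are separated. A rough sketch of what such a file can look like follows; the exact property paths and element names are from memory and may differ between FlightGear versions, so treat this as illustrative:

```xml
<?xml version="1.0"?>
<PropertyList>
 <generic>
  <output>
   <line_separator>newline</line_separator>
   <var_separator>,</var_separator>
   <!-- one <chunk> per value, emitted in order on every line -->
   <chunk>
    <name>altitude-ft</name>
    <type>float</type>
    <node>/position/altitude-ft</node>
   </chunk>
   <chunk>
    <name>latitude-deg</name>
    <type>float</type>
    <node>/position/latitude-deg</node>
   </chunk>
   <chunk>
    <name>longitude-deg</name>
    <type>float</type>
    <node>/position/longitude-deg</node>
   </chunk>
  </output>
 </generic>
</PropertyList>
```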

The receiving application can then process the stream of data coming from the flight simulator. It can also send back commands to control the aircraft, the environment, sounds, AI models or whatever. It's quite a nice way to try out a couple of concepts before having to dig too deep into the bowels of the flight simulator package itself. The same mechanism is used to record a flight path and replay it later (at the specified frequency), to connect FlightGear instances together for multiplayer flights, and to track the positions of FlightGear users on a large map.
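On the receiving side, a few lines of Python are enough to pick up the UDP stream. This is a minimal sketch: the field names are illustrative and must match whatever order of values your own protocol file emits.

```python
import socket

# Field order must match the order of values in the generic protocol file;
# these three names are just an illustration.
FIELDS = ["altitude_ft", "latitude_deg", "longitude_deg"]

def parse_line(line):
    """Parse one comma-separated sample from the generic protocol stream."""
    values = [float(v) for v in line.strip().split(",")]
    return dict(zip(FIELDS, values))

def listen(host="localhost", port=16661):
    """Receive and print samples from FlightGear, one datagram per update."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _addr = sock.recvfrom(1024)
        print(parse_line(data.decode("ascii")))
```

With the simulator started using the --generic option shown above, calling listen() would print a dictionary of values ten times per second.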

Sunday, February 14, 2010

Hill shading OpenStreetMap maps


I've been fooling around with OpenStreetMap again, this time just trying out a couple of experiments that others did and hopefully, in the process, developing some interesting ideas for using the data. I've blogged about OpenStreetMap before. A couple of things have changed since Karmic came out: some build steps require you to download more software, and some packages may have been removed since then. I had bigger problems getting mapnik to work, but that was related to the projection of the OSM import into PostGIS. When running 'osm2pgsql', don't use the "-l" flag, because that indicates conversion to lat/long; the standard mapnik projection is Mercator. Most of the time, when nothing shows up in the rendered image, it's a projection problem somewhere. Just verify that everything makes sense working backwards from the mapnik process and you should find it quickly enough.

Anyway, a very interesting experiment I tried relates to hillshading. This is an effect where elevation data from different sources is merged (nowadays people say 'mashed') with map data in order to shade particular areas of the map, showing where the hills are located. Elevation data for most of the world can be downloaded for free from the Shuttle Radar Topography Mission. You should get version 2.1. For the US, elevation data is available with 30m horizontal accuracy; the rest of the world only gets 90m (or 3 arcseconds). Some tools exist to help you manage that data: srtm2osm (which I didn't use) or a script called srtm_generate_hdr.sh (which I did use). Download the data first, then produce the HDR files.

Then run a simple script:

cd /home/gt/srtm/
# Generate a GDAL header (and TIF) for every downloaded SRTM tile;
# 'yes' answers the script's confirmation prompts automatically.
for X in *.hgt.zip
do
    yes | /home/gt/srtm/srtm_generate_hdr.sh "$X"
done

This basically generates a TIF file for each particular region. These TIF files are height maps. When you look at them, you may recognize certain parts of your country, but don't over-interpret the images you see, because the actual heights hardly show up visually.

In the Ubuntu repositories there is a package called "python-gdal", which provides Python bindings and utilities for GDAL. These can easily be used to stitch the generated TIF files together into one huge height map for an entire region. I used this to build a height map for the Benelux, for example, resulting in a 55.0MB file.
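The stitching step could be scripted along these lines. gdal_merge.py is one of the utilities that ships alongside the GDAL Python bindings; the file pattern and output name here are just placeholders:

```python
import glob
import subprocess

def merge_command(tif_files, out_file="combined.tif"):
    """Build the gdal_merge.py invocation that mosaics the SRTM tiles
    into one big GeoTIFF height map. Sorting keeps the call reproducible."""
    return ["gdal_merge.py", "-of", "GTiff", "-o", out_file] + sorted(tif_files)

def merge_tiles(pattern="*.tif", out_file="combined.tif"):
    """Run the merge over all TIF tiles matching the pattern."""
    subprocess.check_call(merge_command(glob.glob(pattern), out_file))
```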

The problem is that the projection is incorrect, so you need to assign the lat/long projection first, then warp the image, reprojecting it to Mercator (what mapnik uses), and in the process take out any invalid data points (set to 32767).

#> gdal_translate -of GTiff -co "TILED=YES" -a_srs "+proj=latlong" combined.tif combined_adapted.tif
#> gdalwarp -of GTiff -co "TILED=YES" -srcnodata 32767 -t_srs "+proj=merc +ellps=sphere +R=6378137 +a=6378137 +units=m" -rcs -order 3 -tr 30 30 -multi combined_adapted.tif warped.tif

Cool! So we now have a warped.tif in Mercator projection that is basically an elevation map of the terrain. This is not yet an image that uses the elevation data to 'shade' particular areas of the map, though. Think of putting the elevation data under a spotlight placed high above the map, somewhere to the north-west. This creates shady areas wherever hills face away from the light source; the other sides remain normally lit.
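The shading itself boils down to a small amount of math per pixel. Just to illustrate the idea, here is a minimal pure-Python sketch of the standard slope/aspect hillshading formula; it uses simple central differences rather than the usual 3x3 kernel, so treat it as a simplification, not the real tool:

```python
import math

def hillshade(grid, cellsize, azimuth_deg=315.0, altitude_deg=45.0):
    """Shade a height map (list of rows of elevations, in the same units as
    cellsize) with a light source at the given azimuth (clockwise from north)
    and altitude above the horizon. Returns 0-255 values for interior cells."""
    zenith = math.radians(90.0 - altitude_deg)
    # Convert the compass azimuth to a mathematical angle.
    azimuth = math.radians((360.0 - azimuth_deg + 90.0) % 360.0)
    out = []
    for y in range(1, len(grid) - 1):
        row = []
        for x in range(1, len(grid[0]) - 1):
            # Terrain gradient from central differences.
            dzdx = (grid[y][x + 1] - grid[y][x - 1]) / (2.0 * cellsize)
            dzdy = (grid[y + 1][x] - grid[y - 1][x]) / (2.0 * cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            # Lambertian-style shading: bright where the slope faces the light.
            shade = 255.0 * (math.cos(zenith) * math.cos(slope) +
                             math.sin(zenith) * math.sin(slope) *
                             math.cos(azimuth - aspect))
            row.append(max(0, int(shade)))
        out.append(row)
    return out
```

Flat terrain comes out as a uniform mid-grey, while slopes facing away from the north-west light turn dark, which is exactly the spotlight effect described above.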

Perry Geo luckily did this work for me. You can download his utility here. I had to modify the Makefile to set the GDAL include path (-I/usr/include/gdal) and the linked GDAL library (gdal1.5.0). The tool creates a 400-500MB hillshade file (18555x24128 pixels) when run over the entire Benelux region. That's still manageable, but getting there may take 4GB of memory and a fast processor. The following image shows a minute detail of the Dutch coastline (scaled down 33%).

So that's cool. This image can be used as the bottom layer under the mapnik-generated tiles. If mapnik then paints the other layers above it slightly translucent, the shading shines through, creating a sense of height throughout the map.
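In the mapnik stylesheet this amounts to a raster layer reading the hillshade file through the GDAL input plugin. A hypothetical fragment is sketched below; the file path follows the directory layout used earlier in this post, and the opacity value is just a starting point to tune:

```xml
<!-- Bottom layer: the hillshade raster, read via mapnik's gdal plugin. -->
<Style name="hillshade">
  <Rule>
    <RasterSymbolizer opacity="0.6"/>
  </Rule>
</Style>
<Layer name="hillshade" status="on">
  <StyleName>hillshade</StyleName>
  <Datasource>
    <Parameter name="type">gdal</Parameter>
    <Parameter name="file">/home/gt/srtm/hillshade.tif</Parameter>
  </Datasource>
</Layer>
```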

A standard mapnik image of a region in Belgium, close to Maastricht, which has a number of hills, looks like this:

The same region in Google Maps, rendering terrain data, looks like this:

So my rendered image looks different from Google's, but then the mapnik styling wasn't optimized for terrain data.

There's also a reason why the zoom level wasn't increased to 18 or so. Remember that we downloaded the SRTM3 set, which only provides 90m horizontal accuracy; the height map therefore interpolates between planes of equal elevation, which doesn't necessarily improve quality :). Google Maps doesn't go beyond zoom level 15 for terrain, which is a reasonable limit. Trying to render images at higher zoom levels produces pixelated results that don't look nice at all. The height map can be improved slightly with better algorithms, but no matter what you do it will never be perfect; even Google Maps has a couple of hilly areas that look a bit blotchy, irregular or pixelated. The idea is just to get a sense of the height of a mountain range anyway, and drawing contours on top of such a map certainly helps in finding the mountains (especially on hiking and cycling maps, which use these techniques regularly).

Original sources with the tutorials are here and here.