The APRS / AX.25 link borrows most of its signal-generation code from the Trackuino project. With the right filter and decoupling behind it and the right Fast PWM implementation, the signal quality is very impressive indeed (quality here meaning how closely the output approximates a sine wave at the different tone frequencies).
The OSD used to be driven by a simple loop, where the OSD was temporarily turned off to refresh the video buffer and then turned on again. Needless to say, this caused occasional flicker and sometimes put characters in the wrong locations (because the chip's internal VSYNC generation and the write operations ran at the same time).
The current hardware implementation uses INT0 on the Arduino (pin 2 on the Duemilanove), pulled up to +5V through a 1 kOhm resistor and wired to the VSYNC pin on the OSD chip. With this in place the chip works reliably. Interesting points here:
- I used to refresh the buffer on every VSYNC trigger, which resulted in no image whatsoever. The OSD now writes new information only every x cycles, or whenever anything has changed.
- After every change to the buffer, you should re-enable the display by writing 0x0C to VM0.
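The refresh policy from the two points above can be modeled in plain C. The names and the every-10-frames interval below are mine for illustration, not taken from the actual sketch:

```c
#include <stdbool.h>

#define REFRESH_INTERVAL 10  /* refresh every N VSYNC cycles (assumed value) */

static unsigned vsync_count = 0;
static bool buffer_dirty = false;   /* set whenever telemetry values change */

/* Called from the INT0 (VSYNC) interrupt handler: returns true when the
   character buffer should actually be rewritten to the OSD chip. */
bool should_refresh_on_vsync(void) {
    vsync_count++;
    if (buffer_dirty || vsync_count >= REFRESH_INTERVAL) {
        vsync_count = 0;
        buffer_dirty = false;
        return true;   /* after the write, re-enable the display: VM0 = 0x0C */
    }
    return false;      /* skip this frame; nothing new to draw */
}
```

Keeping the decision inside the VSYNC interrupt means writes never race the chip's own sync generation, which is exactly the problem the old loop had.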
The RC circuit I use to clean the signal is described in the config.h file of the Trackuino sources:
//                 8k2          10uF
// Arduino out o--/\/\/\---+-----||-----o audio out
//                  R      |     Cc
//                        === 0.1uF
//                         |    C
//                        GND
This attenuates the 5V pin output to roughly 500mV peak-to-peak while smoothing it. Together with the Fast PWM implementation, it produces a very clean sine wave indeed.
It is very important that this signal is clean and sine-wave like. The slight timing disturbance caused by the VSYNC interrupt meant that, due to CRC checking at the RX end, the packets didn't validate. I caught on to this because, once in a while, I could decode a single slash '/', but longer strings couldn't be parsed at all.
The output of this signal goes to the mono audio input of the A/V transmitter on the craft. The audio signal is picked up by the receiver and converted to line-out, which is then sampled by the on-board ADC of my USB Hauppauge stick. The laptop queries the digital audio samples from the stick directly and analyzes the signal to determine the frequencies. The frequency modulation is converted into a bitstream of 0's and 1's and eventually the complete string rematerializes at the receiver end.
As mentioned, there are utilities for decoding this on an Ubuntu computer. I've tried soundmodem, which gives you a KISS / MKISS interface, but it's probably too complex for my simple purpose (parsing the string out of the data and handing it to some other process). I also found 'multimon', which in AFSK1200 mode does the job very nicely. 'multimon' was written around 1997 and uses the OSS sound interface on Linux (the old /dev/dsp device), which is now deprecated.
You can, however, install the ALSA OSS compatibility tools to emulate OSS devices in software. This is how I run multimon on an ALSA system without modifying any of its internal code:
> aoss multimon -a AFSK1200
This then outputs the data strings to the console.
So there you have it. One single, heavily used Arduino board generates the OSD video overlay and periodically (every 300ms or so) sends additional telemetry of your choosing to the ground station using APRS/AX.25 over the audio channel of the A/V transmitter. It is not a weight-effective way of doing this, because it adds a full Arduino board to the craft, but it handles all the processing quite nicely. You do need at least an ATmega328P, due to the size of the executable image and the RAM the code uses for its internal buffers and so on.