
Tuesday, February 22, 2011

Hauppauge USB Live 2 on Linux

I've used the USB Live 2 stick for displaying analog (TV) video on Linux since I was still on Ubuntu Lucid. Things worked fine back then, so I kept the card. Then, in a flash of non-inspiration, the "update-manager" appeared and I upgraded to the most recent release. The drivers immediately stopped working, and they were pretty special at the time, because I had to compile them myself to get the stick working.

I use this stick in combination with a GoPro HD camera, which was upgraded to new firmware around the same time I did the Ubuntu upgrade. That firmware allows it to stream its TV output while recording video at the same time. Great feature! Unfortunately, since the new firmware also added a configuration setting for PAL, I decided to switch to PAL along with it. That turned out to be the real cause of my driver problems.

On Windows the driver was getting output and all the lights worked, so I figured it had to be a Linux driver problem. It turns out that when I configure the GoPro to use the NTSC standard instead, I do get output on Ubuntu Maverick, and a decent one at that. For some reason, the combination of this driver, PAL and the GoPro's output appears to be incompatible.

So if you have a GoPro and want to use it together with the USB Live 2, try changing the settings to NTSC and see if you get output that way. By the way, I'm also using this in combination with an analog video receiver, and yes, the same problem applies there!
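
Note that the capture side has a notion of the standard too: V4L2 lets you tell the stick which standard to expect. Below is a minimal sketch that selects NTSC on the device and reads back the active standard; the /dev/video0 node is an assumption and may differ on your machine.

/* set_ntsc.c - force a V4L2 capture device to NTSC (sketch).
   Assumption: the USB Live 2 shows up as /dev/video0.
   Build: gcc -o set_ntsc set_ntsc.c */
#include <fcntl.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open /dev/video0"); return 1; }

    v4l2_std_id std = V4L2_STD_NTSC;
    if (ioctl(fd, VIDIOC_S_STD, &std) < 0)    /* select NTSC */
        perror("VIDIOC_S_STD");

    if (ioctl(fd, VIDIOC_G_STD, &std) == 0)   /* read back what is active */
        printf("active standard mask: 0x%llx\n", (unsigned long long)std);

    close(fd);
    return 0;
}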

Thursday, August 05, 2010

Hauppauge USB Live 2 & Ubuntu

I'm running Ubuntu and purchased a USB Live 2 stick from Hauppauge to visualize direct composite video coming from a 5.8GHz receiver. The video is transmitted from an RC plane, so that a netbook on the ground shows what's going on up there. I'm not flying FPV, for lack of goggles and because I want to keep track of where the plane is. Basically, the intention is to put the family behind the netbook to see what the plane sees, while I stare at the dot in the distance.

The USB stick arrived yesterday in the mail and after plugging it in, nothing happened. USB devices are recognized by their IDs: as soon as a device registers, the kernel visits all eligible drivers to see if one wants to step up and communicate with it. Apparently support for the USB Live 2 is not yet available in the stock Lucid distribution. After searching around I suddenly hit a post, made on the 3rd of August (sic!), announcing that support for the USB Live 2 had just been completed on a development branch of v4l2. So the day before my stick arrived, development on the first version of the driver had just finished.

Blog: http://www.kernellabs.com/blog/?p=1445&cpage=1#comment-1632

I managed to install this on both my netbook and my general PC: a 32-bit Lucid Lynx installation on a Samsung N210 and a 64-bit installation on a more powerful machine with an ATI HD5850 and an i7 920. Make sure you're not holding back any linux packages; I got screen corruptions with kernel 2.6.32-23. The latest version right now is 2.6.32-24 and there it worked (or it was something else, who knows?). After the clean install, you need to grab gcc, do an update/upgrade of all packages and make sure the latest kernel headers are present:
> apt-get install gcc
> apt-get install linux-headers-generic ( <-- ensure it's the latest & greatest @ 2.6.32-24 )
Then I used the entire v4l2 source tree available here (as of 05-Aug-2010, that is; things may change rapidly):
> hg clone http://kernellabs.com/hg/~dheitmueller/polaris4
> cd polaris4
/polaris4> sudo -s
/polaris4> make

Then edit v4l/.config:
1. Find the line CONFIG_DVB_FIREDTV=m
2. Change it into CONFIG_DVB_FIREDTV=n (notice the n instead of the m at the end, that's all).
This essentially deactivates the build of a problematic module, so if you have a problem there you can switch it off.

/polaris4> make
( yes again ).

If all went well:
/polaris4> make install

And then reboot. The reboot ensures you're not using any old drivers. The rollback procedure is to reinstall the linux-image-...version... you were using; that will replace all your modules with the stock Ubuntu Lucid ones.

Any time a new linux-image gets installed, you have to do another make and make install in this custom directory to overwrite the module files again. Remember that!

As you can tell, after doing this some other video-related things may not work properly. For my netbook that's fine: I'm only going to use it in the field and for some simple stuff. So far nothing has broken, so it should be relatively safe. It looks pretty stable so far.

I've noticed that the image may be plain blue when there's no signal. Usually that means nothing has been tuned, but with this card you must also make sure there's an actual video signal going into it.

The USB Live 2 is a great little gadget with quite good image quality and zero lag with the Linux drivers. On Windows you can get lag of up to 3 seconds even, but that's when the driver isn't configured to use "GAME" mode (which matters if you're flying or gaming with this; otherwise it doesn't matter too much).

To visualize things I now use tvtime (apt-get install tvtime). There's a handy XML configuration file, /etc/tvtime.xml, where you can set all the defaults. The cool thing about this utility is that you can change a large number of settings without restarting, adjusting image quality, brightness or contrast on the fly. Great for outdoor flying, to get a bit more out of the display.

The USB Live 2 uses the cx231xx kernel driver.
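
If you want to double-check from a program which driver actually claimed the device, the V4L2 capability query reports the driver name. A small sketch, again assuming the stick sits at /dev/video0:

/* querycap.c - ask a V4L2 device which kernel driver is behind it.
   Assumption: the device node is /dev/video0.
   Build: gcc -o querycap querycap.c */
#include <fcntl.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open /dev/video0"); return 1; }

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
        perror("VIDIOC_QUERYCAP");
        close(fd);
        return 1;
    }

    /* For the USB Live 2 the driver field should read "cx231xx". */
    printf("driver: %s, card: %s\n", cap.driver, cap.card);
    close(fd);
    return 0;
}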

Wednesday, October 07, 2009

Rolling your own heap manager in Linux

I was looking at ways to maintain a lot of data in memory, which may get modified in any way through constant interaction, and then at methods to maintain that state across process invocations. One way to do this is to pick out the structures and objects and store them elsewhere, but that involves a lot of work. Since the process I am working on has full knowledge of what happens inside it, I've considered the possibility of dumping an entire data heap to disk, so that it can be read back later and immediately reused.

One thing you can't do in that case is maintain file descriptors or other state-bound items that are tied to sockets and so on. So the heap may only maintain data, and even with data one should be very careful, since state is often bound to the currently executing process. There are ways around those things too, however, so I decided to just go ahead and implement this heap writing/reading.

The idea is to set up a 2GB file as the standard and maximum size of a heap, and then map the contents of that file into memory using mmap. Let's call this the file heap. The mmap call extends the process's address space as needed (compare brk() and sbrk(), which do the same for the conventional heap). The 2GB looks contiguous to the process, although Linux may back it with completely different pages in physical memory. The idea thereafter is that process handling uses different memory pages, storing current state related to current handling, while any data modifications are done in the file heap.

So: data goes to the file heap, and any other necessary memory is allocated from the standard glibc heap (using the normal malloc() and free()). This way, anything allocated with a specific s_malloc() or s_free() call automatically ends up in the internal data file, in whatever space is available there, and programming feels quite natural overall. It's just like dealing with normal memory.

When the program terminates through a standard quit call (or when it catches specific signals), msync() is called, synchronously flushing all memory changes to the disk file; the memory is then detached and the application exits. This should guarantee decent consistency between runs. The next run attaches to the file and all the data is there, in the right place. For now, the application requires that this mmap is attached at the same base address, so that any pointers inside the file still point to something valid later. An alternative is special structures and allocation functions that do this housekeeping themselves, but that increases size significantly.

These methods and the custom malloc() and free() implementations should also allow the application to do garbage collection, maintain reference counts and other clever stuff I can't think of right now. The good thing is that this doesn't require the application to pick everything apart and keep it properly aligned elsewhere; it deals with less complexity. The preallocated heap is simply filled up until it's full. Theoretically, it's also a good idea to pre-slice the heap into three different areas and work with three heaps instead. That way each heap knows the maximum size it can reach and can arrange its memory with its own specialized algorithm.
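
To make the idea concrete, here is a minimal sketch of such a file heap, under a few loud assumptions: a fixed base address via MAP_FIXED, a trivial bump allocator behind s_malloc() (no real s_free() or garbage collection), single-threaded use, and 64-bit Linux. It's an illustration of the technique, not the real implementation:

/* fileheap.c - sketch of a persistent, mmap-backed data heap.
   Assumptions (for illustration only): fixed base address,
   bump allocation without reuse, single-threaded, 64-bit Linux. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HEAP_FILE "data.heap"
#define HEAP_SIZE (2UL * 1024 * 1024 * 1024)  /* the 2GB from the text */
#define HEAP_BASE ((void *)0x200000000000UL)  /* same address every run */

/* Bookkeeping lives inside the mapping, so it persists as well. */
struct heap_header {
    size_t used;   /* bump-allocator high-water mark */
    char  *root;   /* the application's entry point into its data */
};

static void *heap;

static void heap_open(void)
{
    int fd = open(HEAP_FILE, O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, HEAP_SIZE) < 0) { perror("heap file"); exit(1); }

    /* MAP_FIXED pins the mapping to HEAP_BASE, so raw pointers stored
       inside the file remain valid across process invocations. */
    heap = mmap(HEAP_BASE, HEAP_SIZE, PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_FIXED, fd, 0);
    if (heap == MAP_FAILED) { perror("mmap"); exit(1); }
    close(fd);

    struct heap_header *h = heap;
    if (h->used == 0)              /* fresh file: reserve the header */
        h->used = sizeof(*h);
}

static void *s_malloc(size_t n)
{
    struct heap_header *h = heap;
    n = (n + 15) & ~15UL;          /* keep allocations 16-byte aligned */
    if (h->used + n > HEAP_SIZE) return NULL;
    void *p = (char *)heap + h->used;
    h->used += n;
    return p;
}

static void heap_close(void)
{
    msync(heap, HEAP_SIZE, MS_SYNC);  /* flush all changes to the file */
    munmap(heap, HEAP_SIZE);
}

int main(void)
{
    heap_open();
    struct heap_header *h = heap;
    if (h->root == NULL) {            /* first run: create the data */
        h->root = s_malloc(64);
        strcpy(h->root, "hello from a previous run");
        puts("heap initialized");
    } else {                          /* later runs: it's just there */
        puts(h->root);
    }
    heap_close();
    return 0;
}

Run it twice: the first invocation creates and fills the heap, the second prints the string that survived in the file.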

Wednesday, September 09, 2009

Tried out a new distribution: Fedora

I've now used Fedora for the first time. Much like the relationship between Ubuntu and Debian, Fedora is a distribution that started after RedHat release 9, aimed at home users. RedHat sponsors Fedora heavily, but has positioned RedHat Enterprise Linux downstream of the Fedora developments. So basically, the community is expected to innovate on the platform, and RedHat sits around these developments, sponsoring Fedora where required and streaming those developments back into the enterprise product.

I'm actually using a couple of Linux distros now. Some of the servers are running CentOS, which is basically the open-source rebuild of RHEL, as RedHat is required to release the sources under the GPL license. So in a sense, CentOS == RHEL without the support contracts.

It's also the first time I've used SELinux. I can't say I'm truly happy about it; it's a bit invisible to the first-time user. I've managed to get things running fine, and some nice things are coming out of Fedora, but I do think Ubuntu's support for many desktop tasks is slightly better, probably because of the widely visited Ubuntu forums. All in all, Ubuntu is slightly friendlier to use as a platform, but Fedora has some great and innovative features, and probably contains things I haven't even discovered yet.

From a software perspective, they seem to have more or less the same availability. RedHat systems use "yum" for package management, though, while Ubuntu (and Debian) use apt; that's the difference. Personally, I still slightly prefer apt. Install-wise, things are pretty easy nowadays: it all installs without a hitch.

Saturday, June 16, 2007

High Precision Event Timer

I've been suffering a bit from a somewhat sluggish machine at times. Sometimes I run VMWare and that is especially annoyingly slow. I already posted the max_cstate thing (powersaving functionality).

Here is another thing I tried. "hpet=disable" on the kernel start line in /boot/grub/menu.lst.

What is HPET? The High Precision Event Timer is a hardware timer that the kernel can use as a clock source for timekeeping and timer events.

I am still experimenting with hpet disabled, but so far the Gnome desktop seems more responsive and things run slightly faster. I used to get 'stutters' on the mouse cursor sometimes and longer processing times, but things actually seem to run better without the precision timer.

At the same time I noticed that some applications suddenly crashed. Not very frequently, but it happened when load was high.

I'll keep this value for now and see what happens. I can always revert.

Wednesday, May 09, 2007

Performance of VMWare on Linux disappointing?

Probably only relevant for laptops with power-saving functionality... but you never know:

cat /sys/module/processor/parameters/max_cstate

echo 3 > /sys/module/processor/parameters/max_cstate

This value controls the deepest power-saving level (C-state) the processor is allowed to enter when idle: deeper states save more energy but take longer to wake from, which can show up as sluggishness.
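
If you'd rather toggle this from a program than from a shell, the sysfs file reads and writes like any ordinary file. A tiny sketch, assuming the same path as above (writing requires root):

/* max_cstate.c - read and set the processor's max C-state via sysfs.
   Assumes the path from the post; writing requires root. */
#include <stdio.h>

#define CSTATE_PATH "/sys/module/processor/parameters/max_cstate"

int main(void)
{
    int level = 0;
    FILE *f = fopen(CSTATE_PATH, "r");
    if (!f) { perror("open for read"); return 1; }
    if (fscanf(f, "%d", &level) == 1)
        printf("current max_cstate: %d\n", level);
    fclose(f);

    f = fopen(CSTATE_PATH, "w");      /* fails without root */
    if (!f) { perror("open for write"); return 1; }
    fprintf(f, "%d\n", 3);            /* the value used in the post */
    fclose(f);
    return 0;
}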

Sunday, April 01, 2007

More design work

These are two more designs I created with my Wacom and some standard functionality.
The first is a modification of a fractal. I basically heavily embossed it and layered the result on top of the original, then grayscaled part of the picture for a concrete effect and layered other images over it for a sort of glass-like effect. As you can see, I don't master glass yet, which is why I started on the following image, one of an orb over a newspaper. That one looks much better.

The orb was produced solely in Photoshop. It started as a simple sphere in a flat color; then I added an inner-shadow effect to create some white at the bottom right (it should have more in the final image). Then another gradient in the middle, masked by a radial gradient. A white hotspot at the top, and then a selection of the initial orb, filled with black and transparency, scaled down to about 40% in height, slightly blurred and moved slightly to the right. The glasses and newspaper are from a photo. And the problem I ran into, obviously, is not paying attention to the existing shadows in the picture :).

G>

Monday, March 26, 2007

Beryl. The Windows Aero?

Beryl is a window manager for Ubuntu Linux that puts some eye-candy on your desktop. For those people not aligned with Linux, subject to the marketing efforts of Windows and wooed only by Aero, this is Beryl:

http://www.youtube.com/results?search_query=beryl&search=Search

Beryl uses OpenGL to animate all those pretty effects. The number of customization options you have is absolutely astounding: you can configure all the effects for window events, and then also configure individual timings and what not. Beryl also shows the desktop cube, an invention of X/Linux, which helps in choosing your workspace.

I'm running Beryl on both a 64MB NVIDIA card at home and my laptop's 128MB card. The 128MB laptop is quite a bit slower, and when closing very large windows with effects you notice the heavy CPU load; but still, if you believe the videos, the performance should be better than Aero's.

Some people have uploaded videos to YouTube showing how Beryl runs properly on ordinary current computers, without the "special" Aero-capable graphics hardware that Windows Vista now seems to demand for running Aero. Have a look for yourself:

http://www.youtube.com/watch?v=xC5uEe5OzNQ

So, I guess we're finally here then. Windows Vista sucks and is already getting discounted:

http://arstechnica.com/news.ars/post/20070323-microsoft-announces-more-discounted-vista-licensing.html
My advice is... if you are looking for something new... get Ubuntu! I am using it on two computers right now and there is no reason for any of my productivity apps to pull me back to Windows. The only reason anybody should ever have to move to Windows is games or multimedia editing applications (yes, there are plenty available on Linux, like Kino, Cinelerra etc., but their usability and stability are not very good at all).

Actually, you can move to Ubuntu *now*, but go straight to the release named "Edgy", which is the latest one. As soon as Feisty Fawn comes out, you'll be able to upgrade.

If you are that person that will have to reinstall anyway:

http://www.extremetech.com/article2/0,1697,2082982,00.asp

you might as well consider it in your "upgrade". And I haven't even mentioned the features of "apt" yet. Imagine a very, very, very, very large repository of software on a server, and this server is connected to the Internet. It contains all available definitions of package dependencies, compatibilities, kernel versions, distributions etc. And then from your PC, by typing in:

apt-get update
apt-get upgrade

*ALL* (yes, *ALL*: not just the kernel, not just the "main" Linux software or packages, but *ALL*) the software on your computer is automatically upgraded to the latest possible version, taking into account dependencies on other packages, and taking into account whether you have removed certain end-user software that no longer needs some other piece of software.

That is APT.

So, get Edgy from here:

http://cdimage.ubuntu.com/releases/6.10/release/

burn your CD and go for it. When Feisty comes out officially, you will be able to upgrade over the Internet (yep, free and gratis!). Then start reading the info here:

http://ubuntuguide.org/wiki/Ubuntu_Edgy

http://www.ubuntuforums.org/

Also use Google. Type "+Ubuntu" at the beginning, followed by whatever concerns you. This normally brings up the Ubuntu forums and other useful information. Notice that the Ubuntu people generally provide easier and more generic methods to resolve your problems; if you go with other distributions, the resolutions are generally more technical in nature.

http://www.google.com/search?hl=en&q=%2BUbuntu+%2Bberyl&btnG=Search

Have fun!

G>

Thursday, January 18, 2007

MS Vista vs. Linux source code control...

MS Vista has been delayed a couple of times, and this is often attributed to various factors. This article shows how Windows did its version management. Check it out:

You have a very large number of branches that sometimes need to be integrated. The whole point of these branches is to establish "quality control exit gates": code does not pass upstream unless it passes the quality check. This is a nice thought, and that part might work, but there is another consideration for quality here... immediate synchronization with the rest of the code. Now, the person writing the article is convinced that without this level of "management", nothing would ever ship.

Consider the Linux kernel. There is basically one mailing list, and one mailing list only. There is an idea of "separation of expertise", but not really of "separation of concern, by branch". This does not mean branches are not used, but they are temporary. The only longer-lived branches are the kernel version branches (2.2, 2.4, 2.6), so you do have different streams there.

The mailing list is considered total chaos (by MS standards, that is). Everybody has access to every comment, even on things they don't work on, and comments on everything. Filtering on your area of interest may be quite difficult, but in the end you can gather insight into everything that is going on. Messages in between establish "code freezes" (sent by Linus) when another version of Linux is about to be closed off.

What I am ignoring so far is that Windows is much more than just a kernel, so the comparison is not truly fair. The problem Windows may have faced with Vista is not necessarily "quality control", but immediate access to new features and managing the streams of code in real time. Now, I am not saying there is a solution to this dilemma, but there is a problem to be solved.

If you rely on the quality gates, a new feature that you depend on may only arrive after two weeks. Forget about putting on pressure: you would need to start a political fight with another department, with a different budget and different bosses, to get this feature in, or *your* project will be delayed. Truth is that it's simply not the concern of that other department.

Open source has no deadline and no requirement to keep everything backwards compatible. For Windows, this is highly desired. So Windows needs to maintain backwards-compatible APIs on top of a new structure of the kernel, while "looking" the same. We all know this is an impossible task...

So... we can argue... is the shipping of Windows Vista truly a miracle of good management, or could it have shipped way earlier if they had managed things differently? Their own experience shows that the way it initially worked wasn't actually working very well...

Monday, January 15, 2007

How to start writing your own OS

Writing a fully-fledged operating system is not a task a small group of people can finish, but it is still possible, all by yourself, to write one that gets near "DOS"-like functionality. What you put in... you decide! It will probably only run on your hardware, or it won't use specialized hardware features, but ask yourself what kind of things you want that particular OS to have and code away!

The tutorials here show you how to start writing one. If you go into the downloads area and look at one of those tutorials, you can even grab a template OS to start from. Notice that Linux 0.01 is also on the list and is a good read (and a chuckle!) to work from.

The first thing to do when writing an OS, I think, is to choose your tools. If you don't get them right from the start, you'll spend significant amounts of time rewriting later. I am using gcc + nasm, since these are widely available and free. Then start by building a hard disk image, using the info from the site here.

You probably do not want to start out writing a boot loader (it's really complicated!), and with the boot loaders available now, why should you? I did the following on Ubuntu Linux to create a hard drive image to start from. Follow it up to the point where they start copying in the guest OS, because that guest OS... is basically your real OS, which will run inside qemu (more about qemu later).

http://wiki.xensource.com/xenwiki/InstallGuestImage

In the steps where "grub" gets installed, I had to manually copy a menu.lst into the "/boot/grub" folder; it didn't get created automatically. Maybe I should have "chroot-ed" into "/mnt" instead and tried again. If you take care of this image, you only need to do this once.

Anyway, I copied in a menu.lst that looks like:

default 0
timeout 3
hiddenmenu

# groot=(hd0,0)

title
root (hd0,0)
kernel /kernel.bin
boot
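
A quick aside on what kernel.bin can actually be: the "kernel" line above expects a Multiboot image, so the smallest possible kernel is just a Multiboot header plus an entry point. Here's a hedged sketch in C; the build flags and the VGA text-mode address are assumptions for a common 32-bit x86 setup, not something taken from the tutorials:

/* kernel.c - sketch of a minimal Multiboot kernel that grub can load.
   Assumed build (32-bit x86):
   gcc -m32 -ffreestanding -nostdlib -Wl,-Ttext=0x100000 -o kernel.bin kernel.c */

#define MB_MAGIC 0x1BADB002
#define MB_FLAGS 0

/* grub scans the first 8KB of the image for this header. */
const unsigned int multiboot_header[]
    __attribute__((section(".text"), aligned(4))) = {
    MB_MAGIC, MB_FLAGS, (unsigned int)-(MB_MAGIC + MB_FLAGS) /* checksum */
};

void _start(void)
{
    /* Caveat: Multiboot leaves the stack pointer undefined; a real
       kernel should set up its own stack before calling into C. */
    volatile unsigned short *vga = (unsigned short *)0xB8000;
    const char *msg = "Hello from my own OS";
    for (int i = 0; msg[i]; i++)
        vga[i] = (unsigned short)(0x0F00 | msg[i]);  /* white on black */
    for (;;)
        ;  /* there is nothing to return to */
}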

Of course you will need to replace the "kernel.bin" every time you recompile. I have this in my Makefile as such:

sudo losetup -o 32256 /dev/loop1 ./test/hd.img  # 32256 = 63 sectors * 512 bytes: skip to the first partition
sudo mount /dev/loop1 /mnt/temp
sudo cp kernel.bin /mnt/temp/kernel.bin
sudo umount /mnt/temp
sudo losetup -d /dev/loop1


And then you can test the OS through qemu (better than VMWare here, since it starts up much faster and it is free):

qemu -hda ./test/hd.img -boot c -std-vga -localtime

Qemu is installable through apt, but the kernel accelerator module (kqemu) needs to be downloaded from the site. No worries... plenty of documentation there. On my Ubuntu Edgy I downloaded, untarred, ./configure-d and ran make; make install, and it all worked.

The cool thing nowadays is that you can download various OS's from here and there, merge the concepts and see what happens. Also, an OS is not necessarily multi-user or multi-tasking, nor does it necessarily need to page memory out to disk and back again (you can simply page-fault and panic).

Unless you have a specific need or idea to follow, don't start out believing it will be the next "better" OS... But it's good to write one "OS" in your life, to better understand how they all work.