Tuesday, January 30, 2007

Project Dune now stable...

Project Dune went stable yesterday. It still needs a detailed customization guide and a user's guide to be really complete, but the main beef is there for people to use and to provide feedback on.

The next steps for this project are aimed at separate modules for SCRUM project management and document management. Naturally, those modules will have certain breakout points as well to allow integration with Project Dune.

Development of new features will slow down for now, and the focus shifts to resolving issues that the user community will (or should) raise. Then, with some more stability under it, we can take it to other levels later on.

Friday, January 19, 2007

It's not all the same kind of development

I have had the opportunity to work on many different kinds of projects, and you cannot treat software development for embedded hardware the same as web server application development, not by a long shot.

Embedded software, operating systems and other low-level applications are 70-80% preparation: design, memory layout, memory management, documentation and planning. This is in contrast to the high-level languages used on web servers and the like. The more low-level and smaller you go, the better you need to plan the software, since reworking the code later is going to be more difficult with the tools available. And the lower you go in the system, the more severely the systems built on top are likely to be affected by changes.

So, embedded systems are 70-80% planning with not a lot of coding (you just have to make sure you *do* it right). Compare that to 20-40% planning for application software, which requires a lot more coding effort. The high-level languages have plenty of libraries available, and the development tools nowadays help significantly with refactoring. Extensive up-front planning is actually a risk factor there, because of changing requirements from the clients, changing insights along the development process, and changes in the way things could or should be done.

Therefore, one could perceive embedded software development as a "slow" process, but this is not truly the case. A design is in itself a product of intellect, but it does not immediately translate into something the end user can see or use. Nobody can deny, however, that a good design produces immense benefits in the long run for an operating system or low-level function.

The final argument has to do with update cycles. Embedded software *has* to be correct. Web server or PC software can be updated (it should not be, unless for new features), but we have somehow come to accept that for PC software you ship a version you think works and then continuously patch it after release. Everybody does it, so why not you? With embedded software this is simply impossible, since the user generally does not have the ability or skills to do this. And in general we tend to think of a product as "bad" if it has to go back to the shop for 'an upgrade'...

Thursday, January 18, 2007

MS Vista vs. Linux source code control...

MS Vista has been delayed a couple of times, and this is often attributed to different factors. This article shows how Windows did its version management. Check it out:

You have a very large number of branches that sometimes need to be integrated. The whole point of these branches is to establish "quality control exit gates": the code does not pass upstream unless it passes the quality control check. This is a nice thought, and that part might work, but there is another consideration for quality to be made here... immediate synchronization with the rest of the code. Now, the person writing this article is convinced that without this level of "management", nothing would ever ship.

Consider the Linux kernel. There is basically one mailing list and one mailing list only. There is an idea of "separation of expertise", but not really a separation by concern or by branch. This does not mean that branches are not used, but they are temporary. The only longer-lived branches are the kernel version branches (2.2, 2.4, 2.6), so you do have different streams there.

The mailing list is total chaos (by MS standards, at least). Everybody has access to every comment, even for areas they don't work on, and everybody comments on everything. Filtering on your area of interest may be quite difficult, but in the end you can gather insight into everything that is going on. Messages in the middle of it all establish "code freezes" (sent by Linus) when another version of the kernel is to be closed off.

What I am ignoring so far is that Windows is much more than just a kernel, so the comparison is not entirely fair. The problems that Windows may have faced with Vista are not necessarily about "quality control", but about immediate access to new features and managing the streams of code in real time. Now, I am not saying there is a solution to this dilemma, but there is a problem to be solved.

If you rely on the quality gates, a new feature that you depend on may only arrive after two weeks. Forget about putting pressure on that. Then you need to start a political fight with another department, with a different budget and different bosses, to get this feature in, or *your* project will be delayed. Truth is that it is not really the concern of that other department.

Open Source has no deadline and no requirement to keep everything backwards compatible. For Windows, this is highly desired. So Windows needs to maintain backwards-compatible APIs on top of a newly structured kernel that still "looks" the same. We all know this is an impossible task...

So... we can argue... is the shipping of Windows Vista truly a miracle and good management, or could it have shipped way earlier if things had been managed differently? Their own experience shows that the way it initially worked wasn't actually working very well...

SCRUM

There are different methods for project management besides the "Project Management Body Of Knowledge". The ancient "accredited" methods make a single person responsible for executing and finishing a project. This is just an observation at this point.

"Scrum" is another way of "managing". Actually, it is not really management, but mostly delegation. The scrum method is derived from agile project management and the team becomes responsible for its planning, estimation and execution. Well, the team led by the scrum master, sort of like a quarterback.

What I like about the method is that it allows you to extract information about daily progress. For example, each task defined in this method is estimated at a single day, and one day only. There are no tasks that take two days; if there is one, it is broken down into two parts. This facilitates management and tracking.

In regular project management, a task may take 3, 5, 7 or 20 days. In the minds of some developers this gives them "slack", and sometimes so much slack that the time is not properly controlled. At the end of the task it may either be finished or end up backlogged. Imagine the surprise... And in steps the project manager to negotiate with the client.

Having everything broken down into single-day tasks really facilitates tracking. The cool thing here is that the "whip" is no longer in the hands of the project manager. There is a public board on the wall that shows the tasks to be done and who is working on what, so there is transparency about other people's performance. In a sense, everybody in the team holds an invisible, "imaginary" whip. Even though you would not publicly hold anyone accountable, the people involved perceive a certain team pressure to comply with the tasks they agreed to, and thus feel pressure to finish. This makes the integration within the team a lot stronger if the tasks *are* completed, and opens up an interesting discussion if they are *not* (this is where people management comes into play).

In the context of "Project Dune", I am planning to build a separated module for this type of management, so that electronically it is possible to track projects this way. There are quite a number of manual tasks involved still that should be automated.
(feature point management, project burndown, product burndown and impediment burndown).
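
To make the burndown part a bit more concrete, here is a minimal sketch in C of the kind of thing such a module would compute; the numbers and the ten-day sprint length are made up for illustration. A burndown chart simply plots the remaining work per day against the ideal straight line from the initial total down to zero:

#include <stdio.h>

int main(void)
{
    /* Hypothetical example: remaining task-days recorded at the end of
       each day of a ten-day sprint. */
    const double remaining[] = { 40, 38, 35, 35, 30, 26, 22, 15, 9, 3 };
    const int days = sizeof remaining / sizeof remaining[0];
    const double total = remaining[0];
    int day;

    for (day = 0; day < days; day++) {
        /* The ideal burndown is a straight line from 'total' down to zero. */
        double ideal = total - (total * day) / (days - 1);
        printf("day %2d: remaining %5.1f (ideal %5.1f) %s\n",
               day + 1, remaining[day], ideal,
               remaining[day] > ideal ? "behind" : "on track");
    }
    return 0;
}

The product and impediment burndowns work exactly the same way, only over the product backlog and the impediment list instead of the sprint tasks.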

The whole essence of scrum is making a sprint of four weeks, putting your foot down to complete it, and managing your difficulties (impediments) plus tasks on a daily basis. This really helps to integrate your team better and establishes a much nicer team atmosphere than other management methods do. However, this does depend on the commitment and attitude of the team.

Wednesday, January 17, 2007

More on OS development...

I'm just playing around with the toy OS and learning quite a lot about how a computer works and what the kernel for a computer really has to do. I am not even working on device I/O or advanced features at all yet. Just memory management.

Memory management is subject to the following considerations:
  • Where to put the kernel code and data (you do not want to overwrite this)
  • Where to put the GDT and IDT
  • Where to put the memory "page directory" (full 4GB directory == 8 MB)
  • The ratio between kernel space vs. user space (linux = 1:3)
  • How to implement an optimal "malloc" routine
  • Page swapping (but not required, a kernel may also fault)
  • Optimization of the above requirements, as is done in Linux > 2.4
  • Memory page sharing through copy-on-write for example (by default the process shares the same code + data, but on any attempt to write to the page, the kernel makes a copy of that page)
So... great... lots of considerations. I am just at the part of deciding where to put everything. My decisions haven't been ideal yet, mostly because of boot-up limitations. But slowly I should be able to work towards allocating pages for a process, assigning them to it, and then subsequently running a certain process (albeit hardcoded in the kernel itself for the time being).
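
To make the "where to put things" and "malloc" considerations a bit more concrete, here is a rough sketch of the usual first step: a physical page-frame allocator that keeps one bit per 4 KB frame. This is not code from my kernel, just an illustration of the common approach; the 128 MB RAM size is an assumption, and in a real kernel the frames holding the kernel image, the GDT/IDT and the bitmap itself would be marked as used at boot. The page directory and a kernel heap are then built on top of frames handed out by something like this:

#include <stdint.h>

#define PAGE_SIZE  4096u
#define MAX_FRAMES (128u * 1024u * 1024u / PAGE_SIZE)  /* assumes 128 MB of RAM */

/* One bit per physical page frame: 1 = in use, 0 = free. */
static uint32_t frame_bitmap[MAX_FRAMES / 32];

static void frame_set(uint32_t f)   { frame_bitmap[f / 32] |=  (1u << (f % 32)); }
static void frame_clear(uint32_t f) { frame_bitmap[f / 32] &= ~(1u << (f % 32)); }
static int  frame_test(uint32_t f)  { return frame_bitmap[f / 32] & (1u << (f % 32)); }

/* Return the physical address of a free 4 KB frame, or 0 when memory is full.
   Frame 0 is skipped (it holds the real-mode IVT and BIOS data anyway), so
   0 can double as the "out of memory" return value. */
uint32_t alloc_frame(void)
{
    uint32_t f;
    for (f = 1; f < MAX_FRAMES; f++) {
        if (!frame_test(f)) {
            frame_set(f);
            return f * PAGE_SIZE;
        }
    }
    return 0;
}

void free_frame(uint32_t phys_addr)
{
    frame_clear(phys_addr / PAGE_SIZE);
}

A kernel "malloc" then typically sits one layer higher and carves smaller blocks out of pages mapped onto frames obtained this way, and copy-on-write is usually implemented by keeping a reference count per frame and only copying when a write fault comes in on a shared page.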

As said before, this is not going to be anything big, but it will greatly assist in understanding other OS's a lot better. Maybe I could even use this knowledge to jump-start another project for a limited device and make the machine do something really specific in an easy way.

G>

Monday, January 15, 2007

How to start writing your own OS

Writing a fully fledged Operating System is not a task that can be finished by a small group of people, but it is still possible to write one that gets near "DOS"-like functionality all by yourself. What you put in... you decide! It will probably only run on your hardware, or it won't make use of specialized hardware features, but you should ask yourself what kind of things you want that particular OS to have and code away!

The tutorials here show you how to start writing one. If you go into the downloads area and look at one of those tutorials, you can even get a template OS to start from. Notice that Linux 0.01 is also part of the list and is a good read (and a chuckle!) to work from.
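
To give a feel for what the C side of such a kernel looks like, here is a minimal example that prints a string by writing straight into the VGA text buffer at 0xB8000. The multiboot header, linker script and assembly stub that switches to protected mode and calls the entry function are assumed to come from one of those template OS's; the name kmain is just my own choice here:

/* Freestanding code: no libc, called from the assembly startup stub
   once the CPU is in 32-bit protected mode. */
void kmain(void)
{
    /* The VGA text screen is memory-mapped at 0xB8000; every character
       cell is two bytes: the character and a colour attribute. */
    volatile unsigned short *video = (volatile unsigned short *)0xB8000;
    const char *msg = "Hello from my toy kernel!";
    int i;

    for (i = 0; msg[i] != '\0'; i++)
        video[i] = (unsigned short)((0x07 << 8) | msg[i]);  /* grey on black */

    for (;;)
        ;  /* nothing else to do yet, so just hang */
}

Compile it with something like gcc -ffreestanding -fno-builtin -nostdlib -c, link it against the startup stub, and the result is the kernel.bin that grub will load further below.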

The first thing to do when writing an OS, I think, is to choose your tools. If you don't get them right from the start, you'll spend significant amounts of time rewriting later. I am using gcc + nasm, since these are widely available and free. Then start by building a hard disk image using the info from the site here.

You probably do not want to start by writing a boot loader (it's really complicated!), and with the boot loaders available now, why should you? I did the following on Ubuntu Linux to create a hard drive image to start from. Follow it up to the point where they start to copy in the guest OS, because that guest OS... is basically your real OS that will run inside qemu (more about qemu later).

http://wiki.xensource.com/xenwiki/InstallGuestImage

In the steps where "grub" gets installed... I had to manually copy a menu.lst into the "/boot/grub" folder. It didn't get created automatically. Maybe I should have "chroot-ed" instead into the "/mnt" and then try it again. If you take care of this image, you only will need to do it once.

Anyway, I copied in a menu.lst that looks like:

default 0
timeout 3
hiddenmenu

groot=(hd0,0)

title
root (hd0,0)
kernel /kernel.bin
boot

Of course you will need to replace the "kernel.bin" every time you recompile. I have this in my Makefile as such:

sudo losetup -o 32256 /dev/loop1 ./test/hd.img
sudo mount /dev/loop1 /mnt/temp
sudo cp kernel.bin /mnt/temp/kernel.bin
sudo umount /mnt/temp
sudo losetup -d /dev/loop1
sudo losetup -d /dev/loop0


And then you can test the OS through Qemu (better than VMWare, since it starts up faster and it is free):

qemu -hda ./test/hd.img -boot c -std-vga -localtime

Qemu is installable through apt, but the kernel module needs to be downloaded from the site. No worries... plenty of documentation there. On my Ubuntu Edgy I downloaded it, untarred it, ran ./configure and then make; make install, and it all worked.

The cool thing nowadays is that you can download various OS's from here and there, merge the concepts and see what happens. Also, an OS is not necessarily multi-user or multi-tasking, nor does it necessarily need to page memory out to disk and back again (you can simply page-fault and panic).

Unless you have a specific need or idea to follow, I wouldn't even start to believe it would be the start of the next "better" OS... But it's good to write one "OS" in your life to better understand how they all work.