Saturday, April 28, 2007

The IT worker of the future

We all know that the US has a large number of IT companies and IT departments in mega-corporations, and we also know that over the past couple of years a large number of people were deemed "redundant" in their own country. Their jobs were off-shored to countries like India, China or Brazil, mostly because the work they were performing is available as a service elsewhere at a much better cost. With the Internet, getting the results of that work and interacting with the project team is very easy.

What we all probably need to do is re-think the objectives of companies, as I wrote about in earlier posts. I am not going to single out particular companies here (it is irrelevant), but the concept of a corporation and its responsibilities should be evaluated. Is it meaningful that a company only aims to make money in this world? What if we, as a democracy, change legislation so that companies must also pursue other objectives? Should we introduce protective laws against job off-shoring?

If the trend continues, then I am afraid all IT workers will need to start learning more than just IT. The technology is getting easier all the time, or at least more accessible and better documented. Search for almost any particular problem and you are likely to get many possible solutions. People with lower wages in other countries have access to that very same information. How can you make a difference?

I believe that much of the actual software written today will become more commoditized. Not to the level of steel, just more than it is now. If this happens, then it is no longer enough to just know about technology. In the future you will very likely need complementary knowledge in order to keep your job in your own country. You must bridge the gap between technology and some other activity that is really valuable to the business. This kind of activity is much more difficult to execute than following a requirements specification. It is also an activity that can produce a lot more value for your employer than simply acquiring an existing solution.

The first thing to consider is that IT doesn't really matter by itself. It is there to be used, a bit like a truck. The truck provides no value just by existing; it is used to bring cargo from A to B. So the shipment of the cargo from A to B is what actually provides value, not the truck moving from A to B. The activity of building systems is meaningless in isolation, because you do it for a certain goal. That goal has meaning, not the system itself. The better you understand this distinction, the better and more efficient the systems you build will be.

I think that IT people in some richer countries can no longer sustain themselves unless they understand how to improve themselves to better contribute to actual goals. This is possible by applying IT knowledge on top of in-depth knowledge of the problem domain. Just building systems isn't going to cut it anymore. Focus on the problem, find out everything about the problem domain, find out how it really works, then shape your technical solution around that.

So, knowing about IT is still important in order to apply that knowledge to the problem. But from a value perspective it becomes much more important to understand the actual problem domains in which we are working. Basically, you should aim to be the person who writes the specification for producing something.

Thursday, April 19, 2007

Workflow systems

I'm looking at workflow systems for work and I'm very much enjoying it so far. The main package I'm looking at is called "jBPM". It's quite a large package to understand.

Workflow programming, to begin with, differs from conventional programming in that it focuses on the process and the execution of things, not on the appearance or transformation of data. What I quite enjoy is that this means you focus on the middle part of constructing a system, rather than on the beginning (screens, user input), the very end (database), or how some requirement fits into your current system architecturally.

So, in order to successfully construct workflow-based systems, you first need to change your thinking about IT and how systems get constructed. It's not necessarily a pedantically 'clean' way of programming (where all data is stored in proper elements and there are compile-time dependencies between all objects), but it provides much more decoupling between functions, which in turn improves reusability and componentization.

You should think of workflow as a series of automatic or manual actions that need to take place, easy to re-arrange and reusable. These actions are arranged in a diagram, which specifies which actor or pool of actors can execute them. The diagram also states whether these actions are executed in parallel or in series. To top things off, the whole process "instance" (that particular order, or that particular holiday request) gets persisted to a database when it cannot proceed immediately because it is waiting for external actions. You do not get these features "automatically" from any programming language.

That word "language" is mentioned deliberately, because you will start to mix your implementation language (say Java or C#) with another, "graph-based" language, in this case XML. The XML specifies authentication, actors, pools and the "flow" of the process, whereas the implementation language expresses componentized functionality (such as performing a credit check, updating databases, gathering user input and so forth).

If you re-read the last paragraph, you notice how different workflow-based programming essentially is and how it could provide enormous value to IT systems if it is done right.
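
To make that mix more concrete, here is a minimal sketch in the spirit of the "hello world" example from the jBPM 3 documentation. The process, state and class names are made up and you would need the jBPM (jPDL) jars on the classpath; the point is only to show the XML graph language and the Java code working together.

import org.jbpm.graph.def.ProcessDefinition;
import org.jbpm.graph.exe.ProcessInstance;
import org.jbpm.graph.exe.Token;

public class HolidayRequestSketch
{
    public static void main( String[] args )
    {
        // The "graph" language: a tiny jPDL process with one wait state.
        ProcessDefinition definition = ProcessDefinition.parseXmlString(
            "<process-definition name='holiday-request'>" +
            "  <start-state>" +
            "    <transition to='await-approval' />" +
            "  </start-state>" +
            "  <state name='await-approval'>" +
            "    <transition to='done' />" +
            "  </state>" +
            "  <end-state name='done' />" +
            "</process-definition>" );

        // The domain language: every submitted request becomes its own process instance.
        ProcessInstance instance = new ProcessInstance( definition );
        Token token = instance.getRootToken();

        token.signal(); // leave the start state
        System.out.println( "Waiting in: " + token.getNode().getName() );

        // A real engine would persist the instance here until the approval arrives.
        token.signal(); // the external approval arrived
        System.out.println( "Ended: " + instance.hasEnded() );
    }
}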

As an example, if you know Mantis or BugZilla, you know from the developer's or tester's point of view what the bug manipulation screen looks like. It is a single screen with every component in editable form, but from a workflow point of view it should be constructed differently every time, with only those components required for that particular step. For example, if you are just updating or categorizing the bug, you do not need all the fields on the screen to do that. When the process dictates that a CCB comment is needed, you again have far too many elements on your screen for specifically that task.

The point is that, in general, many applications show everything on screen, including things you do not care about at that moment, and the user has to compensate for the lack of guidance by understanding a process documented elsewhere in a very large document. Wouldn't it be great if we could just look at a screen, get our business process right and know what to do at the same time?

The other thing I notice is that many SRS documents are not necessarily focused on the process, but are focused on data. This shows me that there must be a huge opportunity in this area to provide consultancy on core business processes and merge this with IT specific knowledge.

Software should be easy to change. Some people interpret this as "scriptable", but scripts are software as well, and you would not want production-editable fields that allow business people to modify those scripts at runtime. So there are only specific scenarios in which scripts actually add value; if only your developers ever modify those scripts, why not write them in a strongly typed language instead?

Workflow based systems, due to their high decoupling factor and focus on process, might be slightly easier to change than J2EE, .NET, C++ or ASP systems. It matters from the perspective of flexibility and how you allow systems to grow with changing needs, because the needs *will* change.

Lastly, someone at JBoss mentioned that workflow systems are still very much in their initial phase, comparing them to the early days of RDBMSs. It is not yet clear how they can be used effectively, in which environments, or what else we need for workflow systems to become very successful. The key to improving this is to take the best of both worlds and merge them into something better; we may have to take a radical step into something else and see where it goes. I am also considering performance: workflow-based systems may be slightly slower due to the use of XML and process persistence, but with a couple of machines and all processes running on similar systems, what else do we need for widespread deployment of this technology?

There must be things missing here, because not everyone is using it yet :).

Sunday, April 01, 2007

Yet more

Here are yet some more design works:

Some cartoon art


galaxy explosion kind of thing


gel/glass/liquid button graphics. Can contain any text or icon really.



1024x768 wallpaper for my computer.

All of the above were taken from tutorials to learn more about design. The last one took the most time, but also involved more actual drawing. It has a very interesting, unintentional detail at the base of the tree: I sculpted the tree out of an actual photo and then noticed the trunk looks like a foot.

So I'm probably going to read up a bit more on design theory from here on: things like logo construction, colors and those kinds of theories.

I actually tried drawing some cartoon characters, but that hasn't resulted in much yet ;).

More design work

These are two more designs I created with the Wacom and some standard functionality.
The first is a modification of a fractal. I heavily embossed it and layered the result on top of the original, then grayscaled part of the picture for a concrete effect and layered other images over it for a sort of glass-like effect. As you can see, I don't master glass yet, which is why I started on the following image, one of an orb over a newspaper. That one looks much better.

The orb was produced entirely in Photoshop. It started as a simple sphere in a flat color, then I added an inner shadow effect to create some white at the bottom right (it should have more in the final image). Then another gradient in the middle, masked by a radial gradient. A white hotspot at the top, and then a selection of the initial orb filled with black and transparency, which was scaled down to about 40% in height, slightly blurred and moved slightly to the right. The glasses and newspaper are from a photo. The problem I ran into, obviously, is not paying attention to the existing shadows in the picture :).

G>

Saturday, March 31, 2007

Wacom tablet

I just bought a Wacom tablet to pay attention to a different kind of creativity as well. I never really drew anything in my life, but the computer helps a lot with imagery in many cases. This one is called "beauty", but in Japanese :).

Well, as with all things, the first try isn't much of anything, but here you go. I have an incredible amount to learn about art, graphics, how to use the applications and so on.

One of the things I did start out with was a way to create fonts. I managed to create 5 characters and got them working, but then dropped it because I didn't see much further need.

Other things in design that sound very interesting and that I will try out, probably in the order given:
  • Logos
  • Image manipulation
  • Creating paintings based on photos
  • Actual drawing from scratch
So far, I have used "inkscape", "gimp" and "fontforge" as the tools of choice. Inkscape is really cool and quite different from the Gimp: it deals a lot with shapes, where the Gimp deals more with pixels and photo manipulation.

I can recommend getting one of those drawing tablets. It's a lot of fun and the navigation on the computer feels totally different.

G>

Monday, March 26, 2007

Beryl. The Windows Aero?

Beryl is a window manager for Ubuntu Linux that puts some eye-candy on your desktop. For those people not aligned with Linux, subject to the marketing efforts of Windows and wooed by Aero only, this is Beryl:

http://www.youtube.com/results?search_query=beryl&search=Search

Beryl uses OpenGL to animate all those pretty effects. The number of options you have for customization is absolutely astounding. You can configure all effects for window events and then also configure individual timings and whatnot. Beryl also shows the X Cube, an invention of the X/Linux world, which helps in choosing your workspace.

I'm running Beryl both on a 64MB NVIDIA card at home and on my laptop's 128MB card. The laptop is quite a bit slower, and when closing very large windows with effects you notice the heavy CPU load, but if you believe the videos, the performance should still be better than Aero.

Some people have posted videos on YouTube that show how Beryl runs properly on standard current computers and does not need the "special" Aero-capable graphics hardware that Windows Vista now seems to demand to run Aero. Have a look for yourself:

http://www.youtube.com/watch?v=xC5uEe5OzNQ

So, I guess we're finally here then. Windows Vista sucks and is already getting discounted:

http://arstechnica.com/news.ars/post/20070323-microsoft-announces-more-discounted-vista-licensing.html
My advice is... if you are looking for something new... get Ubuntu! I am using it on two computers right now and there is no reason for any of my productivity apps to move me back to Windows. The only reason anybody should ever have to move to Windows is for games or multimedia editing applications (yes, there are plenty available like Kino, Cinelerra etc., but their usability and stability is not very good at all).

Actually, you can move to Ubuntu *now*, but go straight to the release named "Edgy", which is the latest. As soon as Feisty Fawn comes out, you'll be able to upgrade.

If you are that person that will have to reinstall anyway:

http://www.extremetech.com/article2/0,1697,2082982,00.asp

you might as well consider it in your "upgrade". I haven't even told anyone about the features of "apt" yet. Imagine a very, very, very, very large repository of software on a server, connected to the Internet, containing all available definitions of package dependencies, compatibilities, kernel versions, distributions and so on. And then from your PC, by typing in:

apt-get update
apt-get upgrade

*ALL* (yes, *ALL*, not just the kernel, not just the "main" Linux software or packages, but *ALL*) the software on your computer is automatically upgraded to the latest possible version, taking into account dependencies on other packages, and whether you have removed end-user software so that some other package is no longer needed.

That is APT.

So, get Edgy from here:

http://cdimage.ubuntu.com/releases/6.10/release/

burn your CD and go for it. When Feisty comes out officially, you will be able to upgrade over the Internet (yep, free and gratis!). Then start reading from info here:

http://ubuntuguide.org/wiki/Ubuntu_Edgy

http://www.ubuntuforums.org/

Also use Google. Type "+Ubuntu" at the beginning plus anything else that concerns you. This normally brings up the ubuntuforums and other relevant information. Notice that the Ubuntu people generally provide easier and more generic methods to resolve your problems; with other distributions, the resolutions are generally more technical in nature.

http://www.google.com/search?hl=en&q=%2BUbuntu+%2Bberyl&btnG=Search

Have fun!

G>

Saturday, March 24, 2007

Custom loader in Ubuntu Linux

I got a bit bored by the static-ness of the whole thing and decided to create a custom loader screen for use at the company I work for.

Here are details on how to get started:

http://www.ubuntuforums.org/showthread.php?t=323520


Well, I'll also post a couple of details on commands to run:

make                                             (builds the .so)
cp your-custom.so /usr/lib/usplash
ln -sf /usr/lib/usplash/your-custom.so /usr/lib/usplash/usplash-artwork.so
update-alternatives --config usplash-artwork.so  (select yours)
dpkg-reconfigure linux-image-$(uname -r)         (recreates the initramfs)

That should do it.

One very important thing: there is no need to restart in between your tests. It is not very easy to find, but usplash has a testing mode (Alt-Ctrl-F7 gets you back to X afterwards):

usplash -c

Here is some source code that shows some simple animation to get started with:

http://www.ubuntuforums.org/showthread.php?t=278301

G>

Friday, March 23, 2007

Guice in GWT

I am separating some functionality in Project Dune. Transaction control was initially hand-coded in each service request, but there were two problems: the service request started the transaction in an HTTP-specific stub, and the business logic was mixed with this protocol-specific code. So the separation puts the business code into a separate object, and transaction control is managed on any method that this business object exposes.

The first attempt was actually a transaction filter that always creates a transaction and then closes it, but this is expensive, and one problem with GWT is that a SerializableException gets consumed and serialized into the response stream, so the filter never sees the exception being raised.

I have thus used Guice (pronounced "juice") to deal with these problems. The way it works is like this:
  1. A servlet context listener (see the web.xml configuration) initializes a Guice Injector and binds it as an attribute on the servlet context.
  2. A servlet filter always opens a Hibernate session (not a transaction!) and closes it again in a try-finally.
  3. When a GWT service needs to service a request, Tomcat creates the specified servlet.
  4. Each servlet derives from an abstract BaseServiceImpl servlet.
  5. BaseServiceImpl overrides "init( ServletConfig config )" and retrieves the injector created in the listener through "config.getServletContext().getAttribute()".
  6. It then calls "injector.injectMembers( this )", which "instruments" any annotated members in the servlet instance.
  7. When the injector sees a request to inject a field or method parameter, it looks the type up in its bindings and also attempts to inject any annotated members of the instance being injected.
  8. So a few very simple annotations may result in a couple of cascaded injection requests when a servlet gets instantiated.
The very nice thing about Guice is that you no longer have to deal with XML files; it is all programmatic. As soon as you have your "injector" instance, it will have been configured with a couple of bindings. Those bindings have matchers on classes and methods, and if the injector finds anything that is annotated accordingly, it performs that instrumentation.

Notice however that Guice is *not* installed on the classloader. This means that just putting "@Inject" on a field will not do anything *unless* you obtain the instance through the injector (or pass it to injectMembers). This is not very easy for everybody to understand right away, but it is the most important aspect (no pun intended) of Guice programming that I have found so far.
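
To make that concrete, here is a minimal contrast (the Foo class name is made up):

Injector injector = Guice.createInjector( new TransactionModel() );

Foo byHand  = new Foo();                          // @Inject fields stay null; Guice never sees this object
Foo managed = injector.getInstance( Foo.class );  // @Inject fields are populated by Guice
injector.injectMembers( byHand );                 // or inject into an instance you created yourself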

Code example? You will need aopalliance.jar and guice-1.0.jar for this to run, downloadable from the Guice website:

ServletContextListener:
================
public class GuiceServletContextListener implements
    ServletContextListener
{
    public GuiceServletContextListener() {
        super();
    }

    public void contextInitialized(ServletContextEvent servletContextEvent)
    {
        ServletContext servletContext =
            servletContextEvent.getServletContext();

        // Create our injector for our application use
        // store it in servlet context.
        Injector injector = Guice.createInjector( new TransactionModel() );
        servletContext.setAttribute( Injector.class.getName(), injector );
    }

    public void contextDestroyed(
        ServletContextEvent servletContextEvent)
    {
    }
}

TransactionModel:
=============
// Requires static imports from com.google.inject.matcher.Matchers:
//   import static com.google.inject.matcher.Matchers.any;
//   import static com.google.inject.matcher.Matchers.annotatedWith;
public class TransactionModel implements Module
{
    public void configure(Binder binder)
    {
        binder.bindInterceptor(
            any(), // Match any class.
            annotatedWith(Transactional.class), // Match methods annotated with @Transactional.
            new TransactionInterceptor() // The interceptor that wraps the call.
        );
    }
}
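
The TransactionInterceptor itself is not shown in these listings, so here is a minimal sketch of what it could look like, based on the aopalliance MethodInterceptor interface. The HibernateUtil helper is a made-up name for whatever gives you access to the session that the filter opened, so adapt that part to your own setup.

TransactionInterceptor (sketch):
====================
public class TransactionInterceptor implements MethodInterceptor
{
    // org.aopalliance.intercept.MethodInterceptor, from aopalliance.jar
    public Object invoke( MethodInvocation invocation ) throws Throwable
    {
        // HibernateUtil is a hypothetical helper; use whatever returns the
        // session that the servlet filter opened for this request.
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        Transaction tx = session.beginTransaction();

        try {
            Object result = invocation.proceed(); // run the actual @Transactional method
            tx.commit();
            return result;
        } catch ( Throwable t ) {
            tx.rollback();
            throw t;
        }
    }
}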

Transactional:
==========
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
public @interface Transactional {
}

BaseServiceImpl (a base class for any servlet in the application):
==========================================

public abstract class BaseServiceImpl extends RemoteServiceServlet {
    .....
    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        Injector injector = (Injector)config.getServletContext().
            getAttribute( Injector.class.getName() );
        injector.injectMembers( this );
    }
    .....
}

CustomerServiceImpl (implementation of GWT service):
====================================
public class CustomerServiceImpl extends BaseServiceImpl implements CustomerService {
    ......
    @Inject
    private CustomerBO customerBO;
    ......
    public CustomerDTO saveCustomer( CustomerDTO dto, boolean isNew )
        throws UIException
    {
        try {
            Customer customer = customerBO.getCustomer( dto.getCustomerId() );
            if ( customer == null ) {
                checkAccess( WebConstants.CUSTOMER_ADD );
                // not found, so create it.
                customer = new Customer();
            } else {
                checkAccess( WebConstants.CUSTOMER_EDIT );
            }

            MapperIF mapper = DozerBeanMapperSingletonWrapper.getInstance();
            mapper.map( dto, customer );

            customer = customerBO.saveCustomer( customer, isNew );

            // Map the saved entity back onto a fresh DTO for the GWT client.
            CustomerDTO result = new CustomerDTO();
            mapper.map( customer, result );

            return result;
        } catch (ApplicationException ae ) {
            log.error( "Could not save customer", ae );
            throw new UIException( ae.getMessage() );
        }
    }

    ......
}

CustomerBO:
=========

@Singleton
public class CustomerBO extends BaseBO {
    ......
    @Transactional
    public Customer saveCustomer( Customer customer, boolean isNew )
        throws ApplicationException
    {
        Session session = getSession();

        try {
            if ( isNew && getCustomer( customer.getCustomerId() ) != null ) {
                // A customer with this id already exists; refuse to create a duplicate.
                throw new ApplicationException(
                    custI18n.getMessage( "error.cust.already.exists" ) );
            }

            // validate will raise an ApplicationException if the customer data is invalid.
            validateCustomer( customer );

            if ( isNew ) {
                session.save( customer );
            } else {
                session.update( customer );
            }

            customer = getCustomer( customer.getCustomerId() );

            return customer;
        } catch (HibernateException he ) {
            log.error( "Could not save customer", he );
            throw new ApplicationException(
                custI18n.getMessage( "error.save.customer" ));
        }
    }
}

==========================

Obviously this code can and should be extended with a variety of things. It should probably check whether a transaction is already ongoing, and it should probably add parameters to the Transactional annotation to state how the method supports transactions (required, supports, requiresNew, etc.). But for simplicity's sake, this is the bare minimum.

Notice how, once you have run the injector in the ServiceImpl, the CustomerBO does not need to be declared specifically in the injector configuration. This is a sort of automatic cascading injection effect, which happens because the injector is already processing the dependent classes. So, luckily, you only need to use the Injector once, and it will inject all your other classes where you want them.

Also have a look at how to do this with interfaces. What is lacking in the CustomerBO is a separation of persistence from business logic. If you separate this further, you have a starting point for switching the implementation of your persistence framework.
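
As a sketch of that direction (the CustomerDAO and HibernateCustomerDAO names are made up), a Guice module could bind the persistence interface to a concrete implementation, so that the business object only ever depends on the interface:

PersistenceModule (sketch):
==================
public class PersistenceModule implements Module
{
    public void configure( Binder binder )
    {
        // Whoever @Injects a CustomerDAO gets the Hibernate-backed implementation.
        // Switching persistence frameworks later only means changing this binding.
        binder.bind( CustomerDAO.class ).to( HibernateCustomerDAO.class );
    }
}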

I am personally contemplating moving the transaction boundary forward and wrapping it around the (relevant) methods of the ServiceImpl instead, but I am not sure yet whether this will work.

Good luck on your own incantations!

Thursday, March 15, 2007

Gluon

One of my colleagues has just released a version of "Gluon", which is a plugin for Eclipse that allows you to develop in BREW.

" GLUON is a fully open source IDE for BREWTM, based on Eclipse and CDT plugin. It increases your software quality and productivity by providing high-value features that are not available on the current development environment. It leverages BREWTM development on companies and individuals, making development process easy, enabling developer to focus on application."

Check it out. If you are into any kind of BREW development, you'd certainly love to see the features and ability to debug.

(btw, the design for the page was made by another colleague in only a very short time before launch).

The plugin will be discussed at the EclipseCon. Follow the link on the page.

Monday, March 05, 2007

Conversation with the machine

The way machine communication works is starting to change: more into a conversation with hints provided by the system to improve efficiency, rather than a list of operations to execute one after the other. The machine initially started as a processor that received a list of holes on paper, processed it internally, and output another piece of paper with values on it.

With the introduction of the desktop and the monitor, using a program typically meant preparing a file or a command with options on the command line and then hitting Enter. The program would execute and do something: either print on screen or print on the printer.

The GUIs of Windows and Apple in the 80's and 90's made this slightly more interactive, generally with the limitation that they did not have immediate access to worldwide information. The GUI could however look into databases or execute other processing whilst the user was active on the screen. This is a very important point.

Web applications started to come up in the 90's as well, whether on an intranet or the Internet. These applications would typically be a mixture of server-side script and client-side script for some simple validations. Now the user no longer had to install and maintain a binary on the client side, which makes things a lot easier for the maintainer on the server side. Everybody could just log on and use the services without many problems.

In Web 1.0 HTML, however, we would generally prepare a form or a screen that we only submit at the very end of our operations. In a way, you could say that we have to know, more or less, how the server will process the information before we can use that screen efficiently. Will this ever change? I don't know yet, but it's possible.

With Web 2.0, the way we interact with servers is slightly changing. Maybe it's HTTP technology, or we haven't reached the necessary bandwidth yet, but I see slight trends in which our conversation with the server is getting more continuous (or approaching a continuous state).

Take the suggest box, for example. The system already goes off to the server after a couple of keystrokes, and from there the conversation can start to change. Maybe it will hide a couple of fields, or allow new functionality to appear from there on.
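
For instance, with GWT's SuggestBox widget such a conversation takes only a few lines on the client side. A minimal sketch (in a real application the oracle would be fed by a server call rather than a hard-coded word list):

public void onModuleLoad()
{
    // Suggestions appear while the user is still typing.
    MultiWordSuggestOracle oracle = new MultiWordSuggestOracle();
    oracle.add( "Recife" );
    oracle.add( "Rotterdam" );
    oracle.add( "Rome" );

    SuggestBox cityBox = new SuggestBox( oracle );
    RootPanel.get().add( cityBox );
}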

The change in this field is that we are becoming more integrated with information overall, and we let the machine help us use the system (and ourselves as a society) better and more efficiently. This is the power of IT. It is quite interesting, therefore, to ask how scalable continuous-conversation systems are. The messages are typically quite small, but they involve XML translations, bandwidth, and a good deal of common-sense architecture and timeout decisions.

So, the point is that initially, human-computer interaction was quite basic and the human operating the machine had to know intricate details about the functioning of the program. Over the years this has been changing; with MS Word, for example, we can already more or less just use it as a rich typewriter. It could probably be made more efficient still, for example by letting the server find similar phrases in other books to avoid claims of plagiarism, to give a weird example.

So a probably useful addition to how we interact with computers is to allow the server to give us hints. Make it as if the computer is looking over our shoulder, analyzing our intent whilst we are interacting with it. By suggesting improvements or more efficient ways of working, the machine teaches us. A more powerful example is where the machine learns from the wisest users automatically and then makes their efficient suggestions to the less efficient users of the system.

How can/will/should user-machine conversations evolve from this point onwards?

Thursday, March 01, 2007

Scrum project management

I haven't written a lot here, nor released any new version of Project Dune, because we are writing a lot of new functionality. The image shows a crude task burndown graphic for Scrum project management. The module we created is basically aimed at allowing users to very quickly manage the states of their tasks for the projects they are assigned to.

It is time for a new release though, so very soon you should expect some new movements.

The project is also expanding in active members. We have three active developers at the moment, working on a timesheet module, Scrum project management, and evaluating quality, unit testing and so on.

Tuesday, February 13, 2007

Carnaval in Recife

It's getting carnaval again. Starts this week. On Friday, everyone will power down the machines and they start a party in front of the building.

The company has a "bloco de carnaval" called "Dá O Loud". It's a joke between making a lot of noise and "download".

On Saturday, Recife organises the "Galo da Madrugada" (see picture). Imagine 1 to 1.5 million people on a small island... it's quite insane really. Anyway, I'm going to a different kind of carnaval in the countryside on Sunday. It's in Bezerros and should be quite good.

The other days? I probably would work a bit on personal projects, take it easy, go to the beach, play some video games? etc...

Tuesday, January 30, 2007

Project Dune now stable...

Project Dune went stable yesterday. It still needs a detailed customization guide and a user's guide to be really complete, but the main beef is there for people to use and to provide feedback on.

The next steps for this project are aimed at separate modules for SCRUM project management and document management. Naturally, those modules will have certain breakout points as well to allow integration with Project Dune.

The project itself will now slow down on new features and aim to resolve issues that the user community will (or should) raise. Then, with some more stability under it, we can take it to other levels later on.

Friday, January 19, 2007

It's not all the same kind of development

I have had the opportunity to work on many different kinds of projects, and you cannot treat software development for embedded hardware the same as web server application development, not by a long shot.

Embedded software, operating systems and low-level applications are 70-80% preparation: design, memory layout, memory management, documentation and planning. This is in contrast to the high-level languages used on web servers and the like. The more low-level and smaller you go, the better you need to plan the software, since making changes later is going to be more difficult with the tools available, and the lower you go in the system, the more severely the systems built on top are likely to be affected.

So, embedded systems are 70-80% planning with not a lot of coding (just make sure you *do* it right). This is against 20-40% planning for application software, which requires a lot of coding effort. The high-level languages have plenty of libraries available and the development tools nowadays help significantly with refactoring. Extensive up-front planning is actually a risk factor there, due to changing requirements from the clients, changing insights along the development process, or a change in the way things could or should be done.

Therefore, one could "perceive" embedded software development as a "slow" process, but this is not truly the case. A design is in itself a product of intellect, but it does not immediately convert into a materialized capability for the end user. Nobody, however, can deny that a good design produces immense benefits in the long run for a certain operating system or function.

The final argument has to do with update cycles. Embedded software *has* to be correct. Web server or PC software can be updated (it should not be, except for new features), but we have more or less come to accept that for PC software you produce a version that you think works and then continuously update it after release. Everybody does it, so why not you? With embedded software this is simply impossible, since the user generally does not have the ability or skills to do this. And in general we tend to think of a product as "bad" if it has to be returned to the shop for 'an upgrade'...

Thursday, January 18, 2007

MS Vista vs. Linux source code control...

MS Vista was delayed a couple of times, and this is often attributed to different factors. This article shows how Windows did its version management. Check it out:

You have a very large number of branches that sometimes need to be integrated. The whole point of these branches is to establish "quality control exit gates", where code does not pass upstream unless it passes the quality control check. This is a nice thought, and that part might work, but there is another consideration for quality here: immediate synchronization with other code. Now, the person writing that article is convinced that without this level of "management", nothing would ever ship.

Consider the Linux kernel. There is basically one mailing list and one mailing list only. There is an idea of "separation of expertise", but not really a separation of concern by branch. This does not mean that branches are not used, but they are temporary. The only longer-lived branches are the kernel version branches (2.2, 2.4, 2.6), so you do have different streams there.

The mailing list is considered total chaos (by MS standards). Everybody has access to every comment, even for areas they don't work on, and comments on everything. Filtering on your area of interest may be quite difficult, but in the end you can gather insight into everything that is going on. Messages in between establish "code freezes" (sent by Linus) when another version of Linux is to be closed off.

What I am ignoring so far is that Windows is much more than just a kernel, so the comparison is not truly fair. The problems Windows may have faced with Vista are not necessarily about "quality control", but about immediate access to new features and managing the streams of code in real time. Now, I am not saying there is a solution to this dilemma, but there is a problem to be solved.

If you rely on the quality gates, then a new feature that you depend on may only arrive after two weeks. Forget about putting pressure on. Then you need to start a political fight with another department, with a different budget and different bosses, to get this feature in, or *your* project will be delayed. The truth is that it's not really the concern of that other department.

Open source has no deadline and no requirement to keep everything backwards compatible. For Windows, backwards compatibility is highly desired. So Windows needs to maintain backwards-compatible APIs on top of a new kernel structure while "looking" the same. We all know this is an impossible task...

So... we can argue... is the shipping of Windows Vista truly a miracle of good management, or could it have shipped way earlier if they had managed things differently? Their own experience shows that the way it initially worked wasn't actually working very well...

SCRUM

There are different methods for project management besides the "Project Management Body of Knowledge". The traditional "accredited" methods put a single person in charge of executing and finishing a project. This is just an observation at this point.

"Scrum" is another way of "managing". Actually, it is not really management, but mostly delegation. The scrum method is derived from agile project management and the team becomes responsible for its planning, estimation and execution. Well, the team led by the scrum master, sort of like a quarterback.

What I like about the method is that it allows you to extract information about daily progress. For example, each task defined in this method is estimated at a single day and one day only. There are no tasks that take two days; if there is one, it is broken down into two parts. This facilitates management and tracking.

In regular project management, a task may take 3, 5, 7 or 20 days. In the minds of some developers this gives them "slack", and sometimes so much slack that the time is not properly controlled. At the end of the task it may either be finished or backlogged. Imagine the surprise... and in steps the project manager to negotiate with the client.

Having everything broken down into single-day tasks really facilitates tracking. The cool thing here is that the "whip" is no longer in the hands of the project manager. There is a public board on the wall that shows the tasks to be done and who is working on what, so there is transparency about other people's performance. In a sense, everybody in the team holds an invisible, "imaginary" whip. Even though you wouldn't publicly hold anyone accountable, the people involved perceive a certain team pressure to comply with their agreed tasks and thus feel pressure to finish. This makes the integration within the team a lot stronger if the tasks *are* completed, and opens up an interesting discussion if they are *not* (this is where people management comes into play).

In the context of "Project Dune", I am planning to build a separated module for this type of management, so that electronically it is possible to track projects this way. There are quite a number of manual tasks involved still that should be automated.
(feature point management, project burndown, product burndown and impediment burndown).

The whole essence of Scrum is making a sprint of 4 weeks, putting your foot down to complete it, and managing your difficulties (impediments) plus tasks on a daily basis. This really helps to integrate your team better and establishes a much nicer team atmosphere than other management methods do. However, it does depend on the commitment and attitude of the team.

Wednesday, January 17, 2007

More on OS development...

I'm just playing around with the toy OS and learning quite a lot about how a computer works and what the kernel really has to do. I am not even working on device I/O or advanced features at all yet, just memory management.

Memory management is subject to the following considerations:
  • Where to put the kernel code and data (you do not want to overwrite this)
  • Where to put the GDT and IDT
  • Where to put the memory "page directory" (full 4GB directory == 8 MB)
  • The ratio between kernel space vs. user space (linux = 1:3)
  • How to implement an optimal "malloc" routine
  • Page swapping (but not required, a kernel may also fault)
  • Optimization of the above requirements, like is done in Linux > 2.4
  • Memory page sharing through copy-on-write for example (by default the process shares the same code + data, but on any attempt to write to the page, the kernel makes a copy of that page)
So... great... lots of considerations. I am just at the part of deciding where to put everything. My decisions haven't been ideal yet, mostly because of boot-up limitations. But slowly I should be able to work towards allocating pages for a process, assigning them to it, and then subsequently running a certain process (albeit hardcoded in the kernel itself for the time being).

As said before, this is not going to be anything big, but it will greatly assist in understanding other OSes a lot better. Maybe I could even use this knowledge to jump-start another project for a limited device and make the machine do something really specific in an easy way.

G>

Monday, January 15, 2007

How to start writing your own OS

Writing a fully fledged operating system is not a task that can be finished by a small group of people, but it is still possible to write one that gets near "DOS" levels of functionality all by yourself. What you put in... you decide! It will probably only run on your hardware, or it won't make use of specialized hardware features, but you should ask yourself what kind of things you want that particular OS to have and code away!

The tutorials here show you how to start writing one. If you go into the downloads area and look at one of those tutorials, you can even get a template OS to start from. Notice that Linux 0.01 is also part of the list and is a good read (and a chuckle!) to work from.

The first thing to do when writing an OS, I think, is to choose your tools. If you don't get them right from the start, you'll spend significant amounts of time rewriting later. I am using gcc + nasm, since these are widely available and free. Then start by building a hard disk image using the info from the site here.

You probably do not want to start with writing a boot loader (it's really complicated!), and with the boot loaders available now, why should you? I did the following on Ubuntu Linux to create a hard drive image to start from. Follow it up to the point where they start to copy the guest OS, because that guest OS... is basically your real OS that will run inside qemu (more about qemu later).

http://wiki.xensource.com/xenwiki/InstallGuestImage

In the steps where "grub" gets installed... I had to manually copy a menu.lst into the "/boot/grub" folder. It didn't get created automatically. Maybe I should have "chroot-ed" instead into the "/mnt" and then try it again. If you take care of this image, you only will need to do it once.

Anyway, I copied in a menu.lst that looks like:

default 0
timeout 3
hiddenmenu

groot=(hd0,0)

title
root (hd0,0)
kernel /kernel.bin
boot

Of course you will need to replace the "kernel.bin" every time you recompile. I have this in my Makefile as such:

losetup -o 32256 /dev/loop1 ./test/hd.img
mount /dev/loop1 /mnt/temp
sudo cp kernel.bin /mnt/temp/kernel.bin
sudo umount /mnt/temp
losetup -d /dev/loop1
losetup -d /dev/loop0


And then you can test the OS through Qemu (better than VMWare, since it starts up faster and it is free):

qemu -hda ./test/hd.img -boot c -std-vga -localtime

Qemu is installable through apt, but the kernel module needs to be downloaded from the site. No worries, there is plenty of documentation there. On my Ubuntu Edgy I downloaded it, untarred it, ran ./configure and then make; make install, and it all worked.

The cool thing nowadays is that you can download various OSes from here and there, merge the concepts and see what happens. Also, an OS is not necessarily multi-user or multi-tasking, nor does it necessarily need to page memory out to disk and back again (you can simply panic on a page fault).

Unless you have a specific need or idea to follow, I wouldn't even start to believe it would be the beginning of the next "better" OS... But it's good to write one "OS" in your life to better understand how they all work.

Sunday, December 31, 2006

Happy new year!

Happy new year to all. The new year is starting off again with new promises, new resolutions, new business, new projects. I wish everybody well in the coming year and hope that they will get closer to the realization of their dreams.

There are some interesting developments on my own projects. Project Dune has launched a community site: http://pdune.sourceforge.net/ and I am waiting anxiously for people to start using it. The project has been rising in the popularity ranks at SourceForge and it's quite difficult to maintain momentum at these positions.

The good news is that I will be releasing a Beta release pretty soon. There is more working functionality, the main problems have been dealt with and so I am happy to upgrade the status at this time.

It probably won't immediately answer to the needs of your project, because the rules for ownership transfer and updating are rather generic and default. I'll only be able to make good improvements when I learn more about how people want to use the software.

See you in the next year! I'm off to the beach to celebrate and will be back tomorrow....

G>

Tuesday, December 26, 2006

Measurement

I have read a book titled "Measuring and Managing Performance in Organizations". It really is enlightening.

As many know, I am working on a project called "Project Dune" on SourceForge, which is related to quality. You could say that quality is highly dependent on measurement. And for measurement you need to generate data, in such a way that you can historically compare one data set with another. The end result is hopefully some insight into how you are doing as a team, department or organization.

Well, it is not really that easy. When you start to measure in an organization, you need to generate the data, and both activities come at a cost to that same organization: effort taken away from normal production work. Obviously you have to find a balance between going for production and generating the data anyway, since you need a way forward to improve, and without information backing up your decisions you base everything on intuition, which in general can be very deceiving.

Project Dune is interesting in that the vision is that it should do the measuring for you. It is basically a tool similar to BugZilla, Mantis and so forth, but in addition to helping with standard administration and tracking, it also helps in day-to-day activities. And that is where the automation is plugged in, next to its envisioned integration points.

When you connect a system that knows about user activities, it can connect the data for its users, and the larger the domain in which it is connecting the "dots", the larger the area you can measure across.

The good thing about this is:
  • You get the measurements almost for free, always up-to-date and in real-time
  • You are better supported in day-to-day activities
  • You can spend more effort on your productive tasks without worrying about any process or dependent tasks that you do for others
Of course, it is not really that near to completion, but a BETA is coming out not too long from now. I'm just thinking about server-side call control (security) before I can even call it BETA. So far I have had no feedback, but I hope to see that happening soon.

The statistics for the project are climbing at the moment. The project is on the third page of the rankings right now (103) and still seems to be going up. A new ALPHA-3 release was issued just today. Let's see what happens next :)

Regards,

G>