Tuesday, November 22, 2005

Is the HTTP protocol broken? ( 1 )

I work with the HTTP protocol every day. It was originally developed as part of an effort to serve static HTML files that have no interactive content or ongoing relationship with the server. This was also an era when servers were neither as powerful nor as cheap as they are today.

From a hardware and traffic perspective, it makes sense to terminate a connection once the files have been served. Keeping a connection open for every user consumes resources on the server, since every connection means an open socket, and it would require slightly reworking the model of a thread serving a user being tightly integrated with that socket.

The bad thing about terminating connections is that a session ID must be communicated to the client, and the "execution context" cannot be maintained: it has to be reloaded on every request. Each new request needs to re-establish the session and potentially reload a lot of data for that particular user. Keeping a separate execution context per user alive (without serializing it) could make the application more useful and more reliable (think of the back button), because every new action is still related to the same context. Context management in HTTP applications is a problem in itself: you don't know whether a user will ever contact you again, and you do not want to hold on to a large hierarchy of objects for a user who has gone away. This is why the session timeout was introduced.
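As a concrete illustration of that timeout, here is a minimal sketch of an in-memory session store that evicts idle contexts, which is essentially what a session timeout does. The class and method names are my own invention, not from any servlet API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory session store illustrating why session timeout exists:
// the server cannot know whether a client will ever return, so idle contexts
// are evicted after a fixed period instead of being kept around forever.
public class SessionStore {
    private final long timeoutMillis;
    private final Map<String, Long> lastSeen = new HashMap<>();
    private final Map<String, Map<String, Object>> contexts = new HashMap<>();

    public SessionStore(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Called on every request: touch the session and return its context,
    // creating a fresh (empty) one if it expired or never existed.
    public Map<String, Object> context(String sessionId, long now) {
        evictExpired(now);
        lastSeen.put(sessionId, now);
        return contexts.computeIfAbsent(sessionId, id -> new HashMap<>());
    }

    // Drop contexts whose owners have (probably) gone away.
    private void evictExpired(long now) {
        lastSeen.entrySet().removeIf(e -> {
            if (now - e.getValue() > timeoutMillis) {
                contexts.remove(e.getKey());
                return true;
            }
            return false;
        });
    }

    public boolean isActive(String sessionId) {
        return contexts.containsKey(sessionId);
    }
}
```

Note how any state put into the context is simply lost once the user stays away longer than the timeout: exactly the trade-off described above.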

Servers should probably not deal with interface code (HTML) as much as they currently do. The application revolves around re-creating the full GUI on each new request; AJAX has reduced this to only parts of the user interface.

An approach with pages and XML injection was proposed earlier by Microsoft and later refined into "AJAX" by Google. Maybe this can be taken a step further: a browser that downloads a template pack, or templates 'on demand', while the actual data to be injected comes from a separate call. I'm not talking about Ajax here, but something more like "continuous RMI", where the connection with a container is never terminated. That would let the web server serve requests for smaller communities more quickly, because it would not need to reload the context, set things up, or repeat security checks on every request.

Server push is already broken by NAT firewalls, so I don't think the solution lies there...

To be continued...

Friday, November 18, 2005

Agent-based Internet

Mobile agents are an area of research in computer science. The field mostly involves mobile pieces of code that move from one computer to another and 'execute' elsewhere. Whatever an agent does, its traits or behaviour do not normally change: the functionality is quite static, but it executes in a different environment.

The alternative is an enormous array of protocols, each slightly different from the others yet with much in common. Some protocols are data-driven, like XML, while others are more function- or action-driven, like HTTP. EDI, HTML, HTTP, XML, SOAP, .NET: the list goes on and on. All of these protocols require a contract between client and server, or else a contract to be discovered and complied with dynamically.

Agents can change this. Rather than thinking about agents as "mobile pieces of code", we could think about them as "mobile, adaptive pieces of code that maintain a conversation". I'm basing this on Java, for example, where it is easy to serialize a class with its data and transport it over a network. All agents are sent to the same port without exception, because it is no longer the port that controls behaviour or is entitled to particular functionality; it is the message and request inside the agent, which is actually executed within the host environment.
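As a sketch of that serialization step, Java's standard ObjectOutputStream and ObjectInputStream can round-trip such an object. The Agent class and its fields here are hypothetical, purely for illustration:

```java
import java.io.*;

// A hypothetical agent: a serializable class carrying its request and state.
// Java's built-in serialization turns the object plus its data into bytes
// that can be shipped over a socket and revived on the receiving host.
public class AgentDemo {
    static class Agent implements Serializable {
        private static final long serialVersionUID = 1L;
        final String request;   // what the agent wants to do
        final String homeHost;  // where to forward it back to
        Agent(String request, String homeHost) {
            this.request = request;
            this.homeHost = homeHost;
        }
    }

    // Serialize the agent as it would be just before transport.
    static byte[] toBytes(Agent a) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(a);
            }
            return buf.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Revive the agent on the "remote" side, data intact.
    static Agent fromBytes(byte[] bytes) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Agent) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The receiving host would deserialize the bytes and then decide, based on the request inside, what the agent is allowed to do.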

The agent may request information (browsing) or make a request to purchase a particular article. It can give away information about its host environment, as HTTP does in some sense, but it can also travel on to gather other information that may be required before the action can be completed.

Consider a credit card transaction. An agent could be sent off to buy item "XA-01-05" from a particular online shop, with credit card details and shipping address in its pocket. The agent is executed and requests that the item be purchased and shipped to the address. In return, it hands over the credit card details, which the environment must specifically ask for. If the environment also asks for an email address or phone number, the agent travels back to its originating host, asks the user for the extra information, then returns and continues its conversation.

Rather than creating "software interfaces", I think it's worth considering versioned "data-objects". For instance, an environment could ask for "v3.0 of Master CC", which is version 3 of the Master Credit Card details, a specific format. Or it could give the list of acceptable payments and ask the user how it wants to pay. Anything is possible. The actual conversation can be based on very basic, old "text adventure" parsing protocols:
  • A: Buy "XA-01-05"
  • B: Accept "{v3.0 Master CC, v2.0 Visa CC, cash}" ( "{...}" being the contents of a Java object )
  • A: Use "{v3.0 of Master CC}"
  • B: Get "Shipping Address"
  • A: Use "{shipping_address}"
  • B: Get "Email Of Host"
  • A: Please forward to {Host}
A is forwarded back to host and continues conversation...
  • A: Request "Email of Host" for transaction "YYY" to "Buy XA-01-05"
  • ( C, the originating host environment, shows the message to the user, who accepts )
  • C: Use "Email of Host"
  • A: Please forward to {address_of_destination}
A is forwarded back to destination and continues conversation there, completing transaction...
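A toy dispatcher for this verb-plus-quoted-argument style might look as follows. The verbs, the replies and the class name are purely illustrative; a real agent protocol would need authentication, versioned payloads and far more careful parsing:

```java
// A toy dispatcher for the verb-plus-argument conversation sketched above.
public class Conversation {
    // Split a line like: Buy "XA-01-05" into verb and quoted argument.
    static String[] parse(String line) {
        int quote = line.indexOf('"');
        String verb = line.substring(0, quote).trim();
        String arg = line.substring(quote + 1, line.lastIndexOf('"'));
        return new String[] { verb, arg };
    }

    // The shop side of the dialogue: answer each incoming verb.
    static String reply(String line) {
        String[] parts = parse(line);
        switch (parts[0]) {
            case "Buy":
                // Announce which payment formats we accept, as in the sketch.
                return "Accept \"{v3.0 Master CC, v2.0 Visa CC, cash}\"";
            case "Use":
                return "Get \"Shipping Address\"";
            default:
                return "Error \"unknown verb\"";
        }
    }
}
```

Integration then amounts to adding a case to the dispatcher, rather than implementing a whole new protocol stack.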

Just some initial ideas; in practice there is a big gap to fill in the agent's capabilities, not least its parsing, to say nothing of security! And how could it be proven in a court of law that something was actually bought? Cryptography can help here and there, but only to some extent.

What I wish to pursue is "some means" by which the endless "integration" of protocols can be avoided altogether: a space in which integration means adding a couple of parsing statements or requests, rather than burying layer upon layer of additional "plumbing" on top of one another. Business rules also become potentially easier to express this way.

You should also see that the above requires standardisation of data-classes. Properly versioned, I see this as a benefit: once the data format is common, libraries for parsing and validation can be written once, rather than each party doing this "in their own way".
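A sketch of what such a versioned data-class could look like; the "MasterCC" name, the version string and the validation rule are all invented for illustration:

```java
// Hypothetical versioned data-class: the format name and version travel with
// the data, so a receiving environment can ask for "v3.0 of Master CC" and a
// shared library can validate it once, the same way everywhere.
public class MasterCC {
    public static final String FORMAT = "Master CC";
    public static final String VERSION = "3.0";

    private final String cardNumber;
    private final String holder;

    public MasterCC(String cardNumber, String holder) {
        this.cardNumber = cardNumber;
        this.holder = holder;
    }

    // Shared, write-once validation instead of every party doing it their own way.
    public boolean isValid() {
        return cardNumber != null
                && cardNumber.matches("\\d{16}")
                && holder != null
                && !holder.isEmpty();
    }

    // The identifier an environment would request, e.g. "v3.0 of Master CC".
    public static String formatId() {
        return "v" + VERSION + " of " + FORMAT;
    }
}
```

A "v4.0" could then be introduced alongside "v3.0" without breaking parties that still speak the older format.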

Many benefits over current integration practices... Let's see how this area develops.

Thursday, November 17, 2005

Democratic Corporations?

I've been reading Naomi Klein's book "No Logo". It levels a lot of criticism at large corporations over their responsibilities to the community and to the workforce. She documents, for instance, that Microsoft kept a core of professionals on the payroll but around 1,500 temps who could be dismissed at any time. Departments like printing and other "non-core" functions were outsourced to other companies, so that "right-sizing" (as America now calls it) can be done at the touch of a button. Nobody wants to bother with workers' rights, families and those kinds of responsibilities anymore, because they are a burden on the "responsibility towards the shareholders".

Capitalism is everywhere. I'm not against making money, but I am against making money at the cost of everything else. I am all for economic sustainability, but also for social sustainability and environmental responsibility.

I've been thinking lately about ideas for a "democratic" company. The management of a conventional corporation does not particularly resemble "democracy", since the workers inside the company can hardly exert influence over its actions unless they organise some kind of demonstration or strike. Decision-making is mainly hierarchical and top-down, not bottom-up as some companies like to portray themselves.

Such a company should still identify leaders, make responsibilities clear, and give people with responsibilities the autonomy to act on them. For instance, project leaders could be made fully responsible for the work and for organising the people within a project, while forums, votes and newscasts let people shape policy inside the company itself (aligned to its mission and goals), which is more about the identity and culture of a company than anything else.

Consider a new social experiment: computer professionals working "away". There are some challenges to face here:
  • How to define and show the "culture" of the company
  • Quickly finding consensus and defining ways of dealing with conflict
  • Fair payrolling around the globe (different holidays, economies, work laws)
  • Ensuring that people work for the hours they claim (discipline and ethics)
  • Feeling of belonging to the company and feeling connected
  • Social isolation (from perspective of work)
  • Maintaining projects on budget and time
  • Ensuring fair work (deal with the voluntary work-aholic)
  • Regular meetings and making sure these are held with people present
  • Dealing with confidential information
Especially when you are able to influence how the company is run, within the boundaries of its mission and goals, this can be a very rewarding experience. Working "away" means working at home, while travelling, at a Starbucks in the city, or in the public library: anywhere you can get connectivity, and wherever you like to work most.

The benefits for the worker are very clear. The benefits for a customer are not yet: motivation is one, and tapping into a global network is another. But what kinds of services can be developed in this network that are difficult to organise in a conventional corporation, and will they be worth the effort?

Tuesday, November 01, 2005

Standardization
In software, this term means adopting the same protocol, format or development methodology as other people. Patterns are another means of standardization. I like standardization, because it makes things look familiar to many people and easier for others to understand.

The bad thing about standardization is that some developers see it as the "Holy Grail" of development. For instance, if an application runs on Tomcat, is solidly built, does its job well, and that job is important, sooner or later some guy comes in and says: "Really, this ought to run on an application server; the service has become so important, we should not risk...".

And there goes another million dollars on a project that was running fine in the first place and is now being rewritten to run inside an expensive application container.

I'm not especially against J2EE, but I've found the actual 'benefit' for which J2EE was developed to matter less and less, now that there are technologies like Spring, Hibernate, the Apache libraries and log4j. As I wrote in another article on open source and the costs of software development, I see no problem if someone uses a Tomcat engine properly and writes an application using Servlets or other technologies. As long as it works well and behaves consistently, why change it to EJBs and the rest, *only for the purpose of being standards-compliant*?

I've seen people develop enormous presentations about *why* everything should run on J2EE: that industry practice has shown J2EE to be the winning factor, that an application server can cater to all your needs, and so on, and so on.

In practice, from experience, J2EE development is not as cost-effective as people want others to believe. I am currently writing software on my own using *only* Tomcat, Hibernate, Apache libraries and some XML libraries. Development on that project is about 3 times faster than if I had used the J2EE standards alone, and the testing and unit-testing cycle is about 5 times shorter, because Tomcat starts up in 10 seconds where JBoss takes 50, for example (WebLogic even more than that).

Another defense in favour of J2EE is remoteness: "anybody can use this object as a service...". In reality and practice, people develop an HTTP interface and don't go through the EJB container directly. I would probably rather use SOAP, if the volume of use of the service permits it.