Wednesday, December 28, 2005

Patty on SourceForge?

I've worked a bit more on the profiling tool for the JDK 1.5.0, which uses JVMTI. The benefits of this profiler are, in comparison with others:
  • You can target the classes you wish to profile for method execution and coverage analysis
  • There is no need to instrument the classes at build time in the ant script
  • Using JVMTI, any Java process can be analysed. There are no sources required.
  • For an existing project, using the profiling tool only means modifying the startup line of the application so that the JVM loads the profiling agent ( see the example after this list ). That is all.
  • Because it targets specific classes for analysis, the rest of the classes not targeted run at full speed ( unless code coverage analysis is turned on ).
  • It currently analyses thread contention, method execution times of the targeted classes and code coverage of the targeted classes.
  • It will (soon) analyse the heap based on how much memory is consumed and can be reached by all objects of a particular class. For example, this allows you to analyse how much memory is referenced by an HTTP session or a singleton.
  • It will eventually include a web interface on Tomcat, where some links can be used as a command interface to the profiling agent. This allows you to instrument/deinstrument classes at runtime and request specific information about heap analysis, etc.
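As a concrete example of that startup-line change ( the agent library name and its options here are hypothetical; the actual ones will be in the project documentation ), loading a JVMTI agent generally only takes one extra JVM option:

    java -agentpath:/opt/patty/libpatty.so=classes=com.example.*,daemon=localhost:9000 -jar myapp.jar
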
Because it was going so nicely, I've requested a new open-source project on SourceForge, where I hope the project will be hosted. I'll keep you posted on the acceptance and the link.

[edit]

Patty got approved and is released here: patty

Home page, roadmap, documentation, screenshots, etc. will be added soon. Binaries and source have already been posted.

Tuesday, December 27, 2005

Java Virtual Machine Tool Interface

I'm playing around with JVMTI now.

JVMTI allows you to do a couple of useful things:
  • Query all objects of a particular class and iterate over them to see what other objects are reachable from those objects. This also allows you to analyse the amount of heap consumed by certain classes, like singletons or HTTP sessions.
  • Native method entry / exit events. A bytecode instrumentation implementation is more efficient for execution profiling, though, because it only affects the classes that are instrumented.
  • Redefine classes at runtime. Useful for instrumenting/deinstrumenting classes on demand. This also allows you to run a profiler in production with practically no overhead while nothing is instrumented.
  • Analyse thread contention on synchronized blocks.
  • Single step through the code to analyse code coverage. This should probably only be done during unit testing.
In my implementation, I'm using 2 computers. One is running the JVM with the JVMTI agent, the other is running a daemon that receives events and records them in memory.
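
A minimal sketch of what the recording daemon could look like ( the port number and the line-based event format are assumptions for illustration; the real agent protocol may differ ):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.ArrayList;
    import java.util.List;

    // Accepts a connection from the JVMTI agent and records incoming
    // event lines in memory, for later inspection by a GUI or web view.
    public class EventDaemon {
        private final List<String> events = new ArrayList<String>();

        public void run(int port) throws Exception {
            ServerSocket server = new ServerSocket(port);
            while (true) {
                Socket agent = server.accept();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(agent.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    synchronized (events) {
                        events.add(line);   // e.g. "METHOD_EXIT com.example.Foo.bar 1234567"
                    }
                }
                agent.close();
            }
        }

        public static void main(String[] args) throws Exception {
            new EventDaemon().run(9000);
        }
    }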

Ideally, I wish to run a GUI on top of the daemon with some command abilities, so that a control socket into the JVM can be used to send commands to the profiling application. This GUI is likely to become some sort of simple Tomcat / Struts application that renders a couple of views in real-time on a webserver, based on the received information so far. That can also be used to print heap memory usage / changes in realtime.

The idea is to do the following:
  • Collect real-time information about code execution, avg/min/max.
  • Collect code coverage statistics per class and method.
  • Instrument classes at runtime to analyse method execution timings.
  • Request memory usage of reachable classes from a particular class type and analyse this over time when the application is running.
  • Request memory usage of all objects per class type and put this into a list / diagram.
  • Send garbage collection requests at runtime for analysing memory leaks and the impact of long-running sessions.
  • Deinstrument classes on demand.
  • Analyse thread contention.
  • ... Some more features in the future.
Sun has been researching another technology called JFluid, which has even more features and is probably just now being productized for use in NetBeans. I don't really need that many features though, just some good in-depth information about what goes on inside the JVM.

Monday, December 19, 2005

Source Code Versioning systems

I've been using a couple of versioning systems now. Some I like, some I highly dislike.

I've used MS VSS, cmm, CVS, ClearCase and SVN, and many have interesting features, but none comes close to satisfying all of the needs that I have as a developer. Some of them are rich in features, but slow in operation.

This is one of the things I am thinking of: developing a module with a web-based front-end on Tomcat to organise sources, backed by a database. A client-side daemon process connects to the server process and retrieves XML-based repository information, plus optionally different streams of codebases.

CVS almost forces you to look at a single view at a time, unless you manually set it up to do something different. CVS keeps its repository files inside the same directories, which makes compilation and source management unnecessarily difficult.

Most of these tools were written in C/C++, with the largest overhead in file management. With Java, where files can be managed more easily ( no more string-length checking, etc. ), this can be implemented much more easily and quickly. I am thinking of using Derby as a local database, rather than individual files, to record information, and of using XML to transfer repository information.
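
A small sketch of the Derby idea ( the table layout and JDBC URL are only assumptions to illustrate an embedded local store, not a worked-out schema ):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    // Keeps local working-copy state in an embedded Derby database
    // instead of scattering metadata files through the source tree.
    public class LocalRepoStore {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection con = DriverManager.getConnection("jdbc:derby:localrepo;create=true");

            Statement st = con.createStatement();
            st.executeUpdate("CREATE TABLE files (path VARCHAR(512), revision INT, checksum VARCHAR(64))");

            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO files (path, revision, checksum) VALUES (?, ?, ?)");
            ps.setString(1, "src/Main.java");
            ps.setInt(2, 42);
            ps.setString(3, "d41d8cd98f00b204e9800998ecf8427e");
            ps.executeUpdate();

            con.close();
        }
    }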

What I want is a system that:
  • Shows baselines and streams of the repository in an easy manner, such as an overview I can choose from.
  • Shows a description of what the baseline means
  • Lets a user download a whole baseline or stream locally without too much fuss ( setup view, create local repo, etc... )
  • Can import a new project/baseline through a GUI system
  • Automatically updates local files when updates happen remotely.
  • Does not create alien files within the source directory
  • Can let users work with multiple streams locally, without having to redownload or do extra stuff for diffs, etc.
  • Should not use a network-mapped drive.
  • Allows easy baseline management.
  • Enables good overviews of extra meta-information on repos ( number of lines, number of changes, what the changes are, searching for changes, etc ), maybe even up to annotation.
Well, just some good ideas anyway, now up for the design :)

Tuesday, November 22, 2005

HTTP protocol is broken? ( 1 )

I'm working with the HTTP protocol everyday. It was initially developed as part of an effort to serve static HTML files that have no interactive content or relationship with the server. This was also in the era when servers were not as powerful as today, let alone as cheap.

From a hardware and traffic perspective, it makes sense to terminate a connection once the files have been served. Keeping a connection open for every user means consuming resources on the server: at the very least a socket stays open, and the common model of a thread serving the user, tightly integrated with the socket, would need to be slightly reworked.

The bad thing about terminating connections is that a session ID has to be communicated to the client and that the "execution context" cannot be maintained: it needs to be reloaded every time. Every new request needs to regain a full view of the session and potentially reload a lot of data for that particular user. Having a separate context of execution per user that remains active (without being serialized) can help make the application more usable ( back button? ) and more reliable, because every new action still relates to the same context. (Context management in HTTP applications can be a problem, because you don't know whether someone will contact you again, and you do not necessarily want to keep a large hierarchy of objects around for a user who has gone away. This is why the session timeout was introduced.)
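
To illustrate that reloading ( the attribute name and the rebuilding step are hypothetical ), this is roughly what a servlet has to do on every request: look the session up again and rebuild the per-user context if it has timed out in the meantime.

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class ShopServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Every request starts by re-establishing who the user is.
            HttpSession session = req.getSession(true);
            session.setMaxInactiveInterval(30 * 60);   // session timeout, in seconds

            // The "execution context" has to be rebuilt if it is no longer there.
            Map<String, Object> ctx = (Map<String, Object>) session.getAttribute("userContext");
            if (ctx == null) {
                ctx = new HashMap<String, Object>();   // in reality: reload the user's data from the database
                session.setAttribute("userContext", ctx);
            }
            // ... only now can the actual action be handled ...
        }
    }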

Servers should probably not deal with interface code (HTML) as much as is currently the case. The application revolves around re-creating the full GUI on each new request; AJAX has reduced this to only parts of the user interface.

An approach with pages and XML injection has already been proposed before by Microsoft, which was further refined into "AJAX" by Google. Maybe this can be taken a step further to create a browser that downloads a template pack or templates 'on demand', where the actual data that needs to be injected comes from a separate call. I'm not talking about Ajax here, but more about something like "continuous RMI", where the connection with a container is never terminated. This would enable the web server to serve requests faster for smaller communities, because it does not need to reload the context, set everything up or repeat the security checks on every request.

Server push is already broken by NAT firewalls, so I don't think the solution lies there...

To be continued...

Friday, November 18, 2005

Agent based Internet

Mobile agents are an area of research in computer science. The field mostly involves mobile pieces of code that move from one computer to another and 'execute' elsewhere. Whatever an agent does, its traits or behaviour do not normally change: the functionality is quite static, but it executes in a different environment.

The alternative is an enormous array of protocols. Every protocol is slightly different from the others, yet they have a lot in common. Some protocols are data-driven, like XML, while other protocols are more function- or action-driven, like HTTP. EDI, HTML, HTTP, XML, SOAP, .NET: the list goes on and on. All of these protocols require a contract between client and server, or otherwise a contract to be discovered and complied with dynamically.

Agents can change this. Rather than thinking about agents as "mobile pieces of code", we could think about them as "mobile, adaptive pieces of code that maintain a conversation". I'm basing this on Java, for example, where it's easy to serialize a Java class with its data and transport it over a network. The agents are all sent to the same port without exception, because it is no longer the port that controls behaviour or is entitled to particular functionality; it is the message and request inside the agent, which is actually executed within the host environment.
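
A rough sketch of the sending half, based on plain Java serialization ( the Agent class, the port number and the 'pocket' layout are assumptions for illustration ):

    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.net.Socket;
    import java.util.HashMap;
    import java.util.Map;

    // A toy agent: a serializable object carrying its request and its "pocket".
    class Agent implements Serializable {
        String request = "Buy XA-01-05";
        Map<String, Object> pocket = new HashMap<String, Object>();
    }

    public class AgentSender {
        public static void main(String[] args) throws Exception {
            Agent agent = new Agent();
            agent.pocket.put("v3.0 Master CC", "4111-xxxx-xxxx-xxxx");
            agent.pocket.put("shipping_address", "Some Street 1, Some City");

            // Every host listens on the same, single agent port; behaviour is
            // driven by the agent's contents, not by the port it arrives on.
            Socket socket = new Socket("shop.example.com", 4444);
            ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
            out.writeObject(agent);
            out.flush();
            socket.close();
        }
    }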

The agent may request information (browsing) or make a request to purchase a particular article. It can give away information about its host environment like HTTP does in some sense, but it can also travel around further for other information that may be required before the action can be completed.

Consider a credit card transaction. An agent could be sent off to buy item "15-XA-01" from a particular online-shop. The agent is sent off along with credit card details and shipping address in its pocket. The agent is executed and requests the item to be purchased and sent off to the address. In return, it hands over the credit card details to the environment, which needs to specifically ask for it. If the environment also asks for an email address or phone number, the agent needs to travel back to the originating host, where it requests the user for the extra information. The agent then returns and continues its conversation.

Rather than creating "software interfaces", I think it's worth considering "data objects" that are versioned. For instance, an environment could ask for "v3.0 of Master CC", which is version 3 of the Master Credit Card details, a specific format. Or it could give the list of acceptable payments and ask how the user wants to pay. Anything is possible. The actual conversation can be based on very basic, old "text adventure" parsing protocols:
  • A: Buy "XA-01-05"
  • B: Accept "{v3.0 Master CC, v2.0 Visa CC, cash}" ( "{...}" being the contents of a Java object )
  • A: Use "{v3.0 of Master CC}"
  • B: Get "Shipping Address"
  • A: Use "{shipping_address}"
  • B: Get "Email Of Host"
  • A: Please forward to {Host}
A is forwarded back to host and continues conversation...
  • A: Request "Email of Host" for transaction "YYY" to "Buy XA-01-05"
  • ( C shows message to user, who accepts )
  • C: Use "Email of Host"
  • A: Please forward to {address_of_destination}
A is forwarded back to destination and continues conversation there, completing transaction...

Just some initial ideas there; in practice there will be a big gap to fill in the agent's capabilities and its parsing abilities, not to mention security! Plus, how can it be proven in a court of law that something was actually bought, etc.? Cryptography can help here and there, but only to some extent.

What I wish to pursue is "some means" by which the endless "integration" of protocols can be avoided altogether: a space in which integration is the addition of a couple of parsing statements or requests, rather than burying layer upon layer of additional "plumbing" on top of one another. Business rules also become potentially easier to express with the above.

You should also see in the above that it requires standardisation of data classes. Properly versioned, I see this as a benefit, because once the data format is common, libraries for parsing and validation can be written once, rather than each party having to do this "in their own way".

Many benefits over current integration practices... let's see how this develops in this space.

Thursday, November 17, 2005

Democratic Corporations?

I've been reading a book by Naomi Klein called "No Logo". In this book a lot of criticism is levelled at large corporations over their responsibilities towards the community and the workforce. She documents, for instance, that Microsoft hired a core of professionals on the payroll, but around 1500 temps that could be dismissed at any time. Departments like printing and other "non-core" functions were outsourced to other companies, so that "right-sizing" (as America calls it now) can be done at the touch of a button. Nobody wants to bother with workers' rights, families and those kinds of responsibilities anymore, because it's a burden on the "responsibility towards the shareholders".

Capitalism is everywhere. I'm not against making money, but I am against making money at the cost of everything else. I am all for economic sustainability, but also for social sustainability and environmental responsibility.

I've been thinking lately about ideas for a "democratic" company. The management of a conventional corporation does not particularly resemble "democracy", since the workers inside the company can hardly exert influence over the company's actions, unless they organise themselves in some kind of demonstration or strike. Decision-making is mainly hierarchical and operates top-down, not bottom-up, as some companies like to portray themselves.

Such a company should still identify leaders and make responsibilities clear, plus give people with responsibilities the autonomy to act under that responsibility. For instance, project leaders could become 100% responsible for work and re-organise the people within the project, but forums, votes and news-casts can be used to allow people to make decisions for policy-making inside the company itself ( aligned to mission and goals ), which is more about the identity and culture of a company than anything else.

Consider a new social experiment: computer professionals working "away". There are some challenges to face here:
  • How to define and show the "culture" of the company
  • Quickly finding consensus and defining ways of dealing with conflict
  • Fair payrolling around the globe (different holidays, economies, work laws)
  • Ensuring that people work for the hours they claim (discipline and ethics)
  • Feeling of belonging to the company and feeling connected
  • Social isolation (from perspective of work)
  • Maintaining projects on budget and time
  • Ensuring fair work (dealing with the voluntary workaholic)
  • Regular meetings and making sure these are held with people present
  • Dealing with confidential information
Especially when you are able to exert influence over how the company is run, within the boundaries of its mission and goals, this can be a very rewarding experience. Working "away" implies working at home, while travelling, at a Starbucks in the city or at the public library: anywhere you can get connectivity and wherever you like to work most.

The benefits here for the worker are very clear, but the benefits for a customer are not yet clear. Motivation is one, and tapping into a global network is another. What kind of services, however, can be developed in this network that are difficult to organize in a conventional corporation, and will it be worth the effort?

Tuesday, November 01, 2005

Standardization

This term in software means adopting the same protocol, format or development methodology as other people do. Patterns are another means of standardization. I like standardization, because it makes things look familiar to many people and easier for others to understand.

The bad thing about standardization is that certain developers sometimes see it as the "Holy Grail" of development. For instance, if an application runs on Tomcat, is solidly built, does the job well and has become quite important, it is likely that some guy comes in and says: "Really, this ought to run on an application server; the service has become so important, we should not risk...".

And there goes another million dollars on a project that was running fine in the first place and is now being rewritten to run inside an expensive application container.

I'm not especially against J2EE, but I've found the actual 'benefit' for which J2EE was developed to become less and less marked, considering there are now technologies like Spring, Hibernate, the Apache libraries and log4j. As in another article written on open source and the costs of software development, I see no problem if someone uses a Tomcat engine properly and writes an application using Servlets or other technologies. As long as it works great and is consistent in behaviour, why change it to EJBs and the rest, *only for the purpose of being standards-compliant*?

I've seen people develop enormous presentations about *why* everything should run on J2EE, that the industry has shown in practice that J2EE is the winning factor, that an application server can cater for all your needs, etc... etc.. etc...

In practice, from experience, J2EE development is not as cost-effective as people want others to believe. Actually, I am writing software of my own using *only* Tomcat, Hibernate, Apache libraries and some XML libraries. The development speed for that project is 3 times higher than if I had used solely J2EE standards, and the testing and unit-testing cycle time is 5 times lower, because Tomcat starts up in 10 seconds, but JBoss takes 50, for example ( WebLogic taking even more than that ).

Another defense in favour of J2EE is remoteness: "anybody can use this object as a service...". In reality and practice, people develop an HTTP interface and don't go through the EJB container directly. I'd probably rather use SOAP, if the volume of use of the service permits it.

Wednesday, October 12, 2005

Data-centric architecture and design

JSP is a great 'scripting' language for quickly creating versatile views in semi-Java code. However, I do recognise a couple of challenges with JSP. Eclipse helps a great deal with its parser/lexer, but it still does not prevent problems in the following areas:

- Loss of compile-time checks on property getters and setters. It would be ideal if the template pre-generated Java code, so that I wouldn't lose time on silly mistakes. Beans are simply not that great for this, and time is lost because the compile-time checks are gone.
- It is unclear which objects are available for use. Getting a feel for which APIs can be used inside a big blob of text, without abstract objects or interfaces, is a bit challenging.
- The variety of tag-libs available and how the tags, attributes etc. always seem to be wrong or incompatible.

I've also been thinking that web applications seem to be written from the perspective of functionality, *what* you want to do with the data, rather than from the perspective of the data itself, wrapping the right functionality around that. Isn't this exactly the world inverted?

I'm working on a new research framework where data sits at the heart of the system and everything else revolves around it. It does use certain patterns in development, like Front Controller, and I do use reflection and beans, but the main attempt is to shield developers from having to declare "field X" in the code somewhere, which always seems to map incorrectly at some point.

This model revolves around the "DataObject" Java code. I intend to use XDoclet to auto-generate certain chosen functionalities and get back to standard Java classes. I also intend to bind this to Hibernate for persistence. The idea is that every object has certain 'functions' associated with it, which are more or less generic ( view/edit/add/delete ). By choosing which functions you want for an object, you could theoretically auto-generate the code that we now have to write by hand. Sounds interesting so far?
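
A sketch of what such a DataObject could look like ( the @hibernate.* tags follow the usual XDoclet Hibernate style, while the @framework.functions tag is purely invented here to illustrate choosing the generated functions ):

    /**
     * A plain data object; forms, actions and views are generated from it.
     *
     * @hibernate.class table="CONTACT"
     * @framework.functions view="true" edit="true" add="true" delete="true"
     */
    public class Contact {
        private Long id;
        private String name;
        private String email;

        /** @hibernate.id generator-class="native" */
        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }

        /** @hibernate.property */
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        /** @hibernate.property */
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }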

One of the reasons for using JSP is that it makes you very versatile in choosing layout. The reason why 'standard visualization' components fail is that they are not flexible enough. My intention is to merge data objects with HTML template snippets and XML attribute settings for specific types. The output is a Java class that becomes part of the compilation cycle. This class lives in a pre-determined package, chosen in the declaration of the Java object. Since generating HTML with snippets for the page as a whole becomes too complex, I intend to use AJAX to update only one particular division on the page, much like some kind of "portlet" approach.

Now, through Hibernate, it becomes very easy to 'load' objects, even if the framework has no knowledge of what that object truly is. By declaring the data object in configuration, rather than the connected objects like forms or actions ( forms and actions can be auto-generated ), the framework gains the ability to find or populate the object based on reflection and its Hibernate description and to pass it to a business rule system.
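
A minimal sketch of that reflective load ( Hibernate 3 style; the configuration handling and the id type are simplified assumptions ): the framework only knows the class name from configuration, yet Hibernate can still find and populate the object.

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class GenericLoader {
        private final SessionFactory factory = new Configuration().configure().buildSessionFactory();

        // The framework does not know the concrete type; it only has the
        // class name declared in configuration and an id from the request.
        public Object load(String className, Long id) throws ClassNotFoundException {
            Session session = factory.openSession();
            try {
                return session.get(Class.forName(className), id);
            } finally {
                session.close();
            }
        }
    }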

The effective idea is to take all the ~housekeeping~ code out of your web application. You'd basically create a couple of HTML snippets that dictate how your layout looks for the majority of your 'data'. Then you'd typically create a dashboard or portal front-end, where you place your various applications ( nothing stops you from using AJAX to 'replace' apps where you see fit ). The framework then automatically generates links on pages for common actions based on your template ( using some 'decorator' classes, for instance, this can be taken further ). Attach a property XML file with default settings and possible overrides for "CLASS" or other attributes, and the framework is almost ready.

Since it is already possible to construct any object the framework does not yet know about, validation becomes easier. The validation component receives the object already in its proper type ( type checking must already hold, so why validate on that? ). The object is passed to the validator, which does certain checks on length, etc., and returns true if the object is OK, false if not. If it is OK, the business rule executes ( does the user have access to this functionality? related to object A, does this setting make sense? etc. ). If that passes, the object is automatically committed; the rule itself knows what action to perform on the object.
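
The flow described above could be captured in two small interfaces ( the names here are hypothetical, purely for illustration ): a validator that only sees a properly typed object, and a business rule that carries out the action itself.

    // Checks lengths, ranges, formats, etc. on an object that already has the proper type.
    interface Validator<T> {
        boolean isValid(T object);
    }

    // Knows by itself what to do with the object (commit, update, delete, ...),
    // and checks access and consistency with related objects before doing it.
    interface BusinessRule<T> {
        void execute(T object, String userName);
    }

    class SaveFlow<T> {
        void handle(T object, Validator<T> validator, BusinessRule<T> rule, String user) {
            if (!validator.isValid(object)) {
                throw new IllegalArgumentException("Validation failed");
            }
            rule.execute(object, user);   // if this passes, the rule commits the object
        }
    }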

In the future I intend to add some wizards, so that it becomes even easier to have a particular page flow.

So, 'data-centric architecture' is what I'd like to call an architecture where the data is defined and the actions that operate on that data are already well known. Why rewrite that code constantly? The only thing one needs to do is call certain business rule logic, some validation logic and some auto-generated view logic. The win of having Java classes back is that property getters and setters are verified at compile time, plus hopefully a bit of a speed-up ( not that it is needed ) in generating the pages. Using Spring, it should become a real fun-fair getting this framework going ( choosing the rule implementation to execute based on other configuration for customisation purposes, etc. ).

Some thoughts on the costs of software

The cost of software can be divided into a couple of categories. It's not necessarily getting better with new emerging technologies (it's just too confusing). But by following a couple of principles, you should be able to make up your mind soon enough and go for it. ( Don't get carried away by others' enthusiasm for the next big thing that is going to change the world. ) To categorise some costs in software:

- Procurement ( licensing and support )
- In-house development and maintenance
- Open Source adaptation
- Mixing requirements with choice
- Stupidity
- Uncertainty

Each of the above topics has a certain 'silliness' associated with it that may drive costs higher in your company or increase effort for your department.

* Procurement: When I was working for a very large company, the intention was to reduce overall cost by applying a 'one-size-fits-all' strategy in an attempt to reduce licenses. The famous example was taking everybody off MS Outlook ( which came 'free' with the MS Office everybody was using anyway ) and moving everybody over to Lotus Notes ( which started to increase the number of licenses required in that area ).
Another item that I think could increase your costs is the question of 'support'. We like to think that 'if this or that happens', we'd better be able to call somebody to come and fix it. This is why you pay 'ongoing support' for some packages, or a 'license fee' that comes with a certain degree of support.

In reality, I've seen that it is very rare for companies to make a big fuss about problems that occur in software, unless it sits at the very core of their business ( like billing services, or where their customers need services from 3rd parties that break down ). In practice, there is not much pressure to get software updated, and they carry on with the bugs anyway. The most bizarre case I heard of was with Lotus Notes and IBM, where a patch for a severe defect in Notes was actually 'licensed' to the customer to solve the defect ( meaning the company started paying to use a patch for the software ). That was the weirdest thing I ever heard of.

How this relates to open source is clear. There is no 'widespread confidence' that open source can satisfy the need for ongoing support. But given the above, if the company is never ready to sue the other party anyway, and never applies pressure, why pay for the license and support, and why not take a suitable open-source solution? The main reason is that companies like the feeling of having that stick behind the door...

* In-house development and maintenance may become costly if you're not careful about dimming the enthusiasm of the enthusiasts who roam around continuously looking for new tools and technologies to use, the only reason being that they have never used them before. Dealing with this is a bit tricky, because you certainly do not want to de-motivate staff by refusing every little new thing ( plus the new technology may indeed be promising ). The big idea, however, must always be that you're in a profession that must provide a 'best-of-breed' service to other people. This means going with the technologies and ideas that provide the most value to your clients.

Other issues involve policies for re-use: storing regularly used frameworks and making them easily re-usable, and developing an in-house knowledge base where people can find information about APIs and development experience geared towards working in that particular company and that particular culture.

* Open Source adaptation is a great way to reduce costs for certain projects. Especially for building Java nowadays, it is possible to rely on Ant, Eclipse, XDoclet and various other packages that help get software out there more quickly and at better quality. What I see, which is related to the first point, is that open source software is hardly ever used in 'production systems'. A shame, because many packages are certainly up for it nowadays. The difference here is sensitivity to 'marketing' and the idea that commercial software is of better quality and better serviced than open source. The only difference I see is visibility ( marketing ).

* Mixing requirements with technology is an interesting one. This is where an analyst or manager writes a Requirements Document that states the languages and technologies to be used. This is very wrong, and development should fight it any way they can. Language and technology choices belong in the architecture or design stage.

The manager may have had good experiences with a certain technology and assumes it will perform equally well in the next project. Mixing requirements with design/technology choices is a bad thing. Always go back to the writers to find out *why* something has to be developed using tech A or B. There may be reasons for it, and there may not be. However, I also advise caution, because you may be opening a snake-pit of 'politics' and 'excuses'. ( Don't necessarily do this if you're a junior; find out from the seniors first what their take is on the situation :)

* Stupidity? Well... this is where a new IT manager comes in who may or may not have a degree in computer science. With the new manager in place, the first thing he orders is to move all databases from brand A to brand B, or he specifies that all new development must be done in C++ as a standard, or Java. Some standards make sense ( project size and web/GUI/cmdline/back-end against favoured solutions is a good chart to develop ). But numbly favouring a particular vendor or technology without any background, and requiring that existing systems be changed to comply, is a very bad thing to go for. ( Trust me, I've seen it happen :)

* Uncertainty comes up in situations where requirements of size and scope are not entirely clear ( how is the app going to be used, and for how many users? ). Depending on your architecture, you may get very unpleasant surprises down the road when you discover that people suddenly thought this would be a great system to roll out to 5 million users, rather than the initially intended 10,000. It's better to have this information up front, because the chart of 'size against favoured solution' might have pointed you to different technologies, development patterns or vendors than the ones you are working with now. ( A word to developers: be a bit wary of these 'quick' requests and 'no problem' statements. If you're careful and, as is said in Holland, 'look the cat out of the tree', you'll already be more or less prepared for these situations. )

Saturday, October 08, 2005

Rules Engines in Java for your business processes

I'm writing a large application that at some point needs to deal with customisable behaviour. I had intended to use a custom package with standard Java classes that people can customise at will. The interface declaration basically exists, and the implementation of verifiers, state changes and "rules" is left up to the person using the system.

At the moment I am in a bit of doubt between using a rule engine like Drools and continuing with what I have. Obviously, the benefit of Java classes is that the compiler is more pedantic and there is a better framework for unit testing. On the other hand, actually "changing" business rules might be easier and more fun if there are separate 'files' on a file system somewhere that are loaded by the application at runtime.

While thinking about this, I am considering writing a pre-parser for Drools to make using Drools more accessible. There are a couple of disadvantages that I would like to get rid of:

- Having to know Java import statements
- Uncertainty about order in execution if configuration is messed up ( in direct code this is clearer ).
- Dealing with XML code

If you have a business and you grant a $20 discount voucher when somebody buys at least 3 books for a minimum value of $100 and the customer has 4 children, how would you express this? This is a question of pre-conditions and post-conditions. In Drools it is very easy for a programmer to express, but a business analyst may have more difficulty finding all the right Java classes to include, the order of execution, etc. It would be interesting to see exactly how much SQL and 'plumbing' code could be ditched by employing a rules engine in combination with Hibernate... Hibernate can already auto-generate database schemas based on a mapping file. Would it be possible to auto-generate the mapping file too, based on another descriptor file that describes an object and its constraints in real life ( must have an address, must have an email, has a collection of contacts, has a collection of items, has a shopping cart, etc.? )
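
Expressed directly in Java ( the class and field names are invented for this example ), the voucher rule's pre-condition and post-condition might look like this; the question is how close a rule file can get to this for a non-programmer:

    import java.math.BigDecimal;
    import java.util.List;

    // Hypothetical order/customer model for the book-shop example.
    class Order {
        List<String> bookTitles;
        BigDecimal totalValue;
        Customer customer;
        BigDecimal discountVoucher = BigDecimal.ZERO;
    }

    class Customer {
        int numberOfChildren;
    }

    class DiscountVoucherRule {
        // Pre-condition: at least 3 books, order value of at least $100, customer has 4 children.
        boolean applies(Order order) {
            return order.bookTitles.size() >= 3
                    && order.totalValue.compareTo(new BigDecimal("100")) >= 0
                    && order.customer.numberOfChildren == 4;
        }

        // Post-condition: a $20 discount voucher is granted.
        void apply(Order order) {
            order.discountVoucher = new BigDecimal("20");
        }
    }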

To address some of the shortcomings of rule-based systems, merge them with the idea of a Petri net. A Petri net can be used to 'fire' a rule, action or process when its pre-conditions are met. The benefit of Petri nets is that they control concurrency and execution order and know about required pre-conditions, things that I think are quite difficult when using a rule engine stand-alone. Add some time-outs to the net and you've got an execution framework that is better suited to a business process. This framework could even be added to an EJB container, if one wishes... but what is the use of EJBs if we've got a rules engine? :)
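
A tiny sketch of the Petri-net idea ( deliberately simplified: one token per place, no weights ): a transition fires its rule or action only when every pre-condition place holds a token.

    import java.util.Set;

    // A transition fires its action only when all of its input places are marked,
    // which gives explicit control over ordering and pre-conditions.
    class Transition {
        private final Set<String> preconditions;
        private final Set<String> postconditions;
        private final Runnable action;

        Transition(Set<String> pre, Set<String> post, Runnable action) {
            this.preconditions = pre;
            this.postconditions = post;
            this.action = action;
        }

        boolean tryFire(Set<String> marking) {
            if (!marking.containsAll(preconditions)) {
                return false;                       // pre-conditions not (yet) met
            }
            marking.removeAll(preconditions);       // consume the input tokens
            action.run();                           // fire the rule / business action
            marking.addAll(postconditions);         // produce the output tokens
            return true;
        }
    }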

A programmer may also add 'resources' with input parameters to this system. These could be EJBs, but not necessarily ( and would the EJB use a different instance of the rule engine? ). For instance, a credit-check system could be a resource that an analyst might like to use. A business person, when creating a new system, could then think of launching a pre-configured standard container, loading a new set of business rules onto it and starting an input connector for new data.

The generated process should become a re-usable sub-process with its inputs and outputs declared, more or less like a white/black box. This supports the idea of multi-level application overviews: detail is only important when you need to know it. Dependency resolution here should allow someone to manage changes. Software development then doesn't really need to think so much about NullPointerExceptions in plumbing code. I hope it would eliminate a lot of the plumbing that is required nowadays.

Summarized:

A rules engine instance is loaded with a business process, using files that can easily be changed later. A Petri net is used to control execution, dependencies on other information, process concurrency and time-outs. An analyst creates a new entity. Based on the description of that entity, a Java class file is generated, a Hibernate mapping file is generated and a full database schema is generated from all these entities together ( relationships are declared within the description ). The only thing still necessary is plumbing code between the 'model' layer and some user interface; parts of that can also be auto-generated ( Struts Forms <-> data objects, for instance ).

Interesting concept and idea! Let me know if you wish to do anything with this, I'd be glad to assist!

Thursday, October 06, 2005

Communication at the heart of project success

Effective communication within a project is not often identified as *the key factor* in its success. This article shows why ineffective communication has such a detrimental impact and gives a high-level overview of the scope of effective communication.

Effective communication means sharing or stimulating the following:

- Maintaining clear targets and objectives, even when these change
- Good understanding of project scope and constraints
- Feeling of recognition and being part of a team
- Prevention of unnecessary or doubled work
- Knowledge of who is doing what, and a constant awareness of the time remaining
- The notion that the current work undertaken is useful for others
- Sharing experiences, knowledge to support work of others
- Gauging levels of experience and knowledge with others in new teams
- Let others know you are aware of your roles & responsibilities
- Let others know what your roles and responsibilities are

There are severe side-effects on motivation if communication does not take place. When communication is withheld consistently, or openness to communication is reduced, motivation may drop to absolute zero within a few weeks. Drops in motivation result in "chatter" in a project, which managers sometimes view as "unnecessary communication".

Lack of *effective* communication results in poor work, which increases the "chatter" and the unhappy faces. Unfortunately, some managers respond to these symptoms ( side-effects ) by further reducing communication, in an effort to increase productivity by reducing the chatter. This results in one of two outcomes:

1. The deliverables are made on time, but everybody is unhappy and time is needed to regain confidence.

2. The deliverables are not made on time and everybody is further de-motivated.

So, the best strategy is always: try to find the actual problem points in the communication line. Some questions may help to identify them:

- Is there lack of direction?
- Is there uncertainty about how or what to undertake?
- Are there (slumbering) conflicts within the team?
- Do meetings happen and are these effective?
- Is email used effectively to resolve problems?
- Are people highly opinionated or ego-centric, and do they hardly ever come to a conclusion? ( concessions )

When the problem point(s) is (are) identified, attack the problem at its source.

Always attempt to maintain a team that communicates openly, as frequently as needed, shares information as efficiently as possible, and communicates with respect for one another.

The Mind Is Radial

This article challenges you to think about how we generally go about learning new material. What I shall call "linear learning" is the process of reading a book from start to finish. What I shall call "radial learning" is the process of reading the table of contents of a book and then, in phases, reading more and more details of each section.

Practical, hands-on learning for instance has more radial properties, since the material is researched or explored from a problem perspective. The problem plane for the student contains certain 'gaps' that require a resolution.

When people read a book with the objective of learning or consuming information, the information is grouped into chapters, where the book as a whole is the content in a particular context or perspective. It is impossible to read the book in the blink of an eye, so how can one most effectively read a book with the objective of understanding and remembering all of its contents?

Consider reading 'radially' into the subject using the table of contents. Skim pages, chapters and paragraphs to get a high-level overview of the contents. Write down any questions that develop while reading; do not get distracted immediately, focus on the current context, but also remember not to dig too deep into the detail. As time progresses, the information you have becomes more complete and you will be much better able to index it inside your brain, leaving unimportant details out, as they can be re-read later at any time without effort.

The way the brain stores information approximates a kind of network. Related information gets connected, which is how we recognise similarities. One cannot think of the brain as a bag of unordered information into which a hand digs to retrieve an item when we need it. To 'populate' the brain more effectively, follow the same pattern in which the network is organised.

This can also be applied to other topics, like conversations: focus on global objectives and global points before digging into detail.

Try it and let me know :).