
Thursday, April 19, 2007

Workflow systems

I'm looking at workflow systems for work and I'm very much enjoying it so far. The main package I'm looking at is called "jBPM". It's quite a large package to understand.

Workflow programming, to begin with, differs from conventional programming in that it focuses on the process and on the execution of things, not on the appearance or transformation of data. What I quite enjoy is that this means you focus on the middle part of a system's construction, rather than on the beginning (screens, user input), the very end (the database), or how some requirement fits into your current system architecturally.

So, in order to successfully construct workflow-based systems, you first need to change how you think about IT and how systems get constructed. It's not necessarily a (pedantically) 'clean' way of programming, where all data is stored in proper elements and there are compile-time dependencies between all objects, but it provides much more decoupling between functions, which in turn improves reusability and componentization.

You should think of workflow as a series of automatic or manual actions that need to take place, easy to rearrange and reuse. These actions are arranged in a diagram, which specifies which actor or pool of actors can execute them. The diagram also states whether the actions are executed in parallel or in series. To top things off... the whole process "instance", that particular order or that particular holiday request, gets persisted to a database whenever it cannot proceed immediately because it is waiting for external actions. You do not get these features "automatically" from any programming language.

The word "language" is used deliberately here, because you will start to mix your domain language (say Java or C#) with another, "graph-based" language, in this case XML. The XML specifies authentication, actors, pools and the "flow" of the process, whereas the domain language expresses componentized functionality (such as performing a credit check, updating databases, gathering user input and so forth).
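As a rough sketch of what such a graph-based definition can look like, here is a minimal process definition loosely modeled on jBPM's jPDL. The node names and the Java handler class are hypothetical, so consult the jBPM documentation for the exact schema:

```xml
<!-- Hypothetical jPDL-style sketch: the XML carries the graph (actors,
     pools, flow), while Java classes carry componentized functionality.
     Element and attribute names are illustrative only. -->
<process-definition name="holiday-request">

  <start-state name="start">
    <transition to="manager-approval"/>
  </start-state>

  <!-- A manual step assigned to a pool of actors; the process instance
       is persisted while it waits for a manager to act. -->
  <task-node name="manager-approval">
    <task name="approve-request" pooled-actors="managers"/>
    <transition name="approved" to="credit-check"/>
    <transition name="rejected" to="end"/>
  </task-node>

  <!-- An automatic step that delegates to domain code written in Java. -->
  <node name="credit-check">
    <action class="com.example.CreditCheckAction"/>
    <transition to="end"/>
  </node>

  <end-state name="end"/>

</process-definition>
```

Note how the flow, the actors and the wait states all live in the XML, while the credit check itself is just a Java class plugged into one node.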

If you re-read the last paragraph, you notice how different workflow-based programming essentially is and how it could provide enormous value to IT systems if it is done right.

As an example, if you know Mantis or BugZilla, you know from the developer's or tester's point of view what the bug-manipulation screen looks like. It is a single screen with all components in editable form, but from a workflow point of view it should be constructed differently each time, with only those components required for that particular step... For example, if you start to update or categorize a bug, you do not need all the fields on the screen to do that. When the process dictates the need for a CCB comment, you likewise have far too many elements on your screen to do specifically just that.
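To make that concrete, a workflow-aware screen could derive its editable fields from the current task node instead of always showing everything. This is a hypothetical sketch in Java; the step and field names are invented, not taken from Mantis, BugZilla or jBPM:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: each workflow step exposes only the fields it
// needs, instead of one screen showing every field of the bug at once.
public class BugScreen {

    // Which fields each task node requires (illustrative names only)
    static final Map<String, List<String>> FIELDS_BY_STEP = Map.of(
        "categorize",  List.of("category", "severity"),
        "ccb-comment", List.of("ccbComment"),
        "resolve",     List.of("resolution", "fixedInVersion")
    );

    // The screen for a given step is built from just these fields;
    // unknown steps get an empty screen rather than everything.
    static List<String> fieldsFor(String step) {
        return FIELDS_BY_STEP.getOrDefault(step, List.of());
    }
}
```

The point of the sketch is that the process definition, not the screen designer, decides what the user sees at each step.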

The point is that, in general, many applications show everything on screen whether or not you care about it at that moment, and the user has to compensate for the lack of guidance by understanding a process documented elsewhere in a very large document... Wouldn't it be great if we could just look at a screen, get our business process right, and know what to do, all at the same time?

The other thing I notice is that many SRS documents are focused not on the process but on the data. This tells me there must be a huge opportunity in this area to provide consultancy on core business processes and merge it with IT-specific knowledge.

Software is something that should be easy to change. Some people interpret this as "scriptable", but scripts are software as well, and you wouldn't want production-editable fields that allow business people, for example, to modify those scripts at runtime. So there are only specific scenarios in which scripts actually add value, because if only your developers modify those scripts, why not write them in a strongly typed language instead?

Workflow-based systems, due to their high degree of decoupling and focus on process, might be somewhat easier to change than plain J2EE, .NET, C++ or ASP systems. That matters for flexibility and for how you allow systems to grow with changing needs, because the needs *will* change.

Lastly, someone at JBoss mentioned that workflow systems are very much in their initial phase, comparing them to early RDBMS systems. It's not yet clear how they can be used effectively, in which particular environments, or what else we need for workflow systems to become very successful. The key to improving these opportunities is to take the best of both worlds and merge it into something better. We may have to take a radical step into something else and see where it goes. I am just considering... workflow-based systems may be slightly slower due to the use of XML and process persistence, but with a couple of machines and all processes placed on similar systems, what do we need for widespread deployment of this technology?

There must be things missing here, because not everyone is using it yet :).

Tuesday, December 26, 2006

Measurement

I have read a book titled "Measuring and Managing Performance in Organizations". It really is an eye-opener.

As many know, I am working on a project called "Project Dune" on SourceForge, which is related to quality. You could say that quality is highly dependent on measurement. And for measurement, you need to generate data, and in such a way that you can historically compare one data set with another. The end result is hopefully some insight into how you are doing as a team/department/organization.

Well, it is not really that easy. When you start to measure in an organization, you need to generate the data, and both activities come at a cost to that same organization: effort taken away from normal production work. Obviously, you need to find a balance between going for production and generating the data anyway, since you need a way forward to improve, and without information backing up your decisions, you base them all on intuition, which in general can be very deceiving.

Project Dune is interesting in that the vision is that it should do the measuring for you. It is basically a tool similar to BugZilla, Mantis and so forth, but in addition to helping with standard administration and tracking, it also helps in day-to-day activities. And that is where the automation is plugged in, next to its envisioned integration points.

When you connect a system that knows about user activities, it can connect the data for its users, and the larger the domain in which it is connecting the "dots", the larger the area you can measure across.

The good thing about this is:
  • You get the measurements almost for free, always up-to-date and in real-time
  • You are better supported in day-to-day activities
  • You can spend more effort on your productive tasks without worrying about any process or dependent tasks that you do for others
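The "connecting the dots" idea can be sketched in a few lines. This is an invented illustration, not Project Dune's actual design: a tool that already records each user action can aggregate a measurement, with nobody filling in a separate reporting form:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Invented example: deriving a measurement from activity records the
// tool already keeps, instead of asking people to report it separately.
public class Measure {

    // One recorded user action: who worked on what, for how long
    record Activity(String user, String task, int minutes) {}

    // Total minutes spent per task, aggregated across all users --
    // a metric you get "for free" once the activities are recorded
    static Map<String, Integer> minutesPerTask(List<Activity> log) {
        return log.stream().collect(Collectors.groupingBy(
                Activity::task, Collectors.summingInt(Activity::minutes)));
    }
}
```

Because the measurement is derived from work the tool tracks anyway, it stays up to date without costing anyone extra effort.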
Of course, it is not really that near completion, but a BETA is coming out not too far from now. I'm just thinking about server-side call control (security) before I can even call it a BETA. So far I have no feedback yet, but I hope to see that happening sometime soon.

The statistics for the project are climbing at the moment. The project appears on the third page of the rankings at this time (103) and still seems to be going up. A new ALPHA-3 release was issued just today. Let's see what happens next :)

Regards,

G>

Tuesday, December 12, 2006

Quality in Innovation

This post is my personal opinion and does not reflect the opinion of my employer... yadda, yadda...

I'm reading several books at the moment, one of which is the book on CMMi, which basically lays out a vision of continuous process improvement in a company. But there are things seriously wrong with CMM. It is geared towards establishing a "static" atmosphere, because if this were different, it would be very difficult to apply measurements across the process. A dynamic setting like an innovative powerhouse has severe problems adopting such rigid procedures due to the nature of its work. It is *not* a factory, and it works with people who are intelligent and know what they are doing. Why insult them with procedures that tell them what to do, if they are the ones who know best?

The final level of the CMMi Zen approach is: "the optimizing level" :). I compare this to an engine room of a ship (I used to work in shipping).

This is where every process is defined and measured, you have all the spare parts and supplies you need, and the ship hums along without a glitch. No leaking oil, no weird sounds, no bearings about to give out. The chief engineer only walks around to tweak some settings on regulators for the form of it, or puts his ear against a machine to hear how it performs. He might tell the 2nd engineer to replace a filter here or there, or schedule an overhaul for the future, but overall the performance is so magnificent that everybody can go upstairs to the ship's bar. That's Zen, but software is very different, especially when working on innovative and new products. Technology moves a *lot* faster in IT than it does in shipping. New technology can be consumed quickly on a ship because the crew are users of that technology, not creators. And in software you may find yourself working on an embedded system to monitor cars first, followed by the latest Ajax technology to create a project management system.

The later parts of my 729-page CMMi guide talk about "statistics" and "statistically managed subprocesses". Surely it must be evident that if you are not continuously repeating what you are doing, you are not building up any historical data, and thereby not creating the data necessary to improve your processes.

The real reason why it is so difficult to integrate with these quality visions is that in IT we are learning new things every day, or encountering problems that are not standard. If I were working on a project with really new technology, or bending the rules of what is possible with that technology, I would deserve a couple of extra days to find out ways and alternatives. I might choose a wrong alternative first and then have to rework. I might have to read through books of protocols before I could even start on architecture or coding. I might have people in the team who need to be trained.

Is a quality process that relies on repeatability able to improve the quality of what I am doing?

....

I am not convinced!

( also check out http://www.joelonsoftware.com/articles/Craftsmanship.html )

Thursday, December 07, 2006

Software Quality

This is one of the first posts on software quality. Since 2002 I have been working on a project called "Dune", which is basically an automated system that supports you in the process of quality (it does not inspect or write code for you, nor does it write documents). But if your company has strict processes regarding baselines, inspections, document control and especially traceability, this may be an interesting thing for you to try out.

The software is at: http://pdune.sourceforge.net and http://www.sf.net/projects/pdune.
Yep, it is GPL'd, so anybody can download it, help out and use it; however, you cannot take the sources and close them down for a commercial product.

There are some good rationales behind the project that I am going to write up and design soon and put on the site. All the work on the project is a good way to understand quality a lot better. I have read through entire books about software quality (dry reading material about "process", "audit" etc. *yawn*), only to discover in the end that whatever they are talking about... can be automated! And how many tools do you know that automate traceability and have import functions for project plans, RF and UC documents?

A "quality process" in the end is nothing but the definition of responsibility: setting the boundaries where the responsibility of one person ends and that of another starts. So if you believe that a process by itself is going to improve your software, I wouldn't really think so. The only thing able to do that is the people on your team. The process should only help create an environment in which that team can flourish! And as written before, defining processes is not always good, because if the process writer 'forgets' to write down a certain responsibility, it is likely that this issue falls through the cracks with no one to blame (blame it on the process, not yourself :).

The rationale for adopting a software quality standard doesn't really have much to do with actual software quality. That is, certification against a software standard need not be in place in order to write good software. Good quality depends on good people, not on processes of any kind. A process may help, but only if it is defined and adopted by the people in the company. Rather, a process must be seen as the formalization of how work is done in that company, with that culture, not as a method of work enforced by a select few in the organization who have read a good number of books on software quality and then run around crazy with the theory.

I have not yet read CMMi in its entirety, but I understand what it is about. I am not yet sure whether what I am building will comply with CMMi or vice-versa. I do expect though that the "rules for certification" of CMMi fit in nicely.

Any quality "system", whether this is a quality plan, a set of software tools or adopted method of work should aim for a couple of objectives to achieve software quality:
  • transparency
  • traceability
  • control
Control is the means a team has to exercise control over the 'states' in the project. There are initiatives, purchases, work items, bugs, features and change requests to be managed. The better the software is aligned with how the team wants to manage this information, the more effective it will be.

Traceability has much to do with the CMMi and ISO standards. It is often interpreted as a "paper trail", but I argue that it might just as well be a set of records in a database (which is actually much better). Traceability also has much to do with "auditing": for each initiative, bug and feature, the questions of who, when and how need to be answered.
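As a sketch of what traceability as database records might look like, here is a hypothetical audit record in Java; the field names are my own invention, not Project Dune's actual schema:

```java
import java.time.Instant;

// Hypothetical audit record: every state change on a bug, feature or
// initiative answers "who, when, how" as a row rather than a paper trail.
public class AuditRecord {
    final String itemId;  // e.g. "BUG-35"
    final String who;     // actor performing the change
    final Instant when;   // timestamp of the change
    final String how;     // e.g. "status: open -> resolved"

    AuditRecord(String itemId, String who, Instant when, String how) {
        this.itemId = itemId;
        this.who = who;
        this.when = when;
        this.how = how;
    }

    // A one-line trace entry, easy to query and report on
    String trace() {
        return when + " " + who + " " + itemId + " " + how;
    }
}
```

Stored as rows in a database, such records make the question "what happened to bug #35?" a query rather than an archaeology exercise.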

Transparency means a system where the information is close to the surface. One should not have to open a separate locker, with a key only the manager has, and go through drawers of paperwork to find out what happened to bug #35. That is an extreme transparency problem, but it shows how important transparency is in a system. Transparency is basically a measure of how easy it is to get at the information in the system: the easier it is, the more value the system generates for its users.

So, this is what I am building: an automated quality support system. The system by itself is incapable of guaranteeing quality, but it helps to lift the effort of maintaining the audit chain and ties control, transparency and traceability together. This is very important for people involved in software development. It relieves them of a couple of hours of effort a week (as an estimate), which can be redirected to something really productive.