Wednesday, April 23, 2008

The art of software estimation

I'm reading up even more on the Cocomo II method and, interestingly, I like it. It's all maths, and of course you can't run a company through formulas, but the exercise it puts you through is where the real value lies.

Consider the following definitions of estimate:
"to judge size, amount, value etc, especially roughly or without measuring"

"to form an idea or judgement of how good etc something is"

"a calculation (eg of the probable cost etc of something)"
So estimating is about forming ideas, judging and calculating. Cocomo II does just that. No estimation method can replace the judgement part, but it can provide the maths.

There are loads of parameters to consider, but within the context of a formula you get an idea of their impact when you deviate by some amount. For example, some parameters only slightly add cost and time, whereas others have a strong exponential effect, so an incorrect evaluation of their real value has great consequences.
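To make that concrete, here's a rough sketch of the Cocomo II effort equation in Python. The constants A and B are the published calibration defaults as far as I know; the ratings are invented. Note how the same misjudgement of a scale factor (they sit in the exponent) costs more as the project grows, whereas a misjudged effort multiplier costs the same fixed percentage at any size:

    # Rough sketch of the Cocomo II effort equation; ratings are invented.
    def effort_pm(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
        E = B + 0.01 * sum(scale_factors)   # scale factors sit in the exponent
        product = 1.0
        for em in effort_multipliers:
            product *= em                   # multipliers act linearly
        return A * (ksloc ** E) * product   # effort in person-months

    nominal = [3.72, 3.04, 4.24, 3.29, 4.68]  # indicative nominal ratings
    misjudged = [6.20] + nominal[1:]          # one factor rated a level worse
    for size in (10, 100, 1000):              # KSLOC
        ratio = effort_pm(size, misjudged, [1.0]) / effort_pm(size, nominal, [1.0])
        print(size, round(ratio, 2))  # the same mistake costs more as size grows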

One of those factors concerns re-use. On the regular projects I worked on, people wouldn't properly go through the practice of estimating the re-usability of a piece of software. A library with clean interfaces, excellent documentation and little coupling to the using code can be plugged in within a day or so. But once you plug in a library that has good documentation and probable bugs, but that controls your software (like Spring), the effort of integrating and configuring it suddenly becomes a lot larger. Unfamiliarity with the business sector, the library code, the other packages and how they all work together adds linearly to that effort.
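Cocomo II captures exactly this in its reuse model. A sketch of how I read it, in Python; the parameter values below are examples, not measurements:

    # How many "equivalent" new KSLOC a reused component really costs.
    # DM/CM/IM = % of design/code/integration modified; SU = software
    # understanding penalty; UNFM = unfamiliarity (0 = know it inside
    # out, 1 = totally new); AA = assessment and assimilation effort.
    def equivalent_ksloc(adapted_ksloc, DM, CM, IM, SU, UNFM, AA=4):
        AAF = 0.4 * DM + 0.3 * CM + 0.3 * IM
        if AAF <= 50:
            AAM = (AA + AAF * (1 + 0.02 * SU * UNFM)) / 100.0
        else:
            AAM = (AA + AAF + SU * UNFM) / 100.0
        return adapted_ksloc * AAM

    # A clean, documented, decoupled library vs one that controls your code:
    print(equivalent_ksloc(20, DM=0, CM=2, IM=5, SU=10, UNFM=0.2))    # ~1.2
    print(equivalent_ksloc(20, DM=10, CM=20, IM=60, SU=40, UNFM=0.8)) # ~10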

There's a great opportunity here for a new open-source tool on the web that supports these kinds of estimates, written for non-Cocomo-savvy users. Estimating with Cocomo only really makes sense once you are working in teams of three or more for three or more months. Anything smaller is probably better served by a hand-written estimate on a piece of paper, because the size isn't considerable at all. Be aware, though, that beyond that point accuracy wears off very quickly and you need to go through a real practice of estimation. It's really an art form, but nothing mystical, just common sense. Anyone will produce incorrect estimates; what matters is that you understand by how far you could be wrong and supply that value along with the estimate.

For Project Dune, I intend to make something quite easy. The idea is to explain the purpose of the parameters and let the user tweak them, then show a real-time graph in a side-bar with the impact of the changes, and wrap everything in a kind of wizard form.

Estimates are based mostly on code size, expressed either in Lines of Code or in Function Point Analysis points. You then apply the multiplication factors and the formula, and out come your cost and timeline.
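In Cocomo II terms that pipeline looks roughly like this (the constants are the published defaults as far as I know, the inputs are invented, and the output is a guideline, not a deadline):

    # Size in, effort and timeline out, Cocomo II style.
    def estimate(ksloc, sum_scale_factors, em_product=1.0,
                 A=2.94, B=0.91, C=3.67, D=0.28):
        E = B + 0.01 * sum_scale_factors
        effort = A * (ksloc ** E) * em_product    # person-months
        F = D + 0.2 * 0.01 * sum_scale_factors    # schedule exponent
        schedule = C * (effort ** F)              # calendar months
        return effort, schedule

    pm, tdev = estimate(50, sum_scale_factors=19, em_product=1.1)
    print("%.0f person-months over %.0f months, ~%.0f people" % (pm, tdev, pm / tdev))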

As I said in the previous post, I wouldn't be so much interested in the actual estimate produced, but much more in the potential variance. With some more maths behind it, it should be possible to show range graphs that indicate the very worst case, the likely bad case, the real estimate, the likely good case and the best case.
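One possible way to produce those ranges without much extra maths: don't feed the formula a single size number, but sample the size and drivers from rough distributions and read off percentiles. A sketch, with distributions invented purely for illustration:

    import random

    # Monte Carlo over an uncertain size and a net effort multiplier.
    def simulate(runs=10000):
        outcomes = []
        for _ in range(runs):
            ksloc = random.triangular(30, 90, 50)   # low, high, most likely
            em = random.triangular(0.8, 1.4, 1.0)   # net effort multiplier
            outcomes.append(2.94 * (ksloc ** 1.10) * em)
        outcomes.sort()
        return [outcomes[int(p * runs)] for p in (0.05, 0.25, 0.50, 0.75, 0.95)]

    best, good, likely, bad, worst = simulate()
    print("best %.0f / likely %.0f / worst %.0f person-months" % (best, likely, worst))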

The estimate is used as input to project planning. It doesn't yet adjust for serial or parallel work breakdown structures, so the real planning that states how long a project takes is still a different matter: you may have people idling, or crashing tasks to get things done.

I'm sure there will be more stories on estimation later on this blog.

Sunday, April 20, 2008

Software Cost Estimating with Cocomo II

I'm reading the book Cocomo II. It's an adjusted cost-estimation model from, mostly, Barry Boehm. As you can gather from previous posts, I'm sceptical of these models of process improvement and of managing projects through "numbers" and forms. Ideally, we would know everything at the start of the project and everyone would know how to do things most efficiently.

If you read the book only literally and pay attention just to the math, it won't get you far. The power of the book comes from interpreting the ideas behind the numbers and formulas, and only then using the numbers anyway, since they're the only factual foothold you'll get in a real-life situation. The rest are figments of our imagination, or personal recalibrations influenced by our own ideas of project management.

First you should remember what the word "estimate" means. It's an expectation of cost, time or effort, based on the information known at the time the estimate is produced. One of the strongest and most interesting statements is made in Chapter 1, where it is made clear that Cocomo II doesn't give you drop-dead fixed deadlines or cost statistics (as long as your numbers are correct), but rather a guideline, a foothold on a potential track for your project, subject to a potentially large deviation. The size of the deviation is determined by the quality and quantity of the information at the outset of the project. So the more you know beforehand, the more accurate your estimate will be; that is a logical given. Maybe that is also why you should consider reproducing estimates at different stages throughout the project.

However, things go further than cost estimates. In estimating we often make assumptions and tend to think along a positive trail of project execution. That is, we generally like to disregard risks and the things that will go wrong, and just don't take them into account. Or we think we can squeeze the effort anyway and do more in the same time than initially envisaged. To be correct in the estimate, regardless of a boss who won't like what you're telling him, you also need to factor the negative issues into the equation.

So, there must be a number of factors that negatively impact the project outcome as envisaged in the estimates. Think of these factors for a future project, as these increase the potential deviation significantly:
  • Incomplete definition of scope at the start of the project.
  • Unclear development process or not living up to that process.
  • A development team that doesn't communicate well or otherwise faces challenges in its communication.
  • Scope creep in future stages of the project.
Some people understand that projects above a certain size or cost have no chance of succeeding: the environmental factors keep changing, and with size, the need for communication and all the other factors increase. Simply put, you can't reliably estimate those projects, since there may be a deviation of a factor of 4-8.

If you think ahead to the completion of the project, it would be very easy to compile an estimate at that stage. You go through the project history, estimate how much was lost on each event (without even looking at the real numbers), and chances are you'll arrive at a number that is about 90% accurate. But we can only do that because at the completion of a project we have all the information we need to produce that estimate.

Now think towards the start of the project. What information do we have available and what are risks or issues that we should foresee?

From cost estimation, I think we can learn that reducing project risk and improving the chances of success comes down to increasing the amount of information available for successful development of the project. Think of methods like software prototyping, iterative deployment cycles, showing things early, etc. It's not yet proven that these produce the correct results, because showing things early may also give your client the feeling that things can change at any stage in the process.

So, from all of this, I conclude that the estimate itself is not the most worthwhile thing produced in the cycle; the accuracy of that estimate is more valuable. How much deviation can we expect overall, and how do we express it? Since we can't reliably come to any estimate at all to initiate a project, what range can we give to project decision makers, our sponsors, so that we can inform them beforehand whether something should be done at all?

Probably, this conclusion should result in a whole new way of software development: one based on measuring the quality of the information, scope and specification available at the outset, plus a measure of the risk involved if things progress on that little information.

Wednesday, April 02, 2008

Requirements of a silicon brain: Project Semantique

I've started work on some implementations (research) to elaborate my ideas on the implementation of a symbolic network. A good method for me is trying out some implementation, switching to reading, switching to philosophical meanderings, and back again. The entire process should feel like a kind of convergence towards a real working implementation (however far away that may be).

In previous posts, I touched on a couple of requirements that are part of this design:
  1. The energy equation must hold; that is, the time and energy it takes a biological brain should more or less equal the time and energy of a technical silicon implementation.
  2. It should be parallel in nature, similar to neuron firings (thread initiations) that fire along dendrites and synapses.
  3. It should be stack-less and not need immense amounts of stack or function unwinds.

Some new requirements:

  4. The frequency of introspection is undetermined at this point, or better: unknown. I don't know how often to inspect the network for any kind of conclusion or result, but I reckon the frequency is tied to the clock-cycle of the main CPU, or whatever compares to that. Someone noted on my blog that the frequency of the brain seems to be 40Hz, so it might be necessary to inspect 40 times a second (and clean up old entries, leaving room for new ones?). The idea is not to push too much onto the heap for analysis, but to clean up results regularly and continuously work forward, storing previous results in different rings of memory.
  5. There should not be any "output dendrite" or "return object".
  6. The state of the network at any point in time == the output.
  7. Previous results should eventually be stored in different rings of memory, which have lower prominence the further they sit from the source of processing. Quite possibly, results in more remote rings of memory require re-processing in the brain to become highly prominent again. A toy sketch of these rings follows the list.
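To make requirements 4 to 7 a bit more concrete, here is that toy sketch. All the numbers and structures are mine, nothing proven: inspect the network at ~40Hz, treat the current state as the output, and demote older results into rings of decreasing prominence.

    import time
    from collections import deque

    # Three rings of memory: the further from processing, the less
    # prominent. maxlen makes the outermost ring forget silently.
    RINGS = [deque(maxlen=40), deque(maxlen=400), deque(maxlen=4000)]

    def inspect(network_state, tick):
        # Demote the oldest entry of each full ring into the next ring,
        # working outward-first so nothing overflows unnoticed.
        for ring, next_ring in reversed(list(zip(RINGS, RINGS[1:]))):
            if len(ring) == ring.maxlen:
                next_ring.appendleft(ring.pop())
        RINGS[0].appendleft((tick, network_state))  # most prominent ring

    for tick in range(200):              # stand-in for the real main loop
        inspect({"active_symbols": tick % 7}, tick)
        time.sleep(1.0 / 40)             # inspect ~40 times a second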
So where does one start to design this?

I'm looking at "stackless python". It's a modified library of python that allows little tasklets to run that do stuff. Basically it's similar to calling a C function that passes in another function address to execute. The calling function unwinds and the CPU can start executing from the new address.

Python further hides the tasklets (which run in the same thread) as a kind of "green" thread or micro-thread, since it has a scheduler (not pre-emptive, but cooperative).

Check it out here:

http://members.verizon.net/olsongt/stackless/why_stackless.html#the-real-world-is-concurrent
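To give a feel for the programming model (this assumes the Stackless build of Python; the example itself is mine):

    import stackless

    def fire(name, times):
        for i in range(times):
            print("%s fires (%d)" % (name, i))
            stackless.schedule()   # yield cooperatively to other tasklets

    stackless.tasklet(fire)("neuron-A", 3)
    stackless.tasklet(fire)("neuron-B", 3)
    stackless.run()  # the cooperative scheduler interleaves A and B, no OS threads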

What is the objective?

The objective for now is to load word lists and process text. It's quite a basic process I'm simulating at this point, but that doesn't matter; I'm mostly interested in seeing whether these methods display any kind of emergent intelligent behaviour (a first code sketch follows below):
  1. Load word lists into memory
  2. Enter 'learning mode' for my symbolic network
  3. Process 'stories' that I downloaded from the web
  4. Establish 'connections' between symbols
  5. Verify connection results
  6. ... modify algorithm ... modify implementation ... back to 1
  7. Post results
Project name: Semantique
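As a first stab at steps 1 to 4, something as simple as this might already do (the file name and window size are placeholders, and the 'connection' is just co-occurrence counting for now):

    from collections import defaultdict

    def tokenize(path):
        with open(path) as f:
            return f.read().lower().split()

    def learn(tokens, window=3):
        # Strengthen a connection between symbols that co-occur closely.
        connections = defaultdict(float)
        for i, symbol in enumerate(tokens):
            for other in tokens[i + 1:i + 1 + window]:
                connections[(symbol, other)] += 1.0
        return connections

    net = learn(tokenize("story.txt"))  # placeholder story from the web
    print(sorted(net.items(), key=lambda kv: -kv[1])[:10])  # strongest links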

Tuesday, April 01, 2008

On the implementation of humor...

Cool! April Fool's Day. Well, I did not hear a lot of jokes today, luckily, but I guess others will have been fooled at some point, one way or another.

From cognitive science, I am very interested in the analysis of humor... What is humor? I'm not asking how to tell a good joke or what makes a good joke; at a lower level, I'm trying to understand when we find something funny. How come something is experienced as funny?

I define humor as a deviation from the most logical, expected path along which the context (your expectation) develops, towards something you didn't immediately see coming in the development of the story. That's what a joke is, anyway: if you did expect it, it wouldn't be all that funny. The best jokes and joke tellers keep you away from the other logical, explainable path long enough, until the punch line, where the actual context suddenly becomes clear.

All well and good... Star Trek seems to suggest that Data could not understand humor, as if humor in its essence were pure human emotion. Since machines don't by default have access to emotional responses (if emotion is the driving force of our life, in the sense that it is at the basis of our decision to get up in the morning and start doing something), Star Trek would assert that Data couldn't laugh at a joke because he didn't have access to an emotional organ, or a simulation of one.

I'm not sure about that assertion made in the series. I think humor isn't so much emotional, but rather a trigger (spike?) in your brain that brings forward an emotional reaction (laughter). That little difference is very large. Scientists have performed experiments on "aha" moments, those quick moments when you realise you have solved a puzzle and can complete it in its entirety. Those "aha" moments were accompanied by huge spikes of brain activity for a very short time, after which the context of the problem was entirely clear.

I thus imagine humor (specifically, for now) to be an emotional reaction to a relatively simple discovery in the brain: that the context and path we've been led to believe in (our expectation) is not the real path we should have taken to develop the context (chain of symbols) of the story. By "shifting" this context the right way as more information becomes available (the punch line), we feel a response to the "aha" moment when the brain solves it. Also, if you concentrate, you can rather easily suppress the urge to laugh to a great extent (does that suggest that laughter and humor are quite conscious processes?).

(Could you say, then, that the closer the expectation is to the actually developed context, the funnier the joke becomes, or vice versa, that the farther away it lands using the same words, the funnier it is?)

So, anyway, that means there might be ways to detect humor in software as well, provided the software can develop expectations and interpret contexts the way our brain can.

Therefore, Data probably won't actually laugh the way we humans do (since biologically we react to that aha moment), but it probably is possible to detect whether something is humorous by analyzing the contextual difference between two snapshots of the context, and then send the appropriate signals to react to it. Of course... Data is most likely not culturally apt, as he lacks real biologically induced emotions, so he may very well laugh inappropriately in contexts that are culturally sensitive (imagine!). But that is another story.
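If you wanted to play with that idea in code, the crudest possible version might compare the expected context against the actual one after the punch line. Everything below is hypothetical, of course, and real expectations would need a real language model of some kind:

    def context_shift(expected_symbols, actual_symbols):
        # 0 = exactly what we expected, 1 = complete surprise.
        expected, actual = set(expected_symbols), set(actual_symbols)
        overlap = len(expected & actual) / float(len(expected | actual) or 1)
        return 1.0 - overlap

    setup = ["man", "walks", "into", "a", "bar"]           # expectation: a pub
    punch = ["man", "walks", "into", "an", "iron", "bar"]  # re-read of "bar"
    print(context_shift(setup, punch))  # bigger shift = humor candidate?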