Tony,
Actually, in the university spinoff company I co-founded, we employed nonlinear filtering WAG (NLFWAG). It involved constructing a stochastic model of the distribution of possible results (i.e. how long the actual work takes) and an observation process (i.e. our estimates). We then estimated the conditional distribution of how long things would actually take, given our estimates, and used those as our real estimates. We did this until we realized our estimates were so bad that no amount of filtering would give us better ones :-)
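In spirit, the idea can be sketched as a scalar Bayesian (Kalman-style) update: treat your true effort multiplier (actual time / estimated time) as a hidden quantity, and each completed task as a noisy observation of it. This is my own minimal illustration, not Larimar's actual model; all names and numbers are invented.

```python
# Hedged sketch of the filtering idea: learn our estimation bias from
# observed actual/estimate ratios, then correct new raw estimates.

def update(prior_mean, prior_var, observed_ratio, obs_var):
    """One Kalman-style update: blend prior belief about our bias
    with a single observed actual/estimate ratio."""
    k = prior_var / (prior_var + obs_var)           # gain: trust in new data
    mean = prior_mean + k * (observed_ratio - prior_mean)
    var = (1 - k) * prior_var
    return mean, var

# Start assuming we're unbiased (ratio 1.0) but quite unsure.
mean, var = 1.0, 0.25
for ratio in [1.8, 2.1, 1.6]:    # three tasks ran 1.8x, 2.1x, 1.6x over
    mean, var = update(mean, var, ratio, obs_var=0.5)

# "Filtered" estimate: the raw estimate scaled by the learned bias.
raw_estimate_days = 10
print(round(raw_estimate_days * mean, 1))   # → 15.0
```

The punchline in the email applies here too: the correction is only as good as the observation model feeding it.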
Two other useful tips:
1. Gartner (or maybe Forrester, or even the PMI) ran a wide study a few years ago and discovered that the amount you were off from your estimates 20% of the way through your planned schedule was likely to be the amount you were off at the end.
Lesson: take a moment 20% into your schedule to review and, if appropriate, revise your work plan.
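The arithmetic behind that checkpoint is trivially simple, which is rather the point. A toy sketch (the figures are made up):

```python
# Sketch of the "20% rule": the proportional slip observed 20% of the
# way in tends to persist to the end of the schedule.

planned_total_days = 100
planned_at_checkpoint = 20        # the 20% mark of the plan
actual_at_checkpoint = 26         # we're 6 days behind at that point

slip_ratio = actual_at_checkpoint / planned_at_checkpoint   # 1.3
projected_total = planned_total_days * slip_ratio
print(projected_total)            # → 130.0: revise the plan now, not at the end
```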
2. LiquidPlanner, FogBugz and a few newer project management applications are vying to replace MS Project, Primavera and Niku as viable project scheduling tools. What does the new breed of tools have in them? They factor in a degree of uncertainty based upon how far into the future you are estimating.
Your action: Go search out the video demos and assess whether the tools are useful for you.
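The horizon-based uncertainty these tools apply can be illustrated with a simple "cone of uncertainty" style multiplier. This is my own sketch with invented numbers, not LiquidPlanner's or FogBugz's actual scheduling model:

```python
# Sketch: an estimate range that widens the further into the future
# the work is scheduled (growth rate is an arbitrary assumption).

def ranged_estimate(base_days, weeks_out, growth=0.05):
    """Return a (low, high) day range that widens with planning horizon."""
    spread = 1 + growth * weeks_out
    return base_days / spread, base_days * spread

for weeks in (0, 4, 12):
    lo, hi = ranged_estimate(10, weeks)
    print(f"{weeks:2d} weeks out: {lo:.1f}-{hi:.1f} days")
```

A near-term task stays a point estimate; one starting three months out becomes a wide range, which is exactly the behaviour worth checking for in the demos.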
Larimar:
Wow! I am noted locally for the sophistication of my WAGs, but your technique is light-years beyond what I am familiar with.
Actually, I estimate by using data flow diagrams to break the system down into relatively small, equal-sized chunks (largely by ensuring each chunk has about the same number of inputs and outputs). I then estimate one chunk and multiply out. Not perfect by any means, but it often leads to more than adequate results.
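That chunk-and-multiply approach can be sketched in a few lines (chunk names, I/O counts and timings below are all hypothetical, not from any real project):

```python
# Sketch of chunk-based estimation: split the system into roughly
# equal-sized chunks by I/O count, time one chunk, multiply out.

chunks = {                       # chunk name -> (inputs, outputs)
    "ingest":   (3, 2),
    "validate": (2, 3),
    "report":   (3, 3),
    "archive":  (2, 2),
}

# Sanity check: chunks should be roughly the same size (total I/O).
sizes = [i + o for i, o in chunks.values()]
assert max(sizes) - min(sizes) <= 2, "re-split the larger chunks"

days_for_one_chunk = 4           # measured by actually building one
estimate_days = days_for_one_chunk * len(chunks)
print(estimate_days)             # → 16
```

The sanity check is the crux: the multiplication is only defensible if the chunks really are comparable in size.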
Tony