Donald J. Patti

Archive for the ‘Software Development’ Category

Failed Pilot? Chalk it up as a Win!

In Business, E-Business, management, project management, Software Development, Technology on 14 December 2009 at 8:30 am

You’ve just had a failed pilot, followed by a quick meeting with the Project Management Office (PMO).  Your project was killed and you feel like a failure.

What should you do next?  “Celebrate,” I say, “then chalk it up as a win.”

What? Not the answer you were expecting?  Let me explain…

I spend quite a bit of time in a classroom, whether it’s to teach a subject or to learn one myself.  During one class, the oft-cited Standish Group statistic that measures project success reared its ugly head once again, this time citing a roughly 30% project success rate, with about 45% of projects qualifying as challenged (Standish Group 2009).  Per Standish, roughly 70% of projects fail to meet expectations – a sobering statistic.

A project manager sitting behind me who specialized in pharmaceuticals shocked me when she said, “Gee, I wish our numbers were that good [in our industry].  The odds of a clinical trial resulting in the drug reaching market are 1 in 20, and that’s after it’s cleared a number of internal hurdles to justify a stage I/II trial.”  (A stage I/II trial comes early in the process and serves as a pilot.)  While I laughed at her comment, I also considered how it related to the Standish statistics and definitions of project success.

By her definition, success meant bringing her product all the way to market, an unlikely outcome by her own estimation and by that of my fellow health-sciences colleagues.  But what if success were defined as “accurately determining whether a product should be brought to market,” or “successfully determining whether a project should continue past the pilot stage”?  Suddenly, many of her projects would be considered successes.  After all, how many drugs don’t work, have ugly side effects, or have the potential to kill their patients?  Aren’t she and her team successful if they keep bad drugs off the market, and aren’t we better off for it?

In the software industry, good software methodologies use pilots, proofs of concept or prototypes to determine whether a software product is worth fully developing and fully budgeting.  In the Rational Unified Process, the rough equivalent of a pilot is the Lifecycle Architecture Milestone, whose purpose is to confirm that the greatest technical and design hurdles can be overcome before additional funding is provided to the project.  In Rapid Application Development, prototyping is embedded in each and every iteration (cycle), while paper prototyping is part of Agile development.  Regardless of the methodology, these steps are designed to provide results early, but they are also designed to confirm that a project is worth completing, providing an opportunity to change course or shut down the effort when it’s not.

So, maybe it’s time for those of us in the PMO and portfolio management to change the way we measure project success.  Right alongside the “projects successfully completed on time/on budget” statistic, there should be two others: “projects successfully killed because their pilots proved they simply weren’t viable” and “dollars saved by ending unfeasible projects early.”  Because in the end, a pilot’s failure is just as good as a pilot’s success, as long as you listen to its message.

When Quality is Too Costly

In Business, management, project management, Quality, Software Development, Technology on 2 November 2009 at 8:15 am

Throughout his career, my father served as an engineering manager in the aerospace industry, where the parts he and his teams developed were used for missiles and spacecraft. Sometimes, these parts were required to meet specifications within one ten-thousandth (0.0001) of an inch, a level of quality rarely seen in any industry. I remember talking with him the day his company announced they would enter the automotive parts market, which used similar components.

“You should be quite successful,” I told my father, “if you’re able to deliver products that are of such high quality to the automotive industry. Who wouldn’t buy a product that meets specifications so precisely?”

“That’s actually the problem,” my father responded. “We can build parts to one ten-thousandth of an inch, but the automotive market doesn’t need anything that precise and isn’t willing to pay for it. It costs an awful lot of money to build something that precise and to verify that it meets that standard.” He continued, “It will take us a while to figure out how to mass-produce products with much lower tolerances that are also competitively priced.”

Many years later, I encountered the same issue in my own career. Educated like many others in the principles of total quality management (TQM) and the concept of “zero defects”, I believed that the key to building excellent software was a clean defect log.  Budgeting time for multiple rounds and types of testing, my team and I dutifully worked to test 100% of all functionality in our systems and didn’t release our software until every known defect was repaired.

The day came to release our software, and I was met with a surprise. Sure enough, not many defects were reported back – but defects were reported. Why?

It turned out that our large user base was exposing problems that neither our QA team nor our beta testers encountered or anticipated. Only through the rigors of production use did these defects come to the surface. This was my first lesson about the limitations of preaching a “zero-defect” mantra – nothing replaces the “test” of production.

During our project retrospective, an even more important downside to the blind pursuit of “zero defects” surfaced. Over the two extra months our team spent addressing low-severity defects, our client lost roughly $400,000 in new sales because the new system was not available to collect them. (I had done the ROI calculations at the beginning of the project showing $200K per month in new income, but had completely forgotten that holding on to the system for a couple of months meant the client would be deprived of this money entirely – those sales were not merely delayed.) For the sake of perfection, zero defects meant withholding key benefits – this was a far more important lesson than the first.
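To make the trade-off concrete, here’s a back-of-the-envelope version of the calculation I should have re-run before holding the release. The $200K-per-month figure comes from the original ROI work; the team cost is an assumed number added purely for illustration.

```python
# Back-of-the-envelope cost-of-delay sketch for this project.
# monthly_new_income comes from the original ROI calculation; the team cost
# figure below is an assumption, not the project's actual number.

monthly_new_income = 200_000          # projected new income per month from the ROI work
months_held_back = 2                  # extra months spent chasing low-severity defects
assumed_team_cost_per_month = 30_000  # hypothetical cost of keeping the team on fixes

lost_income = monthly_new_income * months_held_back        # income never collected
fix_cost = assumed_team_cost_per_month * months_held_back  # cost of the extra fixing

print(f"Income lost by holding the release: ${lost_income:,}")
print(f"Cost of the extra defect-fixing:    ${fix_cost:,}")
print(f"Total price of 'zero defects':      ${lost_income + fix_cost:,}")
```

Run with any reasonable numbers, the lesson is the same: the cost of delay dwarfs the cost of the fixes themselves.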

Certainly, few users want to deal with defects in software, particularly software that’s not ready to deliver. And, of course there are software products where a near-zero defect rate is extremely important. For example, I’d be quite upset if a zero-defect standard weren’t set for the key functionality in an air traffic control system or in certain military, aerospace, medical and financial systems.

But now, before I recommend a zero-defect quality target for any project, I make certain that such a high level of quality is actually worth its cost to the client, and I include in my calculations the benefits users lose while the product is held back until that level of quality is achieved. After all, none of us likes defects, but would we really want to wait the months or years it might take before we reap the benefits of a new version of our favorite software?

A Case of Developer’s Optimism

In management, project management, Software Development, Technology on 26 October 2009 at 6:30 am

In the project management world, “How much time do you think this will take?” is often a loaded question.  This is true not only when senior managers ask it of a project manager, but also when a project manager asks it of a developer or other team member.

As a former developer, my knee-jerk response to “How much time…” was almost always to give an answer that was correct – but only under the best possible conditions and outcomes.  It was only when I began managing projects that I saw how often my estimates were low and began to understand why.  I’ve coined this under-estimation of work effort on an individual task or unit of work “Developer’s Optimism”, though it’s just as likely that any other member of a project team could make this mistake.  Developer’s Optimism has many causes, but it can be alleviated using a few basic cures, or adjustments, to the estimating process.  I’ll explore both the causes and the cures in today’s post.

The Cause

Developer’s optimism is, on the surface, what the phrase states – a well-meaning software developer or other expert on the project team gives an estimate that is theoretically possible but highly unlikely.  In most cases, the developer may be quite experienced, yet their estimate only has a one in ten chance of being accurate.  Why?

It’s not that the developer wants to ruin your estimate, though you may begin to believe this is the case if you’ve had this occur with the same person repeatedly.  In nearly every case, he or she has made one of six mistakes in their estimating that you need to help them avoid, as described below:

  1. He did not include unit testing.  In most cases, a developer will not include unit testing in their estimate, particularly if you phrased the question as we did in the first paragraph.  In most developers’ minds, the work is done when the last line of code has been written on an alpha version, not when testing has been completed.  This common difference in dividing lines can throw off your estimate by 10-25% or more.
  2. She did not include re-work.  Perhaps all of your developers do perfect work and they deliver the first version of their code working exactly as your specifications are written. But then the client takes a look at the outcome during a prototyping session and says she’d like a few changes.  Depending upon your client’s level of scrutiny and detail, this could bias your estimate downward by 5-50%.  Sure, change requests can add hours to cover this, but isn’t it better to plan for a moderate amount of change in advance, rather than come back asking for more time and money mid-project?
  3. He estimated based on his own skills and not those of others.  Hoping for a better estimate, project managers nearly always go only to senior developers, designers and architects to benefit from their experience – big mistake!  A senior developer will almost always tell you how long it will take them to do the work themselves, not how long it will take the average developer.  Inevitably, when the time comes for the project manager to staff the team, the senior developer is not available and the estimate is unrealistic.  In this scenario, it could take anywhere from 10% longer to 10 times longer (yes, doubting senior manager reading this now – 10 times!) for an inexperienced developer to complete the same work as an experienced one – particularly if the senior developer is not available to provide coaching or guidance.
  4. She estimated her portion of the work but did not include the work of others.  In this case, she’s spot-on with her estimate, but she’s the user interface (UI) developer and not the database developer.  So, if you use her estimate, the hours are gone and the UI looks beautiful, but the database queries/procedures aren’t written and the graphics haven’t been spruced up by the graphic artist.
  5. He did not include bug-fixing.  Depending upon how you estimate testing and quality assurance, you may or may not assume that the developer has included time to make bug-fixes in their estimates.  In most cases, he has not done this, because bug-fixing often occurs weeks or months after the code was written.  Depending upon the complexity of the task and experience of the developer, bug-fixing typically takes between 10% and 30% more time than completing the work itself.
  6. She did not account for complexity and risk.  In some cases, project managers never ask the critical follow-up question to “How long will this take?” – “How hard is this?”  When we ask, we begin to understand how complex and risky the task actually is.  Particularly if a project involves many high-risk tasks, one of those tasks is certain to face setbacks as it is developed, blowing our initial estimate out of the water.  For high-risk tasks, it’s not uncommon to have a 25% chance of taking twice as long.  (The sketch after this list shows how these factors can compound on a single task.)
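To see how quickly these omissions add up, here is a small illustrative sketch – not anything from a real estimating tool. The percentages are rough midpoints of the ranges mentioned in the list above and are assumptions for the sake of the example; item 4, the work of other team members, still has to be estimated separately.

```python
# Illustrative only: how the factors above can compound on a single task.
# The percentages are rough midpoints of the ranges in the list and are
# assumptions, not measurements; other people's work (item 4) still needs
# its own, separate estimate.

def adjusted_estimate(raw_hours,
                      unit_testing=0.175,    # item 1: roughly 10-25% extra
                      rework=0.25,           # item 2: roughly 5-50% extra
                      bug_fixing=0.20,       # item 5: roughly 10-30% extra
                      skill_factor=1.5,      # item 3: average vs. senior developer
                      risk_allowance=0.25):  # item 6: a 25% chance of taking twice as long
    """Scale a developer's raw estimate by the omissions described above."""
    hours = raw_hours * skill_factor
    hours *= 1 + unit_testing + rework + bug_fixing + risk_allowance
    return hours

raw = 40  # "two weeks, tops" from an optimistic senior developer
print(f"Raw estimate:      {raw} hours")
print(f"Adjusted estimate: {adjusted_estimate(raw):.0f} hours")
```

Even with these middle-of-the-road assumptions, a 40-hour answer nearly triples once the missing work is put back in.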

The Cure

Fortunately, project managers can cure their teams (and themselves) of Developer’s Optimism by taking roughly a half-dozen steps when making their estimates.  These include:

  1. Ask multiple individuals to estimate.  Instead of asking one senior developer, ask two or three developers of varying skill levels, then average their estimates.
  2. Use analogous estimating.  Analogous estimating uses historical project data to look back at the level of effort spent to complete similar tasks in prior projects. It then adjusts those figures based upon the differences between the old projects and the new one to estimate time for the current task.  As a result, analogous estimating takes away some of the risk of downward bias that is so common when project estimates are developed.
  3. Make it clear what “this” is.  When asking the question, “How much time will THIS take?” make it very clear what THIS entails – are you assuming THIS includes unit testing, bug-fixing, re-work, the work of other team members, or none of the above?  If none of the above, be certain to include these in your own, separate estimate.
  4. Add “…an average developer…” to the question.  Instead of placing the developer in the position of assuming they will do the work, ask them to assume that they WON’T do the work by rephrasing: “How much time will this take the average developer?”  In many cases, the developer can remember their skills when they had just started or were only a few years into the job, and will give you a better estimate.
  5. Assess complexity and risk.  For each task and the project as a whole, conduct a risk assessment and set aside a legitimate number of hours for risk contingency based on the probability of a specific approach to completing a task failing and the likely number of hours to develop an alternative if the primary fails.  Risk management is a key discipline in project management – one that impacts your estimation as much as it does your execution of the project.
  6. Use PERT.  Not a shampoo – the Program Evaluation and Review Technique is familiar to many project managers as a way to estimate project duration and level of effort.  During the evaluation process, PERT asks individual estimators to provide three estimates for the same activity – one optimistic, one moderate (most likely) and one pessimistic.  PERT then averages these three values using the formula: Expected time = (optimistic + 4 x moderate + pessimistic) / 6.  I tend to weight the moderate estimate by 3 and divide by 5 instead, because I think real projects vary more widely than the standard weighting assumes, but either way I have found PERT’s formula to be a good way to level out the bias.  (A quick sketch of both versions follows this list.)
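Here is a minimal sketch of both formulas – the standard PERT weighting and the 3x/5 variant I lean toward – with made-up sample numbers just to show the mechanics.

```python
# A minimal sketch of the classic PERT formula and a wider-variance variant.
# The sample hours are made up to show the mechanics only.

def pert_estimate(optimistic, moderate, pessimistic):
    """Classic PERT expected time: (O + 4M + P) / 6."""
    return (optimistic + 4 * moderate + pessimistic) / 6

def wider_variance_estimate(optimistic, moderate, pessimistic):
    """Variant that weights the moderate case less: (O + 3M + P) / 5."""
    return (optimistic + 3 * moderate + pessimistic) / 5

o, m, p = 24, 40, 80  # optimistic, moderate and pessimistic hours (sample values)
print(f"Classic PERT estimate:   {pert_estimate(o, m, p):.1f} hours")
print(f"Wider-variance estimate: {wider_variance_estimate(o, m, p):.1f} hours")
```

The variant shifts a little more weight onto the optimistic and pessimistic tails, which is the point – it assumes outcomes spread further from the “most likely” case than classic PERT does.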

An Optimistic Ending

Project estimation is almost always fraught with some degree of inaccuracy, so it’s not as though taking the steps above will result in perfect estimates.  But it is possible to have a reasonably accurate outcome – one that is consistently within plus or minus 10% for the entire project – on the vast majority of projects.  To do this, however, project managers must take methodical steps to reduce inaccuracy.  They must also ask their team members and their senior managers to take off the rose-tinted glasses before giving or speculating on an estimate.

Dead Babies, Dead-tired Staffers and “Leaving the Zone”: Exceeding the Envelope in Software Development

In E-Business, Ethics and ideology, management, Quality, Software Development, Technology on 12 October 2009 at 10:38 am

I know, I know.  “What do dead babies have to do with software development?” you say.  “Are you playing my heart-strings?”

Sensationalism being what it is, I have to admit that I couldn’t avoid leading with nearly everyone’s horror – dead babies.  Yet, there is a critical tie between today’s three attention-grabbing subjects and software development that makes this entry worth reading.  And, it has implications for how you manage your staff, software developers or otherwise, in the days to come.  Read on.

Leaving the Zone

It’s been more time than I care to admit, but during my senior year of college I learned what it means to be “in the zone” as well as what it’s like to leave it – painfully.  A track and cross country athlete at the NCAA Division 1 level, I was the first finisher on my team for my first four races, finishing in the top five for every race and posting two victories.

Old glories aside, it’s far more notable what happened next – my body crashed.  Though I’d trained hard with the rest of my team, posting a full summer of 80+ mile weeks and even two at 100+, I then took on the world when classes started, signing up for a full slate of five courses and tacking on a full-time job managing a political campaign in Montecito, California, fifteen minutes south of Santa Barbara.  I slept less than five hours a night and spent nearly all my time racing from one place to another, which is a sure recipe for a wrecked body.

Back then, I had no idea there were limits to the punishment my body could take, but I found out quickly.  After consistent finishes with the lead pack among hundreds of runners, I finished no better than the middle of the pack in my remaining four races.  Even worse, my team went from three victories in four races to middle-of-the-pack as well, in part because I was no longer pacing them to victory.  At the end of the season, the effects of over-work were readily apparent – an MRI showed three stress fractures, one in the femur, the body’s largest and most durable bone.  Clearly, I hadn’t recognized that I was “leaving the zone” by over-working myself; I only realized it after the damage was done.

Dead-Tired Staffers

Amazingly, it took an enormous amount of self-abuse for me to finally start listening to the messages my body was sending me about being tired or over-worked, but the lesson has stayed with me since.  As I’ve spent more time at work leading people, I’ve noticed that the lesson also applies to the work world, where tight deadlines and high-pressure work can lead us as leaders to push for overtime again and again.

Consider your last marathon project with brutal deadlines and lots of overtime: Can you remember seeing these signs of over-work in your team members as they pushed themselves beyond their limits – irritability, an inability to concentrate, lower productivity, poor quality and, at the extreme, negative productivity, when more work was thrown out than was gained?  Looking back, you’ve probably seen at least a few of these, and if you check your defect logs from the work produced during these times, you’ll notice a spike in the number of defects resulting from your “more overtime” decision.  But maybe you’re still denying that over-work will threaten the success of your projects, not to mention the long-term well-being of your team members, as you run a dedicated team of dead-tired staffers over the edge.

Dead Babies

If this is the case, you wouldn’t be the first manager I’ve met who doesn’t understand how over-work can actually slow your project down rather than speed it up.  Software developers, analysts, engineers and QA team members, these managers argue, are hardly putting in a hundred miles of physical exertion each and every week, though they may be putting in 60 or 70 hours of work.  These managers counter that mental exertion and sleep deprivation are not the same as physical exertion on the level of a college athlete.  Or they accept it in theory, until a project falls behind or a key deadline looms.

Though I found a number of excellent articles and blogs on the subject of software development and over-work, which I’ve posted at the bottom of this article, the best evidence of the adverse effects of sleeplessness, stress and over-work on our ability to use our minds productively actually comes from the world of parenting.  In the Washington Post article, “Fatal Distraction: Leaving a child in the back seat of a hot car…”, reporter Gene Weingarten moves beyond the emotion of a very sensitive subject and asks the telling question of what was going on in the lives of parents who left their children in cars on hot, sweltering days.  The answer?  Stress, sleeplessness, over-work and half-functioning brains – in many cases brought on by us, as managers.

“The human brain is a magnificent but jury-rigged device,” writes Weingarten, citing David Diamond, a professor of molecular physiology who studies the brain.  (Weingarten and Diamond deserve all of the credit for this research, but I’ll paraphrase.)  A sophisticated layer – the one that enables us to work creatively and think critically – functions on top of a “junk heap” of basal ganglia we inherited from lower species as we evolved.  When we over-work our bodies, the sophisticated layer shuts down and the basal ganglia take over, leaving us as stupid as an average lizard.  Routine tasks are still possible, like eating or driving to work, but changes in routine or critical-thinking tasks become extremely difficult.  Even the most important people in our lives are forgotten when enough fatigue and stress are applied, as Weingarten’s article shows.

If an otherwise dutiful, caring parent can’t remember their own child when sufficiently fatigued, what is the likelihood we’ll get something better than a dumb lizard from our software development team when we push them above sixty hours per week again and again?  And when they’re finished, how high will their quality of work actually be?

So, when considering another week of over-time, think twice.  Sometimes, it’s better to just send the team home.

—–

Gene Weingarten’s Washington Post article can be found here: http://www.washingtonpost.com/wp-dyn/content/article/2009/02/27/AR2009022701549.html?wpisrc=newsletter&sid=ST2009030602446

Other good articles on overtime and software development can be found here:

http://xprogramming.com/xpmag/jatsustainablepace/

http://www.uncommonsenseforsoftware.com/2006/06/planned_overtim.html

http://www.systems-thinking.org/prjsys/prjsys.htm

http://www.caffeinatedcoder.com/the-case-against-overtime/