Donald J. Patti

Archive for the ‘Technology’ Category

Failed Pilot? Chalk it up as a Win!

In Business, E-Business, management, project management, Software Development, Technology on 14 December 2009 at 8:30 am

You’ve just had a failed pilot, followed by a quick meeting with the Project Management Office (PMO).  Your project was killed and you feel like a failure.

What should you do next?  “Celebrate,” I say, “then chalk it up as a win.”

What? Not the answer you were expecting?  Let me explain…

I spend quite a bit of time in a classroom, whether it’s to teach a subject or to learn myself.  During one class, the oft-cited Standish Group statistic that measures project success reared its ugly head once again, this time citing a roughly 30% project success rate with roughly 45% qualifying as challenged (Standish Group 2009).  Per Standish, roughly 70% of projects fail to meet expectations – a sobering statistic.

A project manager sitting behind me who specialized in pharmaceuticals shocked me when she said, “Gee, I wish our numbers were that good [in our industry].  The odds of a clinical trial resulting in the drug reaching market are 1 in 20, and this is after it’s cleared a number of internal hurdles to justify a stage I/II trial.”  (A stage I/II trial is early in the process and serves as a pilot.)  While I laughed at her comment, I also considered how it related to the Standish statistics and definitions of project success.

By her definition, success meant bringing her product all the way to market, an unlikely outcome by her own estimation and by those of my fellow health sciences colleagues.  But what if success were defined as, “Accurately determining whether a product should be brought to market,” or “Successfully determining whether a project should continue past the pilot stage”?  Suddenly, many of her projects would be considered successes.  After all, how many drugs don’t work, have ugly side effects, or have the potential to kill their patients?  Aren’t she and her team successful if they keep bad drugs off the market, and aren’t we better off for it?

In the software industry, good software methodologies use pilots, proofs of concept or prototypes to determine whether a software product is worth fully developing and fully budgeting.  In the Rational Unified Process, the rough equivalent of a pilot is called the Lifecycle Architecture Milestone, and its purpose is to confirm that the greatest technical and design hurdles can be overcome before additional funding is provided to the project.  In Rapid Application Development, prototyping is embedded in each and every iteration (cycle), while paper prototyping is a part of Agile development.  Regardless of the methodology, these steps are designed to provide results early, but they are also designed to confirm that a project is worth completing, providing an opportunity to change course or shut down the effort when it’s not.

So, maybe it’s time for those of us in the PMO and portfolio management to change the way we measure project success.  Right alongside the “projects successfully completed on time/on budget” statistic, there should be two others — “projects successfully killed because their pilots proved they simply weren’t viable,” and “dollars saved by ending unfeasible projects early.”  Because in the end, a pilot’s failure is just as good as a pilot’s success, as long as you listen to its message.

Taking Care of the Cobbler’s Children

In Economics, Entrepreneurship, management, Manufacturing, Small Business, Technology on 27 November 2009 at 8:35 am

“A cobbler’s children never have shoes,” a co-worker told me after we visited a healthy, thriving small business to provide consulting services. We met with the CEO, a talented entrepreneur who had built her business from one person into twenty in just under ten years. After two days of downtime, desperate to see her systems up and running and her client happy, she readily agreed to pay external bill rates for our repairing a critical part of their infrastructure. Yet, despite the obvious success of the business, the network equipment was nearly a decade old, all of the servers were used or refurbished and the employees’ desktop computers were more than two generations obsolete. “This was more than a case of the cobbler’s children being the last helped by the cobbler,” I thought. But, if not this, then why?

I recalled a day in economics class over a decade before, when I learned about the Cobb-Douglas function (http://en.wikipedia.org/wiki/Cobb%E2%80%93Douglas), which states that productivity is a function of capital equipment, labor and technology. In short, Cobb-Douglas says that an increased investment in technology or capital equipment (machinery, tools) increases the productivity of your employees (laborers). This seemed obvious, so I tucked it away in my mind as a good example of conventional wisdom transformed into a simple equation.
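For readers who like to see the equation at work, here is a minimal sketch of the Cobb-Douglas relationship in Python; the exponents and dollar figures are illustrative assumptions, not numbers taken from any business described in this post.

```python
# Cobb-Douglas production: output = A * K^alpha * L^beta
# A = technology (total factor productivity), K = capital, L = labor.
# The exponents and figures below are illustrative assumptions only.

def cobb_douglas(technology, capital, labor, alpha=0.3, beta=0.7):
    """Return output for a given mix of technology, capital and labor."""
    return technology * (capital ** alpha) * (labor ** beta)

# A twenty-person shop limping along on aging equipment...
before = cobb_douglas(technology=1.0, capital=100_000, labor=20)

# ...versus the same twenty people after a modest infrastructure refresh,
# modeled as somewhat more capital and slightly better technology.
after = cobb_douglas(technology=1.1, capital=150_000, labor=20)

print(f"Relative output gain: {after / before - 1:.1%}")  # roughly 24%
```

The point is not the particular numbers but the shape of the relationship: output rises without hiring anyone, which is exactly the kind of investment the bootstrapping entrepreneur resists.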

During my years consulting, I’ve seen Cobb-Douglas at work numerous times – at a large computer manufacturer where my team deployed over $1 million in computer hardware for collaboration for their 250,000 employees; at a global heavy manufacturer who built out identical server farms for their plants; at a top-10 financial institution where a dot-0 software upgrade (version 2.0, version 3.0, etc.) triggered the purchase of entirely new corresponding hardware. In most cases, we prepared business cases that showed (1) there were sizable productivity gains through the purchase and (2) the cost of replacing hardware during a corresponding software upgrade was far lower than the cost of waiting a year or two, by which point the hardware would be seeing high failure rates and affecting customers or workers.

I’ve also seen Cobb-Douglas ignored by otherwise-successful entrepreneurs. While this may seem surprising to some, if you understand the mindset of the entrepreneur, then you’ll understand why. You’ll also understand why this is not good policy for a thriving small business.

For many entrepreneurs, the key to initial success lies in boot-strapping – finding creative ways to deliver for customers despite the lack of resources necessary to get the job done. In most cases, small business owners build their business from the ground up by repeatedly stretching human resources to make up for the lack of capital investment or technology, rewarding these employees with equity or large delivery bonuses in exchange for working lots of overtime. As time passes, the entrepreneur equates scarcity and heavy workloads with success, and a pattern that ignores Cobb-Douglas is engrained.

All is well until the business grows, resource scarcity is no longer necessary and the rewards for burning the midnight oil are no longer available. The business has entered its teen years and is in need of some major infrastructure investments, but the entrepreneurial leader has trouble making the investment. Too many years of boot-strapping have made it difficult to imagine making a sizable, long term investment in technology or productive equipment. It’s safer, the thinking goes, to keep doing what made you successful – boot-strapping.

Yet the business has changed and the keys to past success are not the keys to future success. Businesses may start by successfully bootstrapping, but they grow by improving product quality, normalizing operations and building brand, all of which require substantial investment in technology, people and resources. Few entrepreneurs recognize this shift occurring, so their business suffers “growing pains” until leadership transitions to someone else.  Dr. Rudy Lamone, a now-retired professor of Entrepreneurial Studies and former dean of the University of Maryland RH Smith School of Business, echoed the observation that entrepreneurs are often replaced once they’ve grown a business past a dozen employees, primarily because the behaviors that led to past successes are now detrimental to the business.

As a result, it’s not uncommon to encounter a small thriving business that uses ten-year-old computer hardware and six-year-old desktops for seemingly inexplicable reasons.  The cobbler’s children have no shoes, but not for lack of leather and nails.

What has been the impact?

  1. One entrepreneur lost their largest client by failing to buy and implement a defect tracking system capable of handling a dozen developers and QA resources. The software was delivered, but it was so defect-ridden that the client’s employees could hardly use it.
  2. Another entrepreneur lost two key developers who grew tired of developing on old desktops and outdated software.  When the phone call came from a recruiter to work for a business that provided new equipment and regular training, they jumped ship, creating a one-month backlog of development work in their wake.
  3. A third entrepreneur suffered a one-week outage of their production system because they failed to stock a redundant firewall for their network.  The pricey firewall, a $5,000 unit, had a one-week lead time for replacement.  The cost was $80,000 in revenue and breach-of-contract complaints from nearly every client.
  4. A fourth entrepreneur, mentioned at the beginning of this blog, suffered a three-day outage, but was able to avoid a breach of service level agreements by holding the network together for two more months, then upgrading the infrastructure after the contract was renewed.

The effects of failing to take care of the cobbler’s children are evident to me, but they’re still anecdotal.  So, I’m asking small business owners and entrepreneurs – when your business is thriving, do you hesitate to make long-term investments in infrastructure? If so, why? If not, what criteria do you use to make the buying decision?

When Quality is Too Costly

In Business, management, project management, Quality, Software Development, Technology on 2 November 2009 at 8:15 am

Throughout his career, my father served as an engineering manager in the aerospace industry, where the parts he and his teams developed were used for missiles and spacecraft. Sometimes, these parts were required to meet specifications within one ten-thousandth (0.0001) of an inch, a level of quality rarely seen in any industry. I remember discussing the day his company announced they would enter the automotive parts market, which used similar components.

“You should be quite successful,” I told my father, “if you’re able to deliver products that are of such high quality to the automotive industry. Who wouldn’t buy a product that meets specifications so precisely?”

“That’s actually the problem,” my father responded. “We can build parts to one ten-thousandth of an inch, but the automotive market doesn’t need anything that precise and isn’t willing to pay for it. It costs an awful lot of money to build something that precise and to verify that it meets that standard.” He continued, “It will take us a while to figure out how to mass-produce products with much lower tolerances that are also competitively priced.”

Many years later, I encountered the same issue in my own career. Educated like many others in the principles of total quality management (TQM) and the concept of “zero defects”, I believed that the key to building excellent software was a clean defect log.  Budgeting time for multiple rounds and types of testing, my team and I dutifully worked to test 100% of all functionality in our systems and didn’t release our software until every known defect was repaired.

The day came to release our software and I was met with a surprise. Sure enough, not many defects were reported back, but defects were reported nonetheless. Why?

It turned out that our large user base was exposing problems that neither our QA team nor our beta testers encountered or anticipated. Only through the rigors of production use did these defects come to the surface. This was my first lesson about the limitations of preaching a “zero-defect” mantra – nothing replaces the “test” of production.

During our project retrospective, an even more important downside to the blind pursuit of “zero defects” surfaced. Over the two extra months our team spent addressing low-severity defects, our client lost roughly $400,000 in new sales because the new system was not available to collect them. (I had done the ROI calculations at the beginning of the project showing $200K per month of new income, but had completely forgotten that holding on to the system for a couple of months meant the client would be deprived of this money entirely – the benefits were not merely delayed.) For the sake of perfection, zero-defects meant withholding key benefits – this was a far more important lesson than the first.
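For what it’s worth, here is the back-of-the-envelope arithmetic I should have re-run before chasing the last low-severity defects, as a small sketch; the figures are the ones from this project, and the key assumption is that the revenue is foregone outright rather than merely shifted later.

```python
# Cost-of-delay sketch: benefits foregone while a finished-enough system
# sits unreleased. The key assumption is that this revenue is lost outright,
# not simply collected a couple of months later.

monthly_new_income = 200_000    # new sales the system would enable, $/month
months_chasing_low_sev = 2      # extra time spent on low-severity defects

cost_of_delay = monthly_new_income * months_chasing_low_sev
print(f"Benefits foregone by holding the release: ${cost_of_delay:,}")
# -> Benefits foregone by holding the release: $400,000
```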

Certainly, few users want to deal with defects in software, particularly software that’s not ready to deliver. And, of course there are software products where a near-zero defect rate is extremely important. For example, I’d be quite upset if a zero-defect standard weren’t set for the key functionality in an air traffic control system or in certain military, aerospace, medical and financial systems.

But now, before I recommend a zero-defect quality target for any project, I make certain that the costs of such a high level of quality actually benefit the client, and I make certain my calculations include the benefits users lose while the product is held back until that level of quality is achieved. After all, none of us like defects, but would we really want to wait the months or years it might take before we reap the benefits from a new version of our favorite software?

A Case of Developer’s Optimism

In management, project management, Software Development, Technology on 26 October 2009 at 6:30 am

In the project management world, “How much time do you think this will take?” is often a loaded question.  This is true not only when senior managers ask it of a project manager, but also when a project manager asks it of a developer or other team member.

As a former developer, my knee-jerk response when asked, “How much time…” was almost always to give the correct answer – but only under the best possible conditions and outcomes.  It was only when I began managing projects that I saw how often my estimates were low and began to understand why.  I’ve coined this under-estimation of work effort on an individual task or unit of work “Developer’s Optimism”, though it’s just as likely that any other member of a project team could make this mistake.  Developer’s Optimism has many causes, but it can be alleviated using a few basic cures, or adjustments, to the estimating process.  I’ll explore both the causes and the cures in today’s blog.

The Cause

Developer’s optimism is, on the surface, what the phrase states – a well-meaning software developer or other expert on the project team gives an estimate that is theoretically possible but highly unlikely.  In most cases, the developer may be quite experienced, yet their estimate only has a one in ten chance of being accurate.  Why?

It’s not that the developer wants to ruin your estimate, though you may begin to believe this is the case if you’ve had this occur with the same person repeatedly.  In nearly every case, he or she has made one of six mistakes in their estimating that you need to help them avoid, as described below:

  1. He did not include unit testing.  In most cases, a developer will not include unit testing in their estimate, particularly if you phrased the question as we did in the first paragraph.  In most developers’ minds, the work is done when the last line of code has been written on an alpha version, not when testing has been completed.  This common difference in dividing lines can throw off your estimate by 10-25% or more.
  2. She did not include re-work.  Perhaps all of your developers do perfect work and they deliver the first version of their code working exactly as your specifications are written. But then the client takes a look at the outcome during a prototyping session and says she’d like a few changes.  Depending upon your client’s level of scrutiny and detail, this could bias your estimate downward by 5-50%.  Sure, change requests can add hours to cover this, but isn’t it better to plan for a moderate amount of change in advance, rather than coming back and asking for more time and money mid-project?
  3. He estimated based on his own skills and not those of others.  Hoping for a better estimate, project managers nearly always go to only senior developers, designers and architects to benefit from their experience – big mistake!  A senior developer will almost always tell you how long it will take them to do the work themselves, not how long it will take the average developer.  Inevitably, when the time comes for the project manager to staff their team, the senior developer is not available and the estimate is unrealistic.  In this scenario, it could take as little as 10% longer and as much as 10 times as long (yes, doubting senior manager reading this now – 10 times!) for an inexperienced developer to complete the same work as an experienced one – particularly if the senior developer is not available to provide coaching or guidance.
  4. She estimated her portion of the work but did not include the work of others.  In this case, she’s spot-on with her estimate, but she’s the user interface (UI) developer and not the database developer.  So, if you use her estimate, the hours are gone and the UI looks beautiful, but the database queries/procedures aren’t written and the graphics haven’t been spruced up by the graphic artist.
  5. He did not include bug-fixing.  Depending upon how you estimate testing and quality assurance, you may or may not assume that the developer has included time to make bug-fixes in their estimates.  In most cases, he has not done this, because bug-fixing often occurs weeks or months after the code was written.  Depending upon the complexity of the task and experience of the developer, bug-fixing typically takes between 10% and 30% more time than completing the work itself.
  6. She did not account for complexity and risk.  In some cases, project managers never ask the critical follow-up question to “How long will this take?” — “How hard is this?”  When asked, we begin to understand how complex and risky the task actually is.  Particularly if a project involves many high-risk tasks, one of these tasks is certain to face many setbacks as it is developed, blowing our initial estimate out of the water.  For high-risk tasks, it’s not uncommon for the task to have a 25% chance of taking twice as long.  (A rough sketch of how quickly these omissions compound on a raw estimate follows this list.)
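As promised above, here is a minimal sketch of how the corrections compound; the multipliers are illustrative values drawn from the ranges discussed in the list, not fixed constants, so treat them as assumptions to tune for your own team.

```python
# How the omissions above compound on a raw "best case" estimate.
# The multipliers are illustrative values within the ranges discussed
# in this post; the compounding itself is the point, not the exact numbers.

raw_estimate_hours = 40  # the developer's "best possible conditions" answer

adjustments = {
    "unit testing (cause 1)":                 0.20,
    "client-driven re-work (cause 2)":        0.25,
    "average vs. senior developer (cause 3)": 0.50,
    "bug-fixing (cause 5)":                   0.20,
    "complexity and risk (cause 6)":          0.25,
}

adjusted_hours = raw_estimate_hours
for factor in adjustments.values():
    adjusted_hours *= (1 + factor)

print(f"Raw estimate:      {raw_estimate_hours} hours")
print(f"Adjusted estimate: {adjusted_hours:.0f} hours")
# A 40-hour answer becomes roughly 135 hours once the omissions are priced in.
```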

The Cure

Fortunately, project managers can cure their teams (and themselves) of Developer’s Optimism by taking roughly a half-dozen steps when making their estimates.  These include:

  1. Ask multiple individuals to estimate.  Instead of asking one senior developer, ask two or three developers of varying skill levels, then average their estimates.
  2. Use analogous estimating.  Analogous estimating uses historic project data to look back at the level of effort spent to complete similar tasks in prior projects. It then adjusts them based upon the differences between old and new to estimate time for the current task.  As a result, analogous estimating takes away some of the risk of downward bias that is so common when project estimates are developed.
  3. Make it clear what “this” is.  When asking the question, “How much time will THIS take?” make it very clear what THIS entails – are you assuming THIS includes unit testing, bug-fixing, re-work, the work of other team members, or none of the above?  If none of the above, be certain to include these in your own, separate estimate.
  4. Add “…an average developer…” to the question.  Instead of placing the developer in the position of assuming they will do the work, ask them to assume that they WON’T do the work by rephrasing, “How much time will this take for the average developer?”  In many cases, the developer can remember their skills when they just started or when they were a few years on the job, and will give you a better estimate.
  5. Assess complexity and risk.  For each task and the project as a whole, conduct a risk assessment and set aside a legitimate number of hours for risk contingency based on the probability of a specific approach to completing a task failing and the likely number of hours to develop an alternative if the primary fails.  Risk management is a key discipline in project management – one that impacts your estimation as much as it does your execution of the project.
  6. Use PERT.  Not the shampoo – many project managers have encountered the Program Evaluation and Review Technique as a way to estimate project duration and level of effort.  During the evaluation process, PERT asks individual estimators to provide three estimates for the same activity – one optimistic, one moderate (most likely) and one pessimistic.  PERT then averages these three values using the formula Expected time = (optimistic + 4 x moderate + pessimistic) / 6.  While I tend to weight the most likely estimate by 3 and divide by 5, because I think real project outcomes vary more widely than the standard weighting implies, I have found PERT’s estimating formula to be a good way to level out the bias.  (A quick worked example follows this list.)
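Here is the worked example promised above – a minimal sketch of both the classic PERT formula and the more pessimistic weighting I described; the hour values are purely illustrative.

```python
# Three-point (PERT) estimating, plus the heavier-tailed variant described
# above. The hour values are purely illustrative.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT expected value: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pessimistic_variant(optimistic, most_likely, pessimistic):
    """The variant described above: weight the most likely by 3, divide by 5."""
    return (optimistic + 3 * most_likely + pessimistic) / 5

o, m, p = 20, 40, 90  # hours: best case, most likely, worst case

print(f"Classic PERT estimate: {pert_estimate(o, m, p):.1f} hours")        # 45.0
print(f"My variant:            {pessimistic_variant(o, m, p):.1f} hours")  # 46.0
```

Because the variant gives the optimistic and pessimistic values relatively more weight, it drifts further from the most likely estimate whenever the worst case is much worse than the best case is better – which, in my experience, is usually the situation.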

An Optimistic Ending

Project estimation is almost always fraught with some degree of inaccuracy, so it’s not as though taking the steps above will result in perfect estimates.  But it is possible to have a reasonably accurate outcome – one that is consistently within plus or minus 10% for the entire project – on a vast majority of projects.  To do this, however, project managers must take methodical steps to reduce inaccuracy.  They must also ask their team members and their senior managers to take off the rose-tinted glasses before giving or speculating on an estimate.

Dead Babies, Dead-tired Staffers and “Leaving the Zone”: Exceeding the Envelope in Software Development

In E-Business, Ethics and ideology, management, Quality, Software Development, Technology on 12 October 2009 at 10:38 am

I know, I know.  “What do dead babies have to do with software development?” you say.  “Are you playing my heart-strings?”

Sensationalism being what it is, I have to admit that I couldn’t avoid leading with nearly everyone’s horror – dead babies.  Yet, there is a critical tie between today’s three attention-grabbing subjects and software development that makes this entry worth reading.  And, it has implications for how you manage your staff, software developers or otherwise, in the days to come.  Read on.

Leaving the Zone

It’s been more time than I care to admit, but during my senior year of college I learned what it means to be “in the zone” as well as what it’s like to leave it – painfully.  A track and cross country athlete at the NCAA Division 1 level, I was the first finisher on my team for my first four races, finishing in the top five for every race and posting two victories.

Old glories aside, it’s far more notable what happened next – my body crashed.  Though I’d trained hard with the rest of my team, posting a full summer of 80+ mile weeks and even two at 100+, I then took on the World when classes started, signing up for a full slate of five courses and tacking on a full-time job managing a political campaign in Montecito, California, fifteen minutes south of Santa Barbara.  I slept less than five hours a night and spent nearly all my time racing from one place to another, which is a sure recipe for a wrecked body.

Back then, I had no idea there were limits to the punishment my body could take, but I found out quickly.  After consistent finishes with the lead pack among hundreds of runners, I finished no better than the middle of the pack in my remaining four races.  Even worse, my team went from three victories in four races to middle-of-the-pack as well, in part because I was no longer pacing them to victory.  At the end of the season, the effects of over-work were readily apparent – an MRI showed three stress fractures, one in the femur, the body’s largest and most durable bone.  Clearly, I hadn’t recognized that I was “Leaving the Zone” by over-working myself; I only came to realize it afterward.

Dead-Tired Staffers

Amazingly, it took an enormous amount of self-abuse for me to finally start listening to the messages my body was sending me about being tired or over-worked, but the lesson has stayed with me since.  As I’ve spent more time at work leading people, I’ve noticed that the lesson also applies to the work world, where tight deadlines and high-pressure work can lead us as leaders to push for overtime again and again.

Consider your last marathon project with brutal deadlines and lots of overtime: Can you remember seeing these signs of over-work in your team members as they pushed themselves beyond their limits – irritability; an inability to concentrate; lower productivity; poor quality; at the extreme, negative productivity, when more work was thrown out than was gained?  Looking back, you’ve probably seen at least a few of these, and if you check your defect logs from the work produced during these times, you’ll notice a spike in the number of defects resulting from your “more overtime” decision.  But maybe you’re still denying that over-work will threaten the success of your projects, not to mention the long-term well-being of your team members, as you run a dedicated team of dead-tired staffers over the edge.

Dead Babies

If this is the case, you wouldn’t be the first manager I’ve met who doesn’t understand how over-work can actually slow your project down rather than speed it up.  Software developers, analysts, engineers and QA team members, these managers argue, are hardly putting in a hundred miles of physical exertion each and every week, though they may be putting in 60 or 70 hours of work.  These managers counter that mental exertion and sleep deprivation are not the same as physical exertion on the level of a college athlete. Or, they accept it in theory, until a project falls behind or a key deadline looms.

Though I found a number of excellent articles and blogs on the subject of software development and over-work, which I’ve posted at the bottom of this article, the best evidence of the adverse effects of sleeplessness, stress and over-work on our ability to use our minds productively actually comes from the world of parenting.  In the Washington Post article, “Fatal Distraction: Leaving a child in the back seat of a hot car…”, reporter Gene Weingarten moves beyond the emotion of a very sensitive subject and asks the telling question of what was going on in the lives of parents who leave their children in cars on hot, sweltering days.  The answer?  Stress, sleeplessness, over-work and half-functioning brains – in many cases brought on by us, as managers.

“The human brain is a magnificent but jury-rigged device,” writes Weingarten, citing David Diamond, a professor of molecular physiology who studies the brain.  (Weingarten and Diamond deserve all of the credit for this research, but I’ll paraphrase.)  A sophisticated layer – the one that enables us to work creatively and think critically – functions on top of a “junk heap” of basal ganglia we inherited from lower species as we evolved.  When we over-work our bodies, the sophisticated layer shuts down and the basal ganglia take over, leaving us as stupid as an average lizard.  Routine tasks are possible, like eating or driving to work, but changes in routine or critical-thinking tasks are extremely difficult.  Even the most important people in our lives are forgotten when fatigue and stress are applied, as Weingarten’s article shows.

If an otherwise dutiful, caring parent can’t remember their own child when sufficiently fatigued, what is the likelihood we’ll get something better than a dumb lizard from our software development team when we push them above sixty hours per week again and again?  And when they’re finished, how high will their quality of work actually be?

So, when considering another week of over-time, think twice.  Sometimes, it’s better to just send the team home.

—–

Gene Weingarten’s Washington Post article can be found here: http://www.washingtonpost.com/wp-dyn/content/article/2009/02/27/AR2009022701549.html?wpisrc=newsletter&sid=ST2009030602446

Other good articles on overtime and software development can be found here:

http://xprogramming.com/xpmag/jatsustainablepace/

http://www.uncommonsenseforsoftware.com/2006/06/planned_overtim.html

http://www.systems-thinking.org/prjsys/prjsys.htm

http://www.caffeinatedcoder.com/the-case-against-overtime/

Challenges Managing in the Knowledge Economy? The Answer is Beneath You

In Business, management, Technology on 10 August 2009 at 8:45 am

As children, many of us grew up with a vision of the manager as “the boss” – a key source of authority in the business who knew the answers, told others what to do and led with a firm hand.  As adults, many of us move into management and try to live that childhood vision – with disastrous consequences.  Why doesn’t the “boss as manager” model work for us today? What’s changed?  Surprisingly, the answer is beneath you.

To understand what this means, consider, first, the era when management as a profession came into vogue – the industrial age.  The emphasis had shifted away from the earlier craftsman-apprentice model, in which individuals passed on knowledge one-on-one.  Instead, the secret to success in the industrial age was mass production, which required the standardization of business processes and the simplification of tasks so that unskilled workers could more easily complete the work.

A manager’s primary responsibilities were to coordinate the efforts of large groups of people, so a highly regimented top-down management style fit best. The 24-hour day was broken down into two or three shifts, people were assigned to shifts and they generally completed their assigned work. During this age, a manager’s biggest challenges were employees who didn’t show up on time, who left early or who didn’t complete their fair share of work.  A firm hand was required to prevent abuse and keep productivity up; the “manager as boss” excelled in this world and this management approach flourished.

On to the knowledge age, where knowledge is the key tool used to create products as well as the product itself.  For the first time, many products — semi-conductors, computer hardware, software, even modern phones — rely heavily on knowledge as the key input to their creation.  In the knowledge economy, the highest single cost in creating the product is the labor of experts – not manufacturing labor or raw materials.  In turn, making the most of the knowledge of these experts is the key to success – not necessarily making the most of their time.

With the shift from industrial economy to the knowledge economy, the storehouse of knowledge and authority are no longer in the same hands.  In the industrial age, the boss knew best; in the knowledge age, you, as a manager, still have the authority, but the knowledge is beneath you, in the hands of the experts.  To adjust for this, a change in management style is needed.

Based on my conversations with respected managers in the age of knowledge and my own experience, here’s how to succeed as a manager in the knowledge economy:

  1. Ask the experts.  This sounds simple and straightforward, but it’s rarely followed.  Because many of us are former experts who moved into management, we consider ourselves experts still.  Yet, how long does it take before our expertise becomes out-dated when we don’t use it?  As little as six months? As much as a couple of years?  Consider this: If your subordinate knows all of the features of the latest version of your software, but you know all the features of the last one, whose knowledge is more valuable?
  2. Facilitate – don’t order.  As managers, we all know that the experts around us are extremely bright – in many cases brighter than we are.  How condescending it must seem, then, when we managers order our team members to execute our instructions.  Instead, identify the problem with them together, then assist in developing a methodical approach for solving it.  In the process, it will become clear which team members should tackle each step in solving the problem.  No orders necessary.
  3. Coordinate – don’t control.  As former experts, we’re acutely aware of the challenge of being “stuck in the weeds” – that desire to keep your head down and focused on solving the problem in front of you.  As much as that tendency toward isolation grips our fellow experts, we should coordinate their efforts and encourage them to work together as a team.  Coordinating the team means bringing them together, discussing key challenges and asking one expert to help another to achieve breakthroughs when addressing a challenging problem.
  4. Serve as a knowledge bridge.  In many cases, the specialized knowledge of one expert on your team is very different from the specialized knowledge of another.  For example, one person may specialize in database design, while another specializes in user interface (screen) development.  Because of this, they are often working on very different tasks, even though one person’s knowledge may be needed to solve another person’s problems. As the manager of this team, we know what each team member is tackling and we should know whether they have solved past problems.  It’s our job to connect the dots across the work of our teams, to point out patterns in problems that we see, and to bring together the experts to share their relevant knowledge, solving each other’s problems more quickly and efficiently.
  5. Set challenging (but realistic) goals.  Knowledge experts like challenges, so work with them to set short term and long term goals that are SMART – Specific, Measurable, Attainable, Relevant and Time-specific.  As part of the process, be certain to set goals to be a little more challenging than the person believes is possible – but not so difficult that the person ignores the goal because he or she thinks it’s impossible.
  6. Value the individual.  In the industrial age, the loss of an individual team member was disappointing, but it wasn’t likely to cripple your business, your division or your projects.  In the knowledge age, there are people who are so essential that their departure could force the business to crumble.  While I would argue that it has always been important to treat people well, in the knowledge age, it’s even more important to treat each individual with respect and consideration.

——-

In many respects, the knowledge economy has made managing more challenging.  In many management positions, “people” skills are more important than analytical abilities.  Even more challenging, your position as manager often makes you more expendable than your subordinates.  Combined, these shifts make it critical to your success as a manager to look beneath you for the knowledge and expertise you and your organization need to succeed.

“But They Said They Understood…”: A Common Mistake with Indian Off-shore Teams

In Business, Culture, E-Business, Ethics and ideology, management, off-shore, Technology on 6 February 2009 at 1:23 pm

If you’re a long-time U.S. IT Manager, you’ve probably already led international teams composed of individuals from all over the globe.  I was fortunate, for example, to have one project with team members from England, Germany, Australia, Singapore, India and all four continental U.S. time zones.  While the mix of cultures and talents can cause conflicts, once the team gels, the results can be overwhelmingly positive.  It’s amazing to see what a team working nearly 24X7 can do when you lead it properly.

One mistake I’ve seen made by U.S. IT managers involves managing Indian off-shore teams, in particular, and has been repeated at three different client sites in the last five years, so it’s worth a good blog entry.  First I’ll explain the scenario and then I’ll explain why it is legitimate – NOT bigoted – to point out this common mistake so it can be avoided.

The Mistake

Here’s the situation: you’re running a newly formed off-shore team and you’ve just assigned them a particular set of tasks that make up a deliverable. You ask, in front of the group or over a conference call, “Do you have any questions?”  When no questions are heard, you move merrily on and end the meeting, continuing on with your week’s work until you have your next meeting with your team.

“Is the work done?” you ask.  No.

“How much progress did you make?” None.

“Is it not explained clearly?” Yes, comes a response. Then, silence.

It’s at this point that we leaders usually begin our rant that it is not acceptable to complete nothing during a given week.  We consider terminating people, canceling our contract with the entire team, or trying to recoup costs now that the team is one week late.  As much as all of these actions would be acceptable in our culture given the outcome, this is neither the way to deal with the problem, nor is it in the long-term best interest of your company.

The Cause

If you thought the problem was with literal understanding of your words, it’s possible, but unlikely.  Most Indians receive a healthy dose of English throughout their education and can understand it even if their pronunciation doesn’t sound like a Hollywood movie. But if you’ve figured out that the situation above occurred because of cultural differences, you’ve come to a more likely conclusion, though it will help to understand it in more detail than to merely say, “it’s cultural”.  Enter Geert Hofstede, a Dutch researcher and author of “Culture’s Consequences” and “Cultures and Organizations: Software of the Mind”, the latter of which can be found by googling the ISBN 9780071439596 or visiting its page on Amazon.com.

Mr. Hofstede and his son Gert Jan studied different cultures throughout the World but within the same company, IBM, and determined that there are five key differences in World cultures that can be scored across a continuum.

Individualism v. Collectivism: The extent to which a culture emphasizes speaking up for oneself and taking a unique path in life versus belonging to a group and benefiting from group affiliation.

Masculinity v. Femininity: The degree of emphasis on traditional Western male or female roles, such as assertiveness in males and subservience in females.  (If you don’t like the way I’ve phrased this ladies, I’m sorry. I’ve done my best to make it accurate and fair without losing the message. Alternate ideas on how to phrase this are appreciated).

Power Distance: Power distance refers to the social distance placed between people in authority and those who are not.  Because authority is relative (I have a supervisor, but I also supervise others), you can expect a middle-manager to behave toward their own manager much as their subordinates behave toward them.  As one would assume, the greater the power distance in a culture, the more deference and subservience subordinates display to their superiors; the lesser the power distance, the less deference displayed.

Uncertainty Avoidance: The desire or need to avoid uncertainty in relationships or dealings with others.  Cultures that try to avoid uncertainty have lots of rules.

Time Horizon: Some cultures have a short-term time orientation, while others have a long-term time horizon.  As an example, business leaders in the U.S. tend to manage to maximize short term, quarterly profits, while those in Japan and China manage across lifetimes and generations.

If we compare scores between the U.S. and India, we can better understand (or at least speculate) about why our mistake occurred.  While there are similarities in masculinity and uncertainty avoidance scores between the U.S. and Indian cultures, there are dramatic differences in power distance, individualism and time horizon between us.  The specific scores are here, but it’s important in our situation to note that Indian subordinates are far less likely to speak up when talking to a person with more authority and are far less likely to contradict or challenge someone in front of a group.   So, when you asked, “Are there any questions?” it was pretty unlikely you’d hear any from your team – even if they had them.

It’s probably good for me to note, as well, that these are generalizations. Just as all Americans are different, this is equally true with Indians, so you may well see different behavior from your team members.  The Hofstedes describe the norm within a culture, not the exception.

A Better Response

Having managed over a dozen projects with Indian development and quality assurance teams, I have found that there are better ways to close the “Understanding Gap” and prevent it from occurring in the first place.

  1. The confirmation question. In our situation, we asked, “Does any one have any questions?” to the group as a whole.  Instead, ask each individual slightly different questions, phrased in a way that confirms they understand specific elements of the task.  As an example, one might ask, “<Name here>, I’m a little uncertain how I’d complete your portion of the work, so maybe you can help me understand. How were you thinking you’d test the <insert name> functionality?”  Or, “You’re most likely to find building the <insert name> component challenging.  Have you thought about the steps involved?”  This approach not only confirms the person’s understanding, it results in better design because the person asked may have a better approach than you do (unless you have a monopoly on brilliance?).
  2. The one-on-one. After asking confirmation questions, if you find one or two individuals struggling, schedule a one-on-one to go through their work and answer their questions.  In a one-on-one, they are more likely to feel comfortable asking pointed questions, and may even propose a better way to complete the work.
  3. The follow-up call. This one is simple.  If you’ve assigned a task, don’t wait one week to check on progress.  Check back with the team at least every other day to make sure they’re making progress and understand what you’ve asked.  Over time, this will be needed less and less, but initially, the follow-up call is a true time-saver.
  4. The “you’re among family” reminder. Regardless of culture, everyone has the fear that a “stupid” question or a mistake will threaten their job.  In some cases, the fear is warranted.  Particularly with teams that have just formed, remind the team members that “they are among family” when speaking to one another and that team members are here to help each other.  Even more important than saying, “you’re among family” is living up to that statement. Do not brow-beat subordinates for small mistakes and do not cavalierly fire people because of a single error.  If you do, you’ll find the two-way channel you need to effectively lead a team is suddenly closed.

Possibly, you’re reading this article before you’ve managed your first global, off-shore or Indian team, so it’s been a good primer.  But there’s far more to know about the subject than can be posted in a single blog entry.  Though it’s very academic in the way it’s written, I encourage you to buy and read Hofstede’s book, referring back to the cultural dimensions the book provides on graphs so that you better understand each team member’s culture before you try to relate to them using a purely American mindset.  I’d also use the following links for quick reference once you’ve read the book through:

http://www.geert-hofstede.com/

http://www.geert-hofstede.com/hofstede_dimensions.php

Doing so could save your company thousands, if not millions of dollars, keep your projects running smoothly and – most importantly – help you to build a harmonious work environment where people look forward to each and every day.  After all, isn’t that what keeps us from burying our heads in the pillow and hitting the snooze button twelve times?

Does “Process Improvement” Kill Creativity?

In Auto Industry, Business, Ethics and ideology, Manufacturing, Quality, Technology on 23 January 2009 at 1:58 pm

Early in my career, ISO-9000 was just coming into vogue and my employer, Manpower, had earned the honor of being called ISO-9000 certified.  To say the least, the ISO-9000 concept was a little irritating to a young, creative type:  Processes are documented, standardized, and followed without deviation, because deviation yields an inconsistent outcome and inconsistent quality.  Even worse, ISO-9000 principles were being applied by Manpower not to manufacturing but to services, where the human factor was so important.  While people certainly admire the fact that a Hershey bar has the same consistently delicious taste, would they feel the same if the Service Rep at a Manpower office answered the phone in an identical manner every time, smiled at visitors in the identical manner and greeted them with the same Mr. or Ms. in the same robotic way?  Somehow, ISO-9000 seemed to be forcing the soul out of services and driving creativity out of the American worker.  This would not stand.

Fast forward nearly twenty years and I am now the devil I once cursed.  A leader of IT endeavors of all kinds, I regularly propose improvements to and then standardize processes for the company and clients I serve. To-be diagrams evolve into Standard Operating Procedures (SOPs), guidelines or end-user documentation. Similarly, systems are built with virtual guard rails that keep users from driving off the side of a digital cliff, enforcing the business rules and guidelines that are at times irritating and often restrictive, forcing workers to not only perform the same task repeatedly, but forcing them to do it in exactly the same way for sake of consistency.

Staring the enemy in the eye every time I pass a mirror, I think about what I’ve done. With such limits and constraints, how can creativity establish a foothold, much less flourish?  Have I not killed the entrepreneurial spirit of co-workers and end-users, alike?  With all these constraints, how many good ideas have been stifled, delayed or killed? Has the work I’ve done under the banner “Process Improvement” standardized our work to the point that we’re all nothing more than automatons?

A big believer in creativity and diverse thinking, I know that the World’s greatest innovations come from ignoring conventional wisdom and trying something a different way, so these questions are not trivial.  I think my answer, however, comes from two disparate figures:  Geoffrey A. Moore and Kiichiro Toyoda.

For those of you who don’t know Moore, he’s a business geek’s ultimate hero — the man behind the technology adoption lifecycle, Crossing the Chasm, and Dealing with Darwin.  It is in Dealing with Darwin that Moore introduces the concept of reallocating business resources from context to core.  Context is all that work done by employees that does NOT separate your business from its competitors.  Core represents all work that is critical to delivering your products or services uniquely; core helps to separate you from your competitors and is the leading driver of innovation.  According to Moore, businesses spend far too much of their time (80%) on context activities and far too little (20%) on the core.

Let’s apply this to process improvement and process standardization.  These exercises provide a window for innovation, then they lock down a process so that it yields consistent results.  They also reduce a business’ emphasis on context activities by removing unnecessary steps and automating once-manual processes.  So, more time can be spent on the core, where a business can differentiate itself, developing new products or services with the creative mind.

Kiichiro Toyoda had a similar mindset nearly fifty years earlier when he developed the Kaizen philosophy of continuous improvement and the lean manufacturing concept targeting the elimination of waste.   Founder of Toyota Motor Corp, Toyoda had a keen eye that focused human efforts on eliminating waste and improving processes rather than perpetually repeating them without question.  Combined, Kaizen and lean are key reasons why Toyota leads in sales and product quality and why Toyota employees are among the happiest in the industry.

So, considering Toyoda and Moore when reflecting upon my past sins in the areas of process improvement and standardization, I’ve developed a few principles to keep in mind as we standardize:

(1) Wherever possible and cost-effective, automate.  There’s no sense in having people do work that a machine or computer can do faster and more consistently, especially when this is sure to dull the human capacity for innovation.  Instead, people should monitor repetitive processes, not do them.

(2) Involve workers and end-users in innovation.  Your best ideas often come from the line-worker, the front desk staff or a computer system’s end-users.  This also gives them an opportunity to flex their mental muscles.

(3) Focus your employees on creative efforts inside the core.  If you have people who are spending their time trying to marginally improve legacy products or services, redirect them to activities that create new products or radically transform current ones — efforts that will benefit most from the human capacity toward innovation.

(4) Leave room for creativity and individuality.  Where product quality won’t suffer and humans are involved in production, leave room for creativity and individuality.   This one is the hardest to follow, because we know that a consistent product is best created by a consistent process.  But, avoiding excessive detail in a process leaves room for grass-roots innovation and keeps the human mind engaged.

(5) Build a World that is Human-Centric.  Human beings are inherently creative and intuitive:  We move beyond patterns to think of completely different ways to solve a problem, create art or experience life.  All of the products, services and processes that we create need to remain human-centric, recognizing that they exist for the benefit of humans and to add value to the human experience.

Looking back at my list, I’m not fully satisfied that I’ve slayed the demon who kills creativity in the name of process and quality. Nor am I certain that there’s an easy way to balance the need for high quality with the need for innovation and human creativity.  But, at least I have a set of principles to follow to measure my progress.

As Gun-Shy Investors Turn Away from Traditional Markets, Banks Face Newest Threat

In Banking, Business, Current Events, E-Business, Finance, Peer-to-Peer Lending, Technology, Trends, United States on 16 January 2009 at 4:55 pm

Earlier today, Bank of America and Citigroup posted huge losses, then stood in line with a tin cup held out for more government loans via TARP (16 January 2009, Washington Post, “Bank of America, Citigroup Post Major Losses”).  It’s clear that, despite an injection of $350 billion, the U.S. banking industry is still reeling nearly four months into the credit crisis that has brought down some of the World’s largest banks and investment houses, leaving carnage in its wake.

Yet, at the same time these titans deal with “toxic” loans and absorbing the remains of their recently departed siblings, another threat grows beneath their giant footsteps and between their toes – Peer-to-Peer Lending.  Fueled by the sour credit market, distrust in traditional banks and fear of continued losses in the stock or bond market, once-wary investors are taking their dollars to upstarts prosper.com, lendingclub.com and even virginmoney.com where they can earn superior returns, diversify their investments and know specifically where their money is going.  Though resources in traditional banks are best directed toward immediate financial crises and folding in the business of recently acquired competitors, it’s time for traditional banks to start planning for the coming onslaught from peer-to-peer.

In only a few years, peer-to-peer lending has sprouted from the more-proven micro-lending practiced in developing countries and pioneered by Dr. Muhammed Yunus and Grameen Bank in 1983 (16 January 2009, http://en.wikipedia.org/wiki/Muhammed_Yunus).  Realizing that the loans needed by low-income individuals were far too small, uncollateralized and therefore “too risky” for traditional banks, he began making many small loans via Grameen to under-privileged entrepreneurs, who took the meager sums and made sizable profits, yielding healthy returns for Grameen.  Not specifically interested in making money, Yunus saw how the concept of pooling small sums of money from many lenders to make larger loans, or splitting larger sums into many smaller loans, had an enormous positive impact on the poor.  This became his business model for Grameen and other micro-lenders like it.

Operated much like Grameen’s micro-lending, peer-to-peer lenders match lenders with borrowers on a relatively small scale – often no more than $25,000 for an entire loan and typically in the $5,000-$15,000 range.  Borrowers meet minimum standards for credit-worthiness and credibility, then they post information about their requested loan online, stating how the money will be used and how much they need.  For most peer-to-peer sites, borrowers approve each lender’s loan offer and terms until the target loan amount is met.

For their part, lenders can lend out as little as $25 to a specific borrower and spread their money around as they deem fit, minimizing the risk that a single borrower’s default will cause a huge personal financial loss.  Most often, payments are received on a monthly basis and doled out proportionately to each of the lenders until the funds are repaid.

Peer-to-peer lending sites like LendingClub.com and Prosper.com make their money in a few ways, though the process isn’t entirely consistent between them:  They collect a 1-to-3 point service fee on the loan, much like the spread between the rate banks charge their borrowers and the Federal Funds rate at which they borrow.  They may also collect an on-going loan maintenance fee, late payment fees and collection fees if the borrower defaults.  In return, the peer-to-peer brokers screen the borrowers, process the loan, capture legal signatures and may even assist with collections if the borrowers default.

None of this sounds very threatening to traditional banks as of 2009.  The current market for peer-to-peer lending is about $100 billion (www.business-standard.com) and is dwarfed by the total U.S. market of $2.56 trillion (Federal Reserve, http://www.federalreserve.gov/releases/g19/Current/).  But, consider how the credit crisis has created a fertile environment for peer-to-peer lending:

(1)    The spread on a secured consumer loan for a car is 6.88% (the 7.13% rate listed on Bankrate.com minus the 0.25% Federal Funds rate), far more than the 1 to 3% charged at peer-to-peer lenders; a quick calculation of this spread follows the list. Though rates are higher on unsecured loans in both environments, the difference in spreads is even more dramatic.  (www.bankrate.com and http://www.federalreserve.gov.)
(2)    The yield on a 5-year government bond is 1.47% while the yield on a loan with a similar commitment, an auto loan, can yield 6-9% for the lender – a 4-to-7 point difference (http://www.bloomberg.com/markets/rates/index.html).  Certainly, risk is a partial factor in the large spread, but the other factor is likely the profit margins of banks.
(3)    Traditional lenders are turning away borrowers on all types of loans at record rates in efforts to shore up their portfolios and reduce risk.  Consumer credit dropped in December 2008 for a third straight month and automakers are citing the credit crunch as a reason car sales were off  by one-third between ’07 and ’08 (http://online.wsj.com/mdc/public/page/2_3022-autosales.html).
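To make point (1) concrete, here is the arithmetic as a small sketch; the rates are the early-2009 figures quoted above and have certainly moved since.

```python
# Spread comparison using the early-2009 figures quoted above:
# a bank's gross spread on a secured car loan versus the 1-3 point
# service fee a peer-to-peer site charges for brokering a loan.

car_loan_rate = 0.0713    # secured auto loan rate (Bankrate.com, Jan 2009)
fed_funds_rate = 0.0025   # Federal Funds rate at the time

bank_spread = car_loan_rate - fed_funds_rate
p2p_fee_low, p2p_fee_high = 0.01, 0.03   # typical peer-to-peer service fee

print(f"Bank spread on a car loan: {bank_spread:.2%}")                     # 6.88%
print(f"Peer-to-peer service fee:  {p2p_fee_low:.0%}-{p2p_fee_high:.0%}")  # 1%-3%
```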

Traditional banks have some time to respond to the threat posed by peer-to-peer lending sites, but it can be measured in months and not decades.  They are unlikely to be able to compete with them via traditional methods, because the cost of staff, buildings and infrastructure in the brick-and-mortar world is simply too high.  But they do have viable options:

(1)    Ignore peer-to-peer lending and hope it fades away – a dangerous way to deal with a tech-savvy threat.
(2)    Acquire a peer-to-peer lending site once the process is refined and while market penetration is still low, reaping the largest gains by increasing market penetration.  This can be dicey, especially if the market potential is recognized early, driving up the price.
(3)    Develop their own peer-to-peer lending capabilities to compete with the upstarts, keeping one of the smaller players from becoming the next “MySpace” or “YouTube” that fetches an exorbitant price on the open market.

Regardless of the path chosen by traditional banks, their spreads are likely to drop, forcing their business practices to change as well.  A few will under-assess the threat and act too late, and it will bring them down.  This does not bode well for the people who work for the titans of banking; they are likely to see another assault on their jobs, just after the “credit crisis” has already dramatically cut their ranks.

Why I dumped Bill for Steve (first betrayal, then new love)

In Technology on 2 January 2009 at 6:48 pm

There are a few times in your life you have to make big decisions that you KNOW will dramatically affect the rest of your life. You make them with some trepidation, you make them with the promise of a brighter future, but you make them anyway.  One was the decision to pack everything I owned into my Jeep Wrangler and head East to a new life in Maryland back in 1994 (good decision). Another was asking my wife to marry me in ’95 (great decision, don’t anybody remind her I got the better end of the deal), and yet another was to move the family for a promising job thousands of miles away (good decision, bad outcome).  Though not quite as significant in my life as those three other decisions, I made another monumental decision two months ago that may well have just as much impact.

[For those of you looking for juicy gossip on gay love, I’d suggest visiting another blog.  As misleading as the title might be, this blog entry isn’t about a romantic relationship, sorry.  If, however, you’re considering a switch from Microsoft to Apple, or you’re an Apple-head, read on.]

Okay, back to the story and the big decision. [I did tell you this blog is called “Nash Ramblings”, didn’t I? You didn’t expect something short and to-the-point, did you?]

I was just starting a new job and getting settled into my position, when my employer asked me a simple question – “Bill [Gates] or Steve [Jobs]? PC or Apple?” Now, this may seem like a trivial question, much like “McDonald’s or Burger King?”, “coffee or tea?”, “thumb tacks or push pins?”.  And sure, I can hear your snickers traveling over the wire right now, vibrating my keypad and jiggling the power cord so much the pretty little power indicator is blinking like Rudolph’s nose.  “Who cares what kind of computer you use” and “how pathetic is your life that this decision could be life-changing?”, you chuckle.  But for me, a thirty-year veteran of computing with over two decades of heavy desk time with IBM/HP/Toshiba PC’s and Microsoft operating systems, this question held the potential to rock the very foundation of my World.  Would I, a Microsoft devotee, actually consider a move to Apple? Why was I hesitating to blurt out “I’ll take the PC” to continue in my comfortable, cozy computing cocoon and instead thinking, “I’d like the Apple”?

Here’s why…

  1. Vista. I know this is already a public relations disaster for Microsoft, but I’d like to echo my hatred of this operating system so that the folks at Microsoft can realize how badly they messed this one up. Let’s start with those stupid “permit or deny” messages that keep me from moving a file from one folder to another – my own files, mind you, not another user’s – without saying, “permit”.  Then, let’s try to figure out why you had to re-label control panel utilities, change printer configurations and modify application navigation.  Was there anything wrong with the phrase “Add/Remove Programs” that had existed since Windows 95, if not earlier?  Did any of your focus groups include a single, solitary experienced user?  Did you really have to scrap 90% of my historic knowledge for a UI that benefits the 5% of your user base completely unfamiliar with Windows XP?  As you can see, I hate Vista, so a switch to another OS – any other OS – was possible.
  2. Office 2007 for Windows. Preferring to create a two-headed hydra, Microsoft not only introduced Vista, they also published the drivel known as Office 2007.  Gone are the pleasant menus and toolbars I had memorized since Word 6.0 in the early ’90s, and in their place is a series of cryptic icons that fail to categorize tools in any logical fashion.  What I could once do in a couple of keystrokes or a few mouse clicks took me up to 5 minutes in Word 2007 as I looked up how to add a page break, adjust a section heading, insert a table of contents or insert a table. No longer could I hit “Alt-F-S” to save a file or “Alt-T-O” to show the options menu.  Again, I must ask Microsoft, where was the logic in stripping your product of its most elegant feature – the Microsoft menu bar?  Didn’t any experienced users complain that this was madness before the product shipped?
  3. Apple’s move to Intel chips. A few years back, Apple took the bold step of abandoning the PowerPC chips built by Motorola and IBM in favor of Intel.  To non-techies, this was not a big deal and made little sense, because Motorola is a brand name that we all know and respect.  But to Apple, the PowerPC design was limiting their ability to make software compatible with PCs, not to mention having a similar effect on their software-development partners.  With the move to Intel chips, it’s much easier to build operating systems and applications that port to other platforms while using suppliers partial to Intel-oriented technology, making the costs of building and supporting the Apple product lines much lower for Apple and the vendors who work with them.  It also signaled Apple’s willingness to go head to head with IBM, HP and Dell rather than operate on the fringes of the computing World.
  4. Compatible files and compatible applications. In the past five years, since the move to Intel chips and OS X (version 10, a Unix-like operating system), file compatibility and application compatibility have improved dramatically.  I can now open my MS Word files on my Mac, edit them using iWork Pages or Open Office and send them back to a friend using a PC to read and update. I no longer have to wonder if they’ll be able to read my files or edit them, because software vendors have worked hard to make them compatible. 90% of the files I create can smoothly transition back and forth, and 90% of the features I cared about in Microsoft Office are available from one product or another on the Mac.
  5. Virtual Machines. About ten years ago, software developers created virtual-machine software for running Microsoft Windows on the old Mac OS that was slow, clunky and poorly integrated with peripheral devices like printers and external drives.  I can’t remember the name, but I tried it once on my Mom’s Macintosh and declared it inadequate.  Since the advent of VMware’s Fusion and similar products, the virtual machine on the Mac has come of age, and it’s now possible for me to run the half-dozen Windows-only applications essential for my job.  Microsoft Project not available on the Mac? No problem – boot Windows XP (yes, it’s licensed and legitimate) from VMware, open the file stored on my Mac, update it, print it as a PDF and distribute it to the project team.  Too cheap to replace the copy of Adobe Photoshop I bought four years ago for my PC?  No problem – boot Windows XP, update the graphic I need for a family greeting card, save it as a JPEG and use it in Mac OS.  Though they’re far from secure and Mac purists are probably upset that I still need to use Windows on occasion, virtual machines make it possible for me to switch to Apple, use the tools required for my job and keep my library of old software products that still work just fine.
  6. Changing work responsibilities. A half-decade ago, I was managing people and leading projects, but I was also still coding.  Back then, it was important to know that the code I wrote would work smoothly on 90% of the desktop machines on the planet – the Microsoft population.  For those of you who still code, the prospect of testing and re-testing on multiple platforms is almost as irritating as testing in multiple browsers, so the thought of adding yet another complication to the software development process was unappealing.  Now, I’m older and I don’t code any more, so I simply don’t worry about whether my code will port from one machine to another seamlessly.  And, with the addition of Java to the Mac environment along with the embracing of W3C standards in web browsers, I’d worry a heck of a lot less if I still did code.
  7. Helpful people. I’d be lying if I implied in this article that my transition to the Mac was entirely trouble-free; it took me about two weeks to become competent enough with the Apple to rival my abilities in Windows.  Along the way, my boss tolerated the “how the heck do I…on the Mac” questions while he patiently helped me adjust.  He easily could’ve thought me incompetent and fired me. Similarly, a friend of mine who works at Apple (oooooh, how convenient), Scott H, helped me map Windows applications to their Mac-based counterparts (“Windows Explorer” is now “Finder”, “The Dock” replaces the “Start” menu, “Preview” can open Acrobat PDFs, etc.).  Without their help and patience, I’m sure I still would have made the move, but it probably would’ve taken a week or two more.  Thanks, gents.
  8. The Intuitive UI. Thus far, most of my reasons for the switch to Mac have been anti-Microsoft, but there’s one BIG reason to make the switch to Mac rather than just the switch from Microsoft – the easy-to-understand user interface (UI).  Apple did something that Microsoft failed to do with Vista and Office ’07 – it created and maintained the intuitive user interface that has existed on the Apple since the first box-shaped Macintosh rolled out in 1984.  Sure, enhancements have been made, but they’re additions more than radical changes.  The Apple menu still appears in the top left corner to navigate the entire system, and the main desktop can be just as cluttered or organized as the first hard-drive-equipped Macs of years past.

So, it’s now been two-plus months and I’m a happy Mac user.  There are a few features I miss from Windows, but not many, and while it’s possible I’ll jump back to the Windows OS some day, the seas of change will have to be equally stormy to push me off of the Apple and on to another.  To Microsoft and their founder, Bill Gates, I close with the message, “You started well, but handed the keys off to the wrong driver(s).  Fix your mess, or we users will fix it for you.”  To Apple and their founder, Steve Jobs, I say, “Nice operating system, beautiful computer.  And, Steve – don’t even THINK about dying.  Who in the World will lead Apple but you?”