Crafting a Product Vision

In his book, “Crossing the Chasm,” Geoffrey Moore offers a template of sorts for crafting a product vision:

For (target customer)

Who (statement of the need or opportunity)

The (product name) is a (product category)

That (key benefit, problem-solving capability, or compelling reason to buy)

Unlike (primary competitive alternative, internal or external)

Our product/solution (statement of primary differentiation or key feature set)

To help wire this in, try the following guided exercise. Consider this product vision statement for a fictitious software program, Checkwriter 1.0:

For the bill-paying member of the family who also uses a home PC

Who is tired of filling out the same old checks month after month

Checkwriter is a home finance program for the PC

That automatically creates and tracks all your check-writing.

Unlike Managing Your Money, a financial analysis package,

Our product/solution is optimized specifically for home bill-paying.

Ask the team to raise a hand when an item on the following list of potential features does not fit the product/solution vision, and to keep it raised until they hear an item they feel does fit. In effect, the team is being asked, “At what point does the feature list begin to move outside the boundaries suggested by the product vision?” Most hands should go up around item #4 or #5. All hands should be up by #9. A facilitated discussion about the transition between “fits vision” and “doesn’t fit vision” is often quite effective after this brief exercise.

  1. Logon to bank checking account
  2. Synchronize checking data
  3. Generate reconciliation reports
  4. Send and receive email
  5. Create and manage personal budget
  6. Manage customer contacts
  7. Display tutorial videos
  8. Edit videos
  9. Display the local weather forecast for the next 5 days

It should be clear that one or more of the later items on the list do not belong in Checkwriter 1.0. This is how product visions work: they provide a filter through which potential features can be run during the life of the project to determine whether they are inside or outside the project’s scope of work. As powerful as this is, the product vision will only catch the larger features that threaten the project’s work scope. To catch the finer-grained threats of scope creep, a product road map needs to be defined by the product owner.
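
For teams that like to make the filter tangible, here is a minimal sketch of the idea in code. Everything in it – the keyword heuristic, the scope terms, the feature strings – is hypothetical and only illustrates the mechanics; in practice the “filter” is the facilitated conversation, not a script.

```python
# Toy illustration: running candidate features through a product vision "filter."
# The keyword heuristic and scope terms are made up for illustration only.

VISION_SCOPE = {"check", "checking", "bill", "payment", "reconciliation", "budget"}

def fits_vision(feature: str) -> bool:
    """True if the feature description overlaps the vision's scope keywords."""
    words = {w.strip(".,").lower() for w in feature.split()}
    return bool(words & VISION_SCOPE)

candidates = [
    "Logon to bank checking account",
    "Generate reconciliation reports",
    "Send and receive email",
    "Create and manage personal budget",
    "Edit videos",
]

for feature in candidates:
    verdict = "fits vision" if fits_vision(feature) else "outside vision"
    print(f"{verdict:15} | {feature}")
```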

Improving the Signal to Noise Ratio – Revisited

Additional thoughts about signals and noise that have been rattling around in my brain since first posting on this topic.

At the risk of becoming too ethereal about all this: before there is signal and before there is noise, there is data. Cold, harsh, cruelly indifferent data. It is only after raw data encounters some sort of filter or boundary – something that triggers an evaluation of what that data means, or whether it is relevant to whoever is on the other side of the filter – that it begins to be characterized as “signal” or “noise.”

Since we’re talking about humans in this series of posts, that filter is an amazingly complex system built from both physiological and psychological elements. The small amount of physical data that hits our senses and actually makes it to our brains is then filtered by beliefs, values, biases, attitudes, emotions, and those pesky unicorns that can’t seem to stop talking while I’m trying to think! It’s after all this processing that data has now been sorted according to “signal” (what’s relevant) and “noise” (what’s irrelevant) for any particular individual. Our individual systems of filters impart value judgments on the data such that each of us, essentially, creates “signal” and “noise” from the raw data.

That’s a long-winded way to say:

data -> [filter] -> signal, noise

Now apply this to everyone on the planet.

data -> [filter 1] -> signal 1, noise 1

data -> [filter 2] -> signal 2, noise 2

data -> [filter n] -> signal n, noise n
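
Here is a minimal sketch of that idea in code. The data items and the filters are invented for illustration, but the point survives: the same raw data yields a different signal/noise split for every filter it passes through.

```python
# Toy model: the same raw data, run through different personal filters,
# produces different "signal" and "noise" sets. The filters here are
# arbitrary keyword predicates, invented purely for illustration.

raw_data = [
    "healthy keto diet recipes",
    "how to quiet a noisy mind",
    "project management templates",
    "lemon dill sauce for fish",
]

filters = {
    "filter 1 (Bob)": lambda item: "keto" in item or "recipes" in item,
    "filter 2 (me)":  lambda item: "noisy" in item or "mind" in item,
}

for name, is_signal in filters.items():
    signal = [d for d in raw_data if is_signal(d)]
    noise = [d for d in raw_data if not is_signal(d)]
    print(f"{name}: signal={signal} noise={noise}")
```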

Google, itself a filter, is a useful example. Let’s assume for a moment that Google is some naturally occurring phenomenon and not a filter created by humans with their own set of filters driving what it means to create a “let’s be evil” good search engine. My friend Bob entered search criteria of interest to him – “filter 1” – and retrieved 1,000,000 pieces of information. Maybe he searched for “healthy keto diet recipes”. Scanning those search results, I determine (using my “filter 2”) that 100% of them are useless, because my filter is “how do i force the noisy unicorns in my head to shut the hell up”. The Venn diagram of those two searches is likely to show a vanishingly small overlap. (Disclaimer: I have no knowledge of the carbohydrate content of unicorns nor how tasty they may be when served with capers and a lemon dill sauce.)

Google may return 1,000,000 search results. But only a small subset is viewable at a time. What of the rest of the result set that I know nothing about? Is it signal? Is it noise? Is it just data that has yet to be subjected to anyone’s system of filters? Because Google found stuff, does that make it signal? Accepting all 1,000,000 search results as signal seems to require a willingness to believe that Google knows best when it comes to determining what’s important to me. This would apply to any filter not our own.

All systems for distinguishing signal from noise are imperfect, and some of us on the Intertubes are seeking ways to better tune our particular systems. The system I use lets non-relevant data fall through the sieve so that the gold nuggets are easier to find. Perhaps at some future date I’ll unwittingly re-pan the same chunk of data through an experience-refined sieve and a newly relevant gem will emerge from the dirt. But until that time, I’ll trust my filters, let the dirt go as noise, and lurch forward.

Improving the Signal to Noise Ratio – In Defense of Noise

[This post follows from Improving the Signal to Noise Ratio.]

All signal all the time may not be a good thing. So I’d like to offer a defense for noise: It’s needed.

Signal is signal because there is noise. Without the presence of noise we risk living in the proverbial echo chamber. When we know what’s bad, we are better equipped to recognize what’s good. I deliberately tune into the noise on occasion for no other reason than to subject my ideas to a bit of rough and tumble. It’s why I blog. It’s why I participate in several select forums. “Here’s what I think, world. What say you?”

Of course, noise is noise because there is signal. Once we’ve had an experience of “better” we are then more skilled at recognizing what’s bad. I remember the food I grew up on as being good, but today I view some of it as poison (Wonder Bread, anyone?). And there are subjects for which I no longer check out the noise. The exposure is too harmful.

There are subjects for which I seem to be swimming in noise and casting around for any sort of signal that suggests “better.” I’m recalling a joke about the two young fish who swim past an older fish. The older fish says to the younger fish, “The water sure is nice today.” A little further on, one of the young fish asks the other, “What’s water?” I’m hoping to catch that older fish in my net. He knows something I don’t.

To understand what I mean by noise being necessary, it is important to understand the metaphor I’m using – where it applies and where it doesn’t.

Taking the metaphor literally, in the domain of electrical engineering the signal-to-noise ratio is indeed an established measure with clearly defined units – decibels. In this domain the goal is always to push for maximum signal and minimum noise.
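
For the record, the usual engineering definition is the ratio of signal power to noise power expressed in decibels: SNR(dB) = 10 · log10(P_signal / P_noise). The higher the number, the cleaner the channel.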

In the world of biological systems, however, noise is most definitely needed. One of many examples I can think of is related to an underlying driver of evolution: mutations. In an evolving organism, anything that would potentially upset the genetic status quo is a threat to survival. Indeed, most mutations are at best benign or at worst lethal, such that the organism or its progeny never survive and the mutation is selected against as evolutionary “noise.”

However, some mutations are a net benefit to survival and add to the evolutionary “signal.” We, as 21st-century Homo sapiens, are who we are because of an uncountable number of noisy mutations that we’ll never know about because they didn’t survive. Even so, surviving mutations are not automatically “pure” signal. There are “noisy” mutations, such as the one related to sickle cell anemia. Biological systems can’t recognize a mutation as “noise” or “signal” before the mutation occurs – only after, when it has been tested by the rough and tumble of life. This is why I speak in terms of “net benefit.”

For humans trying to find our way in the messy, sloppy world of human interactions and thought, pure signal can be just as undesirable as pure noise. I’ll defer to John Cook, who I think expresses more succinctly the idea I was clumsily trying to convey:

If you have a crackly recording, you want to remove the crackling and leave the music. If you do it well, you can remove most of the crackling effect and reveal the music, but the music signal will be slightly diminished. If you filter too aggressively, you’ll get rid of more noise, but create a dull version of the music. In the extreme, you get a single hum that’s the average of the entire recording.

This is a metaphor for life. If you only value your own opinion, you’re an idiot in the oldest sense of the word, someone in his or her own world. Your work may have a strong signal, but it also has a lot of noise. Getting even one outside opinion greatly cuts down on the noise. But it also cuts down on the signal to some extent. If you get too many opinions, the noise may be gone and the signal with it. Trying to please too many people leads to work that is offensively bland.

The goal in human systems is NOT to push always for maximum signal and minimum noise. For example, this is reflected in Justice Brandeis’s comment: “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the process of education, the remedy to be applied is more speech, not enforced silence.” So my amended thesis is: In the domain of human interactions and thought, noise is needed by anyone seeking to both evaluate and improve the quality of the signal they are following.

A final thought…

If we were to press for eliminating as much “noise” as possible from human systems, much as we do for electrical noise, I’m left with the question: “Who decides what qualifies as noise?”

Agile and Changing Requirements or Design

I hear this (or some version of it) more frequently in recent years than in the past:

Agile is all about changing requirements at anytime during a project, even at the very end.

I attribute the increased frequency to the increased popularity of Agile methods and practices.

That the “Responding to change over following a plan” Agile Manifesto value is cherry-picked so frequently is probably due to a couple of factors:

  • It’s human nature for a person to resist being cornered into doing something they don’t want to do. So this value gets them out of performing a task.
  • The person doesn’t understand the problem or doesn’t have a solution. So this value buys them time to figure out how to solve the problem. Once they do have a solution, well, it’s time to change the design or the requirements to fit the solution. This reason isn’t necessarily bad unless it’s the de facto solution strategy.

The intent behind the “Responding to change” value, and the way successful Agile is practiced, does not allow for constant and unending change. Taken to its logical conclusion, nothing would ever be completed and certainly nothing would ever be released to the market.

I’m not going to rehash the importance of the preposition in the value statement. Any need to explain the relativity implied by its use has become a useful signal for me to spend my energies elsewhere. But for those who are not challenged by the grammar, I’d like to say a few things about how to know when change is appropriate and when it’s important to follow a plan.

The key is recognizing and tracking decision points. With traditional project management, decisions are built into the project plan. Every possible bit of work is defined and laid out on a Gantt chart, like the steel rails of a train track. Deviation from this path would be actively discouraged, if it were considered at all.

With an Agile process, decision points that consider possible changes in direction are built into the way of working – daily scrums, sprint planning, backlog refinement, reviews and demonstrations at the end of sprints and releases, retrospectives, acceptance criteria, definitions of done, continuous integration. All of these are deliberate opportunities to evaluate progress and determine whether any changes need to be made, and all of them represent decisions or agreements to lock in work definitions for short periods of time.

For example, at sprint planning, a decision is made to complete a block of work in a specified period of time – often two weeks. After that, the work is reviewed and decisions are made as to whether or not that work satisfies the sprint goal and, by extension, the product vision. At this point, the product definition is specifically opened up for feedback from the stakeholders and any proposed changes are discussed. Except under unique circumstances, changes are not introduced mid-sprint and the teams stick to the plan.

Undoing decisions or agreements only happens if there is supporting information, such as technical infeasibility or a significant market shift. Undoing decisions and agreements doesn’t happen just because “Agile is all about changing requirements.” Agile supports changing requirements when there is good reason to do so, irrespective of the original plan. With traditional project management, it’s all about following the plan and change at any point is resisted.

This is the difference. With traditional project management, decisions are built into the project plan. With Agile they are adapted in.

How does Agile help with long term planning?

I’m often involved in discussions about Agile that question its efficacy in one way or another. This is, in my view, a very good thing and I highly encourage this line of inquiry. It’s important to challenge the assumptions behind Agile so as to counteract any complacency or expectation that it is a panacea to project management ills. Even so, with apologies to Winston Churchill, Agile is the worst form of project management…except for all the others.

Challenges like this also serve to instill a strong understanding of what an Agile mindset is, how it’s distinct from Agile frameworks, tools, and practices, and where it can best be applied. I would be the first to admit that there are projects for which a traditional waterfall approach is best. (For example, maintenance projects for nuclear power reactors. From experience, I can say traditional waterfall project management is clearly the superior approach in this context.)

A frequent challenge is the idea that with Agile it is difficult to do any long-term planning.

Consider the notion of vanity vs. actionable metrics. In many respects, large or long-term plans represent a vanity leading metric. The more detail added to a plan, the more people tend to believe and behave as if such plans are an accurate reflection of what will actually happen. “Surprised” doesn’t adequately describe the reaction when reality informs managers and leaders of the hard truth. I worked on a multi-million dollar project many years ago for a Fortune 500 company that ended up being canceled. Years of very hard work by hundreds of people went down the drain because projected revenues, based on a software product design over seven years old, were never going to materialize. Customers no longer wanted or needed what the product was offering. Our “solution” no longer had a problem to solve.

Agile – particularly more recent thinking around the values and principles in the Manifesto – acknowledges the cognitive biases in play with long-term plans and attempts to put practices in place that compensate for the risks they introduce into project management. One such bias is reflected in the planning fallacy – the further out the planning window extends into the future, the less accurate the plan. An iterative approach to solving problems (some of which just happen to use software) challenges development teams on up through managers and company leaders to reassess their direction and make much smaller course corrections to accommodate what’s being learned. As you can well imagine, we may have worked out how to do this in the highly controlled and somewhat predictable domain of software development; however, the critical areas for growth and Agile applicability are at the management and leadership levels of the business.

Another important aspect of the Agile mindset is reflected in the Cone of Uncertainty. It is a deliberate, intentional recognition of the role of uncertainty in project management. Yes, the goal is to squeeze out as much uncertainty (and therefore risk) as possible, but there are limits. With a traditional project management plan, it may look like everything has been accounted for, but the rest of the world isn’t obligated to follow the plan laid out by a team or a company. In essence, an Agile mindset says, “Lift your gaze up off of the plan (the map) and look around for better, newer, more accurate information (the territory). Then update the plan and adjust course accordingly.” In Agile-speak, this is what is behind phrases like “delivery dates emerge.”

Final thought: You’ll probably hear me say many times that nothing in the Agile Manifesto can be taken in isolation. It’s a working system, and some parts of it are more relevant than others depending on the project and the timing. So consider what I’ve presented here in concert with the Agile practices of developing good product visions and sprint goals. Product vision and sprint goals keep the project moving in the desired direction without holding it on an iron-rails track that cannot be changed without a great deal of effort, if at all.

So, to answer the question in the post title: Agile helps with long-term planning by first recognizing the risks inherent in such plans and then implementing process changes that mitigate or eliminate those risks. Unpacking that sentence would consist of listing all the risks inherent in long-term planning and the mechanics behind, and reasons why, Scrum, XP, SAFe, LeSS, etc., etc., etc. have been developed.

Assessing and Tracking Team Performance – Part 8: Taming the Wild Horses

Over the years I have come to regard projects as a boat in the ocean and relationships as the ocean. – Michael Wade

Remember the phrases from earlier in the article series? Here they are again.

  • “We’re not moving the delivery date.”
  • “We’ll just have to work harder.”
  • “The team will have to put in more time until we’re caught up.”
  • “We’ll need more people on the project.”
  • “The team will have to work faster.”
  • “We’re to the point of exhaustion.”
  • “I’m losing track of all the pieces.”
  • “There’s no time for training.”
  • “Where did those errors come from?”
  • “We’re waiting on another team.”
  • “Another person quit the company?!?!”
  • “I don’t care. I get done what I get done when I get it done.”

How much more meaningful these are to you now that you understand a little more about the system dynamics that drive projects. Choose just one of these and find where it’s reflected in the model (Figure 1)1.

Figure 1

Now follow the impact and consequences around the various feedback loops. Reflect for a moment and ask yourself, “What can I do to help keep the system healthy and productive in light of what I now know may be happening?” There’s a lot to consider. We’ll cover several options in this article.

Moving from the outside in, the most visible nodes in the system are also influenced the least by direct intervention. These are Morale, Fatigue, and Experience. “The beatings will continue until morale improves” is, I hope, recognized as a cynical joke. While offering free coffee, Red Bull, and unlimited M&Ms may perk up employees in the short term, the long term health consequences are grim indeed. As for Experience, well, that just takes time and a great deal of effort to fully shape and mature.

Attempting to alter these nodes directly is likely to be wasted effort at best and more probably harmful. Even if some cursory improvement can be made, the underlying systemic influences – the true drivers – will still be present and will exert a far more powerful influence. It’s Conway’s Law, pure and simple. It’s better to think of Morale, Fatigue, and Experience as symptoms or indicators to be recognized and tracked rather than root causes to be treated. As indicators, they are incredibly powerful sources of information on whether or not changes made to other parts of the system are being successful. They are to be used, not abused.

We’ll begin by working backward from the disaster that was built up over the last several articles in the series. Let’s imagine we have a demoralized team (or teams) that is exhausted and burdened with an impossible delivery schedule. As it stands, it’s unfixable. A sprinter has a better chance of breaking the three-minute mile than this team has of delivering their project by the stated delivery date.

Let’s also assume the choice is to continue the project. The two major actions for management at this point are to move the Deadline and reduce the amount of Work to Do in the system. These aren’t choices; they’re actions that need to be engaged thoughtfully.

Simply moving the date to some point in the future that seems “doable” is yet another gamble. Nor will moving the date instantly resolve the other systemic issues. There is a considerable amount of recovery and rebuilding to be completed. It takes time to hire the people needed to rebuild the workforce. It takes time to rebuild trust and morale among the employees who remain. Moving the deadline out will begin to relieve pressure, but it will take time for the inflamed system to cool down and find an optimal working temperature.

The challenge for this first step is: how do you find a reasonable date for the deadline? Answering this question depends on what is learned by looking to other parts of the system model for data.

  • How depleted is the Workforce and how long will it take to build it back up?
  • How much of the critical talent has remained with the organization (Experience)?
  • Is any compensation (time or money) going to be offered to offset the Overtime put in on the project?
  • How much time will it take to refactor and refine the product backlogs so that work streams are brought into alignment and Overlap and Concurrence and Task Switching are minimized?
  • What tool and process changes need to be made to reduce the Congestion and Communication Difficulties?
  • What’s the Total Known Remaining Work in the system?

Probably the best thing to do is to declare that, for some time-boxed period, there will be no deadline date while these and many other questions are explored. This will have the side benefit of signaling to the development teams that management is serious about finding a realistic date, which will help start rebuilding trust between management and the development teams.

One of the factors to consider in determining whether a new deadline can reliably be set is the Total Known Remaining Work in the system. As has been discussed previously, increasing the Total Known Remaining Work puts pressure on the completion date. Similarly, decreasing the Total Known Remaining Work by some means will increase the likelihood that the completion date can be met. Actions that will allow management to regain control of the work flow include the following (a rough sizing sketch follows the list):

  • Revisit the release schedule and take a phased approach with clearly defined minimum viable/valuable product deliverables.
  • Complete a detailed review of the work done to date to get a clear picture of the amount of technical and dark debt in the system.
  • Reassess the sales and marketing strategies so they are in clear alignment with the capabilities of the development and delivery system. What can be eliminated? What can be pushed to future releases? Eliminate “nice-to-haves” from this list: either a feature can be completed in a particular release or it can’t, and those that can’t are bumped to a future release.
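
As a back-of-the-envelope illustration of the relationship between Total Known Remaining Work and a realistic deadline – every number here is hypothetical, and no substitute for answering the questions above – a sustainable, no-overtime throughput gives a first approximation of how far out the new date sits:

```python
# A rough sizing sketch with hypothetical numbers: estimate how far out a
# realistic deadline sits, given the Total Known Remaining Work and a
# sustainable (no-overtime) throughput.

total_known_remaining_work = 480   # story points, or any consistent unit
sustainable_velocity = 40          # points per two-week sprint, without overtime
rework_allowance = 0.20            # fraction of work expected to return as rework

effective_work = total_known_remaining_work * (1 + rework_allowance)
sprints_needed = effective_work / sustainable_velocity

print(f"Effective work: {effective_work:.0f} points")
print(f"Sprints needed: {sprints_needed:.1f} (about {sprints_needed * 2:.0f} weeks)")
```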

It’s been shown that changes in one part of the system will affect other parts of the system, whether by design or not. In this article we’ve discussed how adjusting the Deadline and Total Known Remaining Work can affect each other and the entire system. When adjusted in a way that considers system-wide effects, they can help restore balance and predictability to the overall system.

Previous article in the series: Assessing and Tracking Team Performance – Part 7: “Abandon All Hope,…”

References

1. The core of the model I use to assess team and organization health is based on the work of James Lyneis and David Ford, “System Dynamics Applied to Project Management,” System Dynamics Review, Vol. 23, No. 2/3, Summer/Fall 2007.

Assessing and Tracking Team Performance – Part 7: “Abandon All Hope,…”

“…ye who enter here.” So reads the inscription on the Gates of Hell in Dante Alighieri’s epic poem, the “Divine Comedy.” Who among us hasn’t felt on occasion that stepping across the threshold of our place of employment is like passing through the gates of Dante’s Inferno? But as the poets have told us, the way to peace is to find the path through our troubles. In this article, we’ll look at just how deeply project system dynamics can adversely affect progress and even whether the project succeeds at all.

But I do want to arm the reader with a couple of rays of hope. The concluding article in this series will focus on how this system model1 can be used to good effect, how it can be used to identify problems before they grow out of control. Therein lies the path to peace. Before we get there, we need to understand several more influential feedback loops.

As the Delay to Completion becomes critical, management begins to panic. Not wanting to push the deadline out, they work the other three options, all focused on modifying the behavior of the delivery team. The end result is a team that is caught in the Work Faster, Work More, and Add People loops along with all the other associated downstream loops. The effect is compounded by the emergence of additional feedback loops if teams are placed in this position for an extended period of time.

Over time, the shortcuts, hacks, and quick fixes put in place to keep the pace of progress as high as possible settle in as technical debt. They work – for now – so they don’t surface as errors for quality assurance to discover. Down the road, however, solutions hastily put in place as stop-gaps fail when later solutions require existing solutions to be more robust than they are. For example, a software method that doesn’t take advantage of multi-threading may break when a later solution needs that method to scale beyond its single-thread capacity. The shortcut is now a defect.
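
Here is a contrived sketch (not from any real codebase) of that kind of shortcut: an unsynchronized counter that is perfectly adequate in a single-threaded context can quietly lose updates once a later feature starts calling it from multiple threads.

```python
# Contrived example: a shortcut that works single-threaded becomes a defect
# when a later change calls it from multiple threads.
import threading

count = 0

def increment_many(times: int) -> None:
    global count
    for _ in range(times):
        count += 1  # unsynchronized read-modify-write: fine with one thread,
                    # a race condition (lost updates possible) with many

threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"expected 400000, got {count}")  # may come up short when increments interleave
```

Wrapping the increment in a threading.Lock (or redesigning around a queue) is the later, more expensive fix that the original shortcut deferred.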

Figure 1

If the technical debt remains in place for an extended period of time, it may be covered by several release layers. When it does flip to defect status due to some later stress, it can be much more time-consuming and expensive to uncover. The original developer of the code may not be available, or even if she is, it could take her quite a bit of time to become reacquainted with the code. This can be thought of as a form of dark debt and is reflected in the Errors Build Errors Loop (Figure 1, J).

As the teams struggle to keep up the pace of progress and reduce the Delay to Completion, work streams start to fall out of sequence. One team has an easier time crafting their solution while another, on whose output they depend, hits a significant snag and is delayed several weeks. In order to stay busy, the first team starts work on something else while the second team finishes their work. When the second team delivers, the first team is not prepared to immediately shift back to their original work stream, so their deliverable is delayed even further. Meanwhile, a third team, which was dependent on the first team’s deliverable, has now been delayed by the cumulative delay of the first two teams. Teams and individuals begin to take shortcuts as delivery of interim work products falls out of sync. The diminished focus and desynchronization of work streams leads to an increase in the Error Fraction, which in turn leads to a further Delay to Completion. This is the Haste Makes Out-of-Sequence Work Loop (Figure 1, K).

Figure 2

As the effects of the Haste Makes Out-of-Sequence Work Loop build, teams begin switching back and forth between work streams depending on who is making the most noise for the completion of any particular deliverable. This is the Thrash and Churn Loop (Figure 2, L). Switching from stream to stream or, in the worst cases, task to task places a tremendous burden on development teams and can do more to slow progress than almost anything else I’ve encountered in team management. Not covered in this model is the type of churn that occurs when parts of the project undergo redesign after work has begun on the existing design. Long-term projects are particularly susceptible to adverse impacts from redesign, as the changes are often farther reaching. The drivers behind a redesign can range from trivial (a new CTO has a personal dislike for a platform vendor) to critical (a security flaw uncovered in a core technical component).

If all the loops described to this point in the article series are allowed to run uncorrected, the system is likely to crash as the project becomes one massive firefighting effort. A key indicator that this is happening is employee morale.

Figure 3

The increased Fatigue, the growing burden of Work/Rework to Do, and the unsatisfying Task Switching between work assignments all combine to cause a decrease in team Morale. This is the Hopelessness Loop (Figure 3, M). Teams are left with the powerless feeling of being caught on a never-ending treadmill. And so, stepping across the threshold to the office is like passing through the gates of Dante’s Inferno.

The ripple effect from a decrease in Morale leads to a decrease in the Workforce as employees leave the organization in search of less stressful, more satisfying work. This is the Turnover Loop (Figure 3, N). The remaining demoralized employees are even less productive, and unhappy employees make more mistakes, increasing the Error Fraction in the system. The downstream result is that the Delay to Completion increases yet again.

If corrective action isn’t taken, the law of diminishing returns becomes evident and the system collapses. The cost overruns become prohibitive and the project is cancelled. Worst case, the organization runs out of resources (money, time, or both) and goes out of business. Those are bad things. In the concluding article to this series, we’ll look at how this model can be used to read the current state of a project’s system dynamics and explore some ways we can intervene so that the system doesn’t run out of control.

Previous article in the series: Assessing and Tracking Team Performance – Part 6: It Lives! But it’s Out of Control!

Next article in the series: Assessing and Tracking Team Performance – Part 8: Taming the Wild Horses

References

1. The core of the model I use to assess team and organization health is based on the work of James Lyneis and David Ford, “System Dynamics Applied to Project Management,” System Dynamics Review, Vol. 23, No. 2/3, Summer/Fall 2007.

Assessing and Tracking Team Performance – Part 6: It Lives! But it’s Out of Control!

In the previous article for this series, I described three options managers could consider if moving the project deadline was out of the question.

  1. Increase employee work intensity
  2. Call for overtime
  3. Hire people

On the face of it, each appeared to offer a path toward bringing a drifting schedule back on time. Now let’s look a little further down the road to see what happens when the juice is applied to each of these options in turn. If we implement any of these options, what are the likely consequences?

We know that errors in the work flow are unavoidable. If we encourage or pressure the development team to finish more work in less time (the Work Faster Loop1, Figure 1, C), the result is an increase in errors along with an increase in the amount of Work Done.

Figure 1

This is the Haste Makes Waste Loop (Figure 1, F). In other words, an increase in Work Intensity brings a concomitant increase in the Error Fraction, which means more Errors are generated. The extended consequence of pulling the Work Intensity lever is an increase in Work to Do in the form of extra Rework to Do.

OK. So Option 1 isn’t a get-out-of-jail-free card. There are strings attached. How about Option 2, call for the development team to work overtime?

Figure 2

Increasing Overtime sharply increases the risk of Fatigue. This results in yet another increase in the Error Fraction (tired people make more mistakes than rested people) and a decrease in Productivity (tired people don’t work as efficiently as rested people). Both slow Progress and increase the amount of Rework to Do in the system. This is the Burnout Loop (Figure 2, G).

OK. So Option 2 doesn’t lead to sunshine and roses. There are dark clouds and weeds in the mix. Let’s give Option 3 a go, hire more people!

Figure 3

So we’ve beefed up the Workforce by hiring a bunch of people to join the team. With all those extra people in the mix we’ve also increased the overall Congestion and Communication Difficulties. The email traffic increases, everyone’s Inbox fills up faster, and meeting size increases along with the number of meetings. The signal-to-noise ratio decreases and miscommunication increases. This increases the Error Fraction, decreases Productivity, and decreases Progress. End result: the Too Big to Manage Loop (Figure 3, H).

But that’s not all. By hiring extra people, we’ve activated the Expertise Dilution Loop (Figure 5, I).

Figure 5

All those new hires don’t come in off the street ready to go. They decrease the depth of Experience available to focus on making progress. Experienced employees have to slow down and assist new employees in understanding the technical systems, the architecture, and development standards. New employees will need some period of time to become familiar with the work environment, project objectives, who’s who, and where the coffee is.

As they work to understand and gain experience with the systems, new hires will necessarily make mistakes and increase the Error Fraction. While there are more workers available to focus on the product backlog, the available expertise is spread much more thinly and is collectively less experienced until such time as the new workers are up to speed with what needs to be done and how. So errors go up and Productivity goes down. The downstream effect is often a further increase in the Delay to Completion. As the saying goes, throwing more people at the problem more often than not makes the problem worse.

OK. So no unicorns and rainbows here either. More like a lot of warthogs and rain.
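
To make these loops a bit more tangible, here is a deliberately oversimplified toy simulation. Every coefficient is invented for illustration – this is the flavor of the Work Faster, Haste Makes Waste, and Burnout loops, not the Lyneis/Ford model itself.

```python
# Toy rework-cycle simulation with invented coefficients. It only sketches
# the flavor of the Work Faster / Haste Makes Waste / Burnout loops; it is
# not the Lyneis & Ford system dynamics model.

def simulate(weeks: int, work_intensity: float) -> float:
    """Return the units of work remaining after `weeks` at a given intensity."""
    work_to_do = 100.0           # total units of work
    base_productivity = 5.0      # units completed per week at normal intensity
    base_error_fraction = 0.10   # fraction of completed work returning as rework
    fatigue = 0.0

    for _ in range(weeks):
        # Burnout loop: sustained intensity above normal accumulates fatigue.
        fatigue = max(0.0, fatigue + 0.1 * (work_intensity - 1.0))
        productivity = base_productivity * work_intensity * (1.0 - min(fatigue, 0.6))
        # Haste Makes Waste: intensity and fatigue both push the error fraction up.
        error_fraction = base_error_fraction * work_intensity * (1.0 + fatigue)
        completed = min(productivity, work_to_do)
        rework = completed * error_fraction
        work_to_do = work_to_do - completed + rework
    return work_to_do

for intensity in (1.0, 1.3, 1.6):
    remaining = simulate(weeks=16, work_intensity=intensity)
    print(f"work intensity {intensity:.1f}: {remaining:5.1f} units remaining after 16 weeks")
```

In runs like this, the short-term boost from pushing intensity gets eaten by fatigue-driven errors and rework – which is exactly the pattern the loops above describe.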

It looks like the first-level effects were negated by the second-level consequences. That’s bad enough, but the third-level consequences can be even worse in that they are often much longer lasting and much more difficult to resolve. We’ll look at those in the next article in this series.

Previous article in the series: Assessing and Tracking Team Performance – Part 5: Welcome to the Labyrinth

Next article in the series: Assessing and Tracking Team Performance – Part 7: “Abandon All Hope,…”

References

1. The core of the model I use to assess team and organization health is based on the work of James Lyneis and David Ford, “System Dynamics Applied to Project Management,” System Dynamics Review, Vol. 23, No. 2/3, Summer/Fall 2007.