The Pull of Well-Crafted Product Visions and Release Goals

There was even a trace of mild exhilaration in their attitude. At least, they had a clear-cut task ahead of them. The nine months of indecision, of speculation about what might happen, of aimless drifting with the pack were over. Now they simply had to get themselves out, however appallingly difficult that might be. [1]

In the early 20th century, Sir Ernest Shackleton led an expedition attempting to cross the Antarctic continent on foot. He was unsuccessful in that attempt. What he succeeded at, however, was something far more impressive. After nearly two years of battling conditions south of the Antarctic Circle, Shackleton saw to it that all 27 men of his crew made it safely home. As Alfred Lansing notes, “Though they had failed dismally even to come close to the expedition’s original objective, they knew now that somehow they had done much, much more than ever they set out to do.”

There is much I could write about the lessons from Shackleton, his crew, and the Endurance that apply to our own individual endeavors – personal and professional. For the moment, I wish to reflect on the sheer clarity of the goal those 28 men shared in 1915-1916: to survive, by any means available and with nothing short of complete, dedicated effort.

To be sure, their goal was self-serving – no one can judge them for that – and no product team is ever likely to be placed in a situation of delivering in the face of such high stakes. Indeed, the lessons from Endurance stand in striking contrast to the feeble drama so often injected into product delivery schedules. We call them “death marches,” but we know not of what we speak.

One of the things we can learn from Endurance is the power of a clearly defined objective. Do or die. That’s pretty damn clear. Time and time again, Shackleton’s crew were faced with completing seemingly impossible tasks under the harshest of conditions with the barest of resources and vanishingly small chances for success.

What kept them going? Certainly, the will and desire to live. There were many other factors, too. What interests me in this post is reflected in the opening quote: the emergence of a well-defined task that cleared away the fog of speculation, indecision, and uncertainty. Episodes like this are described multiple times in Lansing’s book.

This is important to something like a product vision because it clearly illustrates a phenomenon I learned about recently called the “goal-gradient hypothesis,” which says, in essence, that our efforts increase as we get closer to our goals. But here’s the rub. We have to know and understand what the goal is. “Do or die” is clear and leaves little room for misunderstanding. “Let’s go build a killer app,” not so much.

From the research:

We found that members of a café RP [reward program] accelerated their coffee purchases as they progressed toward earning a free coffee. The goal-gradient effect also generalized to a very different incentive system, in which shorter goal distance led members to visit a song-rating Web site more frequently, rate more songs during each visit, and persist longer in the rating effort. Importantly, in both incentive systems, we observed the phenomenon of postreward resetting, whereby customers who accelerated toward their first reward exhibited a slowdown in their efforts when they began work (and subsequently accelerated) toward their second reward. [2]

Faraway goals, like a product vision, are much less motivating than near-term goals, such as sprint goals. And yet it is the product vision that can, if well-crafted and well-communicated, pull a team forward during a postreward resetting period.

But perhaps the most important lesson from the research – as far as product development is concerned – is that incentives matter. How an organization structures them is important. Since most people fail the marshmallow test, rewarding success on smaller goals that lead to a larger goal is likely to help teams stay focused and dedicated in the long run. Rather than one large post-release celebration, smaller rewards after each successful sprint are more likely to keep teams engaged and productive.

References

[1] Lansing, A. (1957). Endurance: Shackleton’s Incredible Voyage, p. 80.

[2] Kivetz, R., Urminsky, O., & Zheng, Y. (2006). The Goal-Gradient Hypothesis Resurrected: Purchase Acceleration, Illusionary Goal Progress, and Customer Retention. Journal of Marketing Research, Vol. XLIII (February 2006), 39–58.

Improving the Signal to Noise Ratio – Coda

In a Scientific American column delightfully named “The Artful Amoeba,” there is an article on a little critter called the “fire chaser” beetle: “How a Half-Inch Beetle Finds Fires 80 Miles Away – Fire chaser beetles’ ability to sense heat borders on the spooky.”

Why a creature would choose to enter a situation from which all other forest creatures are enthusiastically attempting to exit is a compelling question of natural history. But it turns out the beetle has a very good reason. Freshly burnt trees are fire chaser beetle baby food. Their only baby food.

Fire chaser beetles are thus so hell bent on that objective that they have been known to bite firefighters, mistaking them, perhaps, for unusually squishy and unpleasant-smelling trees.

This part is interesting:

A flying fire chaser beetle appears to be trying to give itself up to the authorities. Its second set of legs reach for the sky at what appears to be an awkward and uncomfortable angle.

But the beetle has a good reason. It’s getting its legs out of the way of its heat eyes, pits filled with infrared sensors tucked just behind its legs.

The fire chaser beetle’s life cycle suggests a strategy: if you want to maximize a signal-to-noise ratio, iterate through three simple steps:

  1. Work to develop a super well-defined signal/goal/objective.
  2. Remove every possible barrier to receiving information about that signal – mental, emotional, even physical – that you can think of or that you discover over time.
  3. Repeat.

Also, the “Way of the Amoeba” is now the “Way of the Artful Amoeba.” Update your phrase books accordingly.

Improving the Signal to Noise Ratio – Revisited

Additional thoughts about signals and noise that have been rattling around in my brain since first posting on this topic.

At the risk of becoming too ethereal about all this, before there is signal and before there is noise, there is data. Cold, harsh, cruelly indifferent data. It is after raw data encounters some sort of filter or boundary – something that triggers a calculation to evaluate what that data means or whether it is relevant to whoever is on the other side of the filter – that it begins to be characterized as “signal” or “noise.”

Since we’re talking about humans in this series of posts, that filter is an amazingly complex system built from both physiological and psychological elements. The small amount of physical data that hits our senses and actually makes it to our brains is then filtered by beliefs, values, biases, attitudes, emotions, and those pesky unicorns that can’t seem to stop talking while I’m trying to think! It’s after all this processing that the data has been sorted into “signal” (what’s relevant) and “noise” (what’s irrelevant) for any particular individual. Our individual systems of filters impart value judgments on the data such that each of us, essentially, creates “signal” and “noise” from the raw data.

That’s a long-winded way to say:

data -> [filter] -> signal, noise

Now apply this to everyone on the planet.

data -> [filter 1] -> signal 1, noise 1

data -> [filter 2] -> signal 2, noise 2

data -> [filter n] -> signal n, noise n
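
To make this concrete, here is a minimal sketch in Python. The data items and keyword predicates are hypothetical stand-ins for the vastly more complex physiological and psychological filters described above; the point is only that the same raw data yields a different signal/noise split for each filter.

    # The same raw data passed through different personal filters
    # yields different signal/noise splits.
    raw_data = [
        "healthy keto diet recipes",
        "how to quiet the noisy unicorns in my head",
        "sprint planning tips",
        "celebrity gossip roundup",
    ]

    def apply_filter(data, is_relevant):
        """Partition raw data into (signal, noise) for one person's filter."""
        signal = [item for item in data if is_relevant(item)]
        noise = [item for item in data if not is_relevant(item)]
        return signal, noise

    # Filter 1: Bob is interested in keto recipes.
    bob_signal, bob_noise = apply_filter(raw_data, lambda d: "keto" in d)

    # Filter 2: I'm interested in quieting the unicorns.
    my_signal, my_noise = apply_filter(raw_data, lambda d: "unicorns" in d)

    print(bob_signal)  # ['healthy keto diet recipes']
    print(my_signal)   # ['how to quiet the noisy unicorns in my head']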

As an example, consider Google – itself a filter, albeit one created by humans with their own sets of filters driving what it means to create a “let’s be evil” good search engine. For the moment, pretend it’s a naturally occurring phenomenon. Suppose my friend Bob enters search criteria of interest to him – his “filter 1” – and retrieves 1,000,000 results. Maybe he searched for “healthy keto diet recipes”. Scanning those search results, I determine (using my “filter 2”) that 100% of them are useless, because my filter is “how do i force the noisy unicorns in my head to shut the hell up”. A Venn diagram of our two searches would likely show a vanishingly small overlap. (Disclaimer: I have no knowledge of the carbohydrate content of unicorns nor how tasty they may be when served with capers and a lemon dill sauce.)

Google may return 1,000,000 search results. But only a small subset is viewable at a time. What of the rest of the result set that I know nothing about? Is it signal? Is it noise? Is it just data that has yet to be subjected to anyone’s system of filters? Because Google found stuff, does that make it signal? Accepting all 1,000,000 search results as signal seems to require a willingness to believe that Google knows best when it comes to determining what’s important to me. This would apply to any filter not our own.

All systems for distinguishing signal from noise are imperfect, and some of us on the Intertubes are seeking ways to better tune our particular systems. The system I use lets non-relevant data fall through the sieve so that the gold nuggets are easier to find. Perhaps at some future date I’ll unwittingly re-pan the same chunk of data through an experience-refined sieve and a newly relevant gem will emerge from the dirt. But until that time, I’ll trust my filters, let the dirt go as noise, and lurch forward.

Improving the Signal to Noise Ratio – In Defense of Noise

[This post follows from Improving the Signal to Noise Ratio.]

All signal all the time may not be a good thing. So I’d like to offer a defense for noise: It’s needed.

Signal is signal because there is noise. Without the presence of noise we risk living in the proverbial echo chamber. When we know what’s bad, we are better equipped to recognize what’s good. I deliberately tune into the noise on occasion for no other reason than to subject my ideas to a bit of rough and tumble. It’s why I blog. It’s why I participate in several select forums. “Here’s what I think, world. What say you?”

Of course, noise is noise because there is signal. Once we’ve had an experience of “better,” we are more skilled at recognizing what’s bad. I remember the food I grew up on as being good, but today I view some of it as poison (Wonder Bread, anyone?). And there are subjects for which I no longer check out the noise. The exposure is too harmful.

There are subjects for which I seem to be swimming in noise and casting around for any sort of signal that suggests “better.” I’m recalling a joke about the two young fish who swim past an older fish. The older fish says to the younger fish, “The water sure is nice today.” A little further on, one of the young fish asks the other, “What’s water?” I’m hoping to catch that older fish in my net. He knows something I don’t.

To understand what I mean by noise being necessary, it is important to understand the metaphor I’m using – where it applies and where it doesn’t.

Taking the metaphor literally, in the domain of electrical engineering, for example, the signal-to-noise ratio is indeed an established measure with clearly defined units – decibels. In this domain, the goal is always to push for maximum signal and minimum noise.

In the world of biological systems, however, noise is most definitely needed. One of many examples I can think of is related to an underlying driver of evolution: mutations. In an evolving organism, anything that would potentially upset the genetic status quo is a threat to survival. Indeed, most mutations are at best benign and at worst lethal, such that the organism or its progeny never survive and the mutation is selected against as evolutionary “noise.”

However, some mutations are a net benefit to survival and add to the evolutionary “signal.” We, as 21st-century Homo sapiens, are who we are because of an uncountable number of noisy mutations that we’ll never know about because they didn’t survive. Even so, surviving mutations are not automatically “pure” signal. There are “noisy” mutations, such as the one related to sickle cell anemia. Biological systems can’t recognize a mutation as “noise” or “signal” before the mutation occurs – only after, when it has been tested by the rough and tumble of life. This is why I speak in terms of “net benefit.”

For humans trying to find our way in the messy, sloppy world of human interactions and thought, pure signal can be just as undesirable as pure noise. I’ll defer to John Cook, who I think expresses more succinctly the idea I was clumsily trying to convey:

If you have a crackly recording, you want to remove the crackling and leave the music. If you do it well, you can remove most of the crackling effect and reveal the music, but the music signal will be slightly diminished. If you filter too aggressively, you’ll get rid of more noise, but create a dull version of the music. In the extreme, you get a single hum that’s the average of the entire recording.

This is a metaphor for life. If you only value your own opinion, you’re an idiot in the oldest sense of the word, someone in his or her own world. Your work may have a strong signal, but it also has a lot of noise. Getting even one outside opinion greatly cuts down on the noise. But it also cuts down on the signal to some extent. If you get too many opinions, the noise may be gone and the signal with it. Trying to please too many people leads to work that is offensively bland.

The goal in human systems is NOT to always push for maximum signal and minimum noise. This is reflected, for example, in Justice Brandeis’s comment: “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the process of education, the remedy to be applied is more speech, not enforced silence.” So my amended thesis is: in the domain of human interactions and thought, noise is needed by anyone seeking to both evaluate and improve the quality of the signal they are following.

A final thought…

If we were to press for eliminating as much “noise” as possible from human systems, much as we do for electrical noise, I’m left with the question: “Who decides what qualifies as noise?”

Improving the Signal to Noise Ratio

A question was posed, “Why not be an information sponge?”

I’d have to characterize myself as more of an information amoeba – IIRC, the amoeba is, by weight, the most vicious life form on Earth – on the hunt for information and, after internalizing it, going into rest mode while I decompose and reassemble it into something of use to my understanding of the world. Yum.

More generally, to be an effective and successful consumer of information these days, the Way of the Sponge (WotS: passive; information washes through them and they absorb everything) is no longer tenable, and the Way of the Amoeba (WotA: active; information washes over them and they hunt down what they need) is likely to be the more successful strategy. The WotA requires considerable energy, but the rewards are commensurate with the effort. WotS…well, there’s your obsessive processed-food-eating TV binge-watcher right there. Mr. Square Bob Sponge Pants.

What’s implied by the WotA vs the WotS is that the former has a more active role in optimizing the informational signal to noise ratio than the latter. So a few thoughts to begin with on signals and noise.

Depending on the moment and the context, one person’s signal is another person’s noise. Across the moments that make up a lifetime, one person’s noise may become the same person’s signal and vice versa. When I was in high school, I found Frank Sinatra’s voice annoying and not something to be mingled with my collection of Mozart, Bach, and Vivaldi. Today…well, to disparage the Chairman of the Board is fightin’ words in my house. Over time, at least, noise can become signal and signal become noise.

But I’m speaking here of the signal’s quality and not its quantity (i.e., volume).

Some years ago I came across Stuart Kauffman’s idea of the adjacent possible:

It may be that biospheres, as a secular trend, maximize the rate of exploration of the adjacent possible. If they did it too fast, they would destroy their own internal organization, so there may be internal gating mechanisms. This is why I call this an average secular trend, since they explore the adjacent possible as fast as they can get away with it.

This has been interpreted in a variety of ways. I carry this around in my head as a distillation from several of the more faithful versions: Expand the edge of what I know by studying the things that are close by. Over time, there is an accumulation of loosely coupled ideas and facts that begin to coalesce into a deeper meaning, a signal, if you will, relevant to my life.

With this insight, I’ve been able to be more deliberate and directed about what I want or need to know. I’ve learned to be a good custodian of the edge and of what I allow to occupy space on that edge. These are my “internal gating mechanisms.” It isn’t an easy task, but there are some easy wins. For starters, learning to unplug completely – especially from social media and what tragically passes for “news reporting” or “journalism” these days.

The task is largely one of filtering. I very rarely directly visit information sources. Rather, I leverage RSS feeds and employ filtering rules. I pull information of interest rather than have it pushed at me by “news” web sites, cable or TV channels, or newspapers. While this means I will occasionally miss some cool stuff, it’s more than compensated by the boost in signal quality achieved by excluding all the sludge from the edge. I suspect I still get the cool stuff, just in a slightly different form or revealed by a different source that makes it through the filter. In this way, it’s a matter of modulating the quantity such that the signal is easier to find.
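
For the curious, here is a minimal sketch of what such a pull-and-filter setup can look like, using Python and the feedparser library. The feed URL and the keyword rules are hypothetical placeholders; the real work is in refining rules like these over time.

    # Pull entries from RSS feeds and keep only what survives the filter rules.
    # Requires: pip install feedparser
    import feedparser

    FEEDS = ["https://example.com/feed.rss"]      # placeholder feed URLs
    BLOCK = {"celebrity", "outrage", "gossip"}    # sludge: always drop
    ALLOW = {"systems", "complexity", "beetles"}  # edge topics: always keep

    def is_signal(entry):
        """Apply simple keyword rules to a feed entry's title and summary."""
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(word in text for word in BLOCK):
            return False
        return any(word in text for word in ALLOW)

    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            if is_signal(entry):
                print(entry.title, "->", entry.link)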

There is a caution to consider while optimizing a signal-to-noise ratio, something reflected in Kauffman’s comments around the rate of exploration for new ideas: “If they did it too fast, they would destroy their own internal organization…”

Before the Internet, before PCs were a commodity, before television was popular, it was much, much easier to find time to think. In fact, it was never something that had to be looked for or sought out. I think that’s what is different today. It takes WORK to find a quiet space and time to think. While my humble little RSS filters do a great job of keeping a high signal-to-noise ratio with all things Internet, accomplishing the same thing in the physical world is becoming more and more difficult.

The “attention economy,” or whatever it’s being called today, is reaching a truly disturbing level of invasiveness. It seems I’ve used more electrician’s tape to cover up camera lenses and microphones in the past year than I’ve used on actual electrical wires. The appliances and gadgets in the home with glowing screens, crying out for Bluetooth or WiFi access like leeches seeking blood, are their own source of noise. This is my current battleground for finding the signal within the noise.

Enough about filtering. What about boundaries? Good fences make good neighbors, said someone wise and experienced. And there’s a good chance that applies to information organization, too. Keeping the spiritual information in my head separate from my shopping list probably helps me stop short of forming some sort of cult around Costco. (“All praise Bulk, the God of Stuff!”)

An amoeba has a much more developed boundary between self and other than a sponge, and that’s probably a net gain even with the drawback of the extra energy required to fuel that arrangement. Intellectually, we have our beliefs and values that mark where those edges between self and other are defined.

So I’ll stop for now with the question, “What are the strategies and mental models that promote permeability for desired or needed information while keeping, as much as possible, the garbage ‘out there’?”


Agile and Changing Requirements or Design

I hear this (or some version of it) more frequently in recent years than in the past:

Agile is all about changing requirements at anytime during a project, even at the very end.

I attribute the increased frequency to the increased popularity of Agile methods and practices.

That the “Responding to change over following a plan” value from the Agile Manifesto is cherry-picked so frequently is probably due to a couple of factors:

  • It’s human nature for a person to resist being cornered into doing something they don’t want to do. So this value gets them out of performing a task.
  • The person doesn’t understand the problem or doesn’t have a solution. So this value buys them time to figure out how to solve the problem. Once they do have a solution, well, it’s time to change the design or the requirements to fit the solution. This reason isn’t necessarily bad unless it becomes the de facto solution strategy.

The intent behind the “Responding to change” value, and the way successful Agile is practiced, does not allow for constant and unending change. Taken to its logical conclusion, nothing would ever be completed and certainly nothing would ever be released to the market.

I’m not going to rehash the importance of the preposition in the value statement. Any need to explain the relativity implied by its use has become a useful signal for me to spend my energies elsewhere. But for those who are not challenged by the grammar, I’d like to say a few things about how to know when change is appropriate and when it’s important to follow a plan.

The key is recognizing and tracking decision points. With traditional project management, decisions are built into the project plan. Every possible bit of work is defined and laid out on a Gantt chart, like the steel rails of a train track. Deviation from this path would be actively discouraged, if it were considered at all.

Using an Agile process, decision points that consider possible changes in direction are built into the process – daily scrums, sprint planning, backlog refinement, reviews and demonstrations at the end of sprints and releases, retrospectives, acceptance criteria, definitions of done, continuous integration – these all reflect deliberate opportunities in the process to evaluate progress and determine whether any changes need to be made. These are all activities that represent decisions or agreements to lock in work definitions for short periods of time.

For example, at sprint planning, a decision is made to complete a block of work in a specified period of time – often two weeks. After that, the work is reviewed and decisions are made as to whether or not that work satisfies the sprint goal and, by extension, the product vision. At this point, the product definition is specifically opened up for feedback from the stakeholders and any proposed changes are discussed. Except under unique circumstances, changes are not introduced mid-sprint and the teams stick to the plan.

Undoing decisions or agreements only happens if there is supporting information, such as technical infeasibility or a significant market shift. Undoing decisions and agreements doesn’t happen just because “Agile is all about changing requirements.” Agile supports changing requirements when there is good reason to do so, irrespective of the original plan. With traditional project management, it’s all about following the plan and change at any point is resisted.

This is the difference. With traditional project management, decisions are built into the project plan. With Agile, they are adapted in.

Assessing and Tracking Team Performance – Part 8: Taming the Wild Horses

Over the years I have come to regard projects as a boat in the ocean and relationships as the ocean. – Michael Wade

Remember the phrases from earlier in the article series? Here they are again.

  • “We’re not moving the delivery date.”
  • “We’ll just have to work harder.”
  • “The team will have to put in more time until we’re caught up.”
  • “We’ll need more people on the project.”
  • “The team will have to work faster.”
  • “We’re to the point of exhaustion.”
  • “I’m losing track of all the pieces.”
  • “There’s no time for training.”
  • “Where did those errors come from?”
  • “We’re waiting on another team.”
  • “Another person quit the company?!?!”
  • “I don’t care. I get done what I get done when I get it done.”

How much more meaningful these are now that you understand a little more about the system dynamics that drive projects. Choose just one of these and find where it’s reflected in the model (Figure 1).1

Figure 1

Now follow the impact and consequences around the various feedback loops. Reflect for a moment and ask yourself, “What can I do to help keep the system healthy and productive in light of what I now know may be happening?” There’s a lot to consider. We’ll cover several options in this article.

Moving from the outside in, the most visible nodes in the system are also the ones least influenced by direct intervention. These are Morale, Fatigue, and Experience. “The beatings will continue until morale improves” is, I hope, recognized as a cynical joke. While offering free coffee, Red Bull, and unlimited M&Ms may perk up employees in the short term, the long-term health consequences are grim indeed. As for Experience, well, that just takes time and a great deal of effort to fully shape and mature.

Attempting to alter these nodes directly is likely to be wasted effort at best and more probably harmful. Even if some cursory improvement can be made, the underlying systemic influences – the true drivers – will still be present and will exert a far more powerful influence. It’s Conway’s Law, pure and simple. It’s better to think of Morale, Fatigue, and Experience as symptoms or indicators to be recognized and tracked rather than root causes to be treated. As indicators, they are incredibly powerful sources of information on whether or not changes made to other parts of the system are succeeding. They are to be used, not abused.

We’ll begin by working backward from the disaster that was built up over the last several articles in the series. Let’s imagine we have a demoralized team (or teams) that is exhausted and burdened with an impossible delivery schedule. As it stands, it’s unfixable. A sprinter has a better chance of breaking the three-minute mile than this team has of delivering their project by the stated delivery date.

Let’s also assume the choice is to continue the project. The two major actions for management at this point are to move the Deadline and to reduce the amount of Work to Do in the system. These aren’t either/or choices; they’re actions that need to be engaged thoughtfully.

Simply moving the date to some point in the future that seems “doable” is yet another gamble. Nor will moving the date instantly resolve the other systemic issues. There is a considerable amount of recovery and rebuilding to be completed. It takes time to hire the people needed to rebuild the workforce. It takes time to rebuild trust and morale among the employees who remain. Moving the deadline out will begin to relieve pressure, but it will take time for the inflamed system to cool down and find an optimal working temperature.

The challenge for this first step is: How can you go about finding what is a reasonable date for the deadline? Answering this question is dependent on what is learned by looking to other parts of the system model for data.

  • How depleted is the Workforce and how long will it take to build it back up?
  • How much of the critical talent has remained with the organization (Experience)?
  • Is any compensation (time or money) going to be offered to offset the Overtime put in on the project?
  • How much time will it take to refactor and refine the product backlogs such that work streams are brought into alignment and Overlap and Concurrence and Task Switching are minimized?
  • What tool and process changes need to be made to reduce the Congestion and Communication Difficulties?
  • What’s the Total Known Remaining Work in the system?

Probably the best thing to do is to declare that, for some time-boxed period, there will be no deadline date while these and many other questions are explored. This has the side benefit of signaling to the development teams that management is serious about finding a realistic date, which will help start rebuilding trust between management and the development teams.

One of the factors to consider in determining whether a new deadline can reliably be set is the Total Known Remaining Work in the system. As has been discussed previously, increasing the Total Known Remaining Work puts pressure on the completion date. Similarly, decreasing the Total Known Remaining Work by some means will increase the likelihood that the completion date can be met (a rough sketch of this arithmetic follows the list below). Actions that will allow management to regain control of the work flow include:

  • Revisit the release schedule and take a phased approach with clearly defined minimum viable/valuable product deliverables.
  • Complete a detailed review of the work done to date to get a clear picture of the amount of technical and dark debt in the system.
  • Reassess the sales and marketing strategies so they are in clear alignment with the capabilities of the development and delivery system. What can be eliminated? What can be pushed to future releases? Eliminate “nice-to-haves” from this list: either a feature can be completed in a particular release or it can’t, and those that can’t are bumped to a future release.
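
As promised, here is that back-of-the-envelope sketch in Python. It is emphatically not the Lyneis/Ford model – just a toy calculation with hypothetical numbers – but it shows how descoping Total Known Remaining Work and cooling down the error fraction compound when estimating a defensible deadline.

    def weeks_to_complete(remaining_work, workforce, productivity, error_fraction):
        """Estimate weeks to finish, counting the rework generated by errors.

        remaining_work: work units (e.g., story points) still to do
        workforce:      full-time people available
        productivity:   units per person per week, sustainably
        error_fraction: fraction of completed work that returns as rework
        """
        # Each unit of "done" work spawns error_fraction units of rework,
        # which spawns more rework, and so on: a geometric series.
        effective_work = remaining_work / (1 - error_fraction)
        return effective_work / (workforce * productivity)

    # Exhausted team, bloated scope, high error fraction:
    print(weeks_to_complete(1200, workforce=8, productivity=10, error_fraction=0.30))   # ~21.4

    # After descoping to a phased MVP and letting the system cool down:
    print(weeks_to_complete(800, workforce=10, productivity=12, error_fraction=0.10))   # ~7.4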

It’s been shown that changes in one part of the system will affect other parts of the system, whether by design or not. In this article we’ve discussed how adjusting the Deadline and Total Known Remaining Work can affect each other and the entire system. When adjusted in a way that considers system-wide effects, they can help restore balance and predictability to the overall system.

Previous article in the series: Assessing and Tracking Team Performance – Part 7: “Abandon All Hope,…”

References

1. The core of the model I use to assess team and organization health is based on the work of James Lyneis and David Ford: “System Dynamics Applied to Project Management,” System Dynamics Review, Vol. 23, No. 2/3, Summer/Fall 2007.

Assessing and Tracking Team Performance – Part 7: “Abandon All Hope,…”

“…ye who enter here.” So reads the inscription on the Gates of Hell in Dante Alighieri’s epic poem, the Divine Comedy. Who among us hasn’t felt on occasion that stepping across the threshold of our place of employment is like passing through the gates of Dante’s Inferno? But as the poets have told us, the way to peace is to find the path through our troubles. In this article, we’ll look at just how deeply project system dynamics can adversely affect progress – and even whether the project succeeds at all.

But I do want to arm the reader with a couple of rays of hope. The concluding article in this series will focus on how this system model1 can be used to good effect, how it can be used to identify problems before they grow out of control. Therein lies the path to peace. Before we get there, we need to understand several more influential feedback loops.

As the Delay to Completion becomes critical, management begins to panic. Not wanting to push the deadline out, they work the other three options, each focused on modifying the behavior of the delivery team. The end result is a team caught in the Work Faster, Work More, and Add People loops along with all the other associated downstream loops. The effect is compounded by the emergence of still other feedback loops if teams are kept in this position for an extended period of time.

Over time, the shortcuts, hacks, and quick fixes put in place to keep the pace of progress as high as possible settle in as technical debt. They work – for now – so they don’t surface as errors for quality assurance to discover. Down the road, however, solutions hastily put in place as stop-gaps fail when later solutions require existing solutions to be more robust than they are. For example, a software method that doesn’t take advantage of multi-threading may break when a later solution needs that method to scale beyond its single-thread capacity. The shortcut is now a defect.
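
As a hypothetical illustration of that multi-threading example (Python here, though the failure mode is language-agnostic): the counter below is perfectly serviceable single-threaded, and becomes a defect the moment a later solution drives it from several threads.

    # A shortcut that works single-threaded but fails under concurrency.
    import threading

    counter = 0

    def increment_many(n):
        global counter
        for _ in range(n):
            counter += 1  # read-modify-write: not atomic across threads

    threads = [threading.Thread(target=increment_many, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Often prints less than 400000: updates were lost in the races.
    print(counter)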

Figure 1

If the technical debt remains in place for an extended period of time, it may be covered by several release layers. When it does flip to defect status due to some later stress, it can be much more time-consuming and expensive to uncover. The original developer of the code may not be available, or even if she is, it could take her quite a bit of time to become reacquainted with the code. This can be thought of as a form of dark debt and is reflected in the Errors Build Errors Loop (Figure 1, J).

As the teams struggle to keep up the pace of progress and reduce the Delay to Completion, work streams start to fall out of sequence. One team has an easier time crafting their solution while another team, on whose output they depend, hits a significant snag and is delayed several weeks. In order to stay busy, the first team starts work on something else while the second team finishes their work. When the second team delivers, the first team is not prepared to immediately shift back to their original work stream, so their deliverable is delayed even further. Meanwhile, a third team, which was dependent on the first team’s deliverable, has now been delayed by the cumulative delay of the first two. Teams and individuals begin to take shortcuts as deliveries of interim work products fall out of sync with each other. The diminished focus and desynchronization of work streams lead to an increase in the Error Fraction, which in turn leads to a further Delay to Completion. This is the Haste Makes Out-of-Sequence Work Loop (Figure 1, K).

Figure 2

As the effects of the Haste Makes Out-of-Sequence Work Loop build, teams begin switching back and forth between work streams depending on who is making the most noise for the completion of any particular deliverable. This is the Thrash and Churn Loop (Figure 2, L). Switching from stream to stream or, in the worst cases, task to task places a tremendous burden on development teams and can do more to slow progress than almost anything else I’ve encountered in team management. Not covered in this model is the type of churn that occurs when parts of the project undergo redesign after work has begun on the existing design. Long-term projects are particularly susceptible to adverse impacts from redesign, as the changes are often farther-reaching. The drivers behind a redesign can range from the trivial (a new CTO has a personal dislike for a platform vendor) to the critical (a security flaw uncovered in a core technical component).

If all the loops described to this point in the article series are allowed to run uncorrected, the system is likely to crash as the project becomes one massive firefighting effort. A key indicator that this is happening is employee morale.

Figure 3

The increased Fatigue, the growing burden of Work/Rework to Do, and the unsatisfying Task Switching between work assignments all combine to cause a decrease in team Morale. This is the Hopelessness Loop (Figure 3, M). Teams are left with the powerless feeling of being caught on a never-ending treadmill. And so, stepping across the threshold to the office becomes like passing through the gates of Dante’s Inferno.

The ripple effect from a decrease in Morale leads to a decrease in the Workforce as employees leave the organization in search of less stressful, more satisfying work. This is the Turnover Loop (Figure 3, N). The remaining demoralized employees are even less productive, and unhappy employees make more mistakes, increasing the Error Fraction in the system. The downstream result is that the Delay to Completion increases yet again.

If corrective action isn’t taken, the law of diminishing returns becomes evident and the system collapses. The cost overruns become prohibitive and the project is cancelled. Worst case, the organization runs out of resources (money, time, or both) and goes out of business. Those are bad things. In the concluding article to this series, we’ll look at how this model can be used to read the current state of a project’s system dynamics and explore some ways we can intervene so that the system doesn’t run out of control.

Previous article in the series: Assessing and Tracking Team Performance – Part 6: It Lives! But it’s Out of Control!

Next article in the series: Assessing and Tracking Team Performance – Part 8: Taming the Wild Horses

References

1. The core of the model I use to assess team and organization health is based on the work of James Lyneis and David Ford: “System Dynamics Applied to Project Management,” System Dynamics Review, Vol. 23, No. 2/3, Summer/Fall 2007.

Assessing and Tracking Team Performance – Part 6: It Lives! But it’s Out of Control!

In the previous article for this series, I described three options managers could consider if moving the project deadline was out of the question.

  1. Increase employee work intensity
  2. Call for overtime
  3. Hire people

On the face of it, each appeared to offer a path toward bringing a drifting schedule back on time. Now let’s look a little further down the road to see what happens when the juice is applied to each of these options in turn. If we implement any of these options, what are the likely consequences?

We know that errors in the work flow are unavoidable. If we encourage or pressure the development team to finish more work in less time (the Work Faster Loop,1 Figure 1, C), the result will be an increase in errors along with an increase in the amount of Work Done.

Figure 1

This is the Haste Makes Waste Loop (Figure 1, F). In other words, an increase in Work Intensity brings a concomitant increase in the Error Fraction, which means an increase in the Errors generated. The extended consequence of pulling the Work Intensity lever is an increase in Work to Do in the form of extra Rework to Do.

OK. So Option 1 isn’t a get-out-of-jail-free card. There are strings attached. How about Option 2, call for the development team to work overtime?

Figure 2

By increasing Overtime, the risk of Fatigue increases sharply. This results in yet another increase in the Error Fraction (tired people make more mistakes than rested people) and a decrease in Productivity (tired people don’t work as efficiently as rested people). Both slow down Progress and increase the amount of Rework to Do in the system. This is the Burnout Loop (Figure 2, G).
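
A toy time-step simulation makes the loop’s arithmetic visible. This is a loose illustration with hypothetical coefficients, not the Lyneis/Ford model: overtime beats the rested baseline for the first several weeks, then accumulated Fatigue drives the Error Fraction up and net Progress down below what a rested team would deliver.

    # Burnout Loop sketch: overtime raises effort but also fatigue, which
    # raises errors (rework) and lowers productivity. All numbers invented.
    work_to_do = 1000.0   # remaining work, including rework
    fatigue = 0.0         # 0 = rested, 1 = exhausted
    base_rate = 20.0      # units/week at normal hours, rested
    # Rested baseline for comparison: 20 * (1 - 0.10) = 18 units/week net.

    for week in range(1, 25):
        overtime = 0.5                                  # 50% extra hours
        fatigue = min(1.0, fatigue + 0.08 * overtime)   # fatigue accumulates
        productivity = base_rate * (1 + overtime) * (1 - 0.5 * fatigue)
        error_fraction = 0.10 + 0.30 * fatigue          # tired people err more
        net_progress = productivity * (1 - error_fraction)
        work_to_do = max(0.0, work_to_do - net_progress)
        print(f"week {week:2d}: fatigue={fatigue:.2f} "
              f"net={net_progress:5.1f} remaining={work_to_do:6.1f}")
    # Around week 11, net progress falls below the rested baseline of 18.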

OK. So Option 2 doesn’t lead to sunshine and roses. There are dark clouds and weeds in the mix. Let’s give Option 3 a go, hire more people!

Figure 3

So we’ve beefed up the Workforce by hiring a bunch of people to join the team. With all those extra people in the mix, we’ve also increased the overall Congestion and Communication Difficulties. The email traffic increases, everyone’s Inbox fills up faster, and meeting sizes increase along with the number of meetings. The signal-to-noise ratio decreases and miscommunication increases. This increases the Error Fraction, decreases Productivity, and decreases Progress. End result: the Too Big to Manage Loop (Figure 3, H).

But that’s not all. By hiring extra people, we’ve activated the Expertise Dilution Loop (Figure 4, I).

Figure 4

All those new hires don’t come in off the street ready to go. They decrease the depth of Experience available to focus on making progress. Experienced employees have to slow down and assist new employees in understanding the technical systems, the architecture, and the development standards. New employees will need some period of time to become familiar with the work environment, the project objectives, who’s who, and where the coffee is.

As they work to understand and gain experience with the systems, new hires will necessarily make mistakes and increase the Error Fraction. While there are more workers available to focus on the product backlog, the available expertise is spread much more thinly and is collectively less experienced until such time as the new workers are up to speed with what needs to be done and how. So errors go up and Productivity goes down. The downstream effect is often a further increase in the Delay to Completion. As the saying goes, throwing more people at the problem more often than not makes the problem worse.

OK. So no unicorns and rainbows here either. More like a lot of warthogs and rain.

It looks like the first-level effects were negated by the second-level consequences. That’s bad enough, but the third-level consequences can be even worse in that they are often much longer lasting and much more difficult to resolve. We’ll look at those in the next article in this series.

Previous article in the series: Assessing and Tracking Team Performance – Part 5: Welcome to the Labyrinth

Next article in the series: Assessing and Tracking Team Performance – Part 7: “Abandon All Hope,…”

References

1. The core of the model I use to assess team and organization health is based on the work of James Lyneis and David Ford: “System Dynamics Applied to Project Management,” System Dynamics Review, Vol. 23, No. 2/3, Summer/Fall 2007.

Assessing and Tracking Team Performance – Part 5: Welcome to the Labyrinth

The capable product owners I know have at least an intuitive understanding that the challenge of guiding a project through to completion is more than a bit like Theseus on his way to defeat the Minotaur. The great product owners have a much more present awareness of the labyrinth before them. Depending on the project, the team, and the work environment, the product backlog just might be the easy piece. It’s more knowable than the myriad ways a system can work against project success.

The purpose of this series of articles is to shine light on those wily ways of the system, to make more known what capable product owners intuit, to help you become a great product owner.1

In the previous article, we covered how a project can end up with a growing delay to completion. The obvious fix was to push out the deadline, thus erasing the delay (the Shift Deadline Loop, Figure 1, B). Management has a strong dislike for this and often avoids changing deadlines even when the consequences of doing so would be minimal. More likely, there are other factors that make the consequences significantly greater. Perhaps there are budget constraints, or a delivery date tied to a major event like the launch of a suite of related products or a conference.

So if management is faced with an unmovable deadline, the Delay to Completion must be resolved by some other means.

Figure 1

With more work to do and less time to do it, there is now a Talent Resource Deficit. X number of employees working 40 hours a week will no longer get the work done on time. Management’s next set of options lies with changing the behaviors of the development team. We’ll consider three of these options.

The first option is to put pressure on the development team to focus more on work during the time they are working. Maybe this involves tightening the work hours people are expected to be available. Or restricting remote work so team members are in close proximity for longer periods of the day, in the hope of shortening the delays inherent in remote communication and problem solving. Or working to eliminate distractions in the workplace. There are many possibilities here.

Figure 2

This is the Work Faster Loop (Figure 2, C) – complete more work in less time. If the development team is more focused, the thinking goes, Productivity will increase and in turn drive an increase in Progress. More Progress leads to less Work to Do which leads to less Total Known Remaining Work which leads to less Time Required to Complete Work and a decrease in the Delay to Completion. Eventually, the Talent Resource Deficit is reduced and the development team can relax a bit.

This looks great in principle. We’ll get to the messy reality in a future article, but for now, we just need to understand how management typically thinks things should work.

The second option is to ask the development team to work Overtime.

Figure 3

Officially, management asks. Unofficially, it isn’t presented as an option. If the development team is putting in more hours, the thinking goes, then the amount of Effort being applied to the work stream increases. As with an increase in Work Intensity, this works its way through the system to reduce the Delay to Completion and ultimately, the development team will no longer need to put in extra hours. This is the Work More Loop (Figure 3, D).

The third option is to simply hire more people to work on the development team.

Figure 4

By deciding to Hire Talent, management will increase the Workforce and once again increase the Effort aimed at increasing progress. As with the increase in Work Intensity and Overtime, this eventually manifests as a decrease in the Delay to Completion. This is the Add People Loop (Figure 4, E).

There you have it. Schedule slipping? Flip one or more of the following switches…

  1. Extend the deadline
  2. Increase employee work intensity
  3. Call for overtime
  4. Hire people

…and in short order the system will be back in balance and the project on schedule. Problem solved.
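
In code, the switch-flipping model above amounts to nothing more than this bit of linear arithmetic (hypothetical numbers, and deliberately ignoring every second-order effect covered in the next article):

    # Naive throughput math: every lever scales linearly, nothing pushes back.
    def weeks(remaining, people, hours_per_week, rate, intensity=1.0):
        """rate: work units per person-hour at normal intensity."""
        return remaining / (people * hours_per_week * rate * intensity)

    remaining = 1000  # work units left

    print(weeks(remaining, people=8, hours_per_week=40, rate=0.0625))                 # 50.0: too slow
    print(weeks(remaining, people=8, hours_per_week=40, rate=0.0625, intensity=1.2))  # ~41.7: Work Faster
    print(weeks(remaining, people=8, hours_per_week=50, rate=0.0625))                 # 40.0: Work More
    print(weeks(remaining, people=12, hours_per_week=40, rate=0.0625))                # ~33.3: Add People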

Not so fast there, young Theseus. Remember, there’s a Minotaur on the hunt for you somewhere in this labyrinth. In the next article of this series, we’ll begin looking at some of the ways this simplistic machine thinking can go sideways…fast.

Previous article in the series: Assessing and Tracking Team Performance – Part 4: Let the Interactions Begin!

Next article in the series: Assessing and Tracking Team Performance – Part 6: It Lives! But it’s Out of Control!

References

1. The core of the model I use to assess team and organization health is based on the work of James Lyneis and David Ford: “System Dynamics Applied to Project Management,” System Dynamics Review, Vol. 23, No. 2/3, Summer/Fall 2007.