Teaching Product Owners How To Fish

Product owners are responsible for defining WHAT the Agile team needs to create. The team is responsible for deciding HOW to build WHAT the product owner needs.

I don’t think any Agilist would fundamentally disagree with that statement. Where there is inevitably a great deal of discussion and disagreement is in describing the boundary between “WHAT” and “HOW.” Is it thin or thick? How much overlap is there? Does it depend on the nature of the work or is there a fixed standard? Does it have to be determined for every story? These are often not easy questions to answer.

While reading some neuroscience material yesterday regarding how our brains construct “concepts,” the topic led into the notion of “goal-based concepts.” Since I’m not a neuroscientist but an Agilist looking for ways neuroscientific discoveries can be applied to the implementation of Agile principles and practices, the material had me thinking about how product owners might learn from the idea of “goal-based concepts.” How can I teach them about the “WHAT” such that they better understand the type and amount of detail the team needs in order to figure out the “HOW?”

I came up with a series of short dialogs:

Product Owner (to team): I need a fish.

Team: OK.

(Team goes off to work on “fish” and returns the next day with a catfish.)

PO: That’s not what I wanted! I need a fish!

(Team goes off to work on “fish” and returns the next day with a nurse shark.)

PO: That’s not what I wanted either!

What if the PO had been clearer about not just WHAT she wanted, but also the goal for that WHAT? That is, a description of the problem the WHAT was intended to solve.

PO: I’m opening a fish store. I need a fish.

(Team goes off to work on “fish for fish store” and returns the next day with a goldfish.)

PO: That’s good. But it’s not enough. I need more fish. Different kinds of fish.

(The team goes off to work on “variety of fish for fish store” and returns the next day with guppies, mollies, swordtails, and angelfish.)

PO: Cool!

Or version two:

PO: I’m opening a restaurant. I need a fish.

(Team goes off to work on “fish for restaurant” and returns the next day with Poached Salmon in Dill Sauce.)

PO: That’s good. But it’s not enough. I need more fish. Different kinds of fish.

(The team goes off to work on “variety of fish for restaurant” and returns the next day with Miso-Glazed Chilean Sea Bass, Mediterranean Stuffed Swordfish, and Pan Seared Lemon Tilapia.)

PO: Yum!

Hopefully, you can see that providing a little of the “WHY” for the “WHAT” helped the team do a better job of delivering WHAT the product owner wanted.

Of course, these are simplified examples. There are many additional details the product owner could have supplied that would have helped the team dial in on exactly WHAT she needed. In the first dialog, the team had to make guesses about what the product owner meant by the rather broad concept of “fish.” In the subsequent dialogs, the team at least had the benefit of the context that was of interest to the product owner. In these cases, the team had a better understanding of the goal the product owner had in mind.

In Agile-speak, this context or goal information would be provided in the “As a…” and “…so that…” part of the story. “As a restaurant owner I want fish so that patrons can enjoy a variety of menu options.”

The less clear the product owner is on these elements, the longer it’s going to take for the team to guess what she really wants.

Concave, Convex, and Nonlinear Fragility

Nassim Nicholas Taleb’s book, “Antifragile,” is a wealth of information. I’ve returned to it often since first reading it several years ago. My latest revisit has been to better understand his ideas about representing the nonlinear and asymmetric aspects of fragile/antifragile in terms of “concave” and “convex.” My first read of this left me a bit confused, but I got the gist of it and moved on. Taleb is a very smart guy so I need to understand this.

The first thing I needed to sort out on this revisit was Taleb’s use of language. The fragile/antifragile comparison is variously described in his book as:

  • Concave/Convex
  • Slumped solicitor/Humped solicitor
  • Curves inward/Curves outward
  • Frown/Smile
  • Negative convexity effects/Positive convexity effects
  • Pain more than gain/Gain more than pain
  • Doesn’t “like” volatility (presumably)/“Likes” volatility

Tracking his descriptions is made a little more challenging by reversals in reference when writing of both together (concave and convex, then convex and concave) and mismatches between the text and illustrations. For example:

Nonlinearity comes in two kinds: concave (curves inward), as in the case of the king and the stone, or its opposite, convex (curves outward). And of course, mixed, with concave and convex sections. (note the order: concave / convex) Figures 10 and 11 show the following simplifications of nonlinearity: the convex and the concave resemble a smile and a frown, respectively. (note the order: convex / concave)

In Figure 10, “convex, curves outward” is illustrated as an upward curve and “concave, curves inward” is illustrated as a downward curve. Outward is upward and inward is downward. It reads like a yoga pose instruction or a play-by-play call for a game of Twister.

After this presentation, Taleb simplifies the ideas:

I use the term “convexity effect” for both, in order to simplify the vocabulary, saying “positive convexity effects” and “negative convexity effects.”

This was helpful. The big gain is when Taleb gets to the math and graphs what he’s talking about. Maybe the presentation to this point is helpful to non-math thinkers, but for me it was more obfuscating than illuminating. My adaptation of the graphs presented by Taleb:

With this picture, it’s easier for me to understand the non-linear relationship between a variable’s volatility and fragility vs antifragility. The rest of the chapter is easier to understand with this picture of the relationships in mind.

Estimating Effort – Adaptation

I’ve been running the informed intuition (or if you prefer, “disciplined intuition”) approach to estimating effort for close to nine months now. For the most part, it has gone very well. The primary objective – inspire and support a conversation around the effort needed to complete a story – has most definitely been realized. Along the way the process has shifted to better support both the conversation and the team’s ability to internalize the process.

Originally, it was proposed that teams rate each of the effort characteristics on a sliding scale – 1 to 10 or 1 to 15, or whatever the team decided was most useful. Feedback from the teams led to the discovery that it is easier to evaluate each effort characteristic using the modified Fibonacci scale rather than a sliding scale. This provides continuity across the method in that everything about a story’s effort value is considered using the same scale. It also reinforces the rationale behind the use of the Fibonacci scale and seems to facilitate the team’s ability to internalize the method. They are moving more quickly when deriving effort values.
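To make the adaptation concrete, here is a minimal sketch of the scoring in code. The characteristic names and values are hypothetical, and the idea of surfacing the highest-scoring characteristics first is my own illustration of how the scores can seed the conversation, not a prescribed step:

```python
# Sketch: every effort characteristic is rated on the same modified
# Fibonacci scale the team uses for the final effort value.
MODIFIED_FIBONACCI = (1, 2, 3, 5, 8, 13, 20, 40, 100)

def rate(name, score):
    """Validate that a characteristic score sits on the shared scale."""
    if score not in MODIFIED_FIBONACCI:
        raise ValueError(f"{name}={score} is not a modified Fibonacci value")
    return score

# Hypothetical scores for one story, agreed on by the team.
story_scores = {name: rate(name, s) for name, s in {
    "complexity": 5,
    "dependencies": 3,
    "familiarity": 8,   # a new kind of work for this team
    "information": 2,
}.items()}

# Surface the biggest effort drivers to anchor the team's conversation.
drivers = sorted(story_scores.items(), key=lambda kv: kv[1], reverse=True)
print("Discuss these first:", drivers[:2])
```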

A second adaptation is the use of several sets of characteristics, depending on the type of story, the predominant functional area represented by the team, and the nature of the work. For example, a story that involves the development of a computer board has a different set of criteria from stories that involve the creation of firmware for the board or the UI/UX features of the hardware product. The sets usually contain 3 or 4 common characteristics, such as “complexity” or “dependencies.” However, the hardware board may include something like “part sourcing” or “compliance testing.” This illustrates the importance of having the team deconstruct what “effort” means in the context of their world. When they determine the characteristics, the follow-on conversations about the effort are much more robust and meaningful.
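As a sketch of how these per-story-type sets might be recorded, using the hardware/firmware/UI-UX split from the example above (the type names and groupings are illustrative; each team defines its own):

```python
# Illustrative per-story-type effort characteristic sets. The common
# characteristics appear in every set; the extras mirror the examples above.
COMMON = ["complexity", "dependencies", "information"]

CHARACTERISTIC_SETS = {
    "hardware_board": COMMON + ["part sourcing", "compliance testing"],
    "firmware":       COMMON + ["design stability"],
    "ui_ux":          COMMON + ["familiarity", "tedium"],
}

def characteristics_for(story_type):
    """Return the effort characteristics the team agreed on for this story type."""
    return CHARACTERISTIC_SETS[story_type]

print(characteristics_for("hardware_board"))
# ['complexity', 'dependencies', 'information', 'part sourcing', 'compliance testing']
```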

In essence, this method is a reflection of the product owner’s responsibility for the “what” of the story and the team’s responsibility for figuring out the “how” of the story. “What I want,” says the product owner, “is an estimate of the effort involved to complete this story.” The team’s effort criteria demonstrate to the product owner how they arrive at any particular value.

Intuition and Effort Estimates

In his book, “Blink,” Malcolm Gladwell describes an interview between Gary Klein and a fire department commander. The commander, a lieutenant at the time, and his men were attempting to put out a kitchen fire that didn’t “behave” like a kitchen fire should. The lieutenant ordered his men out of the house moments before the floor collapsed; the fire was in the basement, not the kitchen. Klein later deconstructed the event with the commander and revealed a surprisingly rich set of experience-based characteristics the commander had used to quickly evaluate the situation and respond. The lieutenant’s quick and well-calibrated-to-the-situation intuition undoubtedly saved the crew from serious injury or worse.

Intuition, however, is domain-specific. This same experience-based intuition most probably wouldn’t have served the commander well if he suddenly found himself in a different situation – at the helm of a sailboat in rough water, for example, assuming he had never been on a sailboat before.

In the context of a software development environment, a highly experienced individual may have very good intuition about the amount of work needed to complete a specific piece of work assigned to them. But that intuition breaks down when the work effort necessarily involves several people or an entire team. So while intuition can serve a useful role in estimating work effort, its value is generally over-estimated, particularly when the estimate needs to be a team estimate.

Consider work effort estimates when framed by Daniel Kahneman’s work on System One and System Two thinking. System One is fast, experience-based, and automatic. However, it isn’t very flexible and it’s difficult to train. This is the source of intuition. System Two, however, is analytical, methodical, intentional, deliberate, and slower. It’s also more trainable. It’s when the things trained in System Two sink into System One that new behaviors become automatic. With work effort estimates, we must first train our System Two using a deliberate estimation method before we can comfortably rely on our System One abilities.

Once calibrated, any number of changes could signal the need to re-calibrate by employing the deliberate process. Change the team composition and the team will need some measure of re-training of System One via System Two. Change a team’s project and the same re-training will need to occur.

The trained intuition approach to estimating effort develops what Kahneman called “disciplined intuition.” Begin with a deliberate, statistical approach to thinking about work effort. Establish a base rate using the value ranges for the effort characteristics. With experience, the team can begin to integrate their intuition later in the project process. If teams lead with their intuition (as is the case with planning poker and t-shirt sizes), they will filter for things that confirm their System One evaluation. With experience and a track record of success from training their intuition, teams can eventually lead with an intuitive approach. But it isn’t a very effective way to begin.

This method also leverages the work of Anders Ericsson and deliberate practice. The key here is the notion of increasing feedback into the process of estimating work effort. The deliberate action of working through a conversation that evaluates each of the work effort characteristics introduces more and better feedback loops that help the team evaluate the quality of their decision. Over time, they get better and better at correcting course and internalizing the lessons.

It’s like learning to drive a car. A new driver will leverage System Two heavily before they can comfortably rely on System One while driving. This is good enough for most driving situations. However, it wouldn’t be good enough if that same driver who is competent at driving in city traffic was suddenly placed on a NASCAR track in a powerful machine going 200 miles per hour.

A NASCAR track might be where we would go look for expert drivers but not where we would look for competent delivery truck drivers. For work estimates on software projects, we’re looking for a level of good enough that’s a reasonable match for the project work at hand. And we’re looking for better than untrained intuitive guesses.

Determining Effort Value – Tactics

While the concept and practice are straightforward, shifting a team from intuitive guesses about story points to a more deliberate approach for determining effort value (a.k.a. story points) can be a challenge at first. The following approach may help start the process.

  1. Begin by focusing on product backlog items (PBIs) that the team, using their previous approach, has estimated at 5 or greater. There isn’t much to be gained by applying this approach to PBIs estimated at 1 or 2. PBIs that the team knows are a bigger effort, but may not be able to articulate why, are good candidates for learning how to apply this technique.
  2. Ask the team how much time it may take to complete a PBI. While I have written before about the importance of excluding time criteria when determining effort values, this can be a good place to start. It is what teams are most familiar with – for better or worse. Teams usually have no problem throwing out a time: 8 hours, 16 hours, etc.
  3. With the time estimate in hand, ask the team:

“If you sit in front of your computer and start the clock, will the PBI be done if you do nothing and the estimated time elapses?”

I would hope the team would answer “No.”

  4. With the answer to the first question in hand, ask the following question:

“If the passage of time alone won’t get the PBI work completed, what will you be doing (actions and behaviors) to complete the work?”

The conversation that follows from this question is the basis for determining the effort criteria the team needs in order to better describe what they will be doing on their way to completing the PBI. The techniques around establishing effort criteria are described in an earlier post.

Time Out!

In Estimating Effort – An Explicitly Implicit Approach I stated that time cannot be one of the attributes the team uses to describe what they mean by “effort.” The importance of this warrants the need for a deeper dive into the rationale behind this rule and how excluding time can lead to better predictability for team performance.

The primary objective in coaching teams to think about effort independent of time constraints is to improve their skills for thinking about the actual work involved. Certainly they will spend time completing the work. But the simple passage of time won’t get the work done. Someone has to actually DO something. That something is the effort.

For example, maybe someone on the team says the product backlog item requires a lot of documentation. It isn’t complex and there aren’t any dependencies, it’s just going to take a lot of time – 7 days, maybe. So they want to give that PBI an effort value of 5 or 8 (or 5 or 8 story points, if that’s what you’re using) because it’s going to take a lot of time.

Remember, the purpose of these criteria is to generate a conversation around what the actual effort is. The criteria are just a set of guideposts that help the team hold a meaningful conversation about the effort.  So when someone on a team insists that they estimate using time, I ask them “What are you doing as the time you’ve estimated is passing? Are you just sitting there, watching the seconds tick away?” Of course they aren’t just sitting there. I’m asking the questions to elicit a comment about the actual work they are doing. Maybe they answer with something a little less vague, like “typing words.” That’s good. “What’s the difference between typing those words in a word processor and typing code in Vim?”

Continuing down this line of inquiry usually leads to the realization that typing documentation has many similar traits to coding. It can be complex. It may have dependencies.  It may require research for accuracy and it certainly will need a lot of debugging (professional writers call this “editing.”) Coders typically don’t like writing documentation. To them it’s just about the tedium of banging something out that’s not as fun as code. Sussing out the effort like this will lead to better acceptance criteria and definition of done associated with the PBI.

The downside of time estimates is that they hide all manner of sins and rabbit holes. The planning fallacy, precision bias, availability heuristic, and survivorship bias are just a few of the mental obstacles guaranteed to reduce the accuracy of time estimates. Or you may have to deal with a team member who wants to estimate using time because they know full well it offers the opportunity to hide slow work. (Gamers gotta game.) When teams have run the gauntlet of effort criteria, they are more likely to end up with a better picture of how much work they are being asked to do when time is excluded from the conversation. Effort criteria force the team to be more explicit about the activities they are engaged with as the clock ticks.

The investment in identifying time-independent effort criteria yields further benefits in the retrospective. Was the team unable to complete a PBI in the sprint? Was all the work finished two days early? Have a look at the effort criteria and ask which of them were a factor in making the PBIs a bigger or smaller effort than initially estimated. This is how teams learn and improve their skill at estimating. The better they are at estimating the more predictable their productivity.

OK, so let’s say you have a team doing a great job of determining the effort needed to complete a PBI and they do so without including time. No doubt, management will be unimpressed. They want time estimates. Good news! We can give them time estimates…in two-week increments.

With the team focused on figuring out time-independent effort values for every PBI in the backlog, and an ongoing record of how much effort they can reliably complete in two-week increments, product owners can provide a reasonable forecast for when the release or project will be complete. The team focuses on accurate, time-independent effort estimates. The scrum master and product owner worry about the performance metrics and time projections.
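The forecasting arithmetic itself is simple. Here is a rough sketch, with every number hypothetical: divide the effort remaining in the backlog by the effort the team reliably completes per two-week sprint, then project forward:

```python
# Rough forecast sketch: remaining backlog effort divided by the team's
# observed effort completed per two-week sprint. All numbers hypothetical.
from datetime import date, timedelta
from math import ceil

remaining_effort = sum([8, 5, 5, 3, 13, 8, 2])  # effort values left in the backlog
effort_per_sprint = 14                          # reliably completed per sprint
sprint_length = timedelta(weeks=2)

sprints_left = ceil(remaining_effort / effort_per_sprint)
forecast = date.today() + sprints_left * sprint_length

print(f"{sprints_left} sprints remaining; completion around {forecast}")
```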

It’s surprising how hard of a sell this can be for teams. They are hard wired to think in terms of time because that’s what traditional project management has hounded them for since before coding was a thing. I tell teams, “With Agile and scrum, you no longer have to worry about time. That’s the product owner’s job. But you do have to develop very good skills at estimating effort.” It’s common for them to have a hard time adjusting to the new paradigm.

Estimating Effort – An Explicitly Implicit Approach

It is difficult to make predictions, especially about the future. – Unknown

Sage advice.

So why bother estimating the amount of work needed to complete a product backlog item? After all, since estimates are about the future, the probability is high that they will be wrong. Actually, they may very well be guaranteed to be wrong. It’s just that some of the guesses will be more accurate than others. And if they happen to match what the effort ended up being, they just look like they were “right.”

I’ve written in the past expressing my thoughts about estimating the effort needed to complete product backlog items, particularly with respect to story points. I believe working to find a relative gauge of how well teams are estimating work is important. Without one, cognitive biases such as the optimism bias and planning fallacy can significantly distort a project delivery timeline. However, the phrase “story point” is burdened with a lot of baggage. It has been abused and misused such that invoking the phrase often causes more harm than good.

I’ve been experimenting recently with a different approach to estimating effort. The method I’ll describe in this post got a bit of a boost after listening to a recent interview with psychologist and Nobel laureate Daniel Kahneman. In this interview, Kahneman describes an experience he had while serving in the Israeli army some sixty years ago. He was assigned the job of setting up an interview process that would determine how well a recruit would do as a combat soldier. For this process, he selected six traits and instructed the interviewers to ask questions designed to evaluate each trait independently and score them. The interviewers were not happy with this approach. As a compromise, Kahneman instructed the interviewers, when they were finished asking about the six traits, to close their eyes and just jot down a number they felt matched how good a soldier the recruit might be. What he discovered:

When we validated the results of the interview, it was a big improvement on what had gone on before. But the other surprise was that the final intuitive judgments added, it was good. It was as good as the average of the six traits, and not the same. It added information, so actually we ended up with a score that was half determined by the specific ratings, and the intuition got half the weight. That, by the way, stayed in the Israeli army for well over 50 years. – Daniel Kahneman

This intuitive evaluation made by the interviewers is similar to what Agile methods ask of development teams when determining a value for “story points.” T-shirt sizes, planning poker, dot voting, affinity mapping, and many similar techniques are all designed to elicit an intuitive sense of the effort involved. If there is a disagreement between team members, then a dialog follows to understand what the discrepancy is all about. This continues until there is alignment on what the team believes the effort to be. When it works, it works well.

So on to the details of the approach I’ve been experimenting with. (It doesn’t have a name yet.) The result of this approach is a number I call the “effort value.” The word “value” is a reference to the actual elementary mathematics value being derived. Much like the answer to the question “What value results from adding 2 and 2?” Answer: 4. The word “value” also suggests an intrinsic worth, something beyond a hard number. My theory is that this will help teams think beyond the mere number and think also about the value they are delivering to stakeholders. The word “point” correlates to a hard number and lacks any association to intrinsic worth or value.

Changing the words introduces a simple and small shift that nonetheless has a significant impact. With the change, teams are more open to considering a different approach to determining estimates.

So how is the effort value derived?

I begin by having the team define 4-5 characteristics or attributes that, to them, describe what they mean by “effort.” It is important for the team to define these attributes. By doing so, they own the definition and it becomes much harder for them to dismiss the attributes as “someone else’s” and thereby object to their use in deriving an effort value. These attributes can be anything that is meaningful to the team. Examples:

  • Complexity – Is the work straightforward (e.g. code a bubble sort function) or does it involve interrelated systems (e.g. code a predictive inventory control algorithm)?
  • Dependencies – How dependent is the product backlog item on other backlog items or other teams?
  • Familiarity – Is this work very similar to work the team has done in the past or something quite new? Tasking a coder with documenting a piece of straightforward code may actually be a difficult effort because the coding language they spend most of their day with is familiar whereas writing clear sentences that non-technical people can understand is unfamiliar.
  • Information – Is the detail in the product backlog item complete? Are the acceptance criteria and definition of done clear?
  • Technical Debt Risk – Does the PBI require any refactoring of related code? Is any technical debt being incurred with the PBI?
  • Design Stability – Is there a lot of discovery and exploration needed to complete the PBI?
  • Confidence for Completing a PBI within the Sprint – This category may roll up several categories.
  • Tedium – Perhaps the effort involves a lot of repetitive copy and paste that nonetheless requires careful attention to avoid simple mistakes.

The team can define any attribute they wish. However, there are a few criteria to consider:

  • Keep the list limited to 4-6 attributes. More than that risks turning the derivation of an effort value into the equivalent of a product backlog item navel-gazing exercise.
  • Time cannot be one of the attributes.
  • The attributes should be reasonable. Assessing a product backlog item’s effort value by evaluating its “aura” or the current position of the stars is generally not useful. On the other hand, I’ve listened to arguments against evaluating estimates in terms of “complexity” as being similarly useless. I see the point of those arguments, but my view is that the attributes must first and foremost be meaningful to the entire team. In the end, it’s an educated guess, and arguments about the definition of terms like “complexity” are counterproductive to the overall intent of deriving an effort value.

Each attribute is then given a scale – 1 to 10, 1 to 15, whatever the team feels is most appropriate – the same scale for every attribute. The team then evaluates the product backlog item against each attribute on that scale. (NB: After nine months of Plan-Do-Check-Adapt, a better approach for scoring the attributes has been determined.) The low end of the scale represents very little impact. If dependency, for example, is one of the attributes, then a 1 might mean that the product backlog item is entirely self-contained. A 10 might represent a case where the product backlog item is dependent on several other product backlog items or perhaps the output from other teams.

When this is done, ask the team where on the modified Fibonacci scale they think this particular product backlog item’s effort value should be. If they’re struggling, you can do the math: find the average of all the attribute scores and match that number to the modified Fibonacci scale. If the average is a decimal, for example 3.1, match it to the next highest modified Fibonacci number. In this case the value would be 5. Then ask the team if they feel that number is a good representation of the effort value for the product backlog item.
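For the mathematically inclined, here is a minimal sketch of that fallback calculation, assuming hypothetical attribute names and 1-to-10 scores:

```python
# Sketch of the fallback math: average the attribute scores, then match
# the result to the next highest modified Fibonacci number.
MODIFIED_FIBONACCI = (1, 2, 3, 5, 8, 13, 20, 40, 100)

def effort_value(attribute_scores):
    """Average the 1-10 attribute scores and map to the modified Fibonacci scale."""
    average = sum(attribute_scores.values()) / len(attribute_scores)
    # An average of 3.1 maps to 5, matching the example in the text.
    return next(f for f in MODIFIED_FIBONACCI if f >= average)

scores = {"complexity": 4, "dependencies": 2, "familiarity": 5, "information": 3}
print(effort_value(scores))  # average 3.5 -> effort value 5
```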

This may seem like a lot of unnecessary gyrations, but for technical people it’s a simple process they can understand. The bonus is a number they can calculate. The number isn’t what’s important here. What’s important is the conversation that happens around the attributes and what the team feels about the number that results from the conversation. This exercise is meant to develop their intuitive muscles for considering multiple aspects and dimensions behind the “effort” needed for them to get the work done.

Use this process enough times and eventually calculating the average can be dropped. Continue using it and eventually scoring the individual attributes can be dropped as well. I don’t know if it’s a good idea to drop the attributes themselves, since they generate the needed conversation around the effort, but it will certainly be valuable to reconsider the list of attributes from time to time so as to fine-tune it to match what the team feels is important.

With this approach I’m turning the estimation process on its head (or back on its feet, if Kahneman is right.) Rather than seek the intuitive response first (e.g. t-shirt size) and elicit details later if there is a mismatch between team members, this method seeks to better prime and develop the team’s intuition about the effort value by having them explicitly consider a list of self-selected attributes (or traits) for effort first and then include an intuitive evaluation for effort.

Don’t try to form an intuition quickly, which was what we normally do. Focus on the separate points, and then when you have the whole profile, then you can have an intuition and it’s going to be better. Because people form intuitions too quickly, and the rapid intuitions are not particularly good. If you delay intuition until you have more information, it’s going to be better. – Daniel Kahneman

Update

See Time Out! and Determining Effort Value – Tactics for additional information on this technique.

Assessing and Tracking Team Performance – Part 9: Design Changes and Scope

Changes in design can be either tightly or loosely coupled to changes in scope, but in general you can’t change one without affecting the other. This is how I think of design and scope. Others think of them differently.

Few people intentionally change the scope of a project. Design changes, however, are usually intentional and frequent. They are also usually small relative to the overall project design so their effect on scope and progress can go unnoticed.

Nonetheless, small design changes are additive. Accumulate enough of them and it becomes apparent that scope has been affected. Few people recognize what has happened until it’s too late. A successive string of “little UI tweaks,” a “simple” addition to handle another file format that turned out to be not-so-simple to implement, a feature request slipped in by a senior executive to please a super important client – changes like this incrementally and adversely impact the delivery team’s performance.

Scope changes primarily impact the amount of Work to Do (Figure 1). Of course, Scope changes impact other parts of the system, too. The extent depends on the size of the Scope change and how management responds to the change in Scope. Do they push out the Deadline? Do they Hire Talent?

Figure 1

The effect of Design Changes on the system is more immediate and significant. Progress slows down while the system works to understand and respond to the Design Changes. As with Scope, the effect will depend on the extent of the Design Changes introduced into the system. The amount of Work to Do will increase. The development team will need to switch focus to study the changes (Task Switching). If other teams are dependent on completion of prior work or are waiting for the new changes, Overlap and Concurrence will increase. To incorporate the changes mid-project, Technical Debt will likely be incurred in order to keep the project on schedule. And if the design impacts work already completed or in progress, the amount of Rework to Do will increase for the areas impacted by the Design Changes.

Perhaps the most important secondary consequence of uncontrolled design changes is the effect on morale. Development teams love a good challenge and solving problems. But this only has a positive effect on morale if the goal posts don’t change much. If the end is perpetually just over the next hill, morale begins to suffer. This hit to morale usually happens much quicker than most managers realize.

It is better to push off non-critical design changes to a future release. This very act often serves as a clear demonstration to development teams that management is actively working to control scope and can have a positive effect on the team’s morale, even if they are under a heavy workload.

 

Center Point: The Product Backlog

If a rock wall is what is needed, but the only material available is a large boulder, how can you go about transforming the latter into the former?

Short answer: Work.

Longer answer: Lots of work.

Whether the task is breaking apart a physical boulder into pieces suitable for building a wall or breaking up an idea into actionable tasks, there is a lot of work involved. Especially if a team is inexperienced or lacks the skills needed to successfully complete such a process.

Large ideas are difficult to work with. They are difficult to translate into action until they are broken down into more manageable pieces. That is, descriptions of work that can be organized into manageable work streams.

We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one we intend to win, and the others, too. – President John Kennedy

That’s a pretty big boulder. In fact, it’s so big it served quite well as a “product vision” for the effort that eventually got men safely to and from the moon in 1969. Even so, Kennedy’s speech called out the effort it would take to realize the vision. Hard work. Exceptional energy and skill. What followed in the years after Kennedy’s 1962 speech was a whole lot of boulder-busting activity.

A user story is a brief, simple statement from the perspective of the product’s end user. It’s an invitation for a conversation about what’s needed so that the story can meet all the user’s expectations.

In its simplest form, this is all a product backlog is: a collection of doable user stories derived from a larger vision for the product and ordered in a way that allows a realistic path to completion to be defined. While this is simple, creating and maintaining such a thing is difficult.

Experience has taught me that the single biggest impediment to improving a team’s performance is the lack of a well-defined and maintained product backlog. Sprint velocities remain volatile if design changes or priorities are continually clobbering sprint scope. Team morale suffers if they don’t know what they’re going to be working on sprint-to-sprint or, even worse, if the work they have completed will have to be reworked or thrown out. The list of negative ripple effects from a poor quality product backlog is a long one.



How does Agile help with long term planning?

I’m often involved in discussions about Agile that question its efficacy in one way or another. This is, in my view, a very good thing and I highly encourage this line of inquiry. It’s important to challenge the assumptions behind Agile so as to counteract any complacency or expectation that it is a panacea for project management ills. Even so, with apologies to Winston Churchill, Agile is the worst form of project management…except for all the others.

Challenges like this also serve to instill a strong understanding of what an Agile mindset is, how it’s distinct from Agile frameworks, tools, and practices, and where it can best be applied. I would be the first to admit that there are projects for which a traditional waterfall approach is best. (For example, maintenance projects for nuclear power reactors. From experience, I can say traditional waterfall project management is clearly the superior approach in that context.)

A frequent challenge is the idea that with Agile it is difficult to do any long-term planning.

Consider the notion of vanity vs actionable metrics. In many respects, large or long-term plans represent a vanity metric. The more detail added to a plan, the more people tend to believe and behave as if the plan is an accurate reflection of what will actually happen. “Surprised” doesn’t adequately describe the reaction when reality informs managers and leaders of the hard truth. I worked on a multi-million dollar project many years ago for a Fortune 500 company that ended up being canceled. Years of very hard work by hundreds of people went down the drain because projected revenues, based on a software product design over seven years old, were never going to materialize. Customers no longer wanted or needed what the product was offering. Our “solution” no longer had a problem to solve.

Agile – particularly more recent thinking around the values and principles in the Manifesto – acknowledges the cognitive biases in play with long-term plans and attempts to put practices in place that compensate for the risks they introduce into project management. One such bias is reflected in the planning fallacy – the further the planning window extends into the future, the less accurate the plan. An iterative approach to solving problems (some of which just happen to use software) challenges development teams, on up through managers and company leaders, to reassess their direction and make much smaller course corrections to accommodate what’s being learned. As you can well imagine, we may have worked out how to do this in the highly controlled and somewhat predictable domain of software development; however, the critical areas for growth and Agile applicability are at the management and leadership levels of the business.

Another important aspect of the Agile mindset is reflected in the Cone of Uncertainty. It is a deliberate, intentional recognition of the role of uncertainty in project management. Yes, the goal is to squeeze out as much uncertainty (and therefore risk) as possible, but there are limits. With a traditional project management plan, it may look like everything has been accounted for, but the rest of the world isn’t obligated to follow the plan laid out by a team or a company. In essence, an Agile mindset says, “Lift your gaze up off of the plan (the map) and look around for better, newer, more accurate information (the territory). Then update the plan and adjust course accordingly.” In Agile-speak, this is what is behind phrases like “delivery dates emerge.”

Final thought: You’ll probably hear me say many times that nothing in the Agile Manifesto can be taken in isolation. It’s a working system, and some parts of it are more relevant than others depending on the project and the timing. So consider what I’ve presented here in concert with the Agile practices of developing good product visions and sprint goals. Product vision and sprint goals keep the project moving in the desired direction without holding it on an iron-rails track that cannot be changed without a great deal of effort, if at all.

So, to answer the question in the post title: Agile helps with long-term planning by first recognizing the risks inherent in such plans and then implementing process changes that mitigate or eliminate those risks. Unpacking that sentence would consist of listing all the risks inherent in long-term planning and the mechanics behind, and reasons why, scrum, XP, SAFe, LeSS, etc., etc., etc. have been developed.