How To Run an Agile Death March

Found on the Internet…

An experienced scrum master describes their work cycles as going “from being very busy during sprint end/start weeks to be [sic] very bored.” While this scrum master works very hard to fill in the gaps with 1:1’s with the team members and regular training opportunities, they nonetheless ask, “Does anyone have any suggestions of things I am maybe not doing that I should be doing?” One response included the following:

“Now, it could be that you have worked to create a hyper-performing team and there is no further room for improvement. A measure of this is that velocity (or similar metric) has increased by an order of magnitude in the last year.

However, the most likely scenario is that you and your team have become ‘comfortable’ and velocity has not increased significantly in the last few Sprints and/or there is a high variance in velocity.”

This reflects a common misunderstanding of “velocity” and its confusion with “acceleration.” (It also reflects the “more is better” and “winners vs losers” thinking derived from the scrum sports metaphor and points as a way of keeping score. I’ve written about that elsewhere.) Nor does the commenter understand what “order of magnitude” means. A velocity that increases by an order of magnitude in a year isn’t a velocity; it’s an acceleration. That’s a bad thing. This wouldn’t be a “hyper-performing” team. This would be a team headed for a crash, because a continual acceleration in story points completed is untenable. More and more points each sprint isn’t the goal of scrum. A product owner cannot predict when their team might complete a feature or a project if the delivery of work is accelerating throughout the project.

Assuming a typical project, something that continues for a year or more, the team and the project will eventually crash as they are pressured to work more and more hours and cut more and more corners in the interest of completing more and more points. The accumulation of bugs, small and large, will slow progress. Team fatigue will increase and morale will decrease, resulting in turnover and further delays. In common parlance, this is referred to as a “death march.”

Strictly speaking, velocity is some displacement over time. In the case of scrum, it is the number of story points completed in a sprint. We’ve “displaced” some number of story points from being “not done” to “done.” By itself, a single sprint’s velocity isn’t particularly useful. Looking at the velocity of a number of successive sprints, however, is useful. There are two pieces of information from successive sprint velocities that, when considered together, can reveal useful aspects of how well a team is performing. The first is the average over the previous 5 to 8 sprints, a rolling average. As a yardstick, this can provide a measure of predictability. Using this average, a product owner can make a rough calculation of how many sprints remain before completing components or the project, based on the story point information in the product backlog.
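
A minimal sketch of that arithmetic, assuming a six-sprint window; the velocities and backlog total below are made-up numbers for illustration:

```python
# Sketch: rolling-average velocity as a yardstick for a rough sprints-remaining forecast.
# The velocities and backlog size are illustrative, not from a real project.

def rolling_average(velocities, window=6):
    """Average of the most recent `window` sprint velocities."""
    recent = velocities[-window:]
    return sum(recent) / len(recent)

def sprints_remaining(backlog_points, velocities, window=6):
    """Rough forecast: remaining story points divided by the rolling average."""
    return backlog_points / rolling_average(velocities, window)

velocities = [19, 22, 20, 23, 21, 20, 22, 21]  # last eight sprints for a hypothetical team
backlog_points = 168                           # story points remaining in the product backlog

print(f"Rolling average velocity: {rolling_average(velocities):.1f} points/sprint")
print(f"Rough sprints remaining: {sprints_remaining(backlog_points, velocities):.1f}")
```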

The measure of confidence for this prediction would come from an analysis of the variance demonstrated in the sprint velocity values over time. Figures 1 and 2 show the distinction between the value provided by a rolling average and the value provided by the variance in values over time.

Figure 1

Figure 2

In both cases the respective teams have an average velocity of 21 points per sprint. However, the variability in the values over time shows that the team in Figure 1 would have a much higher level of confidence in any predictions based on their past performance than the team shown in Figure 2.
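
A minimal sketch of that distinction, using fabricated velocity histories for two hypothetical teams that both average 21 points per sprint:

```python
# Sketch: identical average velocity, very different predictability.
# Both histories are fabricated so that each averages 21 points per sprint.
from statistics import mean, stdev

steady_team  = [20, 22, 21, 20, 22, 21, 21, 21]  # low variance, Figure 1-like
erratic_team = [10, 34, 15, 30, 12, 31, 14, 22]  # high variance, Figure 2-like

for name, history in (("steady", steady_team), ("erratic", erratic_team)):
    print(f"{name}: mean = {mean(history):.1f}, stdev = {stdev(history):.1f}")
```

The standard deviation (or the coefficient of variation, stdev divided by mean) is a serviceable stand-in for how much confidence to place in forecasts built on the rolling average: the steadier the history, the tighter the forecast.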

What matters is the trend: each sprint’s velocity over a number of sprints. The steady completion of story points (i.e. work) sprint to sprint is the desirable goal. Another way to say this is that a steady velocity makes it possible to predict project delivery dates. In real life, there will be variance (up and down) in sprint velocity over time, and the goal is to guide the project such that this variance stays within a manageable range.

If a team were to set as its goal an increase in the number of story points completed from sprint to sprint then their performance chart might initially look like Figure 3.

Figure 3

Such a pace is unsustainable and eventually the team burns out. Fatigue, decreased morale, and overall dissatisfaction with the project cause team members to quit, and progress grinds to a halt. The fallout of such a collapse is likely to include the buildup of significant technical debt and code errors, as the run-up to the crescendo forced team members to cut corners, take shortcuts, and otherwise compromise the quality of their effort. [1] The resulting performance chart would look something like Figure 4.

Figure 4

All that said, I grant that there is merit in coaching teams to make reasonable improvements in their overall sprint performance. An increase in the overall average velocity might be one way to measure this. However, to press a team into achieving an order of magnitude increase in performance is a fool’s errand and more than likely to end in disaster for the team and the project.

References

[1] Lyneis, J. M., & Ford, D. N. (2007). System dynamics applied to project management: A survey, assessment, and directions for future research. System Dynamics Review, 23(2/3), 157–189.

Openness, Grapevines, and Strangleholds

If you truly value openness on your Agile teams, you must untangle them from the grapevine.

Openness is one of the core scrum values. As stated on Scrum.org:

“The scrum team and its stakeholders agree to be open about all the work and the challenges with performing the work.”

This is a very broad statement, encompassing not only openness around work products and processes, but also each individual’s responsibility for ensuring that any challenges related to overall team performance are identified, acknowledged, and resolved. In my experience, issues with openness related to work products or the processes that impact them are relatively straightforward to recognize and resolve. If a key tool, for example, is misconfigured or ill-suited to what the team needs to accomplish, then the need to focus on issues with the tool should be obvious. If there is an information hoarder on the team preventing the free flow of information, this will reveal itself within a few sprints, after a string of unknown dependencies or misaligned deliverables has had a negative impact on the team’s performance. Similarly, if a team member is struggling with a particular story card and for whatever reason lacks the initiative to ask for help, this will reveal itself in short order.

Satisfying the need for openness around individual and team performance, however, is a much more difficult behavior to measure. Everyone – and by “everyone” I mean everyone – is by nature very sensitive to being called out as having come up short in any way. Maybe it’s a surprise to them. Maybe it isn’t. But it’s always a hot button. As much as we’d like to avoid treading across this terrain, it’s precisely this hypersensitivity that points to where we need to go to make the most effective changes that impact team performance.

At the top of my list of things to constantly scan for at the team level are the degrees of separation (space and time) between a problem and the people who are part of the problem. Variously referred to as “the grapevine,” back channeling, or triangulation, this separation can be one of the most corrosive influences on a team’s trust and their ability to collaborate effectively. From his research over the past 30 years, Joseph Grenny [1] has observed “that you can largely predict the health of an organization by measuring the average lag time between identifying and discussing problems.” I’ve found this to be true. Triangulation and back-channeling add significantly to the lag time.

To illustrate the problem and a possible solution: I was a newly hired scrum master responsible for two teams, about 15 people in total. At the end of my first week I was approached by one of the other scrum masters in the company. “Greg,” they said in a whisper, “You’ve triggered someone’s PTSD by using a bad word.” [2]

Not an easy thing to learn, having been on the job for less than a week. Doubly so because I couldn’t for the life of me think of what I could have said that would have “triggered” a PTSD response. This set me back on my heels, but I did manage to ask the scrum master to please ask this individual to reach out to me so I could speak with them one-to-one and apologize. At the very least, I asked that they suggest the individual contact HR, as a PTSD response triggered by a word is a sign that someone needs help beyond what any one of us can provide. My colleague’s response was “I’ll pass that on to the person who told me about this.”

“Hold up a minute. Your knowledge of this issue is second hand?”

Indeed it was. Someone told someone who told the scrum master who then told me. Knowing this, I retracted my request for the scrum master to pass along my message. The problem here was the grapevine, and a different tack was needed. I coached the scrum master to 1) never bring something like this to me again, 2) inform the person who told them this tale that nothing like it would be passed along to me in the future, and 3) coach that person to do the same with the person who told them. The person for whom this was an issue should either come to me directly or go to my manager. I then coached my manager and my product owners that if anyone were to approach them with a complaint like this, they should listen carefully to the person, acknowledge that they heard them, and also encourage them to speak directly with me.

This should be the strategy for anyone with complaints that do not rise to the level of needing HR intervention. The goal of this approach is to develop behaviors around personal complaints such that everyone on the team knows they have a third person to talk to, and that the issue isn’t going to be resolved unless they talk directly to the person with whom they have an issue. It’s a good strategy for cutting the grapevines and short-circuiting triangulation (or, in my case, quadrangulation). To seal the strategy, I gave a blanket apology to each of my teams the following Monday and let them know what I had requested of my manager and product owners.

The objective was to establish a practice of resolving issues like this at the team level. It’s highly unlikely (and in my case 100% certain) that a person new to a job would have prior knowledge of sensitive words and purposely use language that upsets their new co-workers. The presupposition of malice, or an assumption that a new hire should know such things, suggested a number of systemic issues with the teams, something later revealed to be accurate. It wouldn’t be a stretch to say that in this organization the grapevine had supplanted instant messaging and email as the primary communication channel. With the cooperation of my manager and product owners, several sizable branches of the grapevine had been cut away. Indeed, there was a marked increase in the teams’ attention at stand-ups, and the retrospectives became more animated and productive in the weeks that followed.

Each situation is unique, but the intervention pattern is more broadly applicable: Reduce the number of node hops and associated lag time between the people directly involved with any issues around openness. This in and of itself may not resolve the issues. It didn’t in the example described above. But it does significantly reduce the barriers to applying subsequent techniques for working through the issues to a successful resolution. Removing the grapevine changes the conversation.

References

[1] Grenny, J. (2016, August 19). How to Make Feedback Feel Normal. Harvard Business Review. Retrieved from https://hbr.org/2016/08/how-to-make-feedback-feel-normal

[2] The “bad” word was “refinement.” The team had been using the word “grooming” to refer to backlog refinement and I had suggested we use the more generally accepted word. Apparently, a previous scrum master for the team had been, shall we say, overly zealous in pressing this same recommendation, such that it was a rather traumatic experience for someone on one of the teams. It later became known that this event was grossly exaggerated, “crying PTSD” as a variation of “crying wolf,” and that the reporting scrum master was probably working to establish a superior position. It would have worked, had I simply cowered and accepted the report as complete and accurate. The strategy described in this article proved effective at preventing this type of behavior.

Behind the Curtain: The Delivery Team Member Role

Even with the formalization of Agile practices into numerous frameworks and methodologies, I have to say not much has changed for the software developer or engineer with respect to how they get work done. I’m not referring to technology. The changes in what software developers and engineers use to get work done have been seismic. The biggest shift in the “how,” in my experience, is that what were once underground practices are now openly accepted and encouraged.

When I was coding full time, in the pre-Agile days and under the burden of CMM, we followed all the practices for documenting use cases, hammering out technical and functional specs, and laboriously talking through requirements. (I smile when I hear developers today complain about the burden of meetings under Scrum.) And when it came time to actually code, I and my fellow developers set the multiple binders of documentation aside and engaged in many then-unnamed Agile practices. We mixed and matched use cases in a way that allowed for more efficient coding of larger functional components. We “huddled” each morning in the passageway to the cube pods to discuss dependencies and brainstorm solutions to technical challenges. Each of these became more efficiently organized in Agile as backlog refinement and daily stand-ups. We had numerous other loose practices that were not described in tomes such as CMM.

But Agile delivery teams today are frequently composed of more than just technical functional domains. There may be non-technical expertise included as integral members of the team. Learning strategists, content editors, creative illustrators, and marketing experts may be part of the team, depending on the objectives of the project. This represents a significant challenge to the technical members of the team (i.e. software developers and tech QA) who are unused to working with non-technical team members. Twenty years ago, a developer who said “Leave me alone so I can code” would have been viewed as a dedicated worker. Today, it’s a sign that the developer risks working in isolation and consequently delivering something that is mismatched with the work being done by the rest of the team.

On an Agile delivery team, whether composed of a diverse set of functional domains or exclusively technical experts, individual team members need to be thinking of the larger picture and the impact of their work on that of their teammates and the overall workflow. They need to be much more attentive to market influences than in the past. The half-life of major versions, let alone entire products, is such that most software products outside a special niche can’t survive without leveraging Agile principles and practices. Their knowledge must expand beyond just their functional domain. The extent to which they possess this knowledge is reflected in the day-to-day behaviors displayed by the team and its individuals.

  • Is everyone on the team sensitive to and respectful of everyone else’s time? This means following through on commitments and promises, including agreed upon meeting start times. One person showing up five minutes late to a 15 minute stand-up has missed out on at least a third of the meeting. If the team waits for everyone to show up before starting, the late individual has squandered 5 minutes multiplied by the number of teammates. For a 6 member team, that’s a half hour. And if it happens every day, that’s 2.5 hours a week. It adds up quickly. Habitual late-comers are also signaling a lack of respect to other team members. They are implicitly saying “Me and my time is more important than anyone else on the team.” Unchecked, this quickly spills over into other areas of the team’s interactions. Enforcing an on-time rule like this is key to encouraging the personal discipline necessary to work effectively as a team. When a scrum master keeps the team in line with a few basic items like this, the larger discipline issues never seem to arise. As U.S. Army General Ann Dunwoody (ret.) succinctly points out in her book, never walk by a mistake. Doing so gives implicit acceptance for the transgression. Problems blossom from there, and it isn’t a pretty flower. (As a scrum master, I cannot “make” someone show up on time. But I can address the respect aspect of this issue by always starting on time. That way, late-comers stand out as late, not as more important. Over time, this tends to correct the lateness issue as well.)
  • Everyone on the team must be capable of tracking a constantly evolving set of dependencies and knowing where their work fits within the flow. To whom will they be delivering work? From whom are they expecting completed work? The answer to these questions may not be a name on the immediate team. Scrum masters must periodically ask these questions explicitly if the connections aren’t coming out naturally during stand-ups. Developing this behavior is about coaching the team to look beyond the work on their desk and understand how they are connected to the larger effort. Software programmers seem to have a natural tendency to build walls around their work. Software engineers less so. And on teams with diverse functional groups, it is important for both the scrum master and the product owner to be watchful for when barriers appear for reasons that have more to do with a lack of familiarity across functional domains than anything else.
  • Is the entire team actively and consistently engaged with identifying and writing stories?
  • Is the team capable and willing to cross domain boundaries and help? Are they interested in learning about other parts of the product and business?

Product owners and scrum masters need to be constantly scanning for these and other signs of disengagement as well as opportunities to connect cross functional needs.

Behind the Curtain: The Product Owner Role

When someone owns something they tend to keep a closer watch on where that something is and whether or not it’s in good working order. Owners are more sensitive to actions that may adversely affect the value of their investment. It’s the car you own vs the car you rent. This holds true for products, projects, and teams. For this reason the title of “product owner” is well suited to the responsibilities assigned to the role. The explicit call-out to ownership carries a lot of goodness related to responsibility, leadership, and action.

Having been a product owner and having coached product owners, I have a deep respect for anyone who takes on the challenges associated with this role. In my view, it’s the most difficult position to fill on an Agile team. For starters, there are all the things a product owner is responsible for as described in any decent book on scrum: setting the product vision and road map, ordering the product backlog, creating epics and story cards, defining acceptance criteria, etc. What’s often missing from the standard set of bullet items is the “how” for doing them well. Newly minted product owners are usually left to their own devices for figuring this out. And unfortunately, in this short post I won’t be offering any how-to guidance for developing any of the skills generally recognized to be part of the product owner role.

What I’d most like to achieve in this post is calling out several of the key skills associated with quality product ownership that are usually omitted from the books and trainings – the beyond-the-basics items that any product owner will want to include in their continuous learning journey with Agile. In no particular order…

  • Product owners must be superb negotiators when working with stakeholders and team members. The techniques used for each group are different so it’s important to understand the motivations that drive them. I’ve found Jim Camp’s “Start with No” and Chris Voss’ “Never Split the Difference” to be particularly helpful in this regard.
  • One of the four values stated in the Agile Manifesto is “Responding to change over following a plan.” Unfortunately, this is often construed to mean any change at any time is valuable. This Agile value isn’t a directive for maximum entropy and chaos. Product owners must remain vigilant to scope changes. And the boundaries for scope are defined by the decisions product owners make. So product owners must be decisive and committed to the decisions, agreements, and promises that have been made with stakeholders and the team.
  • Product owners need to have a good sense for when experimentation may be needed to sort out any complex or risky features in the project – creating spikes or proof-of-concept work early enough in the project so as to avoid any costly pivots later in the project.
  • Even with experimentation, making the call to pivot will require a product owner’s clear understanding of past events and any path forward that offers the greatest chance for success. As the sage says, predictions are difficult to make, particularly about the future. The expectation isn’t for clairvoyance, rather for the ability to pay close attention to what the (non-vanity) data are telling them.
  • Understanding how to work through failures and dealing with “The Dip,” as Seth Godin calls it, are also important skills for a product owner. The team and the stakeholders are going to look to the product owner’s leadership to demonstrate confidence that they are on the right track.

More than other roles on an Agile team, the product owner must be a truly well-rounded and experienced individual. Paradoxically, it is a role that is both constrained by the highly visible nature of the position and dynamic due to the skill set required to maximize the chances for project success.

Clouds and Windmills

I recently resigned from the company I had been employed by for over 5 years. The reason? It was time.

During my tenure1 I had the opportunity to re-define my career several times within the organization in a way that added value and kept life productive, challenging, and rewarding. Each re-definition involved a rather extensive mind mapping exercise with hundreds of nodes to describe what was working, what wasn’t working, what needed fixing, and where I believed I could add the highest value.

This past spring, events prompted another iteration of this process. It began with the question “What wouldn’t happen if I didn’t go to work today?”2 This is the flip of asking “What do I do at work?” The latter is a little self-serving. We all want to believe we are adding value and are earning our pay. The answer is highly filtered through biases, justifications, excuses, and rationalizations. But if in the midst of a meeting you ask yourself, “What would be different if I were not present or otherwise not participating?”, the answer can be a little unsettling.

This time around, in addition to mind mapping skills, I was equipped with the truly inspiring work of Tanmay Vora and his sketchnote project. Buy me a beer some day and I’ll let you in on a few of my discoveries. Suffice it to say, the overall picture wasn’t good. I was getting the feeling this re-definition cycle was going to include a new employer.

A cascade of follow-on questions flowed from this iteration’s initial question. At the top:

  1. Why am I staying?
  2. Is this work aligned with my purpose?
  3. Have my purpose and life goals changed?

The answers:

  1. The paycheck
  2. No.
  3. A little.

Of course, it wasn’t this simple. The organization changed, as did I, in a myriad of ways. While exploring these questions, I was reminded of a story my Aikido teacher, Gaku Homma, would tell when describing his school. He said it was like a rope. In the beginning, it had just a few threads that joined with him to form a simple string. Not very strong. Not very obvious. But very flexible. Over time, more and more students joined his school and wove their practice into Nippon Kan’s history. Each new thread subtly changed the character of the emerging rope. More threads, more strength, and more visibility. Eventually, an equilibrium emerges. Some of those threads stop after a few short weeks of classes, others (like mine) run 25 years before they stop, and for a few their thread ends in a much more significant way.

Homma Sensei has achieved something very difficult. The threads that form Nippon Kan’s history are very strong, very obvious, and yet remain very flexible. Even so, there came a time when the right decision for me was to leave, taking with me a powerful set of skills, many good memories, and friendships. The same was true for my previous employer. Their rope is bending in a way that is misaligned with my purpose and goals. Neither good nor bad. Just different. Better to leave with many friendships intact and a strong sense of having added value to the organization during my tenure.

The world is full of opportunities. And sometimes you have to deliberately and intentionally clear all the collected clutter from your mental workspace so those opportunities have a place to land. Be attentive to moments like this before your career is remembered only as someone who yells at clouds and tilts at windmills.


1 By the numbers…

1,788 Stand-ups
1,441 Wiki/Knowledgebase Contributions
311 Sprint/Release Planning Sessions
279 Reviews
189 Retrospectives
101 Projects
31 Internal Meet-ups
22 Agile Cafés
10 Newsletters
5.5 Years
3 Distinct Job Titles
1 Wild Ride

2 My thanks to colleague Lennie Noiles and his presentation on Powerful Questions. While Lennie didn’t ask me this particular question, it was inspired by his presentation.

Agile Metrics – Time (Part 3 of 3)

In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we looked at shifts in the cross-over point between burn-down and burn-up charts. In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues the teams may be struggling with under these circumstances.

Figure 1 shows a burn-up that by the end of the sprint significantly exceeded the starting value for the original estimate.

Figure 1

There isn’t much mystery around a chart like this. The time needed to complete the work was significantly underestimated. The mystery is in the why and what that led to this situation.

  • Were there unexpected technical challenges?
  • Were the stories poorly defined?
  • Were the acceptance criteria unclear?
  • Were the sprint goals, objectives, or minimum viable product definition unclear?

Depending on the tools used to capture team metrics, it can be helpful to look at individual performances. What’s the differential between story points and estimated time vs actual time for each team member? Hardly ever useful as a disciplinary tool, this type of analysis can be invaluable for knowing who needs professional development and in what areas.
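
A minimal sketch of that kind of differential, assuming hypothetical card-level data exported from whatever tracking tool is in use (the names and hours are invented):

```python
# Sketch: per-member gap between estimated and actual hours on completed cards.
# The members, cards, and hours are hypothetical; real data would come from your tracking tool.
from collections import defaultdict

# (team member, estimated hours, actual hours) for cards completed in the sprint
cards = [
    ("dev_a", 8, 13), ("dev_a", 5, 9),
    ("dev_b", 6, 6),  ("dev_b", 10, 11),
    ("qa_a", 4, 8),   ("qa_a", 3, 7),
]

totals = defaultdict(lambda: [0, 0])
for member, estimated, actual in cards:
    totals[member][0] += estimated
    totals[member][1] += actual

for member, (estimated, actual) in totals.items():
    delta = 100 * (actual - estimated) / estimated
    print(f"{member}: estimated {estimated}h, actual {actual}h, off by {delta:+.0f}%")
```

Used this way, the numbers point toward coaching and professional development conversations, not toward a scoreboard.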

In this case, there were several technical challenges related to new elements of the underlying architecture, and the team put in extra hours to resolve them. Even so, they were unable to complete all the work they committed to in the sprint. The scrum master and product owner need to monitor this so it doesn’t become a recurrent event; left unchecked, it risks team burnout and morale erosion. There are likely some unstated dependencies or skill deficiencies that need to be put on the table for discussion during the retrospective.

Figure 2 shows, among other things, unexpected jumps in the burn-down chart. There is clearly a significant amount of thrashing evident in the burn-down (which stubbornly refuses to actually burn down).

Figure 2

Questions to explore:

  • Are cards being brought into the sprint after the sprint has started and why?
  • Are original time estimates being changed on cards after the sprint has started?
  • Is there a stakeholder in the grass, meddling with the team’s commitment?
  • Was a team member added to the team and cards brought into the sprint to accommodate the increased bandwidth?
  • Whatever is causing the thrashing, is the team (delivery team members, scrum master, and product owner) aware of the changes?

Scope change during a sprint is a very undesirable practice. Not just because it goes against the scrum framework, but more so because it almost always has an adverse effect on team morale and focus. If there is an addition to the team, better to set that person to work helping teammates complete the work already defined in the sprint and assign them cards in the next sprint.

If team members are adjusting original time estimates for “accuracy” or whatever other reason they may provide, this is little more than gaming the system. It does more harm than good, assuming management is Agile savvy and not intent on using Agile metrics for punitive purposes. On occasion I’ve had to hide the original time estimate entry field from the view of delivery team members and place it under the control of the product owner – out of sight, out of mind. It’s less a concern to me that time estimates are “wrong,” particularly if time estimate accuracy is showing improvement over time or the delta is a somewhat consistent value. I can work with a delivery team member’s time estimates that are 30% off if they are consistently 30% off.
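
A minimal sketch of what “consistently off” looks like in practice, with invented numbers: if a member’s actual-to-estimate ratio is stable, the product owner can plan with a simple correction factor instead of pressing the team to edit their original estimates.

```python
# Sketch: is a team member's estimation bias consistent enough to plan around?
# The estimates and actuals are invented; a stable ratio is workable, an erratic one is not.
from statistics import mean, stdev

estimates = [5, 8, 3, 10, 6]        # original estimates (hours) on completed cards
actuals   = [6.5, 10.5, 4, 13, 8]   # actual hours logged against the same cards

ratios = [actual / estimate for actual, estimate in zip(actuals, estimates)]
bias = mean(ratios)
print(f"Average bias: {bias:.2f}x, spread: {stdev(ratios):.2f}")

# Plan with corrected estimates rather than editing the originals on the cards.
corrected = [round(estimate * bias, 1) for estimate in estimates]
print("Corrected planning estimates:", corrected)
```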

In the case of Figure 2 it was the team’s second sprint and at the retrospective the elephant was called out from hiding: The design was far from stable. The decision was made to set aside scrum in favor of using Kanban until the numerous design issues could be resolved.

Figure 3 shows a burn-down chart that doesn’t go to zero by the end of the sprint.

Figure 3

The team missed their commitment and quite a few cards rolled to the next sprint. Since the issue emerged late in the sprint, there was little corrective action that could be taken. The answers were left to discovery during the retrospective. In this case, one of the factors was the failure to move development efforts into QA until late in the sprint. This is an all too common issue in cases where the sprint commitments were not fully satisfied. For this team the QA issue was exacerbated by the team simply taking on more work than they could complete. The solution was to reduce the amount of work the team committed to in subsequent sprints until a stable sprint velocity emerged.

Conclusion

For a two week sprint on a project that is 5-6 sprints in, I usually don’t bother looking at time burn-down/burn-up charts for the first 3-4 days. Early trends can be misleading, but by the time a third of the sprint has been completed this metric will usually start to show trends that suggest any emergent problems. For new projects or for newly formed teams I typically don’t look at intra-sprint time metrics until much later in the project life cycle as there are usually plenty of other obvious and more pressing issues to work through.

I’ll conclude by reiterating my caution that these metrics are yardsticks, not micrometers. It is tempting to read too much into pretty graphs that have precise scales. Rather, the expert Agilist will let the metrics, whatever they are, speak for themselves and work to limit the impact of any personal cognitive biases.

In this series we’ve explored several ways to interpret the signals available to us in estimated time burn-down and actual time burn-up charts. There are numerous other scenarios that can reveal important information from such burn-down/burn-up charts, and I would be very interested in hearing about your experiences with using this particular metric in Agile environments.

Agile Metrics – Time (Part 2 of 3)

In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we’ll look at shifts in the cross-over point between burn-down and burn-up charts and explore what issues may be in play for the teams under these circumstances.

Figure 1 shows a cross-over point occurring early in the sprint.

Figure 1

This suggests the following questions:

  • Is the team working longer hours than needed? If so, what is driving this effort? Are any of the team members struggling with personal problems that have them working longer hours? Are they worried they may have committed to more work than they can complete in the sprint and are therefore trying to stay ahead of the work load? Has someone from outside the team requested additional work outside the awareness of the product owner or scrum master?
  • Has the team overestimated the level of effort needed to complete the cards committed to the sprint? If so, this suggests an opportunity to coach the team on ways to improve their estimating or the quality of the story cards.
  • Has the team focused on the easy story cards early in the sprint, leaving work on the more difficult story cards pending? This isn’t necessarily a bad thing, just something to know and be aware of after confirming it with the team. If accurate, it also points out the importance of using this type of metric for intra-sprint monitoring only and not extrapolating what it shows into a project-level metric.

The answers to these questions may not become apparent until later in the sprint, and the point isn’t to try to “correct” the workflow based on relatively little information. In the case of Figure 1, the “easy” cards had been sized as more difficult than they actually were. The more difficult cards were sized too small, and a number of key dependencies were not identified prior to the sprint planning session. This is reflected in the burn-up line that significantly exceeds the initial estimate for the sprint, the jumps in the burn-down line, and the subsequent failure to complete a significant portion of the cards in the sprint backlog. All good fodder for the retrospective.
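
Locating the cross-over point in the data itself is straightforward. Below is a minimal sketch with hypothetical daily totals showing an early cross-over; the early/middle/late thresholds are arbitrary choices of mine, not anything prescribed by scrum.

```python
# Sketch: find the cross-over day and classify it as early, middle, or late in the sprint.
# The daily totals are hypothetical; the 40%/60% thresholds are arbitrary.

burn_down = [80, 66, 50, 42, 34, 28, 22, 16, 8, 2]       # estimated hours remaining, per day
burn_up   = [14, 34, 56, 66, 74, 82, 88, 94, 100, 106]   # actual hours logged to date, per day

def classify_crossover(down, up):
    days = len(down)
    for day, (remaining, logged) in enumerate(zip(down, up), start=1):
        if logged >= remaining:
            position = day / days
            if position < 0.4:
                return day, "early"
            if position > 0.6:
                return day, "late"
            return day, "middle"
    return None, "no cross-over"

day, label = classify_crossover(burn_down, burn_up)
print(f"Cross-over on day {day} of {len(burn_down)}: {label}")
```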

Figure 2 shows a cross-over point occurring late in the sprint.

Figure 2

On the face of it there are two significant stretches of inactivity. Unless you’re dealing with a blatantly apathetic team, there is undoubtedly some sort of activity going on. It’s just not being reflected in the work records. The task is to find out what that activity is and how to mitigate it.
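
One way to surface those stretches before the retrospective is to scan the burn-up for consecutive days where no time was logged. A minimal sketch, with hypothetical daily totals:

```python
# Sketch: flag stretches of days where the burn-up did not move (no time logged).
# The cumulative daily totals are hypothetical.

burn_up = [4, 10, 10, 10, 18, 24, 24, 24, 24, 40]  # hours logged to date, per working day

def flat_stretches(up, min_days=2):
    """Return (start_day, end_day) runs of days (1-based) where the burn-up did not move."""
    runs, start = [], None
    for day in range(2, len(up) + 1):
        flat = up[day - 1] == up[day - 2]
        if flat and start is None:
            start = day
        if (not flat or day == len(up)) and start is not None:
            end = day if flat else day - 1
            if end - start + 1 >= min_days:
                runs.append((start, end))
            start = None
    return runs

for start, end in flat_stretches(burn_up):
    print(f"No time logged from day {start} through day {end}")
```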

The following questions will help expose the cause for the extended periods of apparent inactivity:

  • Are one or more members not feeling well or are there other personal issues impacting an individual’s ability to focus?
  • Have they been poached by another project to work on some pressing issue?
  • Are they waiting for feedback from stakeholders, clients, or other team members?
  • Are the story cards unclear? As the saying goes, story cards are an invitation to a conversation. If a story card is confusing, contradictory, or unclear, then the team needs to talk about that. What’s unclear? Where’s the contradiction? As my college calculus professor used to ask when teaching us how to solve math problems, “Where’s the source of the agony?”

The actual reasons behind Figure 2 were twofold. There was a significant technical challenge the developers had to resolve that wasn’t sufficiently described by any of the cards in the sprint, and later in the sprint several key resources were pulled off the project to deal with issues on a separate project.

Figure 3 shows a similar case of a late-sprint cross-over in the burn-down/burn-up chart. The reasons for this occurrence were quite different from those behind Figure 2.

Figure 3

This was an early sprint, and a combination of design and technical challenges were not as well understood as originally thought at the sprint planning session. As these issues emerged, additional cards were created in the product backlog to be addressed in future sprints. Nonetheless, the current sprint commitment was missed by a significant margin.

In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues that may be in play for teams under these circumstances.

Agile Metrics – Time (Part 1 of 3)

Some teams choose to use card-level estimated and actual time as one of the level of effort or performance markers for project progress and health. For others, it’s a requirement of the work environment due to management or business constraints. If your situation resembles one of these cases, then you will need to know how to use time metrics responsibly and effectively. This series of articles will establish several common practices you can use to develop your skills for evaluating and leveraging time-based metrics in an Agile environment.

It’s important to keep in mind that time estimates are just one of the level of effort or performance markers that can be used to track team and project health. There can, and probably should, be other markers in the overall mix of how team and project performance is evaluated. Story points, business value, quality of information and conversation from stand-up meetings, various product backlog characteristics, cycle time, and cumulative flow are all examples of additional views into the health and progress of a project.

In addition to using multiple views, it’s important to be deeply aware of the strengths and limits presented by each of them. The limits are many while the strengths are few. Their value comes in evaluating them in concert with one another, not in isolation. One view may suggest something that can be confirmed or negated by another view into team performance. We’ll visit and review each of these and other metrics after this series of posts on time.

The examples presented in this series are never as cut and dried as they appear. Just as I previously described multiple views based on different metrics, each metric can offer multiple views. My caution is that these views shouldn’t be read like an electrocardiogram, with the expectation of a rigidly repeatable pattern from which a slight deviation could signal a catastrophic event. The examples are extracted from hundreds of sprints and dozens of projects over the course of many years and are more like seismology graphs – they reveal patterns over time that are very much context dependent.

Estimated and actual time metrics allow teams to monitor sprint progress by comparing time remaining to time spent. Respectively, these are a burn-down and a burn-up chart, named for the direction of the data plotted on the chart. In Figure 1, the red line represents the estimated time remaining (burn-down) while the green line represents the amount of time logged against the story cards (burn-up) over the course of a two week sprint. (The gray line is a hypothetical ideal for burn-down.)

Figure 1

The principal value of a burn-down/burn-up chart for time is the view it gives into intra-sprint performance. I usually look at this chart just prior to a team’s daily stand-up to get a sense of whether there are any questions I need to be asking about emerging trends. In this series of posts we’ll explore several of the things to look for when preparing for a stand-up. At the end of the sprint, the burn-down/burn-up chart can be a good reference to use during the retrospective when looking for ways to improve.
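
For concreteness, here is a minimal sketch of how the two series in Figure 1 might be derived from daily totals. The data model and numbers are my own invention; most tracking tools can export something equivalent. Note that the burn-down comes from the remaining estimates on the cards, not from subtracting logged hours, which is why the two lines move independently.

```python
# Sketch: build burn-down (estimated hours remaining) and burn-up (actual hours logged)
# series from daily totals. The values are invented to resemble a healthy two-week sprint.

original_estimate = 80  # total estimated hours committed at sprint planning

# Per working day: hours of estimate burned down (cards finished, remaining work
# re-estimated) and hours of actual time logged. The two move independently.
estimate_burned = [8, 10, 6, 12, 9, 8, 7, 8, 6, 6]   # sums to 80
actual_logged   = [7, 9, 8, 11, 9, 10, 9, 9, 8, 8]   # sums to 88

burn_down, burn_up = [], []
remaining, logged = original_estimate, 0
for burned, worked in zip(estimate_burned, actual_logged):
    remaining -= burned
    logged += worked
    burn_down.append(remaining)  # red line: estimated time remaining
    burn_up.append(logged)       # green line: actual time spent to date

for day, (down, up) in enumerate(zip(burn_down, burn_up), start=1):
    print(f"day {day:2d}: remaining {down:3d}h, logged {up:3d}h")
```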

The sprint shown in Figure 1 is about as ideal a picture as one can expect. It shows all the points I look for that tell me, insofar as time is concerned, the sprint performance is in good health (a rough programmatic check of these signals is sketched after the list).

  • There is a cross-over point roughly in the middle of the sprint.
  • At the cross-over point about half of the estimated time has been burned down.
  • The burn-down time is a close match to the burn-up at both the cross-over point and the end of the sprint.
  • The burn-down and burn-up lines show daily movement in their respective directions.
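
Continuing the sketch above, a rough way to automate that checklist is shown below. The tolerances (how close is “close,” how near mid-sprint the cross-over must be) are arbitrary choices of mine, not anything prescribed by scrum.

```python
# Sketch: check the four healthy-sprint signals against daily burn-down/burn-up totals.
# The series continue the earlier sketch; the tolerances are arbitrary.

burn_down = [72, 62, 56, 44, 35, 27, 20, 12, 6, 0]   # estimated hours remaining, per day
burn_up   = [7, 16, 24, 35, 44, 54, 63, 72, 80, 88]  # actual hours logged to date, per day
original_estimate = 80

days = len(burn_down)
cross = next((d for d in range(days) if burn_up[d] >= burn_down[d]), None)

checks = {
    "cross-over roughly mid-sprint":
        cross is not None and abs((cross + 1) - days / 2) <= 1.5,
    "about half the estimate burned down at cross-over":
        cross is not None
        and 0.4 <= (original_estimate - burn_down[cross]) / original_estimate <= 0.6,
    "burn-up close to burn-down at cross-over and to the estimate at sprint end":
        cross is not None
        and abs(burn_up[cross] - burn_down[cross]) <= 0.15 * original_estimate
        and abs(burn_up[-1] - original_estimate) <= 0.15 * original_estimate,
    "daily movement in both lines":
        all(burn_down[d] < burn_down[d - 1] for d in range(1, days))
        and all(burn_up[d] > burn_up[d - 1] for d in range(1, days)),
}

for signal, healthy in checks.items():
    print(("OK  " if healthy else "LOOK") + f": {signal}")
```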

In Part 2, we’ll look at several cases where the cross-over point shifts and explore the issues the teams under these circumstances might be struggling with.

(This article cross-posted on LinkedIn.)

False Barriers to Implementing Scrum – II

In a previous post, I described several barriers to implementing scrum. Recently, an additional example came to light similar to the mistake of elevating scrum or Agile to a philosophy.

In a conversation with a colleague, we were exploring how we might drive interest in browsing the growing wealth of Agile-related information being added to the company wiki. It’s an impressive collection of experiences showing how other teams have solved a wide array of interesting problems using Agile principles and practices. Knowing that we cannot personally attend to every need on every project team, we were talking through various ways to develop the capacity for exploration and self-education. I think I must have used the phrase “the information is out there and readily available” a couple of times too many, because my colleague reacted to where I put the bar by comparing learning Agile to surgery.

Using the surgery metaphor, she pressed the comparison that all the information she needs about surgery is “out there and readily available,” but even if she knew all that information she likely wouldn’t be a good surgeon. Fair point: experience and practice are important. And if that is the case, then everyone should be taking every opportunity they can to practice good agile rather than regressing to old habits.

More importantly, perhaps, is the misapplied metaphor. Practicing agile isn’t as complicated as surgery or rocket science or any other such endeavor that requires years of deep study and practice. Comparing it to something like that places the prospects of doing well in a short amount of time mentally beyond the reach of any potential practitioners.

Perhaps a better metaphor is the opening of a new rail line in the city. A good measure of effort needs to be expended to educate the public on the line’s availability, the schedules, how to purchase fares, where the connections are, what the safety features are, etc. Having done that, having “put the information out there where it is generally available,” it is a reasonable expectation that the public will make the effort to find it when they need it. It is unreasonable, and unscalable, to build such a system and then expect that every passenger will be personally escorted from their front door to their seat on the train.

It is also interesting to consider what this does to the “empathy scale” when such an overextended metaphor is applied to efforts such as learning to practice Agile. If we frame learning Agile as similar to surgery, then as people work to implement Agile are we more inclined to have an excessive amount of empathy for their struggles and be more accepting or accommodating of their shortcomings?

“Not to worry that you still don’t have a well formed product backlog. This is like surgery, after all.”

Are we as an organization and each of our employees better served by the application of a more appropriate metaphor, one that matches the skill and expectations of the task?

“We’ve provided instruction as to what a product backlog is and how to create one. We’ve guided you as you’ve practiced refining a product backlog. You know where to find suggestions for improving your skills for product backlog stewardship (wiki, books, colleagues, etc). Now roll up your sleeves and do the work.”

Successful coaching for developing the ability in team members for actively seeking answers requires skillfully letting them struggle and fail in recoverable ways. It is possible to hold their hand too long. To use another metaphor, provide whatever guidance and instruction you need to so they know how to fish, then let them alone to practice casting their own line.


Photo credit: langll

How to know when Agile is working

On a recent flight into Houston, my plane was diverted to Austin due to weather. Before we could land at Austin, we were re-diverted back to Houston. I’ve no idea why the gears aligned this way, but this meant we were out-of-sequence with the baggage handling system and our luggage didn’t arrive at the claim carousel for close to an hour and a half. Leading up to the luggage arrival was an unfortunate display from an increasingly agitated young couple. They were loudly communicating their frustration to an airport employee with unknown authority. Their frustration was understandable in light of the fact that flights were undoubtedly going to be missed.

At one point, the woman exclaimed, “This isn’t how this is supposed to work!”

I matched this with a similar comment from one of the developers on one of my project teams. Stressed by the workload he had committed to, he declared there were too many meetings and therefore “the agile process is not working!” When explored, it turned out some version of this sentiment was common among the software development staff.

In both cases, the process was working. It just wasn’t working as desired or expected based on past experience. In both cases, present events were immune to expectations. The fact that our luggage almost always shows up on time and that agile frequently goes smoothly belies how susceptible the two processes actually are to unknown variables that can disrupt the usual flow of events.

There is a difference with agile, however. When practiced well, it adapts to the vagaries of human experience. We expect the unexpected, even if we don’t know what form that may take.

There is an assumption being made by the developers that “working agile” makes work easy and stress-free. I’ve never found that. And I don’t know anyone who has. It stresses teams differently than waterfall. I’ve experienced high stress developing code under both agile and waterfall. With agile, however, teams have a better shot at deciding for themselves the stress they want to take on. But there will be stress. Unstressed coders deliver code of questionable value and quality, if they deliver at all.

The more accurate assessment to make here is that the developers aren’t practicing agile as well as they could. That’s fundamentally different from “agile isn’t working.” In particular, the developers didn’t understand what they had committed to. Every single sprint planning session I’ve run (and the way I coach them to be run) begins with challenging the team members to think about things that may impact the work they will commit to in the next sprint – vacations, family obligations, doctor visits, other projects, stubbed toes, alien abductions – anything that may limit the effort they can commit to. What occurred with the developers was a failure to take responsibility for their actions and decisions, a measure of dishonesty (albeit unintended) toward themselves and their teammates by saying “yes” to work and later wishing “no.”

Underlying this insight into developer workload may be something much more unsettling. If anyone on your team has committed to more than they can complete and has done so for a number of sprints, your project may be at risk. The safe assumption would be that the project has a hidden fragility that will surprise you when it breaks. Project time lines, deliverables, and quality will suffer not solely because there are too many meetings, but because the team does not have a good understanding of what they need to complete and what they can commit to. What is the potential impact on other projects (internal and client) knowing that one or more of the team members is over committing? What delays, quality issues, or major pivots are looming out there ready to cause significant disruptions?

The resolution to this issue requires time and the following actions:

  • Coaching for creating and refining story cards
  • Coaching for understanding how to estimate work efforts (points and/or time)
  • Develop skills in the development staff for recognizing card dependencies
  • Develop skills for time management
  • Find ways to modify the work environment such that it is easier for developers to focus on work for extended periods of time
  • Evaluate the meeting load to determine if there are extraneous meetings
  • Based on metrics, specifically limit each developer’s work commitment for several sprints such that it falls within their ability to complete

Image credit: Prawny