Agile Money

In a recent conversation with colleagues we were debating the merits of using story point velocity as a metric for team performance and, more specifically, how it relates to determining a team’s predictability. That is to say, how reliable the team is at completing the work they have promised to complete. At one point the question of what a story point is came up, and we hit on the idea of story points not being “points” at all. Rather, they are more like currency. This solved a number of issues for us.

First, it interrupts the all too common assumption that story points (and by extension, velocities) can be compared between teams. Experienced scrum practitioners know this isn’t true and that nothing good can come from normalizing story points and sprint velocities between teams. And yet this is something non-agile-savvy management types are wont to do. Thinking of a story’s effort in terms of currency carries with it the implicit assumption that one team’s “dollars” are not another team’s “rubles” or another team’s “euros.” At the very least, an exchange-rate evaluation would need to occur. Nonetheless, dollars, rubles, and euros convey an agreement of value, a store of value that serves as a reliable predictor of exchange: X number of story points will deliver Y value from the product backlog.

The second thing that thinking about effort as currency accomplished was to clarify the consequences of populating the product backlog with busy work or other non-value-adding tasks. Such work debases the story currency: the measure of the level of effort becomes inflated and the ability of the story currency to function as a store of value is diminished.

There are a host of other interesting economics-derived thought experiments that can be played out with this frame around story effort. What’s the effect of supply and demand on available story currency (points)? What’s the state of the currency supply (resource availability)? Is there such a thing as counterfeit story currency? If so, what would that look like? How might this mesh with the idea of technical or dark debt?

Try this out at your next backlog refinement session (or whenever it is you plan to size story efforts): Ask the team what you would have to pay them in order to complete the work. Choose whatever measure you wish – dollars, chickens, cookies – and use that as a basis for determining the effort needed to complete the story. You might also include in the conversation the consequences to the team – using the same measures – if they do not deliver on their promise.

How To Run an Agile Death March

Found on the Internet…

An experienced scrum master describes their work cycles as going “from being very busy during sprint end/start weeks to be [sic] very bored.” While this scrum master works very hard to fill in the gaps with 1:1’s with the team members and providing regular training opportunities, they nonetheless ask the question, “Does anyone have any suggestions of things I am maybe not doing that I should be doing?” One response included the following:

“Now, it could be that you have worked to create a hyper-performing team and there is no further room for improvement. A measure of this is that velocity (or similar metric) has increased by an order of magnitude in the last year.

However, the most likely scenario is that you and your team have become ‘comfortable’ and velocity has not increased significantly in the last few Sprints and/or there is a high variance in velocity.”

This reflects a common misunderstanding of “velocity” and its confusion with “acceleration.” (It also reflects the “more is better” and “winners vs losers” thinking derived from the scrum sports metaphor and points as a way of keeping score. I’ve written about that elsewhere.) Neither does the commenter understand what “order of magnitude” means. A velocity that increases by an order of magnitude in a year isn’t a velocity, it’s an acceleration. That’s a bad thing. This wouldn’t be a “hyper-performing” team. This would be a team headed for a crash as a continual acceleration in story points completed is untenable. More and more points each sprint isn’t the goal of scrum. A product owner cannot predict when their team might complete a feature or a project if the delivery of work is accelerating throughout the project.

Assuming a typical project, something that continues for a year or more, the team and the project will eventually crash as they’re pressured to work more and more hours and cut more and more corners in the interests of completing more and more points. The accumulation of bugs, small and large, will slow progress. Team fatigue will increase and morale will decrease, resulting in turnover and further delays. In common parlance, this is referred to as a “death march.”

Strictly speaking, velocity is some displacement over time. In the case of scrum, it is the number of story points completed in a sprint: we’ve “displaced” some number of story points from “not done” to “done.” By itself, a single sprint’s velocity isn’t particularly useful. Looking at the velocity across a number of successive sprints, however, is. Two pieces of information from successive sprint velocities, when considered together, can reveal useful aspects of how well a team is performing. The first is a rolling average over the previous 5 to 8 sprints. As a yardstick, this provides a measure of predictability. Using this average and the story point information in the product backlog, a product owner can make a rough calculation of how many sprints remain before components or the project are complete.
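A minimal sketch of that arithmetic, with invented velocity and backlog numbers:

```python
# Rolling-average velocity and a rough sprints-remaining forecast.
# The velocities and backlog size below are invented for illustration.
import math

velocities = [18, 22, 20, 23, 19, 21, 22, 20]  # last 8 sprint velocities

def rolling_average(values, window=8):
    """Average velocity over the most recent `window` sprints."""
    recent = values[-window:]
    return sum(recent) / len(recent)

def sprints_remaining(backlog_points, values, window=8):
    """Rough forecast: remaining backlog divided by average velocity."""
    return math.ceil(backlog_points / rolling_average(values, window))

print(rolling_average(velocities))         # 20.625 points per sprint
print(sprints_remaining(250, velocities))  # ~13 sprints of work left
```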

The measure of confidence for this prediction would come from an analysis of the variance demonstrated in the sprint velocity values over time. Figures 1 and 2 show the distinction between the value provided by a rolling average and the value provided by the variance in values over time.

Figure 1

Figure 2

In both cases the respective teams have an average velocity of 21 points per sprint. However, the variability in the values over time shows that the team in Figure 1 would have a much higher level of confidence in any predictions based on their past performance than the team shown in Figure 2.
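In code, the distinction might look like this; the velocity numbers are invented so that both teams average exactly 21 points:

```python
# Two teams with the same average velocity but very different variance.
import statistics

team_1 = [20, 21, 22, 21, 20, 22, 21, 21]  # steady, like Figure 1
team_2 = [8, 34, 15, 30, 12, 31, 18, 20]   # erratic, like Figure 2

for name, v in [("Team 1", team_1), ("Team 2", team_2)]:
    mean = statistics.mean(v)
    sd = statistics.stdev(v)  # sample standard deviation
    print(f"{name}: mean={mean:.1f}, stdev={sd:.1f}, CV={sd / mean:.0%}")
```

The coefficient of variation (CV) gives a quick sense of how much confidence to place in forecasts built on the average alone.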

What matters is the trend, each sprint’s velocity over a number of sprints. The steady completion of story points (i.e. work) sprint to sprint is the desirable goal. Another way to say this is that a steady velocity makes it possible to predict project delivery dates. In real life, there will be a variance (up and down) of sprint velocity over time and the goal is to guide the project such that this variance is within a manageable range.

If a team were to set as its goal an increase in the number of story points completed from sprint to sprint then their performance chart might initially look like Figure 3.

Figure 3

Such a pace is unsustainable and eventually the team burns out. Fatigue, decreased morale, and overall dissatisfaction with the project cause team members to quit and progress grinds to a halt. The fallout of such a collapse is likely to include the buildup of significant technical debt and code errors as the run-up to the crescendo forced team members to cut corners, take shortcuts, and otherwise compromise the quality of their effort. [1] The resulting performance chart would look something like Figure 4.

Figure 4

All that said, I grant that there is merit in coaching teams to make reasonable improvements in their overall sprint performance. An increase in the overall average velocity might be one way to measure this. However, to press the team into achieving an order of magnitude increase in performance is a fool’s errand, more than likely to end in disaster for the team and the project.

References

[1] Lyneis, J. M., & Ford, D. N. (2007). System dynamics applied to project management: a survey, assessment, and directions for future research. System Dynamics Review, 23(2/3), 157-189.

Openness, Grapevines, and Strangleholds

If you truly value openness on your Agile teams, you must untangle them from the grapevine.

Openness is one of the core scrum values. As stated on Scrum.org:

“The scrum team and its stakeholders agree to be open about all the work and the challenges with performing the work.”

This is a very broad statement, encompassing not only openness around work products and processes, but also each individual’s responsibility for ensuring that any challenges related to overall team performance are identified, acknowledged, and resolved. In my experience, issues with openness related to work products or the processes that impact them are relatively straightforward to recognize and resolve. If a key tool, for example, is mis-configured or ill-suited to what the team needs to accomplish, then the need to focus on issues with the tool should be obvious. If there is an information hoarder on the team preventing the free flow of information, this will reveal itself within a few sprints, after a string of unknown dependencies or misaligned deliverables has had a negative impact on the team’s performance. Similarly, if a team member is struggling with a particular story card and for whatever reason lacks the initiative to ask for help, this will reveal itself in short order.

Satisfying the need for openness around individual and team performance, however, is a much more difficult thing to measure. Everyone – and by “everyone” I mean everyone – is by nature very sensitive to being called out as having come up short in any way. Maybe it’s a surprise to them. Maybe it isn’t. But it’s always a hot button. As much as we’d like to avoid treading across this terrain, it’s precisely this hypersensitivity that points to where we need to go to make the most effective changes that impact team performance.

At the top of my list of things to constantly scan for at the team level are the degrees of separation (space and time) between a problem and the people who are part of the problem. Variously referred to as “the grapevine”, back channeling, or triangulation, it can be one of the most corrosive behaviors to a team’s trust and their ability to collaborate effectively. From his research over the past 30 years, Joseph Grenny [1] has observed “that you can largely predict the health of an organization by measuring the average lag time between identifying and discussing problems.” I’ve found this to be true. Triangulation and back-channeling add significantly to the lag time.

To illustrate the problem and a possible solution: I was a newly hired scrum master responsible for two teams, about 15 people in total. At the end of my first week I was approached by one of the other scrum masters in the company. “Greg,” they said in a whisper, “You’ve triggered someone’s PTSD by using a bad word.” [2]

Not an easy thing to learn, having been on the job for less than a week. Doubly so because I couldn’t for the life of me think of what I could have said that would have “triggered” a PTSD response. This set me back on my heels, but I did manage to ask the scrum master to please ask this individual to reach out to me so I could speak with them one-to-one and apologize. At the very least, I asked that they suggest this individual contact HR, as a PTSD response triggered by a word is a sign that someone needs help beyond what any one of us can provide. My colleague’s response was “I’ll pass that on to the person who told me about this.”

“Hold up a minute. Your knowledge of this issue is second hand?”

Indeed it was. Someone told someone who told the scrum master who then told me. Knowing this, I retracted my request for the scrum master to pass along my message. The problem here was the grapevine, and a different tack was needed. I coached the scrum master to 1) never bring something like this to me again, 2) inform the person who told them this tale that nothing like it would be passed along to me in the future, and 3) coach that person to do the same with the person who told them. The person for whom this was an issue should either come to me directly or go to my manager. I then coached my manager and my product owners that if anyone were to approach them with a complaint like this, they should listen carefully to the person, acknowledge that they heard them, and encourage them to speak directly with me.

This should be the strategy for anyone with complaints that do not rise to the level of needing HR intervention. The goal of this approach is to develop behaviors around personal complaints such that everyone on the team knows they have a third person to talk to and that the issue isn’t going to be resolved unless they talk directly to the person with whom they have an issue. It’s a good strategy for cutting the grapevines and short-circuiting triangulation (or, in my case, quadrangulation). To seal the strategy, I gave a blanket apology to each of my teams the following Monday and let them know what I had requested of my manager and product owners.

The objective was to establish a practice of resolving issues like this at the team level. It’s highly unlikely (and in my case 100% certain) that a person new to a job would have prior knowledge of sensitive words and purposely use language that upsets their new co-workers. The presupposition of malice, or an assumption that a new hire should know such things, suggested a number of systemic issues with the teams, something later revealed to be accurate. It wouldn’t be a stretch to say that in this organization the grapevine had supplanted instant messaging and email as the primary communication channel. With the cooperation of my manager and product owners, several sizable branches of the grapevine were cut away. Indeed, there was a marked increase in the teams’ attention at stand-ups, and the retrospectives became more animated and productive in the weeks that followed.

Each situation is unique, but the intervention pattern is more broadly applicable: Reduce the number of node hops and associated lag time between the people directly involved with any issues around openness. This in and of itself may not resolve the issues. It didn’t in the example described above. But it does significantly reduce the barriers to applying subsequent techniques for working through the issues to a successful resolution. Removing the grapevine changes the conversation.

References

[1] Grenny, J. (2016, August 19). How to Make Feedback Feel Normal. Harvard Business Review, Retrieved from https://hbr.org/2016/08/how-to-make-feedback-feel-normal

[2] The “bad” word was “refinement.” The team had been using the word “grooming” to refer to backlog refinement and I had suggested we use the more generally accepted word. Apparently, a previous scrum master for the team had been, shall we say, overly zealous in pressing this same recommendation, such that it was a rather traumatic experience for someone on one of the teams. It later became known that this event was grossly exaggerated, “crying PTSD” as a variation of “crying wolf,” and that the reporting scrum master was probably working to establish a superior position. It would have worked, had I simply cowered and accepted the report as complete and accurate. The strategy described in this article proved effective at preventing this type of behavior.

Creating Cycle Time Charts from TFS

Unfortunately, TFS doesn’t have ready-made or easily accessible metrics charts like I’m used to with Jira. There is a lot of clicking and waiting, in my experience. Queries have to be created then assigned to widgets then added to dashboards. Worse, sometimes things I need just aren’t available. For example, there is no cycle time chart built into TFS. There is a plug-in available for preview for VSTS, but as of this posting that plugin won’t be available for TFS. With Jira, cycle time and other charts are available and easily accessible out of the box.

Sigh. Nonetheless, we go to work with the tools we have. And if need be, we build the tools we need. In this post, I’ll give a high level description for how to create Cycle Time charts from TFS data using Python. I’ll leave it as an exercise for the reader to learn the value of cycle time charts and how to best use them.

To make life a little easier, I leveraged the open source (MIT License) TFS API Python client from the Open DevOps Community. To make it work for generating Cycle Time charts, I suggested a change to the code for accessing revisions. This was added in a subsequent version. For the remainder of this post I’m going to assume the reader has made themselves familiar with how this library works and resolved any dependencies.

OK, then. Time to roll up our sleeves and get to work.

First, let’s import a few things.
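Something like the following, assuming the Open DevOps Community client (pip install dohq-tfs) and matplotlib are in play:

```python
# Assumed imports for this walkthrough: the dohq-tfs client for the TFS
# REST API, matplotlib for charting, and stdlib modules for dates and CSV.
import csv
from datetime import datetime

import matplotlib.pyplot as plt
from tfs import TFSAPI
```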

With that, we’re ready to make a call to TFS’s REST API. Again, to make things simple, I create a query in TFS to return the story cards I wish to look at. There are two slices I’m interested in, the past three and the past eight sprints worth of data. In the next block, I call the three sprint query.
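A sketch of that call; the server URL, credentials, and query path are placeholders you would replace with your own:

```python
# Connect to TFS. All connection details below are placeholders.
client = TFSAPI(
    "https://tfs.example.com/tfs/",
    project="DefaultCollection/MyProject",
    user="username",
    password="password",
)

# run_query takes the path of a saved query and returns its result set
query = client.run_query("Shared Queries/Cycle Time - Last 3 Sprints")
workitems = query.workitems
```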

With the result set in hand, the script then cycles through each of the returned work items (I’m interested in stories and bugs only) and analyzes the revision events. In my case, I’m interested in when items first go into “In Progress,” when they move to “In Testing,” when they move to “Product Owner Approval,” and finally when they move to “Done.” Your status values may differ. The script is capable of displaying cycle time charts for each phase as well as the overall cycle time from when items first enter “In Progress” to when they reach “Done.” An example of overall cycle time is included at the end of this post.
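Here is a hedged sketch of that revision walk. It assumes each work item exposes a revisions list (the change mentioned above) carrying the standard System.State and System.ChangedDate fields:

```python
# Walk each work item's revisions and record the first date it reached
# each status of interest. Status names will vary by TFS configuration.
TRACKED_STATES = ["In Progress", "In Testing", "Product Owner Approval", "Done"]

cycle_data = []  # (work item id, done date, days from In Progress to Done)

for item in workitems:
    if item["System.WorkItemType"] not in ("User Story", "Bug"):
        continue

    first_seen = {}
    for rev in item.revisions:
        state = rev["System.State"]
        if state in TRACKED_STATES and state not in first_seen:
            # Keep just the date portion of TFS's ISO timestamp
            first_seen[state] = datetime.strptime(
                rev["System.ChangedDate"][:10], "%Y-%m-%d"
            )

    if "In Progress" in first_seen and "Done" in first_seen:
        days = (first_seen["Done"] - first_seen["In Progress"]).days
        cycle_data.append((item.id, first_seen["Done"], days))
```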

When collected, I send that data to a function for creating the chart using matplotlib. I also save aside the chart data in a CSV file to help validate the chart’s accuracy based on the available data.
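In outline, something like:

```python
# Save the raw data points for validating the chart, then plot them.
with open("cycle_time.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "done_date", "cycle_time_days"])
    writer.writerows(cycle_data)

plot_cycle_time(cycle_data)  # sketched below
```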

It’s a combination scatter plot (plt.scatter), rolling average (plt.plot), overall average (plt.plot), and standard deviation (plt.fill_between) of the data points. An example output:

The size variation in data points reflects occurrences of multiple items moving into a status of “Done” on the same date, the shaded area is the standard deviation (sample), the blue line is the rolling average, and the orange line is the overall average.
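A sketch of how that plotting function might be assembled; the window size, marker scaling, and styling choices are my assumptions:

```python
from collections import Counter
import statistics

def plot_cycle_time(cycle_data, window=5):
    """Scatter of per-item cycle times with rolling average, overall
    average, and a sample standard deviation band."""
    cycle_data = sorted(cycle_data, key=lambda row: row[1])  # by done date
    dates = [row[1] for row in cycle_data]
    days = [row[2] for row in cycle_data]

    # Larger markers where several items reached "Done" on the same date
    counts = Counter(dates)
    plt.scatter(dates, days, s=[40 * counts[d] for d in dates],
                alpha=0.6, label="Cycle time")

    # Rolling average (blue line)
    rolling = [statistics.mean(days[max(0, i - window + 1):i + 1])
               for i in range(len(days))]
    plt.plot(dates, rolling, color="tab:blue", label=f"Rolling avg ({window})")

    # Overall average (orange line) with standard deviation band (shaded)
    avg = statistics.mean(days)
    sd = statistics.stdev(days)
    plt.plot(dates, [avg] * len(dates), color="tab:orange", label="Average")
    plt.fill_between(dates, avg - sd, avg + sd, alpha=0.2)

    plt.ylabel("Cycle time (days)")
    plt.legend()
    plt.show()
```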

If you are interested in learning more about how to implement this script in your organization, please contact me directly.

Clouds and Windmills

I recently resigned from the company I had been employed by for over 5 years. The reason? It was time.

During my tenure1 I had the opportunity to re-define my career several times within the organization in a way that added value and kept life productive, challenging, and rewarding. Each re-definition involved a rather extensive mind-mapping exercise with hundreds of nodes to describe what was working, what wasn’t working, what needed fixing, and where I believed I could add the highest value.

This past spring, events prompted another iteration of this process. It began with the question “What wouldn’t happen if I didn’t go to work today?”2 This is the flip of asking “What do I do at work?” The latter is a little self-serving. We all want to believe we are adding value and earning our pay. The answer is highly filtered through biases, justifications, excuses, and rationalizations. But if in the midst of a meeting you ask yourself, “What would be different if I were not present or otherwise not participating?”, the answer can be a little unsettling.

This time around, in addition to mind mapping skills, I was equipped with the truly inspiring work of Tanmay Vora and his sketchnote project. Buy me a beer some day and I’ll let you in on a few of my discoveries. Suffice it to say, the overall picture wasn’t good. I was getting the feeling this re-definition cycle was going to include a new employer.

A cascade of follow-on questions flowed from this iteration’s initial question. At the top:

  1. Why am I staying?
  2. Is this work aligned with my purpose?
  3. Have my purpose and life goals changed?

The answers:

  1. The paycheck
  2. No.
  3. A little.

Of course, it wasn’t this simple. The organization changed, as did I, in a myriad of ways. While exploring these questions, I was reminded of a story my Aikido teacher, Gaku Homma, would tell when describing his school. He said it was like a rope. In the beginning, it had just a few threads that joined with him to form a simple string. Not very strong. Not very obvious. But very flexible. Over time, more and more students joined his school and wove their practice into Nippon Kan’s history. Each new thread subtly changed the character of the emerging rope. More threads, more strength, and more visibility. Eventually, an equilibrium emerges. Some of those threads stop after a few short weeks of classes, others (like mine) run 25 years before they stop, and for a few their thread ends in a much more significant way.

Homma Sensei has achieved something very difficult. The threads that form Nippon Kan’s history are very strong, very obvious, and yet remain very flexible. Even so, there came a time when the right decision for me was to leave, taking with me a powerful set of skills, many good memories, and friendships. The same was true for my previous employer. Their rope is bending in a way that is misaligned with my purpose and goals. Neither good nor bad. Just different. Better to leave with many friendships intact and a strong sense of having added value to the organization during my tenure.

The world is full of opportunities. And sometimes you have to deliberately and intentionally clear all the collected clutter from your mental workspace so those opportunities have a place to land. Be attentive to moments like this before your career is remembered only as someone who yells at clouds and tilts at windmills.


1 By the numbers…

1,788 Stand-ups
1,441 Wiki/Knowledgebase Contributions
311 Sprint/Release Planning Sessions
279 Reviews
189 Retrospectives
101 Projects
31 Internal Meet-ups
22 Agile Cafés
10 Newsletters
5.5 Years
3 Distinct Job Titles
1 Wild Ride

2 My thanks to colleague Lennie Noiles and his presentation on Powerful Questions. While Lennie didn’t ask me this particular question, it was inspired by his presentation.

The Path to Mastery: Begin with the Fundamentals

Somewhere along the path of studying Aikido for 25 years I found a useful perspective on the art that applies to a lot of skills in life. Aikido is easy to understand. It’s a way of living that leaves behind it a trail of techniques. What’s hard is overcoming the unending stream of little frustrations and often self-imposed limitations. What’s hard is learning how to make getting up part of falling down. What’s hard is healing after getting hurt. What’s hard is learning the importance of recognizing when a white belt is more of a master than you are. In short, what’s hard is mastering the art.

The same can be said about practicing Agile. Agile is easy to understand. It is four fundamental values and twelve principles. The rest is just a trail of techniques and supporting tools – rapid application development, XP, scrum, Kanban, Lean, SAFe, TDD, BDD, stories, sprints, stand-ups – all just variations from a very simple foundation and adapted to meet the prevailing circumstances. Learning how to apply the best technique for a given situation is learned by walking the path toward mastery – working through the endless stream of frustrations and limitations, learning how to make failing part of succeeding, recognizing when you’re not the smartest person in the room, and learning how to heal after getting hurt.

If an Aikidoka is attempting to apply a particular technique to an opponent and it isn’t working, their choices are to change how they’re performing the technique, change the technique, or invent a new technique based on the fundamentals. Expecting the world to adapt to how you think it should go is a fool’s path. Opponents in life – whether real people, ideas, or situations – are notoriously uncompromising in this regard. The laws of physics, as they say, don’t much care about what’s going on inside your skull. They stubbornly refuse to accommodate your beliefs about how things “should” go.

The same applies to Agile practices. If something doesn’t seem to be working, it’s time to step in front of the Agile mirror and ask yourself a few questions. What is it about the fundamentals you’re not paying attention to? Which of the values are out of balance? What technique is being misapplied? What different technique will better serve? If your team or organization needs to practice Lean ScrumXPban SAFe-ly, then do that. Be bold in your quest to find what works best for your team. The hue and cry you hear won’t be from the gods, only those who think they are – mere mortals more intent on ossifying Agile as policy, preserving their status, or preventing the perceived corruption of their legacy.

But I’m getting ahead of things. Before you can competently discern which practices a situation needs and how to best structure them you must know the fundamentals.

There are no shortcuts.

In this series of posts I hope to open a dialog about mastering Agile practices. We’ll begin by studying several maps that have been created over time that describe the path toward mastery, discuss the benefits and shortcomings of each of these maps, and explore the reasons why many people have a difficult time following these maps. From there we’ll move into the fundamentals of Agile practices and see how a solid understanding of these fundamentals can be used to respond to a wide variety of situations and contexts. Along the way we’ll discover how to develop an Agile mindset.

Story Points and Fuzzy Bunnies

The scrum framework is forever tied to the language of sports in general and rugby in particular. We organize our project work around goals, sprints, points, and daily scrums. An unfortunate consequence of organizing projects around a sports metaphor is that the language of gaming ends up driving behavior. For example, people have a natural inclination to associate the idea of story points to a measure of success rather than an indicator of the effort required to complete the story. The more points you have, the more successful you are. This is reflected in an actual quote from a retrospective on things a team did well:

We completed the highest number of points in this sprint than in any other sprint so far.

This was a team that lost sight of the fact that they were the only team on the field. They were certain to be the winning team. They were also destined to be the losing team. They were focused on story point acceleration rather than a constant, predictable velocity.

More and more I’m finding less and less value in using story points as an indicator for level of effort estimation. If Atlassian made it easy to change the label on JIRA’s story point field, I’d change it to “Fuzzy Bunnies” just to drive this idea home. You don’t want more and more fuzzy bunnies, you want no more than the number you can commit to taking care of in a certain span of time typically referred to as a “sprint.” A team that decides to take on the care and feeding of 50 fuzzy bunnies over the next two weeks but has demonstrated – sprint after sprint – they can only keep 25 alive is going to lose a lot of fuzzy bunnies over the course of the project.

It is difficult for people new to scrum or Agile to grasp the purpose behind an abstract idea like story points. Consequently, they are unskilled in how to use them as a measure of performance and improvement. Developing this skill can take considerable time and effort. The care and feeding of fuzzy bunnies, however, they get. Particularly with teams that include non-technical domains of expertise, such as content development or learning strategy.

A note here for scrum masters: unless you want to exchange your scrum master stripes for a saddle and spurs, be wary of your team turning story pointing into an animal farm. Sizing story cards to match the exact size and temperament of all manner of animals would be just as cumbersome as the sporting method of story points. So watch where you throw your rope, Agile cowboys and cowgirls.

(This article cross-posted at LinkedIn)


Image credit: tsaiproject (Modified in accordance with Creative Commons Attribution 2.0 Generic license)

Autopilot Agile

There is a story about a bunch of corporate employees that have been working together for so long they’ve cataloged and numbered all the jokes they’ve told (and re-told) over the years. Eventually, no one need actually tell the joke. Someone simply yells out something like “Number Nine!” and everyone laughs in reply.

As Agile methodologies and practices become ubiquitous in the business world and jump more and more functional domain gaps, I’m seeing this type of cataloging and rote behavior emerge. Frameworks become reinforced structures. Practices become policies. “Stand-up” becomes code for “status meeting.” “Sprint Review” becomes code for “bigger status meeting.” Eventually, everyone is going through the motions and all that was Agile has drained from the project.

When you see this happening on any of your teams, start introducing small bits of randomness and pattern interruptions. In fact, do this anyway as a preventative measure.

  • One day a week, instead of the usual stand-up drill (Yesterday. Today. In the way.), have each team member answer the question “Why are you working on what you’re working on today?” Or have each team member talk about what someone else on the team is working on.
  • Deliberately change the order in which team members “have the mic” during stand-ups.
  • Hold a sprint prospective. What are the specific things the team will be doing to further their success? What blockers or impediments can they foresee in the next sprint? Who will be dependent on what work being completed, and by when?
  • Set aside story points or time estimates for several sprints. I guarantee the world won’t end. (And if it does, well, we’ve got bigger problems than my failed guarantee.) How did that impact performance? What was the impact on morale?
  • During a backlog refinement session, run the larger story cards through the 5 Whys. Begin with “Why are we doing this work?” This invariably ends up in smaller cards and additions to the backlog.

There’s no end to the small changes that can be introduced on the spur of the moment to shake things up just a bit without upsetting things a lot. The goal is to keep people in a mindset of fluidity, adaptability, and recalibration to the goal.

It’s more than a little ironic and somewhat funny to see autopilot-type behavior emerge in the name of Agile. But if you really want funny…Number Seven!

Agile Metrics – Time (Part 3 of 3)

In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we looked at shifts in the cross-over point between burn-down and burn-up charts. In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues the teams may be struggling with under these circumstances.

Figure 1 shows a burn-up that by the end of the sprint significantly exceeded the starting value for the original estimate.

Figure 1

There isn’t much mystery around a chart like this. The time needed to complete the work was significantly underestimated. The mystery is in the why and what that led to this situation.

  • Were there unexpected technical challenges?
  • Were the stories poorly defined?
  • Were the acceptance criteria unclear?
  • Were the sprint goals, objectives, or minimum viable product definition unclear?

Depending on the tools used to capture team metrics, it can be helpful to look at individual performances. What’s the differential between story points and estimated time vs actual time for each team member? Hardly ever useful as a disciplinary tool, this type of analysis can be invaluable for knowing who needs professional development and in what areas.
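As a sketch, with a hypothetical record layout standing in for whatever your metrics tooling exports:

```python
# Hypothetical per-person estimate-vs-actual differential. The record
# layout below is an assumption; it would come from your metrics export.
from collections import defaultdict

records = [
    # (assignee, story points, estimated hours, actual hours)
    ("dev_a", 5, 16, 24),
    ("dev_a", 3, 8, 10),
    ("dev_b", 5, 16, 15),
]

totals = defaultdict(lambda: [0, 0])
for who, _points, estimated, actual in records:
    totals[who][0] += estimated
    totals[who][1] += actual

for who, (est, act) in sorted(totals.items()):
    print(f"{who}: estimated {est}h, actual {act}h ({(act - est) / est:+.0%})")
```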

In this case, there were several technical challenges related to new elements of the underlying architecture and the team put in extra hours to resolve them. Even so, they were unable to complete all the work they committed to in the sprint. The scrum master and product owner need to monitor this so it doesn’t become a recurrent event; left unchecked, it risks team burnout and morale erosion. There are likely some unstated dependencies or skill deficiencies that need to be put on the table for discussion during the retrospective.

Figure 2 shows, among other things, unexpected jumps in the burn-down chart. There is clearly a significant amount of thrashing evident in the burn-down (which stubbornly refuses to actually burn down).

Figure 2

Questions to explore:

  • Are cards being brought into the sprint after the sprint has started and why?
  • Are original time estimates being changed on cards after the sprint has started?
  • Is there a stakeholder in the grass, meddling with the team’s commitment?
  • Was a team member added to the team and cards brought into the sprint to accommodate the increased bandwidth?
  • Whatever is causing the thrashing, is the team (delivery team members, scrum master, and product owner) aware of the changes?

Scope change during a sprint is a very undesirable practice. Not just because it goes against the scrum framework, but more so because it almost always has an adverse effect on team morale and focus. If there is an addition to the team, better to set that person to work helping teammates complete the work already defined in the sprint and assign them cards in the next sprint.

If team members are adjusting original time estimates for “accuracy” or whatever reason they may provide, this is little more than gaming the system. It does more harm than good, assuming management is Agile savvy and not intent on using Agile metrics for punitive purposes. On occasion I’ve had to hide the original time estimate entry field from the view of delivery team members and place it under the control of the product owner – out of sight, out of mind. It’s less a concern to me that time estimates are “wrong,” particularly if time estimate accuracy is showing improvement over time or the delta is a somewhat consistent value. I can work with a delivery team member’s time estimates that are 30% off if they are consistently 30% off.

In the case of Figure 2 it was the team’s second sprint and at the retrospective the elephant was called out from hiding: The design was far from stable. The decision was made to set aside scrum in favor of using Kanban until the numerous design issues could be resolved.

Figure 3 shows a burn-down chart that doesn’t go to zero by the end of the sprint.

Figure 3

The team missed their commit and quite a few cards rolled to the next sprint. Since the issue emerged late in the sprint there was little corrective action that could be taken. The answers were left to discovery during the retrospective. In this case, one of the factors was the failure to move development efforts into QA until late in the sprint. This is an all too common issue in cases where the sprint commitments were not fully satisfied. For this team the QA issue was exacerbated by the team simply taking on more than they thought they could commit to completing. The solution was to reduce the amount of work the team committed to in subsequent sprints until a stable sprint velocity emerged.

Conclusion

For a two-week sprint on a project that is 5-6 sprints in, I usually don’t bother looking at time burn-down/burn-up charts for the first 3-4 days. Early trends can be misleading, but by the time a third of the sprint has been completed this metric will usually start to show trends that suggest any emergent problems. For new projects or for newly formed teams I typically don’t look at intra-sprint time metrics until much later in the project life cycle, as there are usually plenty of other obvious and more pressing issues to work through.

I’ll conclude by reiterating my caution that these metrics are yardsticks, not micrometers. It is tempting to read too much into pretty graphs that have precise scales. Rather, the expert Agilist will let the metrics, whatever they are, speak for themselves and work to limit the impact of any personal cognitive biases.

In this series we’ve explored several ways to interpret the signals available to us in estimated time burn-down and actual time burn-up charts. There are numerous other scenarios that can reveal important information from such burn-down/burn-up charts, and I would be very interested in hearing about your experiences with using this particular metric in Agile environments.

Agile Metrics – Time (Part 2 of 3)

In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we’ll look at shifts in the cross-over point between burn-down and burn-up charts and explore what issues may be in play for the teams under these circumstances.
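For orientation, here is a minimal matplotlib sketch of the kind of chart these figures show; the daily numbers are invented:

```python
# Minimal sketch of an estimated-time burn-down vs actual-time burn-up
# chart. The daily numbers are invented; in a healthy sprint the two
# lines cross over near mid-sprint.
import matplotlib.pyplot as plt

days = list(range(11))  # a two-week sprint, in working days
burn_down = [80, 74, 66, 60, 52, 45, 36, 28, 18, 9, 2]  # estimated hours left
burn_up = [0, 8, 15, 22, 31, 38, 47, 55, 63, 72, 79]    # actual hours logged

plt.plot(days, burn_down, label="Burn-down (estimated time remaining)")
plt.plot(days, burn_up, label="Burn-up (actual time logged)")
plt.xlabel("Sprint day")
plt.ylabel("Hours")
plt.legend()
plt.show()
```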

Figure 1 shows a cross-over point occurring early in the sprint.

Figure 1

This suggests the following questions:

  • Is the team working longer hours than needed? If so, what is driving this effort? Are any of the team members struggling with personal problems that have them working longer hours? Are they worried they may have committed to more work than they can complete in the sprint and are therefore trying to stay ahead of the work load? Has someone from outside the team requested additional work outside the awareness of the product owner or scrum master?
  • Has the team overestimated the level of effort needed to complete the cards committed to the sprint? If so, this suggests an opportunity to coach the team on ways to improve their estimating or the quality of the story cards.
  • Has the team focused on the easy story cards early in the sprint, leaving work on the more difficult story cards pending? This isn’t necessarily a bad thing, just something to know and be aware of after confirming it with the team. If accurate, it also points out the importance of using this type of metric for intra-sprint monitoring only and not extrapolating what it shows to a project-level metric.

The answers to these questions may not become apparent until later in the sprint, and the point isn’t to try and “correct” the workflow based on relatively little information. In the case of Figure 1, the “easy” cards had been sized as more difficult than they actually were. The more difficult cards were sized too small and a number of key dependencies were not identified prior to the sprint planning session. This is reflected in the burn-up line that significantly exceeds the initial estimate for the sprint, the jumps in the burn-down line, and the subsequent failure to complete a significant portion of the cards in the sprint backlog. All good fodder for the retrospective.

Figure 2 shows a cross-over point occurring late in the sprint.

Figure 2

On the face of it there are two significant stretches of inactivity. Unless you’re dealing with a blatantly apathetic team, there is undoubtedly some sort of activity going on. It’s just not being reflected in the work records. The task is to find out what that activity is and how to mitigate it.

The following questions will help expose the cause for the extended periods of apparent inactivity:

  • Are one or more members not feeling well or are there other personal issues impacting an individual’s ability to focus?
  • Have they been poached by another project to work on some pressing issue?
  • Are they waiting for feedback from stakeholders, clients, or other team members?
  • Are the story cards unclear? As the saying goes, story cards are an invitation to a conversation. If a story card is confusing, contradictory, or unclear, then the team needs to talk about that. What’s unclear? Where’s the contradiction? As my college calculus professor used to ask when teaching us how to solve math problems, “Where’s the source of the agony?”

The actual reasons behind Figure 2 were twofold. There was a significant technical challenge the developers had to resolve that wasn’t sufficiently described by any of the cards in the sprint, and later in the sprint several key resources were pulled off the project to deal with issues on a separate project.

Figure 3 shows a similar case of a late sprint cross-over in the burn-down/burn-up chart. The reasons for this occurrence were quite different than those shown in Figure 2.

Figure 3

This was an early sprint, and a combination of design and technical challenges were not as well understood as originally thought at the sprint planning session. As these issues emerged, additional cards were created in the product backlog to be addressed in future sprints. Nonetheless, the current sprint commitment was missed by a significant margin.

In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues that may be in play for the teams under these circumstances.