Openness, Grapevines, and Strangleholds

If you truly value openness on your Agile teams, you must untangle them from the grapevine.

Openness is one of the core scrum values. As stated on Scrum.org:

“The scrum team and its stakeholders agree to be open about all the work and the challenges with performing the work.”

This is a very broad statement, encompassing not only openness around work products and processes, but also each individual’s responsibility for ensuring that any challenges related to overall team performance are identified, acknowledged, and resolved. In my experience, issues with openness related to work products or the processes that impact them are relatively straightforward to recognize and resolve. If a key tool, for example, is misconfigured or ill-suited to what the team needs to accomplish, then the need to focus on issues with the tool should be obvious. If there is an information hoarder on the team preventing the free flow of information, this will reveal itself within a few sprints, after a string of unknown dependencies or misaligned deliverables has had a negative impact on the team’s performance. Similarly, if a team member is struggling with a particular story card and for whatever reason lacks the initiative to ask for help, this will reveal itself in short order.

Satisfying the need for openness around individual and team performance, however, is a much more difficult thing to measure. Everyone – and by “everyone” I mean everyone – is by nature very sensitive to being called out as having come up short in any way. Maybe it’s a surprise to them. Maybe it isn’t. But it’s always a hot button. As much as we’d like to avoid treading across this terrain, it’s precisely this hypersensitivity that points to where we need to go to make the most effective changes that impact team performance.

At the top of my list of things to constantly scan for at the team level are the degrees of separation (space and time) between a problem and the people who are part of the problem. Variously referred to as “the grapevine”, back channeling, or triangulation, this separation can be one of the most corrosive influences on a team’s trust and their ability to collaborate effectively. From his research over the past 30 years, Joseph Grenny [1] has observed “that you can largely predict the health of an organization by measuring the average lag time between identifying and discussing problems.” I’ve found this to be true. Triangulation and back-channeling add significantly to the lag time.

To illustrate the problem and a possible solution: I was a newly hired scrum master responsible for two teams, about 15 people in total. At the end of my first week I was approached by one of the other scrum masters in the company. “Greg,” they said in a whisper, “You’ve triggered someone’s PTSD by using a bad word.” [2]

Not an easy thing to learn, having been on the job for less than a week. Doubly so because I couldn’t for the life of me think of what I could have said that would have “triggered” a PTSD response. This set me back on my heels, but I did manage to ask the scrum master to please ask this individual to reach out to me so I could speak with them one-to-one and apologize. At the very least, I suggested they contact HR, as a PTSD response triggered by a word is a sign that someone needs help beyond what any one of us can provide. My colleague’s response was “I’ll pass that on to the person who told me about this.”

“Hold up a minute. Your knowledge of this issue is second hand?”

Indeed it was. Someone told someone who told the scrum master who then told me. Knowing this, I retracted my request. The problem here was the grapevine, and a different tack was needed. I coached the scrum master to 1) never bring something like this to me again, 2) inform the person who told them this tale that nothing like it would be passed along to me in the future, and 3) coach that person to do the same with the person who told them. The person for whom this was an issue should either come to me directly or go to my manager. I then coached my manager and my product owners that if anyone were to approach them with a complaint like this, they should listen carefully to the person, acknowledge that they heard them, and encourage them to speak directly with me.

This should be the strategy for anyone with complaints that do not rise to the level of needing HR intervention. The goal of this approach is to develop behaviors around personal complaints such that everyone on the team knows they have a third person to talk to, and that the issue isn’t going to be resolved unless they talk directly to the person with whom they have an issue. It’s a good strategy for cutting the grapevines and short-circuiting triangulation (or, in my case, quadrangulation). To seal the strategy, I gave a blanket apology to each of my teams the following Monday and let them know what I had requested of my manager and product owners.

The objective was to establish a practice of resolving issues like this at the team level. It’s highly unlikely (and in my case, impossible) that a person new to a job would have prior knowledge of sensitive words and purposely use language that upsets their new co-workers. The presupposition of malice, or an assumption that a new hire should know such things, suggested a number of systemic issues with the teams, something later revealed to be accurate. It wouldn’t be a stretch to say that in this organization the grapevine supplanted instant messaging and email as the primary communication channel. With the cooperation of my manager and product owners, several sizable branches of the grapevine were cut away. Indeed, there was a marked increase in the teams’ attention at stand-ups, and the retrospectives became more animated and productive in the weeks that followed.

Each situation is unique, but the intervention pattern is more broadly applicable: Reduce the number of node hops and associated lag time between the people directly involved with any issues around openness. This in and of itself may not resolve the issues. It didn’t in the example described above. But it does significantly reduce the barriers to applying subsequent techniques for working through the issues to a successful resolution. Removing the grapevine changes the conversation.

References

[1] Grenny, J. (2016, August 19). How to Make Feedback Feel Normal. Harvard Business Review, Retrieved from https://hbr.org/2016/08/how-to-make-feedback-feel-normal

[2] The “bad” word was “refinement.” The team had been using the word “grooming” to refer to backlog refinement, and I had suggested we use the more generally accepted word. Apparently, a previous scrum master for the team had been, shall we say, overly zealous in pressing this same recommendation, such that it was a rather traumatic experience for someone on one of the teams. It later became known that this event was grossly exaggerated, “crying PTSD” as a variation of “crying wolf,” and that the reporting scrum master was probably working to establish a superior position. It would have worked, had I simply cowered and accepted the report as complete and accurate. The strategy described in this article proved effective at preventing this type of behavior.

Behind the Curtain: The Delivery Team Member Role

Even with the formalization of Agile practices into numerous frameworks and methodologies, I have to say not much has changed for the software developer or engineer with respect to how they get work done. I’m not referring to technology. The changes in what software developers and engineers use to get work done have been seismic. The biggest shift in the “how,” in my experience, is that what were once underground practices are now openly accepted and encouraged.

When I was coding full time, in the pre-Agile days and under the burden of CMM, we followed all the practices for documenting use cases, hammering out technical and functional specs, and laboriously talking through requirements. (I smile when I hear developers today complain about the burden of meetings under Scrum.) And when it came time to actually code, I and my fellow developers set the multiple binders of documentation aside and engaged in many then-unnamed Agile practices. We mixed and matched use cases in a way that allowed for more efficient coding of larger functional components. We “huddled” each morning in the passageway to the cube pods to discuss dependencies and brainstorm solutions to technical challenges. Each of these practices became more efficiently organized in Agile as backlog refinement and daily stand-ups. We had numerous other loose practices that were not described in tomes such as CMM.

But Agile delivery teams today are frequently composed of more than just technical functional domains. Non-technical expertise may be included as integral to the team: learning strategists, content editors, creative illustrators, and marketing experts may all be members, depending on the objectives of the project. This represents a significant challenge to the technical members of the team (i.e., software development and tech QA) who are unused to working with non-technical team members. Twenty years ago, a developer who said “Leave me alone so I can code” would have been viewed as a dedicated worker. Today, it’s a sign that the developer risks working in isolation and consequently delivering something that is mismatched with the work being done by the rest of the team.

On an Agile delivery team, whether composed of a diverse set of functional domains or exclusively technical experts, individual team members need to be thinking of the larger picture and the impact of their work on that of their teammates and the overall workflow. They need to be much more attentive to market influences than in the past. The half-life of major versions, let alone entire products, is such that most software products outside a special niche can’t survive without leveraging Agile principles and practices. Team members’ knowledge must expand beyond just their functional domain. The extent to which they possess this knowledge is reflected in the day-to-day behaviors displayed by the team and its individuals.

  • Is everyone on the team sensitive to and respectful of everyone else’s time? This means following through on commitments and promises, including agreed-upon meeting start times. One person showing up five minutes late to a 15-minute stand-up has missed at least a third of the meeting. If the team waits for everyone to arrive before starting, the whole team loses those five minutes: for a six-member team, that’s a half hour, and if it happens every day, that’s 2.5 hours a week. It adds up quickly. Habitual late-comers are also signaling a lack of respect for their team members. They are implicitly saying, “My time is more important than anyone else’s on the team.” Unchecked, this quickly spills over into other areas of the team’s interactions. Enforcing an on-time rule like this is key to encouraging the personal discipline necessary to work effectively as a team. When a scrum master keeps the team in line with a few basic items like this, the larger discipline issues never seem to arise. As U.S. Army General Ann Dunwoody (ret.) succinctly points out in her book, never walk by a mistake. Doing so gives implicit acceptance of the transgression. Problems blossom from there, and it isn’t a pretty flower. (As a scrum master, I cannot “make” someone show up on time. But I can address the respect aspect of this issue by always starting on time. That way, late-comers stand out as late, not as more important. Over time, this tends to correct the lateness issue as well.)
  • Everyone on the team must be capable of tracking a constantly evolving set of dependencies and knowing where their work fits within the flow. To whom will they be delivering work? From whom are they expecting completed work? The answer to these questions may not be a name on the immediate team. Scrum masters must periodically ask these questions explicitly if the connections aren’t coming out naturally during stand-ups. Developing this behavior is about coaching the team to look beyond the work on their desk and understand how they are connected to the larger effort. Software programmers seem to have a natural tendency to build walls around their work; software engineers less so. And on teams with diverse functional groups, it is important for both the scrum master and product owner to be watchful for barriers that appear for reasons that have more to do with a lack of familiarity across functional domains than anything else.
  • Is the entire team actively and consistently engaged with identifying and writing stories?
  • Is the team capable and willing to cross domain boundaries and help? Are they interested in learning about other parts of the product and business?
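The meeting-time arithmetic in the first bullet is easy to make concrete. A minimal sketch, assuming a daily stand-up and a team that waits for late arrivals; the function name and default figures are mine, taken from the example above:

```python
# Cost of a habitual late-comer when the whole team waits to start.

def lateness_cost(team_size: int, minutes_late: int, workdays: int = 5) -> dict:
    """Minutes lost per meeting and per week when everyone waits for one late arrival."""
    per_meeting = team_size * minutes_late  # every member loses the wait time
    per_week = per_meeting * workdays       # daily stand-up, five days a week
    return {"per_meeting": per_meeting, "per_week": per_week}

cost = lateness_cost(team_size=6, minutes_late=5)
print(cost)  # {'per_meeting': 30, 'per_week': 150} -> a half hour a day, 2.5 hours a week
```

Run against your own team size, numbers like these make a persuasive retrospective talking point.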

Product owners and scrum masters need to be constantly scanning for these and other signs of disengagement as well as opportunities to connect cross functional needs.

Behind the Curtain: The Product Owner Role

When someone owns something they tend to keep a closer watch on where that something is and whether or not it’s in good working order. Owners are more sensitive to actions that may adversely affect the value of their investment. It’s the car you own vs the car you rent. This holds true for products, projects, and teams. For this reason the title of “product owner” is well suited to the responsibilities assigned to the role. The explicit call-out to ownership carries a lot of goodness related to responsibility, leadership, and action.

Having been a product owner and having coached product owners, I have a deep respect for anyone who takes on the challenges associated with this role. In my view, it’s the most difficult position to fill on an Agile team. For starters, there are all the things a product owner is responsible for as described in any decent book on scrum: setting the product vision and road map, ordering the product backlog, creating epics and story cards, defining acceptance criteria, etc. What’s often missing from the standard set of bullet items is the “how” for doing them well. Newly minted product owners are usually left to their own devices for figuring this out. And unfortunately, in this short post I won’t be offering any how-to guidance for developing any of the skills generally recognized to be part of the product owner role.

What I’d most like to achieve in this post is calling out several of the key skills associated with quality product ownership that are usually omitted from the books and trainings – the beyond-the-basics items that any product owner will want to include in their continuous learning journey with Agile. In no particular order…

  • Product owners must be superb negotiators when working with stakeholders and team members. The techniques used for each group are different so it’s important to understand the motivations that drive them. I’ve found Jim Camp’s “Start with No” and Chris Voss’ “Never Split the Difference” to be particularly helpful in this regard.
  • One of the four values stated in the Agile Manifesto is “Responding to change over following a plan.” Unfortunately, this is often construed to mean any change at any time is valuable. This Agile value isn’t a directive for maximum entropy and chaos. Product owners must remain vigilant to scope changes. And the boundaries for scope are defined by the decisions product owners make. So product owners must be decisive and committed to the decisions, agreements, and promises that have been made with stakeholders and the team.
  • Product owners need to have a good sense for when experimentation may be needed to sort out any complex or risky features in the project – creating spikes or proof-of-concept work early enough in the project so as to avoid any costly pivots later in the project.
  • Even with experimentation, making the call to pivot will require a product owner’s clear understanding of past events and any path forward that offers the greatest chance for success. As the sage says, predictions are difficult to make, particularly about the future. The expectation isn’t for clairvoyance, rather for the ability to pay close attention to what the (non-vanity) data are telling them.
  • Understanding how to work through failures and dealing with “The Dip,” as Seth Godin calls it, are also important skills for a product owner. The team and the stakeholders are going to look to the product owner’s leadership to demonstrate confidence that they are on the right track.

More than other roles on an Agile team, the product owner must be a truly well-rounded and experienced individual. Paradoxically, it is a role that is both constrained by the highly visible nature of the position and dynamic due to the skill set required to maximize the chances for project success.

Behind the Curtain: The Scrum Master Role

It is popular to characterize the scrum master role as following a “servant leader” style of engagement with scrum teams. Beyond that, not much else is offered to help unpack just what that means. The more you read about “servant leadership” the more confused you’re likely to get. A lot of what’s available on the Internet and offered in trainings ends up being contradictory and overburdening to someone just trying to help their team excel. Telling someone they’ll need to become a “servant leader” can be like telling them to go away closer.

So let’s unpack that moniker a little and state what it means in more practical terms for the average, yet effective, scrum master.

To begin, it’s most helpful to put a little space between the two skills. As a scrum master you’ll need to serve. As a scrum master you’ll need to lead. When you’re a good scrum master you’ll know when to do each of those and how. Someone who tries to do both at once usually ends up succeeding at neither.

As a servant, a scrum master has to keep a vigilant eye on making sure their servant behaviors are in the best interests of helping the delivery team succeed. If the scrum master lets the role devolve to little more than an admin to the product owner or the team then they’ve lost their way. It will certainly seem easier to write the story cards for the team, for example, rather than spend the time coaching them how to write story cards. This is a case where leadership is needed. On the other hand, if someone on the team is blocked by a dependency due to a past due deliverable from another team, then the scrum master would indeed serve the best interests of the team by working with the other team to resolve the dependency rather than require the delivery team member to shift focus away from completing work in the queue.

As a leader, a scrum master has to recognize any constraints the context places on the scope of their responsibilities. In meet-ups and conferences I’ve heard people say the scrum master “should be the team spiritual leader,” the team “therapist,” or “a shaman,” able to minister to the team’s emotional needs. I take a more pragmatic approach and believe a scrum master should be able to recognize when someone on their team is in need of a qualified professional and, if necessary, assist them in finding the help they need. (HR departments exist, in part, to handle these situations.) This is still good leadership. It’s important to recognize the limits of one’s own capabilities and qualifications. Scrum master as therapist is a quagmire, to be sure, and I’ve yet to meet one with the boots to handle it.

Servant leader aside, the “master” in “scrum master” is the source of no end of grief. It’s partly to blame for the tendency in people to ascribe super powers to the scrum master role, such as those described above. In addition, the “master” part of the title implies “boss” or “manager.” It is a common occurrence for team members to address their stand-up conversation to me – as scrum master – rather than to the team. I have to interrupt and specifically direct the individual to speak to the team. I’ve had team members share that they assumed I was going to take what was communicated in the stand-up and report directly to the product owner or the department head or even higher in the organization. “How come you’re not taking notes?”, they ask. “The stand-up is for you, not for me. Take your own notes, if you wish,” I reply. (I do take notes, but only for the purpose of following up on certain issues or capturing action items for myself – things I need to do to remove impediments that are outside the team’s ability to resolve, for example.)

All the downsides of what I’ve described so far can be amplified by the Dunning–Kruger-afflicted scrum master. Amplified again if they are “certified.” To paraphrase from the hacker culture on how you know you’re a hacker (in the classical sense): you aren’t a scrum master until someone else calls you a scrum master. There is always more to learn and apply. A scrum master who remains attentive to that fact will naturally develop quality servant and strong leadership skills.

Finally, the scrum master role is often a thankless job. When new to a team and working to establish trust and credibility the scrum master will need to build respect but often won’t be liked during the process. Shifting individuals out of their comfort zone, changing bad habits, or negotiating 21st Century sensitivities is no easy task even when people like and respect you. So a quality scrum master will work to establish respect first and let the liking follow in due course.

Having succeeded at this, a scrum master will find their business shifting to the background, so much so that some may wonder why they’re taking up space on the team. Working to establish and maintain this level of performance with a team while remaining in the background is at the heart of the servant part of servant/leader.

Story Points and Fuzzy Bunnies

The scrum framework is forever tied to the language of sports in general and rugby in particular. We organize our project work around goals, sprints, points, and daily scrums. An unfortunate consequence of organizing projects around a sports metaphor is that the language of gaming ends up driving behavior. For example, people have a natural inclination to associate the idea of story points to a measure of success rather than an indicator of the effort required to complete the story. The more points you have, the more successful you are. This is reflected in an actual quote from a retrospective on things a team did well:

We completed the highest number of points in this sprint than in any other sprint so far.

This was a team that lost sight of the fact that they were the only team on the field. They were certain to be the winning team. They were also destined to be the losing team. They were focused on story point acceleration rather than a constant, predictable velocity.

More and more I’m finding less and less value in using story points as an indicator for level of effort estimation. If Atlassian made it easy to change the label on JIRA’s story point field, I’d change it to “Fuzzy Bunnies” just to drive this idea home. You don’t want more and more fuzzy bunnies, you want no more than the number you can commit to taking care of in a certain span of time typically referred to as a “sprint.” A team that decides to take on the care and feeding of 50 fuzzy bunnies over the next two weeks but has demonstrated – sprint after sprint – they can only keep 25 alive is going to lose a lot of fuzzy bunnies over the course of the project.

It is difficult for people new to scrum or Agile to grasp the purpose behind an abstract idea like story points. Consequently, they are unskilled in how to use them as a measure of performance and improvement. Developing this skill can take considerable time and effort. The care and feeding of fuzzy bunnies, however, they get. Particularly with teams that include non-technical domains of expertise, such as content development or learning strategy.
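The fuzzy-bunny rule – commit to no more than you’ve demonstrated you can care for – can be sketched in a few lines. This is an illustration only; the function name and the three-sprint window are my own choices, not part of any framework:

```python
# Cap the sprint commitment at the team's demonstrated velocity,
# however many "fuzzy bunnies" they would like to adopt.

def sustainable_commitment(completed_per_sprint: list, window: int = 3) -> int:
    """Demonstrated velocity: mean of the last `window` sprints, rounded down."""
    recent = completed_per_sprint[-window:]
    return sum(recent) // len(recent)

history = [22, 26, 27]  # bunnies actually kept alive, sprint after sprint
asked_for = 50          # what the team wants to take on next sprint
commit = min(asked_for, sustainable_commitment(history))
print(commit)  # 25 -- not 50
```

The point of the cap is not to limit ambition but to keep the commitment honest against the evidence of past sprints.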

A note here for scrum masters. Unless you want to exchange your scrum master stripes for a saddle and spurs, be wary of your team turning story pointing into an animal farm. Sizing story cards to match the exact size and temperament from all manner of animals would be just as cumbersome as the sporting method of story points. So, watch where you throw your rope, Agile cowboys and cowgirls.

(This article cross-posted at LinkedIn)


Image credit: tsaiproject (Modified in accordance with Creative Commons Attribution 2.0 Generic license)

Agile Metrics – Time (Part 3 of 3)

In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we looked at shifts in the cross-over point between burn-down and burn-up charts. In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues the teams may be struggling with under these circumstances.

Figure 1 shows a burn-up that by the end of the sprint significantly exceeded the starting value for the original estimate.

Figure 1

There isn’t much mystery around a chart like this. The time needed to complete the work was significantly underestimated. The mystery is in the why and what that led to this situation.

  • Were there unexpected technical challenges?
  • Were the stories poorly defined?
  • Were the acceptance criteria unclear?
  • Were the sprint goals, objectives, or minimum viable product definition unclear?

Depending on the tools used to capture team metrics, it can be helpful to look at individual performances. What’s the differential between story points and estimated time vs actual time for each team member? Hardly ever appropriate as a disciplinary tool, this type of analysis can be invaluable for knowing who needs professional development and in what areas.
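If the tracking tool can export per-card estimates and actuals, the differential is simple to compute. A hypothetical sketch; the record format and the names are invented for illustration:

```python
# Per-member differential between estimated and actual hours,
# aggregated across the cards each member owned in the sprint.
from collections import defaultdict

cards = [
    {"owner": "dev_a", "estimated_h": 8, "actual_h": 12},
    {"owner": "dev_a", "estimated_h": 4, "actual_h": 6},
    {"owner": "dev_b", "estimated_h": 6, "actual_h": 6},
]

totals = defaultdict(lambda: [0, 0])  # owner -> [estimated, actual]
for card in cards:
    totals[card["owner"]][0] += card["estimated_h"]
    totals[card["owner"]][1] += card["actual_h"]

# Ratio > 1.0 means the member tends to underestimate.
differential = {who: round(act / est, 2) for who, (est, act) in totals.items()}
print(differential)  # {'dev_a': 1.5, 'dev_b': 1.0}
```

Used this way, the numbers point toward coaching conversations, not performance reviews.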

In this case, there were several technical challenges related to new elements of the underlying architecture, and the team put in extra hours to resolve them. Even so, they were unable to complete all the work they committed to in the sprint. The scrum master and product owner need to monitor this so it doesn’t become a recurrent event; left unchecked, it risks team burnout and morale erosion. There are likely some unstated dependencies or skill deficiencies that need to be put on the table for discussion during the retrospective.

Figure 2 shows, among other things, unexpected jumps in the burn-down chart. There is clearly a significant amount of thrashing evident in the burn-down (which stubbornly refuses to actually burn down).

Figure 2

Questions to explore:

  • Are cards being brought into the sprint after the sprint has started and why?
  • Are original time estimates being changed on cards after the sprint has started?
  • Is there a stakeholder in the grass, meddling with the team’s commitment?
  • Was a team member added to the team and cards brought into the sprint to accommodate the increased bandwidth?
  • Whatever is causing the thrashing, is the team (delivery team members, scrum master, and product owner) aware of the changes?

Scope change during a sprint is a very undesirable practice. Not just because it goes against the scrum framework, but more so because it almost always has an adverse effect on team morale and focus. If there is an addition to the team, better to set that person to work helping teammates complete the work already defined in the sprint and assign them cards in the next sprint.

If team members are adjusting original time estimates for “accuracy” or whatever reason they may provide, this is little more than gaming the system. It does more harm than good, assuming management is Agile savvy and not intent on using Agile metrics for punitive purposes. On occasion I’ve had to hide the original time estimate entry field from the view of delivery team members and place it under the control of the product owner – out of sight, out of mind. It’s less of a concern to me that time estimates are “wrong,” particularly if time estimate accuracy is showing improvement over time or the delta is a somewhat consistent value. I can work with a delivery team member’s time estimates that are 30% off if they are consistently 30% off.
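The point about a consistent 30% offset can be made concrete: a stable bias is easy to plan around, while a noisy one is not. A minimal sketch using Python’s standard statistics module; the sample numbers are invented:

```python
import statistics

def bias_profile(estimates: list, actuals: list) -> tuple:
    """Mean and standard deviation of the actual/estimate ratio.
    A mean well off 1.0 with a small spread is a correctable bias."""
    ratios = [a / e for e, a in zip(estimates, actuals)]
    return statistics.mean(ratios), statistics.stdev(ratios)

# A member who is reliably ~30% over: easy to plan around.
mean_r, spread = bias_profile([10, 8, 5, 12], [13, 10.4, 6.5, 15.6])
print(round(mean_r, 2), round(spread, 2))  # 1.3 0.0 -- consistent, so correctable
```

A planner can simply scale such a member’s estimates by the mean ratio; it’s the large spread, not the offset, that makes estimates unusable.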

In the case of Figure 2 it was the team’s second sprint and at the retrospective the elephant was called out from hiding: The design was far from stable. The decision was made to set aside scrum in favor of using Kanban until the numerous design issues could be resolved.

Figure 3 shows a burn-down chart that doesn’t go to zero by the end of the sprint.

Figure 3

The team missed their commit and quite a few cards rolled to the next sprint. Since the issue emerged late in the sprint there was little corrective action that could be taken. The answers were left to discovery during the retrospective. In this case, one of the factors was the failure to move development efforts into QA until late in the sprint. This is an all too common issue in cases where the sprint commitments were not fully satisfied. For this team the QA issue was exacerbated by the team simply taking on more than they thought they could commit to completing. The solution was to reduce the amount of work the team committed to in subsequent sprints until a stable sprint velocity emerged.

Conclusion

For a two week sprint on a project that is 5-6 sprints in, I usually don’t bother looking at time burn-down/burn-up charts for the first 3-4 days. Early trends can be misleading, but by the time a third of the sprint has been completed this metric will usually start to show trends that suggest any emergent problems. For new projects or for newly formed teams I typically don’t look at intra-sprint time metrics until much later in the project life cycle as there are usually plenty of other obvious and more pressing issues to work through.

I’ll conclude by reiterating my caution that these metrics are yard sticks, not micrometers. It is tempting to read too much into pretty graphs that have precise scales. Rather, the expert Agilist will let the metrics, whatever they are, speak for themselves and work to limit the impact of any personal cognitive biases.

In this series we’ve explored several ways to interpret the signals available to us in estimated time burn-down and actual time burn-up charts. There are numerous other scenarios that can reveal important information in such burn-down/burn-up charts, and I would be very interested in hearing about your experiences with using this particular metric in Agile environments.

Agile Metrics – Time (Part 2 of 3)

In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we’ll look at shifts in the cross-over point between burn-down and burn-up charts and explore what issues may be in play for the teams under these circumstances.

Figure 1 shows a cross-over point occurring early in the sprint.

Figure 1

This suggests the following questions:

  • Is the team working longer hours than needed? If so, what is driving this effort? Are any of the team members struggling with personal problems that have them working longer hours? Are they worried they may have committed to more work than they can complete in the sprint and are therefore trying to stay ahead of the work load? Has someone from outside the team requested additional work outside the awareness of the product owner or scrum master?
  • Has the team overestimated the level of effort needed to complete the cards committed to the sprint? If so, this suggests an opportunity to coach the team on ways to improve their estimating or the quality of the story cards.
  • Has the team focused on the easy story cards early in the sprint and work on the more difficult story cards is pending? This isn’t necessarily a bad thing, just something to know and be aware of after confirming this with the team. If accurate, it also points out the importance of using this type of metric for intra-sprint monitoring only and not extrapolate what it shows to a project-level metric.

The answers to these questions may not become apparent until later in the sprint, and the point isn’t to try to “correct” the work flow based on relatively little information. In the case of Figure 1, the “easy” cards had been sized as being more difficult than they actually were. The more difficult cards were sized too small, and a number of key dependencies were not identified prior to the sprint planning session. This is reflected in the burn-up line that significantly exceeds the initial estimate for the sprint, the jumps in the burn-down line, and the subsequent failure to complete a significant portion of the cards in the sprint backlog. All good fodder for the retrospective.
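
One way to spot a shifted cross-over quickly is to compute the first day on which the burn-up line meets the burn-down line. The following is a minimal Python sketch; splitting the sprint into early/mid/late thirds is an assumed convention for illustration, not a fixed rule:

```python
def crossover_day(burn_down, burn_up):
    """First sprint day on which cumulative time spent (burn-up) meets or
    exceeds estimated time remaining (burn-down); None if they never cross."""
    return next(
        (day for day, (down, up) in enumerate(zip(burn_down, burn_up)) if up >= down),
        None,
    )

def crossover_timing(burn_down, burn_up):
    """Classify the cross-over as early, mid, or late by thirds of the
    sprint -- an assumed convention for flagging charts worth a closer look."""
    day = crossover_day(burn_down, burn_up)
    if day is None:
        return "none"
    n = len(burn_down)
    if day < n / 3:
        return "early"
    if day > 2 * n / 3:
        return "late"
    return "mid"
```

An "early" or "late" result isn’t a verdict; it’s simply a prompt to start asking the questions above.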

Figure 2 shows a cross-over point occurring late in the sprint.

Figure 2

On the face of it, there are two significant stretches of inactivity. Unless you’re dealing with a blatantly apathetic team, there is undoubtedly some sort of activity going on; it’s just not being reflected in the work records. The task is to find out what that activity is and how to mitigate it.

The following questions will help expose the cause for the extended periods of apparent inactivity:

  • Are one or more members not feeling well or are there other personal issues impacting an individual’s ability to focus?
  • Have they been poached by another project to work on some pressing issue?
  • Are they waiting for feedback from stakeholders, clients, or other team members?
  • Are the story cards unclear? As the saying goes, story cards are an invitation to a conversation. If a story card is confusing, contradictory, or unclear, then the team needs to talk about that. What’s unclear? Where’s the contradiction? As my college calculus professor used to ask when teaching us how to solve math problems, “Where’s the source of the agony?”

The actual reasons behind Figure 2 were twofold: there was a significant technical challenge the developers had to resolve that wasn’t sufficiently described by any of the cards in the sprint, and later in the sprint several key resources were pulled off the project to deal with issues on a separate project.

Figure 3 shows a similar case of a late-sprint cross-over in the burn-down/burn-up chart. The reasons for this occurrence were quite different from those behind Figure 2.

Figure 3

This was an early sprint, and a combination of design and technical challenges was not as well understood at the sprint planning session as originally thought. As these issues emerged, additional cards were created in the product backlog to be addressed in future sprints. Nonetheless, the current sprint commitment was missed by a significant margin.

In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues that may be in play for the teams under these circumstances.

Agile Metrics – Time (Part 1 of 3)

Some teams choose to use card-level estimated and actual time as one of the level-of-effort or performance markers for project progress and health. For others, it’s a requirement of the work environment due to management or business constraints. If your situation resembles one of these cases, then you will need to know how to use time metrics responsibly and effectively. This series of articles will establish several common practices you can use to develop your skills for evaluating and leveraging time-based metrics in an Agile environment.

It’s important to keep in mind that time estimates are just one of the level-of-effort or performance markers that can be used to track team and project health. There can, and probably should, be other markers in the overall mix of how team and project performance is evaluated. Story points, business value, quality of information and conversation from stand-up meetings, various product backlog characteristics, cycle time, and cumulative flow are all examples of additional views into the health and progress of a project.

In addition to using multiple views, it’s important to be deeply aware of the strengths and limits presented by each of them. The limits are many while the strengths are few. Their value comes in evaluating them in concert with one another, not in isolation. One view may suggest something that can be confirmed or negated by another view into team performance. We’ll visit and review each of these and other metrics after this series of posts on time.

In practice, things are never as cut and dried as the examples presented in this series. Just as I previously described multiple views based on different metrics, each metric can offer multiple views. My caution is that these views shouldn’t be read like an electrocardiogram, with the expectation of a rigidly repeatable pattern from which a slight deviation could signal a catastrophic event. The examples are extracted from hundreds of sprints and dozens of projects over the course of many years and are more like seismology graphs – they reveal patterns over time that are very much context dependent.

Estimated and actual time metrics allow teams to monitor sprint progress by comparing time remaining to time spent. These are plotted as a burn-down line and a burn-up line, respectively, named for the direction the data moves on the chart. In Figure 1, the red line represents the estimated time remaining (burn-down) while the green line represents the amount of time logged against the story cards (burn-up) over the course of a two-week sprint. (The gray line is a hypothetical ideal for burn-down.)
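
To make the two lines concrete, here is a minimal sketch of how the series behind such a chart might be computed from daily logs. The numbers are illustrative, not taken from Figure 1; the burn-down values come from the team’s daily re-estimates, while the burn-up is simply cumulative logged time:

```python
from itertools import accumulate

# Hypothetical two-week sprint (10 working days) of daily data.
remaining_estimates = [200, 180, 162, 140, 118, 96, 77, 52, 28, 5]     # burn-down
hours_logged        = [0,   22,  20,  24,  21,  23, 18, 26, 25, 22]    # per day

def burn_up(hours_per_day):
    """Cumulative actual time spent -- the line that climbs over the sprint."""
    return list(accumulate(hours_per_day))

def ideal_burn_down(total_estimate, days):
    """The straight 'ideal' reference line: an even burn from the initial
    estimate down to zero by the last day."""
    return [total_estimate * (1 - day / (days - 1)) for day in range(days)]

spent = burn_up(hours_logged)
ideal = ideal_burn_down(remaining_estimates[0], len(remaining_estimates))
```

Plotting `remaining_estimates`, `spent`, and `ideal` against sprint days reproduces the red, green, and gray lines described above.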

Figure 1

The principal value of a burn-down/burn-up chart for time is the view it gives into intra-sprint performance. I usually look at this chart just prior to a team’s daily stand-up to get a sense of whether there are any questions I need to be asking about emerging trends. In this series of posts we’ll explore several of the things to look for when preparing for a stand-up. At the end of the sprint, the burn-down/burn-up chart can be a good reference to use during the retrospective when looking for ways to improve.

The sprint shown in Figure 1 is about as ideal a picture as one can expect. It shows all the points I look for that tell me, insofar as time is concerned, the sprint performance is in good health.

  • There is a cross-over point roughly in the middle of the sprint.
  • At the cross-over point about half of the estimated time has been burned down.
  • The burn-down time is a close match to the burn-up at both the cross-over point and the end of the sprint.
  • The burn-down and burn-up lines show daily movement in their respective directions.
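
These four rules of thumb can be sketched as simple checks over a sprint’s daily series. The tolerances below (the middle third of the sprint, 15% and 10% margins) are assumed for illustration, not fixed rules:

```python
def sprint_health(burn_down, burn_up):
    """Apply the four rules of thumb to one sprint's daily series.
    Tolerances (middle third, 15%, 10%) are illustrative assumptions."""
    n = len(burn_down)
    initial = burn_down[0]
    # First day on which cumulative time spent meets estimated time remaining.
    cross = next((i for i in range(n) if burn_up[i] >= burn_down[i]), None)
    return {
        # 1. Cross-over roughly mid-sprint (within the middle third).
        "crossover_mid_sprint": cross is not None and n / 3 <= cross <= 2 * n / 3,
        # 2. About half the estimated time burned down at the cross-over.
        "half_burned_at_cross": cross is not None
            and abs(burn_down[cross] - initial / 2) <= 0.15 * initial,
        # 3. Time burned down closely matches time burned up at sprint end.
        "end_match": abs((initial - burn_down[-1]) - burn_up[-1]) <= 0.10 * initial,
        # 4. Both lines move every day in their respective directions.
        "daily_movement": all(burn_down[i + 1] < burn_down[i] for i in range(n - 1))
            and all(burn_up[i + 1] > burn_up[i] for i in range(n - 1)),
    }
```

A failed check isn’t a diagnosis; it tells you which of the questions in this series to start asking at the next stand-up.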

In Part 2, we’ll look at several cases where the cross-over point shifts and explore the issues the teams under these circumstances might be struggling with.

(This article cross-posted on LinkedIn.)

How to Know You Have a Well Defined Minimum Viable Product

Conceptually, the idea of a minimum viable product (MVP) is easy to grasp. Early in a project, it’s a deliverable that bears some resemblance to the final product but is barely able to stand on its own without lots of hand-holding and explanation for the customer’s benefit. In short, it’s terrible, buggy, and unstable. By design, MVPs lack features that may eventually prove to be essential to the final product. And we deliberately show the MVP to the customer!

We do this because the MVP is the engine that turns the build-measure-learn feedback loop. The key here is the “learn” phase. The essential features of the final product are often unclear or even unknown early in a project. Furthermore, they are largely undefinable or unknowable without multiple iterations through the build-measure-learn feedback cycle with the customer early in the process.

So early MVPs aren’t very good. They’re also not very expensive. This, too, is by design, because an MVP’s very raison d’être is to test the assumptions we make early on in a project. They are low-budget experiments that follow from a simple strategy:

  1. State the good faith assumptions about what the customer wants and needs.
  2. Describe the tests the MVP will satisfy that are capable of measuring the MVP’s impact on the stated assumptions.
  3. Build an MVP that tests the assumptions.
  4. Evaluate the results.

If the assumptions are not stated and the tests are vague, the MVP will fail to achieve its purpose and will likely result in wasted effort.
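
One lightweight way to force assumptions and tests to be stated before anything is built is to record each MVP experiment in a small structure. This is purely an illustrative sketch; the names are hypothetical, not an established practice from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class MvpExperiment:
    """Record of one low-budget MVP experiment (illustrative sketch)."""
    assumptions: list   # good-faith statements of what the customer wants/needs
    tests: list         # measurable checks the MVP must satisfy
    results: dict = field(default_factory=dict)  # filled in during evaluation

    def is_well_formed(self):
        """Without explicit assumptions and tests, the MVP can't do its job."""
        return bool(self.assumptions) and bool(self.tests)
```

The point of the structure is not the code itself but the discipline: an experiment with empty `assumptions` or `tests` fields is flagged before any effort is wasted building it.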

The “product” in “minimum viable product” can be almost anything: a partial or early design flow, a wireframe, a collection of simulated email exchanges, the outline to a user guide, a static screen mock-up, a shell of screen panels with placeholder text that can nonetheless be navigated – anything that can be placed in front of a customer for feedback qualifies as an MVP. In other words, a sprint can contain multiple MVPs depending on the functional groups involved with the sprint and the maturity of the project. As the project progresses, the individual functional group MVPs will begin to integrate and converge on larger and more refined MVPs, each gaining in stability and quality.

MVPs are not an end unto themselves. They are tangible evidence of the development process in action. The practice of iteratively developing MVPs helps build the skill of rapid evaluation and learning among product owners and agile delivery team members. A buggy, unstable, ugly, bloated, or poorly worded MVP is only a problem if it’s put forward as the final product. The driving goal behind iterative MVPs is not perfection; rather, it is to support the process of learning what needs to be developed for the optimal solution that solves the customer’s problems.

“Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses.” – Eric Ries, The Lean Startup

So how might product owners and Agile teams begin to get a handle on defining an MVP? There are several questions the product owner and team can ask of themselves, in light of the product backlog, that may help guide their focus and decisions. (In the following, “stakeholders” can mean company executives or external customers.)

  • Identify the likely set of stakeholders who will be attending the sprint review. What will these stakeholders need to see so that they can offer valuable feedback? What does the team need to show in order to spark the most valuable feedback from the stakeholders?
  • What expectations have been set for the stakeholders?
  • Is the distinction clear between what the stakeholders want vs what they need?
  • Is the distinction clear between high and low value? Is the design cart before the value horse?
  • What are the top two features or functions the stakeholders will be expecting to see? What value – to the stakeholders – will these features or functions deliver?
  • Will the identified features or functions provide long term value or do they risk generating significant rework down the road?
  • Are the identified features or functions leveraging code, content, or UI/UX reuse?

Recognizing an MVP – Less is More

Since an MVP can be almost anything, it is perhaps easier to begin any conversation about MVPs by touching on the elements missing from an MVP.

An MVP is not a quality product. Using any generally accepted definition of “quality” in the marketplace, an MVP will fail on all counts. Well, on most counts. The key is to consider relative quality. At the beginning of a sprint, the standards of quality for an MVP are framed by the sprint goals and objectives. If it meets those goals, the team has successfully created a quality MVP. If measured against the external marketplace or the quality expectations of the customer, the MVP will almost assuredly fail inspection.

Your MVPs will probably be ugly, especially at first. They will be missing features. They will be unstable. Build them anyway. Put them in front of the customer for feedback. Learn. And move on to the next MVP. Progressively, they will begin to converge on the final product that is of high quality in the eyes of the customer. MVPs are the stepping stones that get you across the development stream and to the other side where all is sunny, beautiful, and stable. (For more information on avoiding the trap of presupposing what a customer means by quality and value, see “The Value of ‘Good Enough’”.)

An MVP is not permanent. Agile teams should expect to throw away several, maybe even many, MVPs on their way to the final product. If they aren’t, then it is probable they are not learning what they need to about what the customer actually wants. In this respect, waste can be a good, even important thing. The driving purpose of the MVP is to rapidly develop the team’s understanding of what the customer needs, the problems they are expecting to have solved, and the level of quality necessary to satisfy each of these goals.

MVPs are not the truth. They are experiments meant to get the team to the truth. By virtue of their low-quality, low-cost nature, MVPs quickly shake out the attributes to the solution the customer cares about and wants. The solid empirical foundation they provide is orders of magnitude more valuable to the Agile team than any amount of speculative strategy planning or theoretical posturing.

(This article cross-posted on LinkedIn.)

How to know when Agile is working

On a recent flight into Houston, my plane was diverted to Austin due to weather. Before we could land at Austin, we were re-diverted back to Houston. I’ve no idea why the gears aligned this way, but this meant we were out-of-sequence with the baggage handling system and our luggage didn’t arrive at the claim carousel for close to an hour and a half. Leading up to the luggage arrival was an unfortunate display from an increasingly agitated young couple. They were loudly communicating their frustration to an airport employee with unknown authority. Their frustration was understandable in light of the fact that flights were undoubtedly going to be missed.

At one point, the woman exclaimed, “This isn’t how this is supposed to work!”

I matched this with a similar comment from a developer on one of my project teams. Stressed by the workload he had committed to, he declared there were too many meetings and therefore “the agile process is not working!” When explored, it turned out some version of this sentiment was common among the software development staff.

In both cases, the process was working. It just wasn’t working as desired or expected based on past experience. In both cases, events unfolded indifferent to expectations. The fact that our luggage almost always shows up on time and that agile frequently goes smoothly belies how susceptible both processes actually are to unknown variables that can disrupt the usual flow of events.

There is a difference with agile, however. When practiced well, it adapts to the vagaries of human experience. We expect the unexpected, even if we don’t know what form that may take.

There is an assumption being made by the developers that “working agile” makes work easy and stress-free. I’ve never found that to be the case. And I don’t know anyone who has. Agile stresses teams differently than waterfall does. I’ve experienced high stress developing code under both agile and waterfall. With agile, however, teams have a better shot at deciding for themselves the stress they want to take on. But there will be stress. Unstressed coders deliver code of questionable value and quality, if they deliver at all.

The more accurate assessment to make here is that the developers aren’t practicing agile as well as they could. That’s fundamentally different from “agile isn’t working.” In particular, the developers didn’t understand what they had committed to. Every single sprint planning session I’ve run (and the way I coach them to be run) begins with challenging the team members to think about things that may impact the work they will commit to in the next sprint – vacations, family obligations, doctor visits, other projects, stubbed toes, alien abductions – anything that may limit the effort they can commit to. What occurred with the developers was a failure to take responsibility for their actions and decisions, a measure of dishonesty (albeit unintended) toward themselves and their teammates by saying “yes” to work while later wishing “no.”

Underlying this insight into developer workload may be something much more unsettling. If anyone on your team has committed to more than they can complete and has done so for a number of sprints, your project may be at risk. The safe assumption would be that the project has a hidden fragility that will surprise you when it breaks. Project time lines, deliverables, and quality will suffer not solely because there are too many meetings, but because the team does not have a good understanding of what they need to complete and what they can commit to. What is the potential impact on other projects (internal and client) knowing that one or more of the team members is over committing? What delays, quality issues, or major pivots are looming out there ready to cause significant disruptions?

The resolution to this issue requires time and the following actions:

  • Coaching for creating and refining story cards
  • Coaching for understanding how to estimate work efforts (points and/or time)
  • Developing skills in the development staff for recognizing card dependencies
  • Developing skills for time management
  • Finding ways to modify the work environment so that it is easier for developers to focus on work for extended periods of time
  • Evaluating the meeting load to determine if there are extraneous meetings
  • Based on metrics, limiting each developer’s work commitment for several sprints so that it falls within their ability to complete
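
As a sketch of the last action, a commitment cap can be derived from the hours a developer actually completed in recent sprints. The three-sprint window and 0.85 safety buffer here are illustrative assumptions, not fixed rules; the point is that the cap comes from measured history rather than optimism:

```python
def capped_commitment(completed_hours_per_sprint, window=3, buffer=0.85):
    """Suggest a per-sprint commitment cap from the hours a developer
    actually completed in recent sprints. The window size and safety
    buffer are illustrative assumptions to be tuned per team."""
    recent = completed_hours_per_sprint[-window:]   # most recent sprints only
    return buffer * sum(recent) / len(recent)       # buffered recent average
```

For example, a developer who completed 40, 50, and 60 hours over the last three sprints would be capped at roughly 42.5 hours until their estimates and completions begin to converge.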

Image credit: Prawny