Minimum, you say.
And viable, it must be.
And stand on its own?
Somewhere along the path of studying Aikido for 25 years I found a useful perspective on the art that applies to a lot of skills in life. Aikido is easy to understand. It’s a way of living that leaves behind it a trail of techniques. What’s hard is overcoming the unending stream of little frustrations and often self-imposed limitations. What’s hard is learning how to make getting up part of falling down. What’s hard is healing after getting hurt. What’s hard is learning the importance of recognizing when a white belt is more of a master than you are. In short, what’s hard is mastering the art.
The same can be said about practicing Agile. Agile is easy to understand. It is four fundamental values and twelve principles. The rest is just a trail of techniques and supporting tools – rapid application development, XP, scrum, Kanban, Lean, SAFe, TDD, BDD, stories, sprints, stand-ups – all variations on a very simple foundation, adapted to meet the prevailing circumstances. How to apply the best technique for a given situation is learned by walking the path toward mastery – working through the endless stream of frustrations and limitations, learning how to make failing part of succeeding, recognizing when you’re not the smartest person in the room, and learning how to heal after getting hurt.
If an Aikidoka is attempting to apply a particular technique to an opponent and it isn’t working, their choices are to change how they’re performing the technique, change the technique, or invent a new technique based on the fundamentals. Expecting the world to adapt to how you think it should go is a fool’s path. Opponents in life – whether real people, ideas, or situations – are notoriously uncompromising in this regard. The laws of physics, as they say, don’t much care about what’s going on inside your skull. They stubbornly refuse to accommodate your beliefs about how things “should” go.
The same applies to Agile practices. If something doesn’t seem to be working, it’s time to step in front of the Agile mirror and ask yourself a few questions. What is it about the fundamentals you’re not paying attention to? Which of the values are out of balance? What technique is being misapplied? What different technique will better serve? If your team or organization needs to practice Lean ScrumXPban SAFe-ly, then do that. Be bold in your quest to find what works best for your team. The hue and cry you hear won’t be from the gods, only those who think they are – mere mortals more intent on ossifying Agile as policy, preserving their status, or preventing the perceived corruption of their legacy.
But I’m getting ahead of things. Before you can competently discern which practices a situation needs and how to best structure them you must know the fundamentals.
There are no shortcuts.
In this series of posts I hope to open a dialog about mastering Agile practices. We’ll begin by studying several maps that have been created over time that describe the path toward mastery, discuss the benefits and shortcomings of each of these maps, and explore the reasons why many people have a difficult time following these maps. From there we’ll move into the fundamentals of Agile practices and see how a solid understanding of these fundamentals can be used to respond to a wide variety of situations and contexts. Along the way we’ll discover how to develop an Agile mindset.
A group of individuals well qualified to complete a set of tasks can efficiently and reliably estimate the level of effort for those tasks with a collaborative estimation process like planning poker. Such teams have a good measure of skill overlap. In the context of the problem set, each of the team members is a generalist in the sense that any one of them could work on a variety of cross-functional tasks during a sprint. Differences in preferred coding language among team members, for example, are less of an issue when everyone understands advanced coding practices and the underlying architecture of the solution.
With a set of complementary technical skills, it is easier to agree on work estimates. Other benefits flow from well-matched teams, too. A stable sprint velocity emerges much sooner. There is greater cross-functional participation. And re-balancing the workload when “disruptors” occur – vacations, illness, uncommon feature requests, and the like – is easier to coordinate.
Once the set of tasks starts to include items that fall outside the expertise of the group and the group begins to include cross-functional team members, a process like planning poker becomes increasingly less reliable. The issue is the mismatch between relative scales of expertise. A content editor is likely to have very little insight into the effort required to modify a production database schema. Their estimate may be little more than a guess based on what they think it “should” be. Similarly for a coder faced with estimating the effort needed to translate 5,000 words of text from English to Latvian. Unless, of course, you have an English-speaking coder on your team who is also fluent in Latvian.
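The breakdown is visible in the numbers themselves. As a rough illustration – a minimal sketch in Python, with hypothetical estimates – the spread of a single planning poker round can tell you whether you have a usable consensus or a collection of guesses:

```python
from statistics import mean, stdev

def estimate_spread(estimates: list[float]) -> float:
    """Coefficient of variation for one round of planning poker estimates.

    A small value suggests the estimators share enough context to converge;
    a large one suggests some of them are guessing outside their expertise
    and the card needs more discussion or different estimators.
    """
    if len(estimates) < 2 or mean(estimates) == 0:
        return 0.0
    return stdev(estimates) / mean(estimates)

# A team of generalists tends to cluster...
print(estimate_spread([5, 5, 8, 5]))    # ~0.26 – a usable consensus
# ...while specialists estimating outside their domains scatter.
print(estimate_spread([2, 13, 40, 5]))  # ~1.15 – a guess, not an estimate
```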
These distinctions are easy to spot in project work. When knowledge and solution domains have a great deal of overlap, generalization allows for a lot of high-quality collaboration. However, when an Agile team is formed to solve problems that do not have a purely technical solution, specialization rather than generalization has a greater influence on overall success. The risk is that with very little overlap, specialized team expertise can result in either shallow solutions or wasteful speculation – waste that isn’t discovered until much later. Moreover, re-balancing the team becomes problematic and most often results in delays and missed commitments due to the limited ability for cross-functional participation among teammates.
The challenge for teams where knowledge and solution domains have minimal overlap is to manage the specialized expertise domains in a way that is optimally useful – that is, reliable, predictable, and actionable. Success becomes increasingly dependent on how good an organization is at estimating levels of effort when the team is composed of specialists.
One approach I experimented with was to add a second dimension to the estimation: a weight factor applied to the estimator’s level of expertise relative to the nature of the card being considered. The idea is that with a weighted expertise factor calibrated to the problem and solution contexts, a more reliable velocity emerges over time. In practice, this was difficult to implement. Teams spent valuable time challenging what the weighted factor should be, and less experienced team members felt their opinion had been, quite literally, discounted.
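To make the idea concrete, here is a minimal sketch of what such a weighted estimate might look like. The function, expertise values, and card are all hypothetical – an illustration of the approach, not the actual tooling from the experiment:

```python
def weighted_estimate(estimates: dict[str, float],
                      expertise: dict[str, float]) -> float:
    """Combine individual estimates, weighting each by the estimator's
    expertise (0.0 to 1.0) relative to the card's problem domain."""
    total_weight = sum(expertise[name] for name in estimates)
    if total_weight == 0:
        raise ValueError("no relevant expertise on the team for this card")
    return sum(est * expertise[name] for name, est in estimates.items()) / total_weight

# Hypothetical card: a production database schema change.
estimates = {"dba": 8, "content_editor": 2, "coder": 5}
expertise = {"dba": 1.0, "content_editor": 0.1, "coder": 0.6}
print(round(weighted_estimate(estimates, expertise), 1))  # 6.6 – leans toward the DBA
```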
The approach I’ve had the most success with on teams with diverse expertise is to have story cards sized by the individual assigned to complete the work. This still happens in a collaborative refinement or planning session so that other team members can contribute information that is often outside the perspective of the work assignee. Dependencies, past experience with similar work on other projects, missing acceptance criteria, or a refinement to the story card’s minimum viable product (MVP) definition are all examples of the kind of information team members have contributed. This invariably results in an adjustment to the overall level of effort estimate on the story card. It also has made details about the story card more explicit to the team in a way that a conversation focused on story point values doesn’t seem to achieve. The conversation shifts from “What are the points?” to “What’s the work needed to complete this story card?”
I’ve also observed that by focusing ownership of the estimate on the work assignee, accountability and transparency tend to increase. Potential blockers are surfaced sooner and team members communicate issues and dependencies more freely with each other. Of course, this isn’t always the case and in a future post we’ll explore aspects of team composition and dynamics that facilitate or prevent quality collaboration.
The scrum framework is forever tied to the language of sports in general and rugby in particular. We organize our project work around goals, sprints, points, and daily scrums. An unfortunate consequence of organizing projects around a sports metaphor is that the language of gaming ends up driving behavior. For example, people have a natural inclination to associate story points with a measure of success rather than an indicator of the effort required to complete the story. The more points you have, the more successful you are. This is reflected in an actual quote from a retrospective on things a team did well:
We completed the highest number of points in this sprint than in any other sprint so far.
This was a team that lost sight of the fact that they were the only team on the field. They were certain to be the winning team. They were also destined to be the losing team. They were focused on story point acceleration rather than a constant, predictable velocity.
More and more I’m finding less and less value in using story points as an indicator for level of effort estimation. If Atlassian made it easy to change the label on JIRA’s story point field, I’d change it to “Fuzzy Bunnies” just to drive this idea home. You don’t want more and more fuzzy bunnies, you want no more than the number you can commit to taking care of in a certain span of time typically referred to as a “sprint.” A team that decides to take on the care and feeding of 50 fuzzy bunnies over the next two weeks but has demonstrated – sprint after sprint – they can only keep 25 alive is going to lose a lot of fuzzy bunnies over the course of the project.
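To make the bunny math explicit, the check a team should run before adopting more bunnies looks something like this minimal sketch (the numbers are hypothetical):

```python
def demonstrated_capacity(recent_velocities: list[float]) -> float:
    """What the team has shown it can care for, sprint after sprint."""
    return sum(recent_velocities) / len(recent_velocities)

kept_alive = [24, 26, 25, 25]  # fuzzy bunnies cared for in recent sprints
proposed = 50                  # fuzzy bunnies the team wants to adopt next

capacity = demonstrated_capacity(kept_alive)
if proposed > capacity:
    print(f"Over-committed by {proposed - capacity:.0f} bunnies – expect casualties.")
```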
It is difficult for people new to scrum or Agile to grasp the purpose behind an abstract idea like story points. Consequently, they are unskilled in how to use them as a measure of performance and improvement. Developing this skill can take considerable time and effort. The care and feeding of fuzzy bunnies, however, they get. Particularly with teams that include non-technical domains of expertise, such as content development or learning strategy.
A note here for scrum masters: unless you want to exchange your scrum master stripes for a saddle and spurs, be wary of your team turning story pointing into an animal farm. Sizing story cards to match the exact size and temperament of all manner of animals would be just as cumbersome as the sporting method of story points. So watch where you throw your rope, Agile cowboys and cowgirls.
(This article cross-posted at LinkedIn)
There is a story about a bunch of corporate employees who have been working together for so long they’ve cataloged and numbered all the jokes they’ve told (and re-told) over the years. Eventually, no one need actually tell the joke. Someone simply yells out something like “Number Nine!” and everyone laughs in reply.
As Agile methodologies and practices become ubiquitous in the business world and jump more and more functional domain gaps, I’m seeing this type of cataloging and rote behavior emerge. Frameworks become reinforced structures. Practices become policies. “Stand-up” becomes code for “status meeting.” “Sprint Review” becomes code for “bigger status meeting.” Eventually, everyone is going through the motions and all that was Agile has drained from the project.
When you see this happening on any of your teams, start introducing small bits of randomness and pattern interruptions. In fact, do this anyway as a preventative measure.
There’s no end to the small changes that can be introduced on the spur of the moment to shake things up just a bit without upsetting things a lot. The goal is to keep people in a mindset of fluidity, adaptability, and recalibration to the goal.
It’s more than a little ironic and somewhat funny to see autopilot-type behavior emerge in the name of Agile. But if you really want funny…Number Seven!
Seeking the simple path to
C. Northcote Parkinson is best known for, not surprisingly, Parkinson’s Law:
Work expands so as to fill the time available for its completion.
But there are many more gems in “Parkinson’s Law and Other Studies in Administration.” The value of re-reading classics is that what was missed on a prior read becomes apparent given the accumulation of a little more experience and the current context. On a re-read this past week, I discovered this:
It is now known that a perfection of planned layout is achieved only by institutions on the point of collapse. This apparently paradoxical conclusion is based upon a wealth of archaeological and historical research, with the more esoteric details of which we need not concern ourselves. In general principle, however, the method pursued has been to select and date the buildings which appear to have been perfectly designed for their purpose. A study and comparison of these has tended to prove that perfection of planning is a symptom of decay. During a period of exciting discovery or progress there is no time to plan the perfect headquarters. The time for that comes later, when all the important work has been done. Perfection, we know, is finality; and finality is death.
Several years back my focus for the better part of a year was on mapping out software design processes for a group of largely non-technical instructional designers. If managing software developers is akin to herding cats, finding a way to shepherd non-technical creative types such as instructional designers (particularly old school designers) can be likened to herding a flock of canaries – all over the place in three dimensions.
What made this effort successful was framing the design process as a set of guidelines that were easy to track and monitor. The design standards and common practices, for example, consisted of five bullet points. Building just enough fence to keep everyone in the same area, while limiting free-range behaviors to specific places, was important. These guidelines were far from perfect, but they allowed for the dynamic vitality suggested by Parkinson. If the design standards and common practices document ever grew past something that could fit on one page, it would suggest the company was moving toward over-specialization and providing services to a narrow slice of the potential client pie. In the rapidly changing world of adult education, this level of perfection would most certainly suggest decay and risk collapse as client needs change.
Agile Metrics – Time (Part 3 of 3)
In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we looked at shifts in the cross-over point between burn-down and burn-up charts. In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues the teams may be struggling with under these circumstances.
Figure 1 shows a burn-up that by the end of the sprint significantly exceeded the starting value for the original estimate.
There isn’t much mystery around a chart like this. The time needed to complete the work was significantly underestimated. The mystery is in the why and what that led to this situation.
Depending on the tools used to capture team metrics, it can be helpful to look at individual performances. What’s the differential between story points and estimated time vs. actual time for each team member? Hardly ever useful as a disciplinary tool, this type of analysis can be invaluable for knowing who needs professional development and in what areas.
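Here is a minimal sketch of that kind of analysis, assuming you can export a simple work log of estimated and actual hours per card from your tracking tool; the names and numbers are hypothetical:

```python
from collections import defaultdict

# (member, estimated_hours, actual_hours) per completed card, exported
# from whatever tool the team logs its time against.
work_log = [
    ("avery", 4, 6), ("avery", 8, 12),
    ("blake", 5, 5), ("blake", 3, 4),
]

totals = defaultdict(lambda: [0.0, 0.0])
for member, estimated, actual in work_log:
    totals[member][0] += estimated
    totals[member][1] += actual

for member, (estimated, actual) in totals.items():
    delta = (actual - estimated) / estimated * 100
    print(f"{member}: estimates off by {delta:+.0f}%")
```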
In this case, there were several technical challenges related to new elements of the underlying architecture, and the team put in extra hours to resolve them. Even so, they were unable to complete all the work they committed to in the sprint. The scrum master and product owner need to monitor this so it doesn’t become a recurrent event; left unchecked, it risks team burnout and morale erosion. There are likely some unstated dependencies or skill deficiencies that need to be put on the table for discussion during the retrospective.
Figure 2 shows, among other things, unexpected jumps in the burn-down chart. There is clearly a significant amount of thrashing evident in the burn-down (which stubbornly refuses to actually burn down).
Questions to explore:
- Did the scope of the sprint change after it started?
- Were team members added to or pulled from the sprint?
- Are original time estimates being adjusted after the fact?
Scope change during a sprint is a very undesirable practice. Not just because it goes against the scrum framework, but more so because it almost always has an adverse effect on team morale and focus. If there is an addition to the team, better to set that person to work helping teammates complete the work already defined in the sprint and assign them cards in the next sprint.
If team members are adjusting original time estimates for “accuracy” or whatever reason they may provide, this is little more than gaming the system. It does more harm than good, assuming management is Agile savvy and not intent on using Agile metrics for punitive purposes. On occasion I’ve had to hide the original time estimate entry field from the view of delivery team members and place it under the control of the product owner – out of sight, out of mind. It’s less a concern to me that time estimates are “wrong,” particularly if time estimate accuracy is showing improvement over time or the delta is a somewhat consistent value. I can work with a delivery team member’s time estimates that are 30% off if they are consistently 30% off.
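That kind of consistency is usable. A minimal sketch of the calibration implied here, with hypothetical numbers:

```python
def calibrated_estimate(raw_estimate: float, bias: float) -> float:
    """Adjust a raw estimate by a member's consistent historical bias.

    bias is the member's (actual - estimated) / estimated ratio over past
    sprints; someone who consistently runs 30% over has bias = 0.30.
    """
    return raw_estimate * (1 + bias)

print(calibrated_estimate(10, 0.30))  # the estimate says 10 hours; plan for 13.0
```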
In the case of Figure 2 it was the team’s second sprint and at the retrospective the elephant was called out from hiding: The design was far from stable. The decision was made to set aside scrum in favor of using Kanban until the numerous design issues could be resolved.
Figure 3 shows a burn-down chart that doesn’t go to zero by the end of the sprint.
The team missed their commit and quite a few cards rolled to the next sprint. Since the issue emerged late in the sprint there was little corrective action that could be taken. The answers were left to discovery during the retrospective. In this case, one of the factors was the failure to move development efforts into QA until late in the sprint. This is an all too common issue in cases where the sprint commitments were not fully satisfied. For this team the QA issue was exacerbated by the team simply taking on more than they thought they could commit to completing. The solution was to reduce the amount of work the team committed to in subsequent sprints until a stable sprint velocity emerged.
For a two week sprint on a project that is 5-6 sprints in, I usually don’t bother looking at time burn-down/burn-up charts for the first 3-4 days. Early trends can be misleading, but by the time a third of the sprint has been completed this metric will usually start to show trends that suggest any emergent problems. For new projects or for newly formed teams I typically don’t look at intra-sprint time metrics until much later in the project life cycle as there are usually plenty of other obvious and more pressing issues to work through.
I’ll conclude by reiterating my caution that these metrics are yardsticks, not micrometers. It is tempting to read too much into pretty graphs that have precise scales. The expert Agilist will instead let the metrics, whatever they are, speak for themselves and work to limit the impact of any personal cognitive biases.
In this series we’ve explored several ways to interpret the signals available to us in estimated time burn-down and actual time burn-up charts. There are numerous other scenarios that can reveal important information from such burn-down/burn-up charts, and I would be very interested in hearing about your experiences with using this particular metric in Agile environments.
Agile Metrics – Time (Part 2 of 3)
In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we’ll look at shifts in the cross-over point between burn-down and burn-up charts and explore what issues may be in play for the teams under these circumstances.
Figure 1 shows a cross-over point occurring early in the sprint.
This suggests the following questions:
- Were the cards in the sprint backlog sized accurately relative to one another?
- Were all of the key dependencies identified before the sprint planning session?
- Is the burn-up line on pace to exceed the original estimate for the sprint?
The answer to these questions may not become apparent until later in the sprint and the point isn’t to try and “correct” the work flow based on relatively little information. In the case of Figure 1, the “easy” cards had been sized as being more difficult than they actually were. The more difficult cards were sized too small and a number of key dependencies were not identified prior to the sprint planning session. This is reflected in the burn-up line that significantly exceeds the initial estimate for the sprint, the jumps in the burn-down line, and subsequent failure to complete a significant portion of the cards in the sprint backlog. All good fodder for the retrospective.
Figure 2 shows a cross-over point occurring late in the sprint.
On the face of it there are two significant stretches of inactivity. Unless you’re dealing with a blatantly apathetic team, there is undoubtedly some sort of activity going on. It’s just not being reflected in the work records. The task is to find out what that activity is and how to mitigate it.
The following questions will help expose the cause of the extended periods of apparent inactivity:
- Is work happening that isn’t captured on any of the sprint’s cards?
- Have team members been pulled away to work on other projects?
- Are work records and time logs being updated as the work happens?
The actual reasons behind Figure 2 were twofold. There was a significant technical challenge the developers had to resolve that wasn’t sufficiently described by any of the cards in the sprint, and later in the sprint several key resources were pulled off the project to deal with issues on a separate project.
Figure 3 shows a similar case of a late-sprint cross-over in the burn-down/burn-up chart. The reasons for this occurrence were quite different from those behind Figure 2.
This was an early sprint, and a combination of design and technical challenges were not as well understood as originally thought at the sprint planning session. As these issues emerged, additional cards were created in the product backlog to be addressed in future sprints. Nonetheless, the current sprint commitment was missed by a significant margin.
In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues that may be in play for the teams under these circumstances.
Agile Metrics – Time (Part 1 of 3)
Some teams choose to use card-level estimated and actual time as one of the level of effort or performance markers for project progress and health. For others it’s a requirement of the work environment due to management or business constraints. If your situation resembles one of these cases, then you will need to know how to use time metrics responsibly and effectively. This series of articles will establish several common practices you can use to develop your skills for evaluating and leveraging time-based metrics in an Agile environment.
It’s important to keep in mind that time estimates are just one of the level of effort or performance markers that can be used to track team and project health. There can, and probably should, be other markers in the overall mix of how team and project performance is evaluated. Story points, business value, quality of information and conversation from stand-up meetings, various product backlog characteristics, cycle time, and cumulative flow are all examples of additional views into the health and progress of a project.
In addition to using multiple views, it’s important to be deeply aware of the strengths and limits presented by each of them. The limits are many while the strengths are few. Their value comes in evaluating them in concert with one another, not in isolation. One view may suggest something that can be confirmed or negated by another view into team performance. We’ll visit and review each of these and other metrics after this series of posts on time.
The examples presented in this series are never as cut and dried in practice as they appear here. Just as I previously described multiple views based on different metrics, each metric can offer multiple views. My caution is that these views shouldn’t be read like an electrocardiogram, with the expectation of a rigidly repeatable pattern from which a slight deviation could signal a catastrophic event. The examples are extracted from hundreds of sprints and dozens of projects over the course of many years and are more like seismology graphs – they reveal patterns over time that are very much context dependent.
Estimated and actual time metrics allow teams to monitor sprint progress by comparing time remaining to time spent. These are, respectively, a burn-down and a burn-up chart, named for the direction the data moves on the chart. In Figure 1, the red line represents the estimated time remaining (burn-down) while the green line represents the amount of time logged against the story cards (burn-up) over the course of a two-week sprint. (The gray line is a hypothetical ideal for burn-down.)
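For anyone who wants to reproduce such a chart from their own tooling, here is a minimal sketch of the three series and of finding the cross-over point; the end-of-day snapshots are hypothetical:

```python
total_estimate = 200  # hours committed at sprint planning
sprint_days = 10

# Hypothetical end-of-day snapshots pulled from the tracking tool:
remaining = [190, 170, 165, 150, 120, 105, 80, 55, 30, 5]  # burn-down (red)
logged = [12, 30, 45, 66, 90, 108, 132, 155, 178, 196]     # burn-up (green)

# The gray line: a straight-line path from the total estimate to zero.
ideal = [total_estimate * (1 - day / sprint_days) for day in range(1, sprint_days + 1)]

# The cross-over point: the first day cumulative time spent meets or
# exceeds the estimated time still remaining.
crossover = next(
    (day for day, (up, down) in enumerate(zip(logged, remaining), start=1) if up >= down),
    None,
)
print(f"Cross-over on day {crossover} of {sprint_days}")  # day 6 here – mid-sprint
```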
The principal value of a burn-down/burn-up chart for time is the view it gives into intra-sprint performance. I usually look at this chart just prior to a team’s daily stand-up to get a sense of whether there are any questions I need to be asking about emerging trends. In this series of posts we’ll explore several of the things to look for when preparing for a stand-up. At the end of the sprint, the burn-down/burn-up chart can be a good reference to use during the retrospective when looking for ways to improve.
The sprint shown in Figure 1 is about as ideal a picture as one can expect. It shows all the points I look for that tell me, insofar as time is concerned, the sprint performance is in good health.
In Part 2, we’ll look at several cases where the cross-over point shifts and explore the issues the teams under these circumstances might be struggling with.