Seeking the simple path to
C. Northcote Parkinson is best known for, not surprisingly, Parkinson’s Law:
Work expands so as to fill the time available for its completion.
But there are many more gems in “Parkinson’s Law and Other Studies in Administration.” The value of re-reading classics is that what was missed on a prior read becomes apparent with the accumulation of a little more experience and a new context. On a re-read this past week, I discovered this:
It is now known that a perfection of planned layout is achieved only by institutions on the point of collapse. This apparently paradoxical conclusion is based upon a wealth of archaeological and historical research, with the more esoteric details of which we need not concern ourselves. In general principle, however, the method pursued has been to select and date the buildings which appear to have been perfectly designed for their purpose. A study and comparison of these has tended to prove that perfection of planning is a symptom of decay. During a period of exciting discovery or progress there is no time to plan the perfect headquarters. The time for that comes later, when all the important work has been done. Perfection, we know, is finality; and finality is death.
Several years back my focus for the better part of a year was on mapping out software design processes for a group of largely non-technical instructional designers. If managing software developers is akin to herding cats, shepherding non-technical creative types such as instructional designers (particularly old-school designers) is more like herding a flock of canaries – all over the place in three dimensions.
What made this effort successful was framing the design process as a set of guidelines that were easy to track and monitor. The design standards and common practices, for example, consisted of five bullet points. Building just enough fence to keep everyone in the same area, while limiting free-range behaviors to specific places, was important. These were far from perfect, but they allowed for the dynamic vitality suggested by Parkinson. If the design standards and common practices document ever grew past something that could fit on one page, it would suggest the company was moving toward over-specialization and providing services to a narrow slice of the potential client pie. In the rapidly changing world of adult education, this level of perfection would most certainly suggest decay and risk collapse as client needs change.
In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we looked at shifts in the cross-over point between burn-down and burn-up charts. In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues the teams may be struggling with under these circumstances.
Figure 1 shows a burn-up that by the end of the sprint significantly exceeded the starting value for the original estimate.
There isn’t much mystery around a chart like this. The time needed to complete the work was significantly underestimated. The mystery is in the why and what that led to this situation.
Depending on the tools used to capture team metrics, it can be helpful to look at individual performances. What’s the differential between story points and estimated time vs. actual time for each team member? Hardly ever useful as a disciplinary tool, this type of analysis can be invaluable for knowing who needs professional development and in what areas.
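As a concrete illustration, here is a minimal sketch of that per-member differential, written in Python with hypothetical card records and field names (no particular tracking tool is assumed):

```python
from collections import defaultdict

# Hypothetical card records exported from a tracking tool.
cards = [
    {"assignee": "dev_a", "points": 3, "estimated_hours": 8, "actual_hours": 11},
    {"assignee": "dev_a", "points": 2, "estimated_hours": 5, "actual_hours": 7},
    {"assignee": "dev_b", "points": 3, "estimated_hours": 8, "actual_hours": 8},
]

# Sum estimated and actual hours per team member.
totals = defaultdict(lambda: {"estimated": 0, "actual": 0})
for card in cards:
    totals[card["assignee"]]["estimated"] += card["estimated_hours"]
    totals[card["assignee"]]["actual"] += card["actual_hours"]

# Report each member's differential as a percentage of the estimate.
for assignee, t in sorted(totals.items()):
    delta_pct = 100 * (t["actual"] - t["estimated"]) / t["estimated"]
    print(f"{assignee}: estimated {t['estimated']}h, "
          f"actual {t['actual']}h, delta {delta_pct:+.0f}%")
```

A consistently large positive delta for one team member points to a coaching or skills conversation, not a disciplinary one.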
In this case, there were several technical challenges related to new elements of the underlying architecture, and the team put in extra hours to resolve them. Even so, they were unable to complete all the work they committed to in the sprint. The scrum master and product owner need to monitor this so it doesn’t become a recurring event; left unchecked, it risks team burnout and morale erosion. There are likely some unstated dependencies or skill deficiencies that need to be put on the table for discussion during the retrospective.
Figure 2 shows, among other things, unexpected jumps in the burn-down chart. There is clearly a significant amount of thrashing evident in the burn-down (which stubbornly refuses to actually burn down).
Questions to explore:
- Was scope added after the sprint began?
- Were people added to or removed from the team mid-sprint?
- Were the original time estimates adjusted after the sprint started?
Scope change during a sprint is a very undesirable practice – not just because it goes against the scrum framework, but because it almost always has an adverse effect on team morale and focus. If there is an addition to the team, it is better to set that person to work helping teammates complete the work already defined in the sprint and assign them cards in the next sprint.
If team members are adjusting original time estimates for “accuracy” or whatever reason they may provide, this is little more than gaming the system. It does more harm than good, assuming management is Agile savvy and not intent on using Agile metrics for punitive purposes. On occasion I’ve had to hide the original time estimate entry field from the view of delivery team members and place it under the control of the product owner – out of sight, out of mind. It’s less of a concern to me that time estimates are “wrong,” particularly if time estimate accuracy is improving over time or the delta is a reasonably consistent value. I can work with a delivery team member’s time estimates that are 30% off if they are consistently 30% off.
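To make “consistently off” concrete, here is a minimal sketch of how that bias can be measured and folded into planning. The numbers are invented for illustration:

```python
# Ratio of actual to estimated hours for one team member,
# one value per completed sprint (hypothetical history).
ratios = [1.28, 1.32, 1.30, 1.29, 1.31]

mean_ratio = sum(ratios) / len(ratios)
spread = max(ratios) - min(ratios)

# A small spread means the bias is consistent and therefore usable:
# scale the member's raw estimates by the mean ratio when planning.
raw_estimate_hours = 20
planning_hours = raw_estimate_hours * mean_ratio
print(f"bias {mean_ratio:.2f}x, spread {spread:.2f}, "
      f"plan {planning_hours:.0f}h for a {raw_estimate_hours}h estimate")
```

The point is not to “fix” the estimates, but to know the delta well enough that it stops being a planning surprise.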
In the case of Figure 2, it was the team’s second sprint, and at the retrospective the elephant was called out from hiding: the design was far from stable. The decision was made to set aside scrum in favor of using Kanban until the numerous design issues could be resolved.
Figure 3 shows a burn-down chart that doesn’t go to zero by the end of the sprint.
The team missed their commit and quite a few cards rolled to the next sprint. Since the issue emerged late in the sprint, there was little corrective action that could be taken. The answers were left to discovery during the retrospective. In this case, one of the factors was the failure to move development efforts into QA until late in the sprint – an all too common issue when sprint commitments are not fully satisfied. For this team, the QA issue was exacerbated by simply taking on more work than they could realistically complete. The solution was to reduce the amount of work the team committed to in subsequent sprints until a stable sprint velocity emerged.
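For teams that want to see this numerically rather than by feel, a minimal sketch of checking whether velocity has stabilized might look like the following (the point totals are hypothetical):

```python
# Story points completed per sprint (hypothetical history).
completed_points = [34, 21, 24, 26, 25, 25]

window = 3  # consider only the most recent sprints
recent = completed_points[-window:]
velocity = sum(recent) / window
spread = max(recent) - min(recent)

# A small spread relative to the mean suggests velocity has stabilized
# and can be used to bound the next sprint's commitment.
print(f"rolling velocity: {velocity:.1f} points, spread: {spread}")
```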
For a two week sprint on a project that is 5-6 sprints in, I usually don’t bother looking at time burn-down/burn-up charts for the first 3-4 days. Early trends can be misleading, but by the time a third of the sprint has been completed this metric will usually start to show trends that suggest any emergent problems. For new projects or for newly formed teams I typically don’t look at intra-sprint time metrics until much later in the project life cycle as there are usually plenty of other obvious and more pressing issues to work through.
I’ll conclude by reiterating my caution that these metrics are yardsticks, not micrometers. It is tempting to read too much into pretty graphs that have precise scales. Instead, the expert Agilist will let the metrics, whatever they are, speak for themselves and work to limit the impact of any personal cognitive biases.
In this series we’ve explored several ways to interpret the signals available to us in estimated time burn-down and actual time burn-up charts. There are numerous other scenarios that can reveal important information in such burn-down/burn-up charts, and I would be very interested in hearing about your experiences with using this particular metric in Agile environments.
Agile Metrics – Time (Part 2 of 3)
In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we’ll look at shifts in the cross-over point between burn-down and burn-up charts and explore what issues may be in play for the teams under these circumstances.
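For readers who like to check the charts numerically, the cross-over point is simply the first day on which the cumulative actual time (burn-up) meets or exceeds the estimated time remaining (burn-down). A minimal sketch, using invented daily totals:

```python
# Daily totals over a ten working day sprint (hypothetical data).
burn_down = [80, 76, 70, 61, 55, 46, 38, 27, 15, 4]  # estimated hours remaining
burn_up = [0, 6, 13, 22, 30, 41, 52, 60, 71, 79]     # actual hours logged

# Find the first day the burn-up line crosses above the burn-down line.
cross_over_day = next(
    (day for day, (down, up) in enumerate(zip(burn_down, burn_up), start=1)
     if up >= down),
    None,
)
print(f"cross-over on day {cross_over_day} of {len(burn_down)}")
```

Where that day lands relative to the midpoint of the sprint is what the scenarios below explore.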
Figure 1 shows a cross-over point occurring early in the sprint.
This suggests the following questions:
- Were the “easy” cards sized as more difficult than they actually are?
- Were the more difficult cards sized too small?
- Were key dependencies identified prior to the sprint planning session?
The answers to these questions may not become apparent until later in the sprint, and the point isn’t to try to “correct” the work flow based on relatively little information. In the case of Figure 1, the “easy” cards had been sized as being more difficult than they actually were, the more difficult cards were sized too small, and a number of key dependencies were not identified prior to the sprint planning session. This is reflected in the burn-up line that significantly exceeds the initial estimate for the sprint, the jumps in the burn-down line, and the subsequent failure to complete a significant portion of the cards in the sprint backlog. All good fodder for the retrospective.
Figure 2 shows a cross-over point occurring late in the sprint.
On the face of it there are two significant stretches of inactivity. Unless you’re dealing with a blatantly apathetic team, there is undoubtedly some sort of activity going on. It’s just not being reflected in the work records. The task is to find out what that activity is and how to mitigate it.
The following questions will help expose the cause for the extended periods of apparent inactivity:
- Is work happening that isn’t captured on any of the cards in the sprint?
- Have team members been pulled away to deal with other projects?
- Are the work records simply not being updated?
The actual reasons behind Figure 2 were twofold: there was a significant technical challenge the developers had to resolve that wasn’t sufficiently described by any of the cards in the sprint, and later in the sprint several key resources were pulled off the project to deal with issues on a separate project.
Figure 3 shows a similar case of a late-sprint cross-over in the burn-down/burn-up chart. The reasons for this occurrence, however, were quite different from those shown in Figure 2.
This was an early sprint, and a combination of design and technical challenges were not as well understood as originally thought at the sprint planning session. As these issues emerged, additional cards were created in the product backlog to be addressed in future sprints. Nonetheless, the current sprint commitment was missed by a significant margin.
In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues that may be in play for the teams under these circumstances.
Some teams choose to use card-level estimated and actual time as one of their level of effort or performance markers for project progress and health. For others, it’s a requirement of the work environment due to management or business constraints. If your situation resembles one of these cases, then you will need to know how to use time metrics responsibly and effectively. This series of articles will establish several common practices you can use to develop your skills for evaluating and leveraging time-based metrics in an Agile environment.
It’s important to keep in mind that time estimates are just one of the level of effort or performance markers that can be used to track team and project health. There can, and probably should, be other markers in the overall mix of how team and project performance is evaluated. Story points, business value, quality of information and conversation from stand-up meetings, various product backlog characteristics, cycle time, and cumulative flow are all examples of additional views into the health and progress of a project.
In addition to using multiple views, it’s important to be deeply aware of the strengths and limits presented by each of them. The limits are many while the strengths are few. Their value comes in evaluating them in concert with one another, not in isolation. One view may suggest something that can be confirmed or negated by another view into team performance. We’ll visit and review each of these and other metrics after this series of posts on time.
The examples presented in this series are never as cut and dried as they appear. Just as I previously described multiple views based on different metrics, each metric can offer multiple views. My caution is that these views shouldn’t be read like an electrocardiogram, with the expectation of a rigidly repeatable pattern from which a slight deviation could signal a catastrophic event. The examples are extracted from hundreds of sprints and dozens of projects over the course of many years and are more like seismology graphs – they reveal patterns over time that are very much context dependent.
Estimated and actual time metrics allow teams to monitor sprint progress by comparing time remaining to time spent. These plot as a burn-down and a burn-up chart, respectively, named for the direction the data moves on the chart. In Figure 1, the red line represents the estimated time remaining (burn-down) while the green line represents the amount of time logged against the story cards (burn-up) over the course of a two week sprint. (The gray line is a hypothetical ideal for burn-down.)
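For anyone who wants to reproduce such a chart outside of a dedicated tracking tool, here is a minimal sketch using Python and matplotlib, with invented daily totals standing in for real data:

```python
import matplotlib.pyplot as plt

days = list(range(11))  # day 0 through day 10 of a two week sprint
estimated_remaining = [80, 78, 74, 70, 62, 55, 44, 36, 24, 12, 2]  # burn-down
actual_logged = [0, 4, 9, 16, 25, 34, 45, 55, 64, 73, 80]          # burn-up
ideal = [80 - 8 * d for d in days]  # hypothetical ideal burn-down

plt.plot(days, estimated_remaining, color="red", label="Estimated time remaining")
plt.plot(days, actual_logged, color="green", label="Actual time logged")
plt.plot(days, ideal, color="gray", linestyle="--", label="Ideal burn-down")
plt.xlabel("Sprint day")
plt.ylabel("Hours")
plt.title("Time burn-down / burn-up")
plt.legend()
plt.show()
```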
The principal value of a burn-down/burn-up chart for time is the view it gives into intra-sprint performance. I usually look at this chart just prior to a team’s daily stand-up to get a sense of whether there are any questions I need to be asking about emerging trends. In this series of posts we’ll explore several of the things to look for when preparing for a stand-up. At the end of the sprint, the burn-down/burn-up chart can be a good reference to use during the retrospective when looking for ways to improve.
The sprint shown in Figure 1 is about as ideal a picture as one can expect. It shows all the points I look for that tell me, insofar as time is concerned, the sprint performance is in good health.
In Part 2, we’ll look at several cases where the cross-over point shifts and explore the issues teams might be struggling with under these circumstances.
The code is frozen.
Phone rings. Last minute changes.
Sound of tests failing.
Conceptually, the idea of a minimum viable product (MVP) is easy to grasp. Early in a project, it’s a deliverable that bears some semblance to the final product, yet is barely able to stand on its own without lots of hand-holding and explanation for the customer’s benefit. In short, it’s terrible, buggy, and unstable. By design, MVPs lack features that may eventually prove to be essential to the final product. And we deliberately show the MVP to the customer!
We do this because the MVP is the engine that turns the build-measure-learn feedback loop. The key here is the “learn” phase. The essential features of the final product are often unclear or even unknown early in a project. Furthermore, they are largely undefinable or unknowable without multiple iterations through the build-measure-learn feedback cycle with the customer early in the process.
So early MVPs aren’t very good. They’re also not very expensive. This, too, is by design, because an MVP’s very raison d’être is to test the assumptions we make early on in a project. They are low budget experiments that follow from a simple strategy: state the assumptions, then design the tests that will verify them.
If the assumptions are not stated and the tests are vague, the MVP will fail to achieve its purpose and will likely result in wasted effort.
The “product” in “minimum viable product” can be almost anything: a partial or early design flow, a wireframe, a collection of simulated email exchanges, the outline to a user guide, a static screen mock-up, a shell of screen panels with placeholder text that can nonetheless be navigated – anything that can be placed in front of a customer for feedback qualifies as an MVP. In other words, a sprint can contain multiple MVPs depending on the functional groups involved with the sprint and the maturity of the project. As the project progresses, the individual functional group MVPs will begin to integrate and converge on larger and more refined MVPs, each gaining in stability and quality.
MVPs are not an end unto themselves. They are tangible evidence of the development process in action. The practice of iteratively developing MVPs helps build the skill of rapid evaluation and learning among product owners and agile delivery team members. A buggy, unstable, ugly, bloated, or poorly worded MVP is only a problem if it’s put forward as the final product. The driving goal behind iterative MVPs is not perfection; rather, it is to support the process of learning what needs to be developed for the optimal solution that solves the customer’s problems.
“Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses.” – Eric Ries, The Lean Startup
So how might product owners and Agile teams begin to get a handle on defining an MVP? There are several questions the product owner and team can ask of themselves, in light of the product backlog, that may help guide their focus and decisions. (In what follows, “stakeholders” can mean company executives or external customers.)
Since an MVP can be almost anything, it is perhaps easier to begin any conversation about MVPs by touching on the elements missing from an MVP.
An MVP is not a quality product. Using any generally accepted definition of “quality” in the marketplace, an MVP will fail on all accounts. Well, on most accounts. The key is to consider relative quality. At the beginning of a sprint, the standards of quality for an MVP are framed by the sprint goals and objectives. If it meets those goals, the team has successfully created a quality MVP. If measured against the external marketplace or the quality expectations of the customer, the MVP will almost assuredly fail inspection.
Your MVPs will probably be ugly, especially at first. They will be missing features. They will be unstable. Build them anyway. Put them in front of the customer for feedback. Learn. And move on to the next MVP. Progressively, they will begin to converge on the final product that is of high quality in the eyes of the customer. MVPs are the stepping stones that get you across the development stream and to the other side where all is sunny, beautiful, and stable. (For more information on avoiding the trap of presupposing what a customer means by quality and value, see “The Value of ‘Good Enough’”.)
An MVP is not permanent. Agile teams should expect to throw away several, maybe even many, MVPs on their way to the final product. If they aren’t, then it is probable they are not learning what they need to about what the customer actually wants. In this respect, waste can be a good, even important thing. The driving purpose of the MVP is to rapidly develop the team’s understanding of what the customer needs, the problems they are expecting to have solved, and the level of quality necessary to satisfy each of these goals.
MVPs are not the truth. They are experiments meant to get the team to the truth. By virtue of their low-quality, low-cost nature, MVPs quickly shake out the attributes to the solution the customer cares about and wants. The solid empirical foundation they provide is orders of magnitude more valuable to the Agile team than any amount of speculative strategy planning or theoretical posturing.
(This article cross-posted on LinkedIn.)
Any company interested in being successful, whether offering a product or service, promises quality to its customers. Those that don’t deliver, die away. Those that do, survive. Those that deliver quality consistently, thrive. Seems like easy math. But then, 1 + 1 = 2 seems like easy math until you struggle through the 350+ pages Whitehead and Russell¹ spent on setting up the proof for this very equation. Add the subjective filters for evaluating “quality” and one is left with a measure that can be a challenge to define in any practical way.
Math aside, when it comes to quality, everyone “knows it when they see it,” usually in counterpoint to a decidedly non-quality experience with a product or service. The nature of quality is indeed chameleonic – durability, materials, style, engineering, timeliness, customer service, utility, aesthetics – the list of measures is nearly endless. Reading customer reviews can reveal a surprising array of criteria used to evaluate the quality for a single product.
The view from within the company, however, is even less clear. Businesses often believe they know quality when they see it. Yet that belief is often predicated on how the organization defines quality, not how their customers define quality. It is a definition that is frequently biased in ways that accentuate what the organization values, not necessarily what the customer values.
Organization leaders may define quality too high, such that their product or service can’t be priced competitively or delivered to the market in a timely manner. If the high quality niche is there, the business might succeed. If not, the business loses out to lower priced competitors that deliver products sooner and satisfy the customer’s criteria for quality (see Figure 1).
Certainly, a case can be made for providing the highest quality possible and developing the business around that niche. For startups and new product development, however, this may not be the best place to start.
On the other end of the spectrum, businesses that fall short of customer expectations for quality suffer incremental, or in some cases catastrophic, reputation erosion. Repairing or rebuilding a reputation for quality in a competitive market is difficult, maybe even impossible (see Figure 2).
The process for defining quality on the company side of the equation, while difficult, is more or less deliberate. Not so on the customer side. Customers often don’t know what they mean by “quality” until they have an experience that fails to meet their unstated, or even unknown, expectations. Quality savvy companies, therefore, invest in understanding what their customers mean by “quality” and plan accordingly. Less guess work, more effort toward actual understanding.
Furthermore, looking to what the competition is doing may not be the best strategy. They may be guessing as well. It may very well be that the successful quality strategy isn’t down the path of adding more bells and whistles that market research and focus groups suggest customers want. Rather, it may be that improvements in existing features and services are more desirable.
Focus on being clear about whether or not potential customers value the offered solution and how they define value. When following an Agile approach to product development, leveraging minimum viable product definitions can help bring clarity to the effort. With customer-centric benchmarks for quality in hand, companies are better served by first defining quality in terms of “good enough” in the eyes of their customers and then setting the internal goal a little higher. This makes the best use of internal resources (usually time and money) and delivers a product or service that satisfies the customer’s idea of “quality.”
Case in point: Several months back, I was assembling several bar clamps and needed a set of cutting tools used to put the thread on the end of metal pipes – a somewhat exotic tool for a woodworker’s shop. Shopping around, I could easily drop $300 for a five star “professional” set or $35 for a set that was rated to be somewhat mediocre. I’ve gone high end on many of the tools in my shop, but in this case the $35 set was the best solution for my needs. Most of the negative reviews revolved around issues with durability after repeated use. My need was extremely limited and the “valuable and good enough” threshold was crossed at $35. The tool set performed perfectly and more than paid for itself when compared with the alternatives, whether that be a more expensive tool or my time to find a local shop to thread the pipes for me. This would not have been the case for a pipefitter or someone working in a machine shop.
By understanding where the “good enough and valuable” line is, project and organization leaders are in a better position to evaluate the benefits of incremental improvements to core products and services that don’t break the bank or burn out the people tasked with delivering the goods. Of course, determining what is “good enough” depends on the end goal. When sending a rover to Mars, “good enough” had better be as near to perfection as possible. Threading a dozen pipes for bar clamps used in a wood shop can be completed quite successfully with low quality tools that are “good enough” to get the job done.
¹ Volume 1 of Principia Mathematica by Alfred North Whitehead and Bertrand Russell (Cambridge University Press, page 379). The proof was actually not completed until Volume 2.
(This article cross-posted at LinkedIn.)
In Parkinson’s Law of Triviality and Story Sizing, I touched on the issue of relative expertise among team members during collaborative efforts to size story cards. I’d like to expand on that idea by considering several types of team compositions.
Team 1 is a tight knit band of four software developers represented in Figure 1.
Team 1 represents a near-ideal team composition for a typical software related project. However, the real world isn’t so generous in its allocation of near-ideal, let alone ideal, teams. A typical team for a software related project is more likely to resemble Team 2, as represented in Figure 2.
As Agile practices become more ubiquitous in the business world, team composition begins to resemble Team 3, as shown in Figure 3.
The mix now includes non-technical people – content developers and editors, strategists, and designers. Even assuming an equal level of experience in their respective domains, the company, and the business environment, there is very little overlap. Arriving at a consensus during a story sizing exercise now becomes a significant challenge. But again, the real world isn’t even so kind as this. We are increasingly more likely to encounter teams that resemble Team 4 as shown in Figure 4.
As before, the relative circle of expertise among team members can vary quite a bit. When a team resembles the composition of Team 4, the software developers (HTML5/CSS and C#) will have trouble understanding what the Learning Strategist is asking for while the Learning Strategist may not understand why what he wants the software developers to deliver isn’t possible.
When I’ve attempted to facilitate story sizing sessions with teams that resemble Team 4, the sessions either become quite contentious (and therefore time consuming) or team members who don’t have the expertise to understand a particular card simply accept the opinion of the stronger voices. Neither of these situations is desirable.
To counteract these possibilities, I’ve found it much more effective to have the card assignee determine the card size (points and time estimate) and work to have the other team members ask questions about the work described on the card such that the assignee and the team better understand the context in which the card is positioned. The team members that lack domain expertise, it turns out, are in a good position to help craft good acceptance criteria.
At the end of a brief conversation where the entire team is working to evaluate the card for anything other than level of effort (time) and complexity (points), it is not uncommon for the assignee to reconsider their sizing, break the card into multiple cards, or determine the card shouldn’t be included in the sprint backlog. In short, it ends up being a much more productive conversation if teammates aren’t haggling over point distinctions or passively accepting what more experienced teammates are advocating. The benefit to the product owner is that they now have additional information that will undoubtedly influence the product backlog prioritization.
Take a moment or two to gaze at the image below. What do you see?
Do you see white dots embedded within the grid connected by diagonal white lines? If you do, try and ignore them. Chances are, your brain won’t let you even though the white circles and diagonal lines don’t exist. Their “thereness” is created by the thin black lines. By carefully drawing a simple repetitive pattern of black lines, your brain has filled in the void and enhanced the image with white dots and diagonal white lines. You cannot not do this. This cognitive process is important to be aware of if you are a product owner because both your agile delivery team members and clients will run this program without fail.
Think of the black lines as the minimum viable product definition for one of your sprints. When shown to your team or your client, they will naturally fill the void for what’s next or what’s missing. Maybe as a statement, most likely as a question. But what if the product owner defined the minimum viable product further and presented, metaphorically, something like this:
By removing the white space from the original image there are fewer possibilities for your team and the client to explore. We’ve reduced their response to our proposed solution to a “yes” or “no” and in doing so have started moving down the path of near endless cycles of the product owner guessing what the client wants and the agile delivery team guessing what the product owner wants. Both the client and the team will grow increasingly frustrated at the lack of progress. Played out too long, the client is likely to doubt our skills and competency at finding a solution.
On the other hand, by strategically limiting the information presented in the minimum viable product (or effort, if you like) we invite the client and the agile delivery team to explore the white space. This will make them co-creators of the solution and more fully invested in its success. Since they co-created the solution, they are much more likely to view the solution as brilliant, perfect, and the shiniest of shiny objects.
I can’t remember where I heard or read this, but the idea is that in the first image the black lines are you talking and the white spaces are you listening.