Agile Team Composition: Generalists versus Specialists

Levels of effort for a set of tasks can be estimated efficiently and reliably by a group of individuals well qualified to complete those tasks using a collaborative estimation process like planning poker. Such teams have a good measure of skill overlap. In the context of the problem set, each team member is a generalist in the sense that it’s possible for any one of them to work on a variety of cross-functional tasks during a sprint. Differences in preferred coding language among team members, for example, are less of an issue when everyone understands advanced coding practices and the underlying architecture for the solution.

With a set of complementary technical skills, it’s easier to agree on work estimates. There are other benefits that flow from well-matched teams. A stable sprint velocity emerges much sooner. There is greater cross-functional participation. And re-balancing the workload when “disruptors” occur – like vacations, illness, uncommon feature requests, etc. – is easier to coordinate.

Once the set of tasks starts to include items that fall outside the expertise of the group and the group begins to include cross-functional team members, a process like planning poker becomes increasingly less reliable. The issue is the mismatch between relative scales of expertise. A content editor is likely to have very little insight into the effort required to modify a production database schema. Their estimate may be little more than a guess based on what they think it “should” be. Similarly for a coder faced with estimating the effort needed to translate 5,000 words of text from English to Latvian. Unless, of course, you have an English-speaking coder on your team who speaks fluent Latvian.

These distinctions are easy to spot in project work. When knowledge and solution domains have a great deal of overlap, generalization allows for a lot of high-quality collaboration. However, when an Agile team is formed to solve problems that do not have a purely technical solution, specialization rather than generalization has a greater influence on overall success. The risk is that with very little overlap, specialized team expertise can result in either shallow solutions or wasteful speculation – waste that isn’t discovered until much later. Moreover, re-balancing the team becomes problematic and most often results in delays and missed commitments due to the limited ability for cross-functional participation among teammates.

The challenge for teams where knowledge and solution domains have minimal overlap is to manage the specialized expertise domains in a way that is optimally useful. That is, reliable, predictable, and actionable. Success becomes increasingly dependent on how good an organization is at estimating levels of effort when the team is composed of specialists.

One approach I experimented with was to add a second dimension to the estimation: a weight factor applied to the estimator’s level of expertise relative to the nature of the card being considered. The idea is that with a weighted expertise factor calibrated to the problem and solution contexts, a more reliable velocity emerges over time. In practice, this was difficult to implement. Teams spent valuable time challenging what the weighted factor should be, and less experienced team members felt their opinion had been, quite literally, discounted.
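To make the idea concrete, here is a minimal sketch of what such a weighting scheme might look like. The 0-to-1 expertise scale, the rounding, and the example votes are all assumptions for illustration, not the calibration we actually attempted.

    # Hypothetical sketch of an expertise-weighted estimate (not the scheme the team actually used).
    # Each estimator supplies a story point vote plus a 0.0-1.0 expertise weight relative to the
    # card's domain; the card's estimate is the expertise-weighted average of the votes.

    def weighted_estimate(votes):
        """votes: list of (points, expertise_weight) tuples for one story card."""
        total_weight = sum(weight for _, weight in votes)
        if total_weight == 0:
            return None  # no one on the team has relevant expertise for this card
        return round(sum(points * weight for points, weight in votes) / total_weight)

    # Example: a database developer (high expertise), a content editor (low expertise), and a
    # generalist size a schema-change card.
    print(weighted_estimate([(8, 0.9), (3, 0.2), (5, 0.6)]))  # -> 6

Even in this simple form, the arguments tend to shift from the points to the weights, which is a big part of why the approach didn’t stick.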

The approach I’ve had the most success with on teams with diverse expertise is to have story cards sized by the individual assigned to complete the work. This still happens in a collaborative refinement or planning session so that other team members can contribute information that is often outside the perspective of the work assignee. Dependencies, past experience with similar work on other projects, missing acceptance criteria, or a refinement to the story card’s minimum viable product (MVP) definition are all examples of the kind of information team members have contributed. This invariably results in an adjustment to the overall level of effort estimate on the story card. It also has made details about the story card more explicit to the team in a way that a conversation focused on story point values doesn’t seem to achieve. The conversation shifts from “What are the points?” to “What’s the work needed to complete this story card?”

I’ve also observed that by focusing ownership of the estimate on the work assignee, accountability and transparency tend to increase. Potential blockers are surfaced sooner and team members communicate issues and dependencies more freely with each other. Of course, this isn’t always the case and in a future post we’ll explore aspects of team composition and dynamics that facilitate or prevent quality collaboration.

Story Points and Fuzzy Bunnies

The scrum framework is forever tied to the language of sports in general and rugby in particular. We organize our project work around goals, sprints, points, and daily scrums. An unfortunate consequence of organizing projects around a sports metaphor is that the language of gaming ends up driving behavior. For example, people have a natural inclination to associate story points with a measure of success rather than an indicator of the effort required to complete the story. The more points you have, the more successful you are. This is reflected in an actual quote from a retrospective on things a team did well:

We completed the highest number of points in this sprint than in any other sprint so far.

This was a team that lost sight of the fact they were the only team on the field. They were certain to be the winning team. They were also destined to be the losing team. They were focused on story point acceleration rather than a constant, predictable velocity.

More and more I’m finding less and less value in using story points as an indicator for level of effort estimation. If Atlassian made it easy to change the label on JIRA’s story point field, I’d change it to “Fuzzy Bunnies” just to drive this idea home. You don’t want more and more fuzzy bunnies, you want no more than the number you can commit to taking care of in a certain span of time typically referred to as a “sprint.” A team that decides to take on the care and feeding of 50 fuzzy bunnies over the next two weeks but has demonstrated – sprint after sprint – they can only keep 25 alive is going to lose a lot of fuzzy bunnies over the course of the project.

It is difficult for people new to scrum or Agile to grasp the purpose behind an abstract idea like story points. Consequently, they are unskilled in how to use them as a measure of performance and improvement. Developing this skill can take considerable time and effort. The care and feeding of fuzzy bunnies, however, they get. Particularly with teams that include non-technical domains of expertise, such as content development or learning strategy.

A note here for scrum masters. Unless you want to exchange your scrum master stripes for a saddle and spurs, be wary of your team turning story pointing into an animal farm. Sizing story cards to match the exact size and temperament of all manner of animals would be just as cumbersome as the sporting method of story points. So, watch where you throw your rope, Agile cowboys and cowgirls.

(This article cross-posted at LinkedIn)


Image credit: tsaiproject (Modified in accordance with Creative Commons Attribution 2.0 Generic license)

Autopilot Agile

There is a story about a bunch of corporate employees that have been working together for so long they’ve cataloged and numbered all the jokes they’ve told (and re-told) over the years. Eventually, no one need actually tell the joke. Someone simply yells out something like “Number Nine!” and everyone laughs in reply.

As Agile methodologies and practices become ubiquitous in the business world and jump more and more functional domain gaps, I’m seeing this type of cataloging and rote behavior emerge. Frameworks become reinforced structures. Practices become policies. “Stand-up” becomes code for “status meeting.” “Sprint Review” becomes code for “bigger status meeting.” Eventually, everyone is going through the motions and all that was Agile has drained from the project.

When you see this happening on any of your teams, start introducing small bits of randomness and pattern interruptions. In fact, do this anyway as a preventative measure.

  • One day a week, instead of the usual stand-up drill (Yesterday. Today. In the way.), have each team member answer the question “Why are you working on what today?” Or have each team member talk about what someone else on the team is working on.
  • Deliberately change the order in which team members “have the mic” during stand-ups.
  • Hold a sprint prospective. What are the specific things the team will be doing to further their success? What blockers or impediments can they foresee in the next sprint? Who will be dependent on what work to be completed by when?
  • Set aside story points or time estimates for several sprints. I guarantee the world won’t end. (And if it does, well, we’ve got bigger problems than my failed guarantee.) How did that impact performance? What was the impact on morale?
  • During a backlog refinement session, run the larger story cards through the 5 Whys. Begin with “Why are we doing this work?” This invariably ends up in smaller cards and additions to the backlog.

There’s no end to the small changes that can be introduced on the spur of the moment to shake things up just a bit without upsetting things a lot. The goal is to keep people in a mindset of fluidity, adaptability, and recalibration to the goal.

It’s more than a little ironic and somewhat funny to see autopilot-type behavior emerge in the name of Agile. But if you really want funny…Number Seven!

Agile Metrics – Time (Part 3 of 3)

In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we looked at shifts in the cross-over point between burn-down and burn-up charts. In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues the teams may be struggling with under these circumstances.

Figure 1 shows a burn-up that by the end of the sprint significantly exceeded the starting value for the original estimate.

Figure 1

There isn’t much mystery around a chart like this. The time needed to complete the work was significantly underestimated. The mystery is in the why and what that led to this situation.

  • Were there unexpected technical challenges?
  • Were the stories poorly defined?
  • Were the acceptance criteria unclear?
  • Were the sprint goals, objectives, or minimum viable product definition unclear?

Depending on the tools used to capture team metrics, it can be helpful to look at individual performances. What’s the differential between story points and estimated time vs actual time for each team member? Hardly ever useful as a disciplinary tool, this type of analysis can be invaluable for knowing who needs professional development and in what areas.
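As a rough illustration of that per-member differential, here is a small sketch. The assignees, hours, and data layout are made up for the example and not pulled from any particular tool.

    # Hypothetical per-member estimated vs. actual time differential (illustrative data only).
    from collections import defaultdict

    cards = [
        # (assignee, estimated_hours, actual_hours) as captured on completed story cards
        ("dev_a", 8, 11),
        ("dev_a", 5, 7),
        ("dev_b", 6, 6),
        ("dev_b", 10, 9),
    ]

    totals = defaultdict(lambda: [0.0, 0.0])
    for assignee, estimated, actual in cards:
        totals[assignee][0] += estimated
        totals[assignee][1] += actual

    for assignee, (estimated, actual) in sorted(totals.items()):
        delta_pct = (actual - estimated) / estimated * 100
        print(f"{assignee}: estimated {estimated:.0f}h, actual {actual:.0f}h, delta {delta_pct:+.0f}%")

A consistently large delta for one person is a coaching conversation, not a performance review.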

In this case, there were several technical challenges related to new elements of the underlying architecture, and the team put in extra hours to resolve them. Even so, they were unable to complete all the work they committed to in the sprint. The scrum master and product owner need to monitor this so it doesn’t become a recurrent event; left unchecked, it risks team burnout and morale erosion. There are likely some unstated dependencies or skill deficiencies that need to be put on the table for discussion during the retrospective.

Figure 2 shows, among other things, unexpected jumps in the burn-down chart. There is clearly a significant amount of thrashing evident in the burn-down (which stubbornly refuses to actually burn down).

Figure 2

Questions to explore:

  • Are cards being brought into the sprint after the sprint has started and why?
  • Are original time estimates being changed on cards after the sprint has started?
  • Is there a stakeholder in the grass, meddling with the team’s commitment?
  • Was a team member added to the team and cards brought into the sprint to accommodate the increased bandwidth?
  • Whatever is causing the thrashing, is the team (delivery team members, scrum master, and product owner) aware of the changes?

Scope change during a sprint is a very undesirable practice. Not just because it goes against the scrum framework, but more so because it almost always has an adverse effect on team morale and focus. If there is an addition to the team, better to set that person to work helping teammates complete the work already defined in the sprint and assign them cards in the next sprint.

If team members are adjusting original time estimates for “accuracy” or whatever reason they may provide, this is little more than gaming the system. It does more harm than good, assuming management is Agile savvy and not intent on using Agile metrics for punitive purposes. On occasion I’ve had to hide the original time estimate entry field from the view of delivery team members and place it under the control of the product owner – out of sight, out of mind. It’s less of a concern to me that time estimates are “wrong,” particularly if time estimate accuracy is showing improvement over time or the delta is a somewhat consistent value. I can work with a delivery team member’s time estimates that are 30% off if they are consistently 30% off.
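To illustrate the point about consistency, a deliberately tiny sketch; the 30% ratio is a hypothetical, historically observed value, not a rule.

    # Hypothetical calibration of a consistently-off estimator (illustrative numbers).
    observed_ratio = 1.3  # actuals have run ~30% over this member's estimates, sprint after sprint

    def calibrated_hours(raw_estimate_hours, ratio=observed_ratio):
        """Scale a raw estimate by the member's historical actual-to-estimate ratio."""
        return raw_estimate_hours * ratio

    print(calibrated_hours(10))  # a card estimated at 10 hours gets planned as 13.0 hours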

In the case of Figure 2 it was the team’s second sprint and at the retrospective the elephant was called out from hiding: The design was far from stable. The decision was made to set aside scrum in favor of using Kanban until the numerous design issues could be resolved.

Figure 3 shows a burn-down chart that doesn’t go to zero by the end of the sprint.

Figure 3

The team missed their commit and quite a few cards rolled to the next sprint. Since the issue emerged late in the sprint there was little corrective action that could be taken. The answers were left to discovery during the retrospective. In this case, one of the factors was the failure to move development efforts into QA until late in the sprint. This is an all too common issue in cases where the sprint commitments were not fully satisfied. For this team the QA issue was exacerbated by the team simply taking on more than they thought they could commit to completing. The solution was to reduce the amount of work the team committed to in subsequent sprints until a stable sprint velocity emerged.

Conclusion

For a two week sprint on a project that is 5-6 sprints in, I usually don’t bother looking at time burn-down/burn-up charts for the first 3-4 days. Early trends can be misleading, but by the time a third of the sprint has been completed this metric will usually start to show trends that suggest any emergent problems. For new projects or for newly formed teams I typically don’t look at intra-sprint time metrics until much later in the project life cycle as there are usually plenty of other obvious and more pressing issues to work through.

I’ll conclude by reiterating my caution that these metrics are yardsticks, not micrometers. It is tempting to read too much into pretty graphs that have precise scales. Rather, the expert Agilist will let the metrics, whatever they are, speak for themselves and work to limit the impact of any personal cognitive biases.

In this series we’ve explored several ways to interpret the signals available to us in estimated time burn-down and actual time burn-up charts. There are numerous other scenarios that can reveal important information from such burn-down/burn-up charts, and I would be very interested in hearing about your experiences with using this particular metric in Agile environments.

Agile Metrics – Time (Part 2 of 3)

In Part 1 of this series, we set the frame for how to use time as a metric for assessing Agile team and project health. In Part 2, we’ll look at shifts in the cross-over point between burn-down and burn-up charts and explore what issues may be in play for the teams under these circumstances.

Figure 1 shows a cross-over point occurring early in the sprint.

Figure 1

This suggests the following questions:

  • Is the team working longer hours than needed? If so, what is driving this effort? Are any of the team members struggling with personal problems that have them working longer hours? Are they worried they may have committed to more work than they can complete in the sprint and are therefore trying to stay ahead of the work load? Has someone from outside the team requested additional work outside the awareness of the product owner or scrum master?
  • Has the team overestimated the level of effort needed to complete the cards committed to the sprint? If so, this suggests an opportunity to coach the team on ways to improve their estimating or the quality of the story cards.
  • Has the team focused on the easy story cards early in the sprint, leaving the more difficult story cards pending? This isn’t necessarily a bad thing, just something to know and be aware of after confirming it with the team. If accurate, it also points out the importance of using this type of metric for intra-sprint monitoring only and not extrapolating what it shows to a project-level metric.

The answers to these questions may not become apparent until later in the sprint, and the point isn’t to try to “correct” the workflow based on relatively little information. In the case of Figure 1, the “easy” cards had been sized as being more difficult than they actually were. The more difficult cards were sized too small and a number of key dependencies were not identified prior to the sprint planning session. This is reflected in the burn-up line that significantly exceeds the initial estimate for the sprint, the jumps in the burn-down line, and the subsequent failure to complete a significant portion of the cards in the sprint backlog. All good fodder for the retrospective.

Figure 2 shows a cross-over point occurring late in the sprint.

Figure 2

On the face of it there are two significant stretches of inactivity. Unless you’re dealing with a blatantly apathetic team, there is undoubtedly some sort of activity going on. It’s just not being reflected in the work records. The task is to find out what that activity is and how to mitigate it.

The following questions will help expose the cause for the extended periods of apparent inactivity:

  • Are one or more members not feeling well or are there other personal issues impacting an individual’s ability to focus?
  • Have they been poached by another project to work on some pressing issue?
  • Are they waiting for feedback from stakeholders,  clients, or other team members?
  • Are the story cards unclear? As the saying goes, story cards are an invitation to a conversation. If a story card is confusing, contradictory, or unclear, then the team needs to talk about that. What’s unclear? Where’s the contradiction? As my college calculus professor used to ask when teaching us how to solve math problems, “Where’s the source of the agony?”

The actual reasons behind Figure 2 were twofold: there was a significant technical challenge the developers had to resolve that wasn’t sufficiently described by any of the cards in the sprint, and later in the sprint several key resources were pulled off the project to deal with issues on a separate project.

Figure 3 shows a similar case of a late sprint cross-over in the burn-down/burn-up chart. The reasons for this occurrence were quite different than those shown in Figure 2.

Figure 3

This was an early sprint, and a combination of design and technical challenges were not as well understood as originally thought at the sprint planning session. As these issues emerged, additional cards were created in the product backlog to be addressed in future sprints. Nonetheless, the current sprint commitment was missed by a significant margin.

In Part 3, we’ll look at other asymmetries and anomalies that can appear in time burn-down/burn-up charts and explore the issues that may be in play for the teams under these circumstances.

Agile Metrics – Time (Part 1 of 3)

Some teams choose to use card-level estimated and actual time as one of the level of effort or performance markers for project progress and health. For others, it’s a requirement of the work environment due to management or business constraints. If your situation resembles one of these cases, then you will need to know how to use time metrics responsibly and effectively. This series of articles will establish several common practices you can use to develop your skills for evaluating and leveraging time-based metrics in an Agile environment.

It’s important to keep in mind that time estimates are just one of the level of effort or performance markers that can be used to track team and project health. There can, and probably should be other markers in the overall mix of how team and project performance is evaluated. Story points, business value, quality of information and conversation from stand-up meetings, various product backlog characteristics, cycle time, and cumulative flow are all examples of additional views into the health and progress of a project.

In addition to using multiple views, it’s important to be deeply aware of the strengths and limits presented by each of them. The limits are many while the strengths are few.  Their value comes in evaluating them in concert with one another, not in isolation.  One view may suggest something that can be confirmed or negated by another view into team performance. We’ll visit and review each of these and other metrics after this series of posts on time.

The examples presented in this series are never as cut and dried as they appear. Just as I previously described multiple views based on different metrics, each metric can offer multiple views. My caution is that these views shouldn’t be read like an electrocardiogram, with the expectation of a rigidly repeatable pattern from which a slight deviation could signal a catastrophic event. The examples are extracted from hundreds of sprints and dozens of projects over the course of many years and are more like seismology graphs – they reveal patterns over time that are very much context dependent.

Estimated and actual time metrics allow teams to monitor sprint progress by comparing time remaining to time spent. Respectively, these are a burn-down and a burn-up chart, in reference to the direction of the data plotted on the chart. In Figure 1, the red line represents the estimated time remaining (burn-down) while the green line represents the amount of time logged against the story cards (burn-up) over the course of a two week sprint. (The gray line is a hypothetical ideal burn-down.)

Figure 1
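For a sense of how the two lines are built, here is a minimal sketch. The daily numbers are invented for illustration; in practice a tool such as JIRA derives these series from the estimates and the work logged against the cards.

    # Hypothetical daily series behind a burn-down/burn-up chart (illustrative numbers only).
    # remaining[d] = estimated hours still open at the end of day d  (burn-down, the red line)
    # logged[d]    = cumulative actual hours logged through day d    (burn-up, the green line)

    sprint_days = 10
    initial_estimate = 200  # total estimated hours committed at sprint planning

    estimate_burned_per_day = [15, 20, 25, 20, 20, 25, 20, 20, 20, 15]  # made-up numbers
    hours_logged_per_day    = [18, 22, 24, 21, 19, 23, 22, 21, 19, 16]

    remaining, logged = [], []
    est_left, total_logged = initial_estimate, 0
    for day in range(sprint_days):
        est_left -= estimate_burned_per_day[day]
        total_logged += hours_logged_per_day[day]
        remaining.append(est_left)
        logged.append(total_logged)

    # The cross-over is the first day the burn-up meets or exceeds the burn-down.
    crossover = next((d + 1 for d in range(sprint_days) if logged[d] >= remaining[d]), None)
    print(f"cross-over on day {crossover}; {remaining[-1]}h remaining, {logged[-1]}h logged at sprint end")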

The principal value of a burn-down/burn-up chart for time is the view it gives of intra-sprint performance. I usually look at this chart just prior to a team’s daily stand-up to get a sense of whether there are any questions I need to be asking about emerging trends. In this series of posts we’ll explore several of the things to look for when preparing for a stand-up. At the end of the sprint, the burn-down/burn-up chart can be a good reference to use during the retrospective when looking for ways to improve.

The sprint shown in Figure 1 is about as ideal a picture as one can expect. It shows all the points I look for that tell me, insofar as time is concerned, the sprint performance is in good health; a rough check of these points is sketched in code after the list.

  • There is a cross-over point roughly in the middle of the sprint.
  • At the cross-over point about half of the estimated time has been burned down.
  • The burn-down time is a close match to the burn-up at both the cross-over point and the end of the sprint.
  • The burn-down and burn-up lines show daily movement in their respective directions.
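Those four points can be expressed as a purely illustrative health check over the kind of series sketched earlier; the thresholds are my own assumptions, not a standard.

    # Rough, illustrative checks of the four points above (thresholds are assumptions, not a standard).
    def sprint_time_health(remaining, logged, initial_estimate):
        """remaining/logged: end-of-day burn-down and burn-up values; returns named pass/fail checks."""
        days = len(remaining)
        x = next((d for d in range(days) if logged[d] >= remaining[d]), None)  # cross-over day index
        return {
            "cross-over near mid-sprint": x is not None and days * 0.3 <= x + 1 <= days * 0.7,
            "about half the estimate burned at cross-over": x is not None
                and abs(remaining[x] - initial_estimate / 2) <= initial_estimate * 0.15,
            "lines close at cross-over and at sprint end": x is not None
                and abs(remaining[x] - logged[x]) <= initial_estimate * 0.10
                and abs(logged[-1] - initial_estimate) <= initial_estimate * 0.15,
            "daily movement in both lines": all(
                remaining[d] < remaining[d - 1] and logged[d] > logged[d - 1] for d in range(1, days)
            ),
        }

    # Example using the hypothetical series from the earlier sketch (200h initial estimate).
    remaining = [185, 165, 140, 120, 100, 75, 55, 35, 15, 0]
    logged    = [ 18,  40,  64,  85, 104, 127, 149, 170, 189, 205]
    print(sprint_time_health(remaining, logged, 200))

A chart that passes all four checks looks like Figure 1; the cases explored in Parts 2 and 3 are what it looks like when one or more of them fail.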

In Part 2, we’ll look at several cases where the cross-over point shifts and explore the issues the teams under these circumstances might be struggling with.

(This article cross-posted on LinkedIn.)

How to Know You Have a Well Defined Minimum Viable Product

Conceptually, the idea of a minimum viable product (MVP) is easy to grasp. Early in a project, it’s a deliverable that bears some semblance to the final product, such that it’s barely able to stand on its own without lots of hand-holding and explanation for the customer’s benefit. In short, it’s terrible, buggy, and unstable. By design, MVPs lack features that may eventually prove to be essential to the final product. And we deliberately show the MVP to the customer!

We do this because the MVP is the engine that turns the build-measure-learn feedback loop. The key here is the “learn” phase. The essential features to the final product are often unclear or even unknown early in a project. Furthermore, they are largely undefinable or unknowable without multiple iterations through the build-measure-learn feedback cycle with the customer early in the process.

So early MVPs aren’t very good. They’re also not very expensive. This, too, is by design because an MVP’s very raison d’être is to test the assumptions we make early on in a project. They are low budget experiments that follow from a simple strategy:

  1. State the good faith assumptions about what the customer wants and needs.
  2. Describe the tests the MVP will satisfy that are capable of measuring the MVP’s impact on the stated assumptions.
  3. Build an MVP that tests the assumptions.
  4. Evaluate the results.

If the assumptions are not stated and the tests are vague, the MVP will fail to achieve its purpose and will likely result in wasted effort.

The “product” in “minimum viable product” can be almost anything: a partial or early design flow, a wireframe, a collection of simulated email exchanges, the outline to a user guide, a static screen mock-up, a shell of screen panels with placeholder text that can nonetheless be navigated – anything that can be placed in front of a customer for feedback qualifies as an MVP. In other words, a sprint can contain multiple MVPs depending on the functional groups involved with the sprint and the maturity of the project. As the project progresses, the individual functional group MVPs will begin to integrate and converge on larger and more refined MVPs, each gaining in stability and quality.

MVPs are not an end unto themselves. They are tangible evidence of the development process in action. The practice of iteratively developing MVPs helps develop the skill of rapid evaluation and learning among product owners and agile delivery team members. A buggy, unstable, ugly, bloated, or poorly worded MVP is only a problem if it’s put forward as the final product. The driving goal behind iterative MVPs is not perfection; rather, it is to support the process of learning what needs to be developed for the optimal solution that solves the customer’s problems.

“Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses.” – Eric Ries, The Lean Startup

So how might product owners and Agile teams begin to get a handle on defining an MVP? There are several questions the product owner and team can ask of themselves, in light of the product backlog, that may help guide their focus and decisions. (Use of the following term “stakeholders” can mean company executives or external customers.)

  • Identify the likely set of stakeholders who will be attending the sprint review. What will these stakeholders need to see so that they can offer valuable feedback? What does the team need to show in order to spark the most valuable feedback from the stakeholders?
  • What expectations have been set for the stakeholders?
  • Is the distinction clear between what the stakeholders want vs what they need?
  • Is the distinction clear between high and low value? Is the design cart before the value horse?
  • What are the top two features or functions the stakeholders  will be expecting to see? What value – to the stakeholders – will these features or functions deliver?
  • Will the identified features or functions provide long term value or do they risk generating significant rework down the road?
  • Are the identified features or functions leveraging code, content, or UI/UX reuse?

Recognizing an MVP – Less is More

Since an MVP can be almost anything,  it is perhaps easier to begin any conversation about MVPs by touching on the elements missing from an MVP.

An MVP is not a quality product. Using any generally accepted definition of “quality” in the marketplace, an MVP will fail on all accounts. Well, on most accounts. The key is to consider relative quality. At the beginning of a sprint, the standards of quality for an MVP are framed by the sprint goals and objectives. If it meets those goals, the team has successfully created a quality MVP. If measured against the external marketplace or the quality expectations of the customer, the MVP will almost assuredly fail inspection.

Your MVPs will probably be ugly, especially at first. They will be missing features. They will be unstable. Build them anyway. Put them in front of the customer for feedback. Learn. And move on to the next MVP. Progressively, they will begin to converge on the final product that is of high quality in the eyes of the customer. MVPs are the stepping stones that get you across the development stream and to the other side where all is sunny, beautiful, and stable. (For more information on avoiding the trap of presupposing what a customer means by quality and value, see “The Value of ‘Good Enough’”.)

An MVP is not permanent. Agile teams should expect to throw away several, maybe even many, MVPs on their way to the final product. If they aren’t, then it is probable they are not learning what they need to about what the customer actually wants. In this respect, waste can be a good, even important thing. The driving purpose of the MVP is to rapidly develop the team’s understanding of what the customer needs, the problems they are expecting to have solved, and the level of quality necessary to satisfy each of these goals.

MVPs are not the truth. They are experiments meant to get the team to the truth. By virtue of their low-quality, low-cost nature, MVPs quickly shake out the attributes to the solution the customer cares about and wants. The solid empirical foundation they provide is orders of magnitude more valuable to the Agile team than any amount of speculative strategy planning or theoretical posturing.

(This article cross-posted on LinkedIn.)

The Value of “Good Enough”

Any company interested in being successful, whether offering a product or service, promises quality to its customers. Those that don’t deliver, die away. Those that do, survive. Those that deliver quality consistently, thrive. Seems like easy math. But then, 1 + 1 = 2 seems like easy math until you struggle through the 350+ pages Whitehead and Russell1 spent on setting up the proof for this very equation. Add the subjective filters for evaluating “quality” and one is left with a measure that can be a challenge to define in any practical way.

Math aside, when it comes to quality, everyone “knows it when they see it,” usually in counterpoint to a decidedly non-quality experience with a product or service. The nature of quality is indeed chameleonic – durability, materials, style, engineering, timeliness, customer service, utility, aesthetics – the list of measures is nearly endless. Reading customer reviews can reveal a surprising array of criteria used to evaluate the quality for a single product.

The view from within the company, however, is even less clear. Businesses often believe they know quality when they see it. Yet that belief is often predicated on how the organization defines quality, not how their customers define quality. It is a definition that is frequently biased in ways that accentuate what the organization values, not necessarily what the customer values.

Organization leaders may define quality too high, such that their product or service can’t be priced competitively or delivered to the market in a timely manner. If the high quality niche is there, the business might succeed. If not, the business loses out to lower priced competitors that deliver products sooner and satisfy the customer’s criteria for quality (see Figure 1).

Figure 1. Quality Mismatch I

Certainly, there is a case that can be made for providing the highest quality possible and developing the business around that niche. For startups and new product development, however, this may not be the best place to start.

On the other end of the spectrum, businesses that fall short of customer expectations for quality suffer incremental, or in some cases catastrophic, reputation erosion. Repairing or rebuilding a reputation for quality in a competitive market is difficult, maybe even impossible (see Figure 2).

Figure 2. Quality Mismatch II

The process for defining quality on the company side of the equation, while difficult, is more or less deliberate. Not so on the customer side. Customers often don’t know what they mean by “quality” until they have an experience that fails to meet their unstated, or even unknown, expectations. Quality savvy companies, therefore, invest in understanding what their customers mean by “quality” and plan accordingly. Less guess work, more effort toward actual understanding.

Furthermore, looking to what the competition is doing may not be the best strategy. They may be guessing as well. It may very well be that the successful quality strategy isn’t down the path of adding more bells and whistles that market research and focus groups suggest customers want. Rather, it may be that improvements in existing features and services are more desirable.

Focus on being clear about whether or not potential customers value the offered solution and how they define value. When following an Agile approach to product development, leveraging minimum viable product definitions can help bring clarity to the effort. With customer-centric benchmarks for quality in hand, companies are better served by first defining quality in terms of “good enough” in the eyes of their customers and then setting the internal goal a little higher. This will maximize internal resources (usually time and money) and deliver a product or service that satisfies the customer’s idea of “quality.”

Case in point: Several months back, I was assembling several bar clamps and needed a set of cutting tools used to put the thread on the end of metal pipes – a somewhat exotic tool for a woodworker’s shop. Shopping around, I could easily drop $300 for a five star “professional” set or $35 for a set that was rated to be somewhat mediocre. I’ve gone high end on many of the tools in my shop, but in this case the $35 set was the best solution for my needs. Most of the negative reviews revolved around issues with durability after repeated use. My need was extremely limited and the “valuable and good enough” threshold was crossed at $35. The tool set performed perfectly and more than paid for itself when compared with the alternatives, whether that be a more expensive tool or my time to find a local shop to thread the pipes for me. This would not have been the case for a pipefitter or someone working in a machine shop.

By understanding where the “good enough and valuable” line is, project and organization leaders are in a better position to evaluate the benefits of incremental improvements to core products and services that don’t break the bank or burn out the people tasked with delivering the goods. Of course, determining what is “good enough” depends on the end goal. When sending a rover to Mars, “good enough” had better be as near to perfection as possible. Threading a dozen pipes for bar clamps used in a wood shop can be completed quite successfully with low quality tools that are “good enough” to get the job done.

References

1Volume 1 of Principia Mathematica by Alfred North Whitehead and Bertrand Russell (Cambridge University Press, page 379). The proof was actually not completed until Volume 2.

(This article cross-posted at LinkedIn.)

Relative Team Expertise and Story Sizing

In Parkinson’s Law of Triviality and Story Sizing, I touched on the issue of relative expertise among team members during collaborative efforts to size story cards. I’d like to expand on that idea by considering several types of team compositions.

Team 1 is a tight knit band of four software developers represented in Figure 1.

Figure 1 – Team 1

Their preferred domain and depth of experience are represented by the color and area of their respective circles. While they each have their own area of expertise, there is a significant overlap in common knowledge. All four of them understand the underlying architecture, common coding practices, and fundamental coding principles. Furthermore, there is a robust amount of inter-domain expertise. When needed, the HTML5/CSS developer can probably help out with JavaScript issues, for example. The probability of this team successfully working together to size the stories in the product backlog is high.

Team 1 represents a near-ideal team composition for a typical software related project. However, the real world isn’t so generous in its allocation of near-ideal, let alone ideal, teams. A typical team for a software related project is more likely to resemble Team 2, as represented in Figure 2.

Figure 2 – Team 2

In Team 2, the JavaScript developer is fresh out of college, new to the company, and new to the business. His real-world experience is limited, so his circle of expertise is smaller relative to his teammates’. The HTML5/CSS developer has been working for the company for 10 years and knows the business like the back of her hand, so she has a much wider view of how her work impacts the company and product development. As a team, there is much less overlap, and options for helping each other through a sprint are diminished. As for collaborative story sizing efforts, the HTML5/CSS and C# developers are likely to dominate the conversation while the JavaScript developer agrees with just about anything not JavaScript related.

As Agile practices become more ubiquitous in the business world, team composition begins to resemble Team 3, as shown in Figure 3.

Figure 3 – Team 3

The mix now includes non-technical people – content developers and editors, strategists, and designers. Even assuming an equal level of experience in their respective domains, the company, and the business environment, there is very little overlap. Arriving at a consensus during a story sizing exercise now becomes a significant challenge. But again, the real world isn’t even so kind as this. We are increasingly more likely to encounter teams that resemble Team 4 as shown in Figure 4.

Figure 4 – Team 4

As before, the relative circle of expertise among team members can vary quite a bit. When a team resembles the composition of Team 4, the software developers (HTML5/CSS and C#) will have trouble understanding what the Learning Strategist is asking for while the Learning Strategist may not understand why what he wants the software developers to deliver isn’t possible.

When I’ve attempted to facilitate story sizing sessions with teams that resemble Team 4, they either become quite contentious (and therefore time consuming) or team members who don’t have the expertise to understand a particular card simply accept the opinion of the stronger voices. Neither of these situations is desirable.

To counteract these possibilities, I’ve found it much more effective to have the card assignee determine the card size (points and time estimate) and work to have the other team members ask questions about the work described on the card such that the assignee and the team better understand the context in which the card is positioned. The team members that lack domain expertise, it turns out, are in a good position to help craft good acceptance criteria.

  • Who will consume the work product that results from the card? (dependencies)
  • What cards need to be completed before a particular card can be worked on? (dependencies)
  • Is everything known about what a particular card needs before it can be completed? (dependencies, discovery, exploration)

At the end of a brief conversation where the entire team is working to evaluate the card for anything other than level of effort (time) and complexity (points), it is not uncommon for the assignee to reconsider their sizing, break the card into multiple cards, or determine the card shouldn’t be included in the sprint backlog. In short, it ends up being a much more productive conversation if teammates aren’t haggling over point distinctions or passively accepting what more experienced teammates are advocating. The benefit to the product owner is that they now have additional information that will undoubtedly influence the product backlog prioritization.

Minimum Viable Product – It’s What You Don’t See

Take a moment or two to gaze at the image below. What do you see?

Do you see white dots embedded within the grid connected by diagonal white lines? If you do, try and ignore them. Chances are, your brain won’t let you even though the white circles and diagonal lines don’t exist. Their “thereness” is created by the thin black lines. By carefully drawing a simple repetitive pattern of black lines, your brain has filled in the void and enhanced the image with white dots and diagonal white lines. You cannot not do this. This cognitive process is important to be aware of if you are a product owner because both your agile delivery team members and clients will run this program without fail.

Think of the black lines as the minimum viable product definition for one of your sprints. When shown to your team or your client, they will naturally fill the void for what’s next or what’s missing. Maybe as a statement, most likely as a question. But what if the product owner defined the minimum viable product further and presented, metaphorically, something like this:

By removing the white space from the original image there are fewer possibilities for your team and the client to explore. We’ve reduced their response to our proposed solution to a “yes” or “no” and in doing so have started moving down the path of near endless cycles of the product owner guessing what the client wants and the agile delivery team guessing what the product owner wants. Both the client and the team will grow increasingly frustrated at the lack of progress. Played out too long, the client is likely to doubt our skills and competency at finding a solution.

On the other hand, by strategically limiting the information presented in the minimum viable product (or effort, if you like) we invite the client and the agile delivery team to explore the white space. This will make them co-creators of the solution and more fully invested in its success. Since they co-created the solution, they are much more likely to view the solution as brilliant, perfect, and the shiniest of shiny objects.

I can’t remember where I heard or read this, but in the first image the idea is that the black lines are you talking and the white spaces are you listening.