OnAgile 2017

Attended the Agile Alliance’s OnAgile 2017 Conference yesterday. This is always an excellent conference and you can’t beat the price!

I created a mind map throughout the course of the conference as a way to organize my thoughts and key points. This doesn’t capture all the points by any means. Just those that stood out as important or new for me. I missed most of the first session due to connectivity issues.

Clouds and Windmills

I recently resigned from the company I had been employed by for over 5 years. The reason? It was time.

During my tenure1 I had the opportunity to re-define my career several times within the organization in a way that added value and kept life productive, challenging, and rewarding. Each re-definition involved a rather extensive mind mapping exercise with hundreds of nodes to describe what was working, what wasn’t working, what needed fixing, and where I believed I could add the highest value.

This past spring, events prompted another iteration of this process. It began with the question “What wouldn’t happen if I didn’t go to work today?”2 This is the flip of asking “What do I do at work?” The latter is a little self-serving. We all want to believe we are adding value and are earning our pay. The answer is highly filtered through biases, justifications, excuses, and rationalizations. But if in the midst of a meeting you ask yourself, “What would be different if I were not present or otherwise not participating?”, the answer can be a little unsettling.

This time around, in addition to mind mapping skills, I was equipped with the truly inspiring work of Tanmay Vora and his sketchnote project. Buy me a beer some day and I’ll let you in on a few of my discoveries. Suffice it to say, the overall picture wasn’t good. I was getting the feeling this re-definition cycle was going to include a new employer.

A cascade of follow-on questions flowed from this iteration’s initial question. At the top:

  1. Why am I staying?
  2. Is this work aligned with my purpose?
  3. Have my purpose and life goals changed?

The answers:

  1. The paycheck.
  2. No.
  3. A little.

Of course, it wasn’t this simple. The organization changed, as did I, in a myriad of ways. While exploring these questions, I was reminded of a story my Aikido teacher, Gaku Homma, would tell when describing his school. He said it was like a rope. In the beginning, it had just a few threads that joined with him to form a simple string. Not very strong. Not very obvious. But very flexible. Over time, more and more students joined his school and wove their practice into Nippon Kan’s history. Each new thread subtly changed the character of the emerging rope. More threads, more strength, and more visibility. Eventually, an equilibrium emerges. Some of those threads stop after a few short weeks of classes, others (like mine) run 25 years before they stop, and for a few their thread ends in a much more significant way.

Homma Sensei has achieved something very difficult. The threads that form Nippon Kan’s history are very strong, very obvious, and yet remain very flexible. Even so, there came a time when the right decision for me was to leave, taking with me a powerful set of skills, many good memories, and friendships. The same was true for my previous employer. Their rope is bending in a way that is misaligned with my purpose and goals. Neither good nor bad. Just different. Better to leave with many friendships intact and a strong sense of having added value to the organization during my tenure.

The world is full of opportunities. And sometimes you have to deliberately and intentionally clear all the collected clutter from your mental workspace so those opportunities have a place to land. Be attentive to moments like this before you’re remembered only as someone who yells at clouds and tilts at windmills.


1 By the numbers…

1,788 Stand-ups
1,441 Wiki/Knowledgebase Contributions
311 Sprint/Release Planning Sessions
279 Reviews
189 Retrospectives
101 Projects
31 Internal Meet-ups
22 Agile Cafés
10 Newsletters
5.5 Years
3 Distinct Job Titles
1 Wild Ride

2 My thanks to colleague Lennie Noiles and his presentation on Powerful Questions. While Lennie didn’t ask me this particular question, it was inspired by his presentation.

Agile Team Composition: Generalists versus Specialists

Levels of effort for a set of tasks can be efficiently and reliably estimated by a group of individuals well qualified to complete those tasks using a collaborative estimation process like planning poker. Such teams have a good measure of skill overlap. In the context of the problem set, each of the team members is a generalist in the sense that it’s possible for any one team member to work on a variety of cross functional tasks during a sprint. Differences in preferred coding language among team members, for example, are less of an issue when everyone understands advanced coding practices and the underlying architecture for the solution.

With a set of complementary technical skills, it is easier to agree on work estimates. There are other benefits that flow from well-matched teams. A stable sprint velocity emerges much sooner. There is greater cross functional participation. And re-balancing the work load when “disruptors” occur – like vacations, illness, uncommon feature requests, etc. – is easier to coordinate.

Once the set of tasks starts to include items that fall outside the expertise of the group and the group begins to include cross functional team members, a process like planning poker becomes increasingly less reliable. The issue is the mismatch between relative scales of expertise. A content editor is likely to have very little insight into the effort required to modify a production database schema. Their estimation may be little more than a guess based on what they think it “should” be. Similarly for a coder faced with estimating the effort needed to translate 5,000 words of text from English to Latvian. Unless, of course, you have an English speaking coder on your team who speaks fluent Latvian.

These distinctions are easy to spot in project work. When knowledge and solution domains have a great deal of overlap, generalization allows for a lot of high quality collaboration. However, when an Agile team is formed to solve problems that do not have a purely technical solution, specialization rather than generalization has a greater influence on overall success. The risk is that with very little overlap specialized team expertise can result in either shallow solutions or wasteful speculation – waste that isn’t discovered until much later. Moreover, re-balancing the team becomes problematic and most often results in delays and missed commitments due to the limited ability for cross functional participation among team mates.

The challenge for teams where knowledge and solution domains have minimal overlap is to manage the specialized expertise domains in a way that is optimally useful; that is, reliable, predictable, and actionable. Success becomes increasingly dependent on how good an organization is at estimating levels of effort when the team is composed of specialists.

One approach I experimented with was to add a second dimension to the estimation: a weight factor for the estimator’s level of expertise relative to the nature of the card being considered. The idea is that with a weighted expertise factor calibrated to the problem and solution contexts, a more reliable velocity emerges over time. In practice, this was difficult to implement. Teams spent valuable time challenging what the weighted factor should be, and less experienced team members felt their opinion had been, quite literally, discounted.
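
The arithmetic behind that second dimension is simple; the friction was in agreeing on the weights. The sketch below is a minimal illustration, assuming a 1–5 expertise scale and made-up votes; it is not the exact weighting scheme my teams used.

```python
# Illustrative sketch of an expertise-weighted estimate. The scale, weights,
# and votes are hypothetical, not a prescribed scheme.

def weighted_estimate(votes):
    """votes: list of (story_points, expertise_weight) pairs, where the
    weight might run from 1 (novice in this card's domain) to 5 (expert)."""
    total_weight = sum(weight for _, weight in votes)
    if total_weight == 0:
        raise ValueError("at least one estimator needs a non-zero weight")
    return sum(points * weight for points, weight in votes) / total_weight

# Example: a database schema change estimated by a DBA (weight 5),
# a generalist coder (weight 3), and a content editor (weight 1).
votes = [(8, 5), (13, 3), (3, 1)]
print(round(weighted_estimate(votes), 1))  # -> 9.1
```

Even with the math this simple, the time lost was in debating the weights, not in computing the average, which is why the approach never stuck.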

The approach I’ve had the most success with on teams with diverse expertise is to have story cards sized by the individual assigned to complete the work. This still happens in a collaborative refinement or planning session so that other team members can contribute information that is often outside the perspective of the work assignee. Dependencies, past experience with similar work on other projects, missing acceptance criteria, or a refinement to the story card’s minimum viable product (MVP) definition are all examples of the kind of information team members have contributed. This invariably results in an adjustment to the overall level of effort estimate on the story card. It also has made details about the story card more explicit to the team in a way that a conversation focused on story point values doesn’t seem to achieve. The conversation shifts from “What are the points?” to “What’s the work needed to complete this story card?”

I’ve also observed that by focusing ownership of the estimate on the work assignee, accountability and transparency tend to increase. Potential blockers are surfaced sooner and team members communicate issues and dependencies more freely with each other. Of course, this isn’t always the case and in a future post we’ll explore aspects of team composition and dynamics that facilitate or prevent quality collaboration.

Story Points and Fuzzy Bunnies

The scrum framework is forever tied to the language of sports in general and rugby in particular. We organize our project work around goals, sprints, points, and daily scrums. An unfortunate consequence of organizing projects around a sports metaphor is that the language of gaming ends up driving behavior. For example, people have a natural inclination to associate story points with a measure of success rather than an indicator of the effort required to complete the story. The more points you have, the more successful you are. This is reflected in an actual quote from a retrospective on things a team did well:

We completed the highest number of points in this sprint than in any other sprint so far.

This was a team that lost sight of the fact they were the only team on the field. They were certain to be the winning team. They were also destined to be the losing team. They were focused on story point acceleration rather than a constant, predictable velocity.

More and more I’m finding less and less value in using story points as an indicator for level of effort estimation. If Atlassian made it easy to change the label on JIRA’s story point field, I’d change it to “Fuzzy Bunnies” just to drive this idea home. You don’t want more and more fuzzy bunnies, you want no more than the number you can commit to taking care of in a certain span of time typically referred to as a “sprint.” A team that decides to take on the care and feeding of 50 fuzzy bunnies over the next two weeks but has demonstrated – sprint after sprint – they can only keep 25 alive is going to lose a lot of fuzzy bunnies over the course of the project.
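
To put numbers on the bunny math, here is a minimal sketch of capping the next sprint’s commitment at the demonstrated velocity. The three-sprint window and the sample numbers are illustrative assumptions, not a rule.

```python
# Cap the next sprint's commitment at the demonstrated (rolling) velocity.
# Window size and sample numbers are illustrative.

def demonstrated_velocity(completed_per_sprint, window=3):
    """Average points (or fuzzy bunnies) completed over the last `window` sprints."""
    recent = completed_per_sprint[-window:]
    return sum(recent) / len(recent)

completed = [22, 27, 26, 24]   # what the team actually finished, sprint by sprint
proposed = 50                  # what the team wants to take on next

cap = demonstrated_velocity(completed)
commitment = min(proposed, cap)
print(f"demonstrated velocity ~ {cap:.0f}; commit to {commitment:.0f}, not {proposed}")
```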

It is difficult for people new to scrum or Agile to grasp the purpose behind an abstract idea like story points. Consequently, they are unskilled in how to use them as a measure of performance and improvement. Developing this skill can take considerable time and effort. The care and feeding of fuzzy bunnies, however, they get. Particularly with teams that include non-technical domains of expertise, such as content development or learning strategy.

A note here for scrum masters. Unless you want to exchange your scrum master stripes for a saddle and spurs, be wary of your team turning story pointing into an animal farm. Sizing story cards to match the exact size and temperament of all manner of animals would be just as cumbersome as the sporting method of story points. So, watch where you throw your rope, Agile cowboys and cowgirls.

(This article cross-posted at LinkedIn)


Image credit: tsaiproject (Modified in accordance with Creative Commons Attribution 2.0 Generic license)

Parkinson’s Law of Perfection

C. Northcote Parkinson is best known for, not surprisingly, Parkinson’s Law:

Work expands so as to fill the time available for its completion.

But there are many more gems in “Parkinson’s Law and Other Studies in Administration.” The value of re-reading classics is that what was missed on a prior read becomes apparent given the accumulation of a little more experience and the current context. On a re-read this past week, I discovered this:

It is now known that a perfection of planned layout is achieved only by institutions on the point of collapse. This apparently paradoxical conclusion is based upon a wealth of archaeological and historical research, with the more esoteric details of which we need not concern ourselves. In general principle, however, the method pursued has been to select and date the buildings which appear to have been perfectly designed for their purpose. A study and comparison of these has tended to prove that perfection of planning is a symptom of decay. During a period of exciting discovery or progress there is no time to plan the perfect headquarters. The time for that comes later, when all the important work has been done. Perfection, we know, is finality; and finality is death.

Several years back my focus for the better part of a year was on mapping out software design processes for a group of largely non-technical instructional designers. If managing software developers is akin to herding cats, finding a way to shepherd non-technical creative types such as instructional designers (particularly old school designers) can be likened to herding a flock of canaries – all over the place in three dimensions.

What made this effort successful was framing the design process as a set of guidelines that were easy to track and monitor. The design standards and common practices, for example, consisted of five bullet points. Building just enough fence to keep everyone in the same area while limiting free range behaviors to specific places was important. These were far from perfect, but they allowed for the dynamic vitality suggested by Parkinson. If the design standards and common practices document ever grew past something that could fit on one page, it would suggest the company was moving toward over-specialization and providing services to a narrow slice of the potential client pie. In the rapidly changing world of adult education, this level of perfection would most certainly suggest decay and risk collapse as client needs change.

(This article cross-posted on LinkedIn.)

How to Know You Have a Well Defined Minimum Viable Product

Conceptually, the idea of a minimum viable product (MVP) is easy to grasp. Early in a project, it’s a deliverable that bears some semblance to the final product, yet it’s barely able to stand on its own without lots of hand-holding and explanation for the customer’s benefit. In short, it’s terrible, buggy, and unstable. By design, MVPs lack features that may eventually prove to be essential to the final product. And we deliberately show the MVP to the customer!

We do this because the MVP is the engine that turns the build-measure-learn feedback loop. The key here is the “learn” phase. The essential features of the final product are often unclear or even unknown early in a project. Furthermore, they are largely undefinable or unknowable without multiple iterations through the build-measure-learn feedback cycle with the customer early in the process.

So early MVPs aren’t very good. They’re also not very expensive. This, too, is by design because an MVP’s very raison d’être is to test the assumptions we make early on in a project. They are low budget experiments that follow from a simple strategy:

  1. State the good faith assumptions about what the customer wants and needs.
  2. Describe the tests the MVP will satisfy that are capable of measuring the MVP’s impact on the stated assumptions.
  3. Build an MVP that tests the assumptions.
  4. Evaluate the results.

If the assumptions are not stated and the tests are vague, the MVP will fail to achieve its purpose and will likely result in wasted effort.
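
One lightweight way to keep assumptions and tests from staying vague is to write them down in a structure the team reviews alongside the MVP. The sketch below is a hypothetical template; the field names and example values are mine, not a prescribed format.

```python
# Hypothetical structure for recording an MVP experiment: the assumption,
# the test, and what the sprint review actually showed.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MVPExperiment:
    assumption: str                    # good faith assumption about the customer
    test: str                          # how the MVP will measure that assumption
    success_criteria: str              # what result would support the assumption
    result: str = ""                   # filled in after the review
    validated: Optional[bool] = None   # None until evaluated

experiment = MVPExperiment(
    assumption="Customers want to export reports as PDF",
    test="Show a static mock-up with an 'Export' button and ask reviewers when they'd use it",
    success_criteria="At least half the stakeholders describe a concrete situation for exporting",
)

# After the sprint review:
experiment.result = "2 of 6 stakeholders could describe a real use for the export"
experiment.validated = False   # assumption not supported; revisit before building more
```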

The “product” in “minimum viable product” can be almost anything: a partial or early design flow, a wireframe, a collection of simulated email exchanges, the outline to a user guide, a static screen mock-up, a shell of screen panels with placeholder text that can nonetheless be navigated – anything that can be placed in front of a customer for feedback qualifies as an MVP. In other words, a sprint can contain multiple MVPs depending on the functional groups involved with the sprint and the maturity of the project. As the project progresses, the individual functional group MVPs will begin to integrate and converge on larger and more refined MVPs, each gaining in stability and quality.

MVPs are not an end unto themselves. They are tangible evidence of the development process in action. The practice of iteratively developing MVPs helps develop the skill of rapid evaluation and learning among product owners and agile delivery team members. A buggy, unstable, ugly, bloated, or poorly worded MVP is only a problem if it’s put forward as the final product. The driving goal behind iterative MVPs is not perfection, rather it is to support the process of learning what needs to be developed for the optimal solution that solves the customer’s problems.

“Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses.” – Eric Ries, The Lean Startup

So how might product owners and Agile teams begin to get a handle on defining an MVP? There are several questions the product owner and team can ask of themselves, in light of the product backlog, that may help guide their focus and decisions. (In the following, “stakeholders” can mean company executives or external customers.)

  • Identify the likely set of stakeholders who will be attending the sprint review. What will these stakeholders need to see so that they can offer valuable feedback? What does the team need to show in order to spark the most valuable feedback from the stakeholders?
  • What expectations have been set for the stakeholders?
  • Is the distinction clear between what the stakeholders want vs what they need?
  • Is the distinction clear between high and low value? Is the design cart before the value horse?
  • What are the top two features or functions the stakeholders will be expecting to see? What value – to the stakeholders – will these features or functions deliver?
  • Will the identified features or functions provide long term value or do they risk generating significant rework down the road?
  • Are the identified features or functions leveraging code, content, or UI/UX reuse?

Recognizing an MVP – Less is More

Since an MVP can be almost anything, it is perhaps easier to begin any conversation about MVPs by touching on the elements missing from an MVP.

An MVP is not a quality product. Using any generally accepted definition of “quality” in the marketplace, an MVP will fail on all counts. Well, on most counts. The key is to consider relative quality. At the beginning of a sprint, the standards of quality for an MVP are framed by the sprint goals and objectives. If it meets those goals, the team has successfully created a quality MVP. If measured against the external marketplace or the quality expectations of the customer, the MVP will almost assuredly fail inspection.

Your MVPs will probably be ugly, especially at first. They will be missing features. They will be unstable. Build them anyway. Put them in front of the customer for feedback. Learn. And move on to the next MVP. Progressively, they will begin to converge on the final product that is of high quality in the eyes of the customer. MVPs are the stepping stones that get you across the development stream and to the other side where all is sunny, beautiful, and stable. (For more information on avoiding the trap of presupposing what a customer means by quality and value, see “The Value of ‘Good Enough’“)

An MVP is not permanent. Agile teams should expect to throw away several, maybe even many, MVPs on their way to the final product. If they aren’t, then it is probable they are not learning what they need to about what the customer actually wants. In this respect, waste can be a good, even important thing. The driving purpose of the MVP is to rapidly develop the team’s understanding of what the customer needs, the problems they are expecting to have solved, and the level of quality necessary to satisfy each of these goals.

MVPs are not the truth. They are experiments meant to get the team to the truth. By virtue of their low-quality, low-cost nature, MVPs quickly shake out the attributes of the solution the customer cares about and wants. The solid empirical foundation they provide is orders of magnitude more valuable to the Agile team than any amount of speculative strategy planning or theoretical posturing.

(This article cross-posted on LinkedIn.)

The Value of “Good Enough”

Any company interested in being successful, whether offering a product or service, promises quality to its customers. Those that don’t deliver, die away. Those that do, survive. Those that deliver quality consistently, thrive. Seems like easy math. But then, 1 + 1 = 2 seems like easy math until you struggle through the 350+ pages Whitehead and Russell1 spent on setting up the proof for this very equation. Add the subjective filters for evaluating “quality” and one is left with a measure that can be a challenge to define in any practical way.

Math aside, when it comes to quality, everyone “knows it when they see it,” usually in counterpoint to a decidedly non-quality experience with a product or service. The nature of quality is indeed chameleonic – durability, materials, style, engineering, timeliness, customer service, utility, aesthetics – the list of measures is nearly endless. Reading customer reviews can reveal a surprising array of criteria used to evaluate the quality for a single product.

The view from within the company, however, is even less clear. Businesses often believe they know quality when they see it. Yet that belief is often predicated on how the organization defines quality, not how their customers define quality. It is a definition that is frequently biased in ways that accentuate what the organization values, not necessarily what the customer values.

Organization leaders may define quality too high, such that their product or service can’t be priced competitively or delivered to the market in a timely manner. If the high quality niche is there, the business might succeed. If not, the business loses out to lower priced competitors that deliver products sooner and satisfy the customer’s criteria for quality (see Figure 1).

Figure 1. Quality Mismatch I

Certainly, there is a case that can be made for providing the highest quality possible and developing the business around that niche. For startups and new product development, this may not be the best place to start.

On the other end of the spectrum, businesses that fall short of customer expectations for quality suffer incremental, or in some cases catastrophic, reputation erosion. Repairing or rebuilding a reputation for quality in a competitive market is difficult, maybe even impossible (see Figure 2).

Figure 2. Quality Mismatch II

The process for defining quality on the company side of the equation, while difficult, is more or less deliberate. Not so on the customer side. Customers often don’t know what they mean by “quality” until they have an experience that fails to meet their unstated, or even unknown, expectations. Quality savvy companies, therefore, invest in understanding what their customers mean by “quality” and plan accordingly. Less guess work, more effort toward actual understanding.

Furthermore, looking to what the competition is doing may not be the best strategy. They may be guessing as well. It may very well be that the successful quality strategy isn’t down the path of adding more bells and whistles that market research and focus groups suggest customers want. Rather, it may be that improvements in existing features and services are more desirable.

Focus on being clear about whether or not potential customers value the offered solution and how they define value. When following an Agile approach to product development, leveraging minimum viable product definitions can help bring clarity to the effort. With customer-centric benchmarks for quality in hand, companies are better served by first defining quality in terms of “good enough” in the eyes of their customers and then setting the internal goal a little higher. This will maximize internal resources (usually time and money) and deliver a product or service that satisfies the customer’s idea of “quality.”

Case in point: Several months back, I was assembling several bar clamps and needed a set of cutting tools used to put the thread on the end of metal pipes – a somewhat exotic tool for a woodworker’s shop. Shopping around, I could easily drop $300 for a five star “professional” set or $35 for a set that was rated to be somewhat mediocre. I’ve gone high end on many of the tools in my shop, but in this case the $35 set was the best solution for my needs. Most of the negative reviews revolved around issues with durability after repeated use. My need was extremely limited and the “valuable and good enough” threshold was crossed at $35. The tool set performed perfectly and more than paid for itself when compared with the alternatives, whether that be a more expensive tool or my time to find a local shop to thread the pipes for me. This would not have been the case for a pipefitter or someone working in a machine shop.

By understanding where the “good enough and valuable” line is, project and organization leaders are in a better position to evaluate the benefits of incremental improvements to core products and services that don’t break the bank or burn out the people tasked with delivering the goods. Of course, determining what is “good enough” depends on the end goal. If you’re sending a rover to Mars, “good enough” had better be as near to perfection as possible. Threading a dozen pipes for bar clamps used in a wood shop can be completed quite successfully with low quality tools that are “good enough” to get the job done.

References

1Volume 1 of Principia Mathematica by Alfred North Whitehead and Bertrand Russell (Cambridge University Press, page 379). The proof was actually not completed until Volume 2.

(This article cross-posted at LinkedIn.)

Minimum Viable Product – It’s What You Don’t See

Take a moment or two to gaze at the image below. What do you see?

Do you see white dots embedded within the grid connected by diagonal white lines? If you do, try and ignore them. Chances are, your brain won’t let you even though the white circles and diagonal lines don’t exist. Their “thereness” is created by the thin black lines. By carefully drawing a simple repetitive pattern of black lines, your brain has filled in the void and enhanced the image with white dots and diagonal white lines. You cannot not do this. This cognitive process is important to be aware of if you are a product owner because both your agile delivery team members and clients will run this program without fail.

Think of the black lines as the minimum viable product definition for one of your sprints. When shown to your team or your client, they will naturally fill the void for what’s next or what’s missing. Maybe as a statement, most likely as a question. But what if the product owner defined the minimum viable product further and presented, metaphorically, something like this:

By removing the white space from the original image there are fewer possibilities for your team and the client to explore. We’ve reduced their response to our proposed solution to a “yes” or “no” and in doing so have started moving down the path of near endless cycles of the product owner guessing what the client wants and the agile delivery team guessing what the product owner wants. Both the client and the team will grow increasingly frustrated at the lack of progress. Played out too long, the client is likely to doubt our skills and competency at finding a solution.

On the other hand, by strategically limiting the information presented in the minimum viable product (or effort, if you like) we invite the client and the agile delivery team to explore the white space. This will make them co-creators of the solution and more fully invested in its success. Since they co-created the solution, they are much more likely to view the solution as brilliant, perfect, and the shiniest of shiny objects.

I can’t remember where I heard or read this, but in the first image the idea is that the black lines are you talking and the white spaces are you listening.

False Barriers to Implementing Scrum

When my experience with scrum began to transition from developer to scrum master and on to mentor and coach, early frustrations could have been summed up in the phrase, “Why can’t people just follow a simple framework?” The passage of time and considerable experience has greatly informed my understanding of what may inhibit or prevent intelligent and capable people from picking up and applying a straightforward framework like scrum.

At the top of this list of insights has to be the tendency of practitioners to place elaborate decorations around their understanding of scrum. In doing so, they make scrum practices less accessible. The framework itself can make this a challenge. Early on, while serving in the role of mentor, I would introduce scrum with an almost clinical textbook approach: define the terms, describe the process, and show the obligatory recursive work flow diagrams. In short order, I’d be treading water (barely) in endlessly circuitous debates on topics like the differences between epics and stories. I wrote about this phenomenon in a previous post as it relates to story points. So how can we avoid being captured by Parkinson’s law of triviality and other cognitive traps?

Words Matter

I discovered that the word “epic” brought forth fatigue inducing memories of Homer’s Iliad and Odyssey, the Epic of Gilgamesh, and Shakespeare. Instant block. Solution out of reach. It was like putting a priceless, gold-plated, antique picture frame around the picture postcard of a jackalope your cousin Eddie sent you on his way through Wyoming. Supertanker loads of precious time were wasted in endless debates about whether or not something was an epic or a story. So, no more talk of epics. I started calling them “story categories.” Or “chapters.” Or “story bundles.” Whatever it took to get teams onto the idea that “epics” are just one of the dimensions to a story map or product backlog that helps the product owner and agile delivery team keep a sense of overall project scope. Story writing progress accelerated and teams were doing a decent job of creating “epics” without knowing they had done so. Fine tuning their understanding and use of formal scrum epics came later and with much greater ease.

“Sprint” is another unfortunate word in formal scrum. With few exceptions, the people that have been on my numerous scrum teams haven’t sprinted anywhere in decades. Sprinting is something one watches televised from some far away place every four years. Maybe. Given its fundamental tenets and principles, who’s to say a team can’t find a word for the concept of a “sprint” that makes sense to them? The salient rule, it would seem, is that whatever word they choose, the team fully understands that “it” is a time-boxed commitment for completing a defined set of work tasks. And if “tide,” “phase,” or “iteration” gets the team successfully through a project using scrum, then who am I to wear the badge of “Language Police?”

A good coach meets the novice at their level and then builds their expertise over time, structured in a way that matches and challenges the learner’s capacity to learn. I recall from my early Aikido practice the marked difference between instructors who stressed using the correct Japanese name for a technique and those who focused more on learning the physical techniques and described them in a language I could understand. Once I’d learned the physical patterns, the verbal names came much more easily.

Full disclosure: this is not as easy when there are multiple scrum teams in the same organization that eventually rotate team members. Similarly, integrating new hires with scrum experience is much easier when the language is shared. But to start, if the block to familiarization with the scrum process revolves around semantic debates, it makes sense to adapt the words so that the team can adopt the process, then evolve the words to match more closely those reflected in the scrum framework.

Philosophy, System, Mindset, or Process

A similar fate awaited team members that had latched onto the idea that scrum or agile in general is a philosophy. I watched something similar happen in the late 1980s when the tools and techniques of total quality management evolved into monolithic world views and corporate religions. More recently, I’ve attended meet-ups where conversations about “What is Agile?” include describing the scrum master as “therapist” or “spiritual guide.” Yikes! That’s some pretty significant mission creep.

I’m certain fields like philosophy and psychotherapy could benefit from many of the principles and practices found in agile. But it would be a significant category error to place agile at the same level as those fields of study. If you think tasking an agile novice with writing an “epic” is daunting, try telling them they will need to study and fully understand the “philosophy of agile” before they become good agile practitioners.

The issue is that it puts the idea of practicing agile essentially out of reach for the new practitioner or business leader thinking about adopting agile. The furthest up this scale I’m willing to push agile is that it is a mindset. An adaptive way of thinking about how work gets done. From this frame I can leverage a wide variety of common, real-life experiences that will help those new to agile understand how it can help them succeed in their work life.

Out in the wild, best to work the system and process angles if you want meaningful work to actually get done.

Parkinson’s Law of Triviality and Story Sizing

From Infogalactic: Parkinson’s law of triviality

Parkinson observed and illustrated that a committee whose job was to approve plans for a nuclear power plant spent the majority of its time on discussions about relatively trivial and unimportant but easy-to-grasp issues, such as what materials to use for the staff bike-shed, while neglecting the non-trivial proposed design of the nuclear power plant itself, which is far more important but also a far more difficult and complex task to criticise constructively.

I see this phenomenon in play during team story sizing exercises in the following scenarios.

  1. In the context of the story being sized, the relative expertise of each of the team members is close to equal in terms of experience and depth of knowledge. The assumption is that if everyone on the team is equally qualified to estimate the effort and complexity of a particular story then the estimation process should move along quickly. With a skilled team, this does, indeed, occur. If it is a newly formed team or if the team is new to agile principles and practices, Parkinson’s Law of Triviality can come into play as the effort quickly gets lost in the weeds.
  2. In the context of the story being sized, the relative expertise of the team members is not near parity and yet each of the individual team members has a great deal of expertise in the context of their respective functional areas. What I’ve observed happening is that the team members least qualified to evaluate the particular story feel the need to assert their expertise and express an opinion. I recall an instance where a software developer estimated it would take 8 hours of coding work to place a “Print This” button on a particular screen. The credentialed learning strategist (who asked for the print button and has no coding experience) seemed incredulous that such an effort would require so much time. A lengthy and unproductive argument ensued.

To prevent this I focus my coaching efforts primarily on the product owner, as they will be interacting with the team on this effort during product backlog refinement sessions more frequently than I do. They need to watch for:

  1. Strong emotional response by team members when a size or time estimate is proposed.
  2. Conversations that drop further and further into design details.
  3. Conversations that begin to explore multiple “what if” scenarios.

The point isn’t to prevent each of these behaviors from occurring. Rather, manage them. If there is a strong emotional response, quickly get to the “why” behind that response. Does the team member have a legitimate objection or does their response lack foundation?

Every team meeting is an opportunity to clarify the bigger picture for the team so a little bit of conversation around design and risks is a good thing. It’s important to time box those conversations and agree to take the conversation off-line from the backlog refinement session.

When coaching the team, I focus primarily on the skills needed to effectively size an effort. Within this context I can also address the issue of relative expertise and how to leverage and value the opinions expressed by team members who may not entirely understand the skills needed to complete a particular story.

(John Cook cites another interesting example of Parkinson’s Law of Triviality (a.k.a. the bike shed principle) from Michael Bierut’s book How to, involving the design of several logos.)