Estimating Effort – An Explicitly Implicit Approach

It is difficult to make predictions, especially about the future.
– Unknown

Sage advice.

So why bother estimating the amount of work needed to complete a product backlog item? After all, since estimates are about the future, the probability is high that they will be wrong. Actually, they may very well be guaranteed to be wrong. It’s just that some of the guesses happen to match what the effort ended up being, and so they look like they were “right.”

I’ve written in the past about my thoughts on estimating the effort needed to complete product backlog items, particularly with respect to story points. I believe it is important to work toward a relative gauge of how well teams are estimating their work. Without one, cognitive biases such as the optimism bias and the planning fallacy can significantly distort a project delivery timeline. However, the phrase “story point” carries a lot of baggage. It has been abused and misused to the point that invoking it often causes more harm than good.

I’ve been experimenting recently with a different approach to estimating effort. The method I’ll describe in this post got a bit of a boost after listening to a recent interview with psychologist and Nobel laureate Daniel Kahneman. In this interview, Kahneman describes an experience he had while serving in the Israeli army some sixty years ago. He was assigned the job of setting up an interview process that would determine how well a recruit would do as a combat soldier. For this process, he selected six traits and instructed the interviewers to ask questions designed to evaluate each trait independently and score it. The interviewers were not happy with this approach. As a compromise, Kahneman instructed the interviewers, when they were finished asking about the six traits, to close their eyes and just jot down a number they felt matched how good a soldier the recruit might be. What he discovered:

When we validated the results of the interview, it was a big improvement on what had gone on before. But the other surprise was that the final intuitive judgments added, it was good. It was as good as the average of the six traits, and not the same. It added information, so actually we ended up with a score that was half determined by the specific ratings, and the intuition got half the weight. That, by the way, stayed in the Israeli army for well over 50 years.
– Daniel Kahneman

This intuitive evaluation made by the interviewers is similar to what Agile methods ask of development teams when determining a value for “story points.” T-shirt sizes, planning poker, dot voting, affinity mapping, and many similar techniques are all designed to elicit an intuitive sense of the effort involved. If there is a discrepancy between team members’ estimates, then a dialog follows to understand what that discrepancy is all about. This continues until there is alignment on what the team believes the effort to be. When it works, it works well.

So on to the details of the approach I’ve been experimenting with. (It doesn’t have a name yet.) The result of this approach is a number I call the “effort value.” The word “value” refers to the actual number being derived, much like the answer to the question “What value results from adding 2 and 2?” Answer: 4. The word “value” also suggests an intrinsic worth, something beyond a hard number. My theory is that this will help teams think beyond the mere number and also consider the value they are delivering to stakeholders. The word “point” correlates to a hard number and lacks any association with intrinsic worth or value.

Changing the words introduces a simple and small shift that nonetheless has a significant impact. With the change, teams are more open to considering a different approach to determining estimates.

So how is the effort value derived?

I begin by having the team define four to six characteristics or attributes that, to them, describe what they mean by “effort.” It is important for the team to define these attributes. By doing so, they own the definition, and it becomes much harder for them to dismiss the attributes as “someone else’s” and thereby object to their use in deriving an effort value. These attributes can be anything that is meaningful to the team. Examples:

  • Complexity – Is the work straightforward (e.g. code a bubble sort function) or does it involve interrelated systems (e.g. code a predictive inventory control algorithm)?
  • Dependencies – How dependent is the product backlog item on other backlog items or other teams?
  • Familiarity – Is this work very similar to work the team has done in the past or something quite new?
  • Information – Is the detail in the product backlog item complete? Are the acceptance criteria and definition of done clear?
  • Technical Debt Risk – Does the PBI require any refactoring of related code? Is any technical debt being incurred with the PBI?
  • Design Stability – Is there a lot of discovery and exploration needed to complete the PBI?
  • Confidence for Completing the PBI within the Sprint – This attribute may roll up several of the others.

The team can define any attribute they wish. However, there are a few criteria to consider:

  • Keep the list limited to 4-6 attributes. More than that risks turning the derivation of an effort value into a navel-gazing exercise for each product backlog item.
  • Time cannot be one of the attributes.
  • The attributes should be reasonable. Assessing a product backlog item’s effort value by evaluating its “aura” or the current position of the stars is generally not useful. On the other hand, I’ve listened to arguments that evaluating estimates in terms of “complexity” is similarly useless. I see the point of those arguments, but my view is that the attributes must first and foremost be meaningful to the entire team. In the end, it’s an educated guess, and arguments about the definition of terms like “complexity” are counterproductive to the overall intent of deriving an effort value.

Each of these attributes is then given a scale – the same scale for each attribute – 1 to 10, 1 to 15, whatever the team feels is most appropriate. The team then goes through each attribute and evaluates the product backlog item against it on that scale. The low end of the scale represents very little impact. If dependency, for example, is one of the attributes, then a 1 might mean that the product backlog item is entirely self-contained. A 10 might represent a case where the product backlog item is dependent on several other product backlog items or perhaps the output from other teams.

When this is done, ask the team where on the modified Fibonacci scale they think this particular product backlog item’s effort value should be. If they’re struggling, you can do the math: find the average of all the attribute scores and match that number to the modified Fibonacci scale. If the average falls between two numbers on the scale, for example 3.1, round up to the next highest modified Fibonacci number. In this case the value would be 5. Then ask the team if they feel that number is a good representation of the effort value for the product backlog item.
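
For the mathematically inclined, here is a minimal sketch of that fallback calculation. It assumes a 1-10 scale, the common modified Fibonacci sequence, and illustrative attribute names; none of these specifics are prescribed by the approach itself.

```python
# A minimal sketch of the fallback calculation: average the attribute
# scores, then round up to the next modified Fibonacci number.
# Scale, sequence, and attribute names are assumptions for illustration.
MODIFIED_FIBONACCI = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def effort_value(scores: dict) -> int:
    average = sum(scores.values()) / len(scores)
    return next(f for f in MODIFIED_FIBONACCI if f >= average)

# Hypothetical scores on a 1-10 scale; the average (~3.2) rounds up to 5.
print(effort_value({
    "complexity": 4,
    "dependencies": 2,
    "familiarity": 3,
    "information": 2,
    "technical debt risk": 4,
    "design stability": 4,
}))
```

The number the calculation spits out is only a starting point for the conversation; the team is always free to override it with their intuitive judgment.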

This may seem like a lot of unnecessary gyrations, but for technical people it’s a simple process they can understand. The bonus is a number they can calculate. The number isn’t what’s important here. What’s important is the conversation that happens around the attributes and what the team feels about the number that results from the conversation. This exercise is meant to develop their intuitive muscles for considering multiple aspects and dimensions behind the “effort” needed for them to get the work done.

Use this process enough times and the step of calculating the average can eventually be dropped. Continue using it and eventually the scoring of individual attributes can be dropped as well. I don’t know whether it’s a good idea to drop the attributes themselves, since they generate the needed conversation around the effort involved, but it will certainly be valuable to revisit the list from time to time and fine-tune it to match what the team feels is important.

With this approach I’m turning the estimation process on its head (or back on its feet, if Kahneman is right). Rather than seeking the intuitive response first (e.g., t-shirt size) and eliciting details later if there is a mismatch between team members, this method seeks to better prime and develop the team’s intuition about the effort value: the team first explicitly considers a list of self-selected attributes (or traits) for effort and only then adds an intuitive evaluation of the effort.

Don’t try to form an intuition quickly, which was what we normally do. Focus on the separate points, and then when you have the whole profile, then you can have an intuition and it’s going to be better. Because people form intuitions too quickly, and the rapid intuitions are not particularly good. If you delay intuition until you have more information, it’s going to be better.
– Daniel Kahneman

The Pull of Well-Crafted Product Visions and Release Goals

There was even a trace of mild exhilaration in their attitude. At least, they had a clear-cut task ahead of them. The nine months of indecision, of speculation about what might happen, of aimless drifting with the pack were over. Now they simply had to get themselves out, however appallingly difficult that might be. [1]

In the early 20th Century, Sir Ernest Shackleton led an expedition attempting to cross the South Pole on foot. He was unsuccessful in that attempt. What he succeeded at, however, was something far more impressive. After nearly two years of battling conditions south of the Antarctic Circle, Shackleton saw to it that all 27 men of his crew made it safely home. As Alfred Lansing notes, “Though they had failed dismally even to come close to the expedition’s original objective, they knew now that somehow they had done much, much more than ever they set out to do.”

There is much I could write about the lessons from Shackleton, his crew, and the Endurance that apply to our own individual endeavors – personal and professional. For the moment, I wish to reflect on the sheer clarity of the goal 28 men had in 1915-1916: To survive, by any means and nothing short of complete dedicated effort.

To be sure, their goal was self-serving – no one can judge them for that – and no product team is ever likely to be placed in a situation of delivering in the face of such high stakes. Indeed, the lessons from Endurance are striking in how they contrast with the feeble drama so often injected into product delivery schedules. We call them “death marches,” but we know not of what we speak.

One of the things we can learn from Endurance is the power of a clearly defined objective. Do or die. That’s pretty damn clear. Time and time again, Shackleton’s crew were faced with completing seemingly impossible tasks under the harshest of conditions with the barest of resources and vanishingly small chances for success.

What kept them going? Certainly, the will and desire to live. There were many other factors, too. What interests me in this post is reflected in the opening quote. The emergence of a well-defined task that cleared away the fog of speculation, indecision, and uncertainty. Episodes like this are described multiple times in Lansing’s book.

This matters for something like a product vision because it clearly illustrates a phenomenon I learned about recently called “The Goal-Gradient Hypothesis,” which basically says our efforts increase as we get closer to our goals. But here’s the rub: we have to know and understand what the goal is. “Do or die” is clear and leaves little room for misunderstanding. “Let’s go build a killer app,” not so much.

From the research:

We found that members of a café RP accelerated their coffee purchases as they progressed toward earning a free coffee. The goal-gradient effect also generalized to a very different incentive system, in which shorter goal distance led members to visit a song-rating Web site more frequently, rate more songs during each visit, and persist longer in the rating effort. Importantly, in both incentive systems, we observed the phenomenon of postreward resetting, whereby customers who accelerated toward their first reward exhibited a slowdown in their efforts when they began work (and subsequently accelerated) toward their second reward. [2]

Far away goals, like a product vision, are much less motivating than near-term goals, such as sprint goals. And yet it is the product vision that can, if well-crafted and well-communicated, pull a team forward during a postreward resetting period.

But perhaps the most important lesson from the research – as far as product development is concerned – is that incentives matter. How an organization structures them is important. Since most people fail The Marshmallow Test, rewarding success on smaller goals that lead to a larger goal is likely to help teams stay focused and dedicated in the long run. Rather than one large celebration after the product release, smaller rewards after each successful sprint are more likely to keep teams engaged and productive.

References

[1] Lansing, A. (1957). Endurance: Shackleton’s Incredible Voyage, p. 80.

[2] Kivetz, R., Urminsky, O., & Zheng, Y. (2006). The Goal-Gradient Hypothesis Resurrected: Purchase Acceleration, Illusionary Goal Progress, and Customer Retention. Journal of Marketing Research, Vol. XLIII (February 2006), 39–58.

Center Point: The Product Backlog

If a rock wall is what is needed, but the only material available is a large boulder, how can you go about transforming the latter into the former?

Short answer: Work.

Longer answer: Lots of work.

Whether the task is to break apart a physical boulder into pieces suitable for building a wall or to break up an idea into actionable tasks, there is a lot of work involved – especially if a team is inexperienced or lacks the skills needed to successfully complete such a process.

Large ideas are difficult to work with. They are difficult to translate into action until they are broken down into more manageable pieces. That is, descriptions of work that can be organized into manageable work streams.

We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one we intend to win, and the others, too.
– President John Kennedy

That’s a pretty big boulder. In fact, it’s so big it served quite well as a “product vision” for the effort that eventually got men safely to and from the Moon in 1969. Even so, Kennedy’s speech called out the effort it would take to realize the vision: hard work, exceptional energy and skill. What followed in the years after Kennedy’s 1962 speech was a whole lot of boulder-busting activity.

A user story is a brief, simple statement from the perspective of the product’s end user. It’s an invitation to a conversation about what’s needed so that the story can meet all the user’s expectations.

In its simplest form, this is all a product backlog is: a collection of doable user stories derived from a larger vision for the product and ordered in a way that allows a realistic path to completion to be defined. While this is simple, creating and maintaining such a thing is difficult.
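
To make the idea concrete, here is a minimal, purely illustrative sketch of a backlog as an ordered collection of stories. The field names and sample stories are my own assumptions, not a prescribed schema.

```python
# A product backlog sketched as an ordered list of user stories.
# Field names and sample content are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class UserStory:
    title: str
    statement: str                 # brief statement from the end user's perspective
    acceptance_criteria: list = field(default_factory=list)

# Ordered so the work defining a realistic path to completion sits at the top.
product_backlog = [
    UserStory("Schedule a payment",
              "As a bill payer, I want to schedule a payment so that I never miss a due date.",
              ["Payment appears in the register", "Confirmation is shown"]),
    UserStory("Monthly reconciliation",
              "As a bill payer, I want a monthly reconciliation report so that discrepancies are easy to spot."),
]
```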

Experience has taught me that the single biggest impediment to improving a team’s performance is the lack of a well-defined and maintained product backlog. Sprint velocities remain volatile if design changes or priorities are continually clobbering sprint scope. Team morale suffers if they don’t know what they’re going to be working on sprint-to-sprint or, even worse, if the work they have completed will have to be reworked or thrown out. The list of negative ripple effects from a poor quality product backlog is a long one.



Crafting a Product Vision

In his book, “Crossing the Chasm,” Geoffrey Moore offers a template of sorts for crafting a product vision:

For (target customer)

Who (statement of the need or opportunity)

The (product name) is a (product category)

That (key benefit, problem-solving capability, or compelling reason to buy)

Unlike (primary competitive alternative, internal or external)

Our product/solution (statement of primary differentiation or key feature set)

To help wire this in, the following guided exercise can be helpful. Consider the following product vision statement for a fictitious software program, Checkwriter 1.0:

For the bill-paying member of the family who also uses a home PC

Who is tired of filling out the same old checks month after month

Checkwriter is a home finance program for the PC

That automatically creates and tracks all your check-writing.

Unlike Managing Your Money, a financial analysis package,

Our product/solution is optimized specifically for home bill-paying.

Ask the team to raise their hand when an item on the following list of potential features does not fit the product/solution vision, and to keep it up unless they hear a later item that they feel does fit. By doing this, the team is being asked, “At what point does the feature list begin to move outside the boundaries suggested by the product vision?” Most hands should go up around item #4 or #5. All hands should be up by #9. A facilitated discussion about the transition between “fits vision” and “doesn’t fit vision” is often quite effective after this brief exercise.

  1. Logon to bank checking account
  2. Synchronize checking data
  3. Generate reconciliation reports
  4. Send and receive email
  5. Create and manage personal budget
  6. Manage customer contacts
  7. Display tutorial videos
  8. Edit videos
  9. Display the local weather forecast for the next 5 days

It should be clear that one or more of the later items on the list do not belong in Checkwriter 1.0. This is how product visions work. They provide a filter through which potential features can be run during the life of the project to determine if they are inside or outside the project’s scope of work. As powerful as this is, the product vision will only catch the larger features that threaten the project work scope. To catch the finer grain threats to scope creep, a product road map needs to be defined by the product owner.

Show your work

A presentation I gave last week sparked the need to reach back into personal history and ask when I first programmed a computer. That would be high school, on an HP 9320 using HP Educational Basic and an optical card reader. The cards looked like this:


What occurred to me was that in the early days – before persistent storage like cassette tapes, floppy disks, and hard drives – a software developer could actually hold their program in their hands. Much like a woodworker or a glass blower or a baker or a candlestick maker, we could actually show something to friends and family! Woe to the student who literally dropped their program in the hallway.

Then that went away. Keyboards soaked up our coding thoughts and stored them in places impossible to see. We could only tell people about what we had created, often using lots of hand waving and so much jargon that it undoubtedly must have seemed as if we were speaking a foreign language. In fact, the effort pretty much resembled the same fish-that-got-away story told by Uncle Bert every Thanksgiving. “I had to parse a data file THIIIIIIIIIS BIG using nothing but Python as an ETL tool!”

Yawn.

This is at the heart of why I burned out on writing code as a profession. There was no longer anything satisfying about it. At least, not in the way one gets satisfaction from working with wood or clay or fabric or cooking ingredients. The first time I created a predictive inventory control algorithm was a lot of fun and satisfying. But there were only 4-5 people on the planet who could appreciate what I’d done, and since it was proprietary, I couldn’t share it. And just how many JavaScript-based menu systems can you write before the challenge becomes a task and eventually a tedious chore?

Way bigger yawn.

I’ve found my way back into coding. A little. Python, several JavaScript libraries, and SQL are where I spend most of my time. What I code is what serves me. Tools for my use only. Tools that free up my time or help me achieve greater things in other areas of my life.

I can compare this to woodworking. (Something I very much enjoy and from which I derive a great deal of satisfaction.) If I’m making something for someone else, I put in extra effort to make it beautiful and functional. To do that, I may need to make a number of tools to support the effort – saw fences, jigs, and clamps. These hand-made tools certainly don’t look very pretty. They may not even be distinguishable from scrap wood to anybody but myself. But they do a great job of helping me achieve greater things. Things I can actually show and handle. And if the power goes down in the neighborhood, they’ll still be there when the lights come back on.

The Value of “Good Enough for Now”

I’ve been giving some more thought to the idea of “good enough” as one of the criteria for defining minimum viable/valuable products. I still stand by everything I wrote in my original “The Value of ‘Good Enough’” article. What’s different is that I’ve started to use the phrase “good enough for now.” The reason is that “good enough” seems to imply an end state. “Good enough” is an outcome. If it is early in a project, people generally have a problem with that: they have some version of an end state that is a significant mismatch with the “good enough” product of today. The idea of settling for “good enough” at this point makes it difficult for them to know when to stop work on an interim phase and collect feedback.

“Good enough for now” implies there is more work to be done and the product isn’t in some sort of finished state that they’ll have to settle for. “Good enough for now” is a transitory state in the process. I’m finding that I can more easily gain agreement that a story is finished and get people to move forward to the next “good enough for now” by including the time qualifier.

The Practice of Sizing Spikes with Story Points

Every once in a while it’s good to take a tool out of its box and find out if it’s still fit for purpose – maybe even find out whether it can be used in a new way. I recently did this with the practice of sizing spikes with story points. I’ve experienced a lot of different projects since last revisiting my thinking on this topic. So after doing a little research on current thinking, I updated an old set of slides and presented my position to a group of scrum masters to set the stage for a conversation. My position: estimating spikes with story points is a vanity metric, and teams are better served with time-boxed spikes that are unsized.

While several colleagues came with an abundance of material to support their particular position, no one addressed the points I raised. So it was a wash. My position hasn’t changed appreciably. But I did gain from hearing several arguments for how spikes could be used more effectively if they were to be sized with story points. And perhaps the feedback from this article will further evolve my thinking on the subject.

To begin, I’ll answer the question of “What is a spike?” by accepting the definition from agiledictionary.com:

Spike

A task aimed at answering a question or gathering information, rather than at producing shippable product. Sometimes a user story is generated that cannot be well estimated until the development team does some actual work to resolve a technical question or a design problem. The solution is to create a “spike,” which is some work whose purpose is to provide the answer or solution.

The phrase “cannot be well estimated” is suggestive. If the work cannot be well estimated, then what is the value of estimating it in the first place? Any number placed on the spike is likely to be, for the most part, arbitrary. Any number greater than zero will therefore arbitrarily inflate the sprint velocity and make it less representative of the value being delivered. It may make the team feel better about their performance, but it tells the stakeholders less about the work remaining. Nowhere can I find a stated purpose of Agile or scrum to be making the team “feel better.” In practice, by masking the amount of value being delivered, the opposite is probably true. The scrum framework ruthlessly exposes all the unhelpful and counterproductive practices and behaviors an unproductive team may be unconsciously perpetuating.

Forty points of genuine value delivered at the end of a sprint is 100% of rubber on the road. Forty points delivered of which 10 are points assigned to one or more spikes is 75% of rubber on the road. The spike points are slippage. If they are left unpointed then it is clear what is happening. A spike here and there isn’t likely to have a significant impact on the velocity trend over, for example, 8 or 10 sprints. One or more spikes per sprint will cause the velocity to sink and suggests a number of corrective actions – actions that may be missed if the velocity is falsely kept at a certain desired or expected value. In other words, pointing spikes hides important information that could very well impact the success of the project. Bad news can inspire better decisions and corrective action. Falsely positive news most often leads to failures of the epic variety.
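
As a quick back-of-the-envelope illustration of that arithmetic, using the hypothetical forty-point sprint from above:

```python
# Spike points inflate the reported velocity without representing
# delivered value. Numbers are the hypothetical ones from the text.
total_points = 40          # everything the team closed in the sprint
spike_points = 10          # points assigned to spikes
value_points = total_points - spike_points

print(f"{value_points / total_points:.0%} of the reported velocity is delivered value")  # 75%
```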

Consider the following two scenarios.

Team A has decided to add story points to their spikes. Immediately they run into several significant challenges related to the design and the technology choices made, so they create a number of spikes to find the answers and make some informed decisions. The design and technology struggles continue for the next 10 sprints. Even with the challenges they faced, the team appears to have quickly established a stable velocity.

The burndown, however, looks like this:

If the scrum master were to use just the velocity numbers, it would appear Team A is going to finish their work in about 14 sprints. This might be true if Team A were to have no more spikes in the remaining sprints. The trend, however, strongly suggests that’s not likely to happen. If a team has been struggling with design and technical issues for 10 sprints, it is unlikely those struggles will suddenly stop at sprint 11 and beyond unless there have been deliberate efforts to mitigate them. By pointing spikes and generating a nice-looking velocity chart, Team A is more likely to be unaware of the extent to which they are underestimating the time needed to complete items in the backlog.

Team B finds themselves in exactly the same situation as Team A. They immediately run into several significant challenges related to the design and the technology choices made and create a number of spikes to find the answers and make some informed decisions. However, they decide not to add story points to their spikes. The design and technology struggles continue for the next 10 sprints. The data show that Team B is clearly struggling to establish a stable velocity.

And the burndown looks like this, same as Team A after 10 sprints:

However, it looks like it’s going to take Team B 21 more sprints to complete the work. That they’re struggling isn’t good. That it’s clear they’re struggling is very good. This isn’t apparent with Team A’s velocity chart. Since it’s clear Team B is struggling, it is much easier to start asking questions, find the source of the agony, and make changes that will have a positive impact. It is also much more probable that the changes will be effective because they will have been based on solid information about what the issues are. There is less guesswork involved with Team B than with Team A.

However, any scrum master worth their salt is going to notice that the product backlog burndown doesn’t align with the velocity chart. It isn’t burning down as fast as the velocity chart suggests it should be. So the savvy Team A scrum master starts tracking the burndown of value-add points vs spike points. Doing so might look like the following burndown:

Using the average from the parsed burndown, it is much more likely that Team A will need 21 additional sprints to complete the work. And for Team B?

The picture of the future based on the backlog burndown is a close match to the picture from the velocity data, about 22 sprints to complete the work.
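
For Team A, the gap between the two projections can be sketched as follows. The numbers are invented so that they reproduce the figures above (about 14 sprints when using the spike-inflated velocity versus about 21 when using only the value-add burn); the actual charts are not shown here.

```python
# Project remaining sprints two ways: from the reported velocity
# (which includes spike points) and from the value-only burn rate.
# All numbers below are hypothetical.

def sprints_remaining(backlog_points, points_per_sprint):
    return backlog_points / points_per_sprint

remaining = 630            # hypothetical value points left in the backlog
reported_velocity = 45     # average velocity, spike points included
value_burn = 30            # average value-add points burned per sprint

print(sprints_remaining(remaining, reported_velocity))  # 14.0 -- the rosy projection
print(sprints_remaining(remaining, value_burn))         # 21.0 -- the parsed-burndown projection
```

The same data, parsed differently, tells a very different story about the delivery date.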

If you were a product owner, responsible for keeping the customer informed of progress, which set of numbers would you want to base your report on? Would you rather surprise the customer with a “sudden” and extended delay or would you rather communicate openly and accurately?

Summary

Leaving spikes unpointed…

  • Increases the probability that performance metrics will reveal problems sooner and thus allow for corrective actions to be taken earlier in a project.
  • Means the team’s velocity and backlog burndown more accurately reflect the value actually being created for the customer, which allows for greater confidence in any predictions based on those metrics.

I’m interested in hearing your position on whether or not spikes should be estimated with story points (or some other measure). I’m particularly interested in hearing where my thinking described in this article is in need of updating.

[This article originally appeared on the Agile Alliance blog.]

Openness, Grapevines, and Strangleholds

If you truly value openness on your Agile teams, you must untangle them from the grapevine.

Openness is one of the core scrum values. As stated on Scrum.org:

“The scrum team and its stakeholders agree to be open about all the work and the challenges with performing the work.”

This is a very broad statement, encompassing not only openness around work products and processes, but also each individual’s responsibility for ensuring that any challenges related to overall team performance are identified, acknowledged, and resolved. In my experience, issues with openness related to work products or the processes that impact them are relatively straightforward to recognize and resolve. If a key tool, for example, is misconfigured or ill-suited to what the team needs to accomplish, then the need to focus on issues with the tool should be obvious. If there is an information hoarder on the team preventing the free flow of information, this will reveal itself within a few sprints, after a string of unknown dependencies or misaligned deliverables has had a negative impact on the team’s performance. Similarly, if a team member is struggling with a particular story card and for whatever reason lacks the initiative to ask for help, this will reveal itself in short order.

Satisfying the need for openness around individual and team performance, however, is a much more difficult behavior to measure. Everyone – and by “everyone” I mean everyone – is by nature very sensitive to being called out as having come up short in any way. Maybe it’s a surprise to them. Maybe it isn’t. But it’s always a hot button. As much as we’d like to avoid treading across this terrain, it’s precisely this hypersensitivity that points to where we need to go to make the most effective changes that impact team performance.

At the top of my list of things to constantly scan for at the team level are the degrees of separation (in space and time) between a problem and the people who are part of the problem. Variously referred to as “the grapevine,” back-channeling, or triangulation, it can be one of the most corrosive behaviors to a team’s trust and their ability to collaborate effectively. From his research over the past 30 years, Joseph Grenny [1] has observed “that you can largely predict the health of an organization by measuring the average lag time between identifying and discussing problems.” I’ve found this to be true. Triangulation and back-channeling add significantly to that lag time.

To illustrate the problem and a possible solution: I was a newly hired scrum master responsible for two teams, about 15 people in total. At the end of my first week I was approached by one of the other scrum masters in the company. “Greg,” they said in a whisper, “You’ve triggered someone’s PTSD by using a bad word.” [2]

Not an easy thing to learn, having been on the job for less than a week. Doubly so because I couldn’t for the life of me think of what I could have said that would have “triggered” a PTSD response. This set me back on my heels, but I did manage to ask the scrum master to please ask this individual to reach out to me so I could speak with them one-to-one and apologize. At the very least, to suggest they contact HR, since a PTSD response triggered by a word is a sign that someone needs help beyond what any one of us can provide. My colleague’s response was, “I’ll pass that on to the person who told me about this.”

“Hold up a minute. Your knowledge of this issue is second hand?”

Indeed it was. Someone told someone who told the scrum master who then told me. Knowing this, I retracted my request for the scrum master to pass along my message. The problem here was the grapevine, and a different tack was needed. I coached the scrum master to 1) never bring something like this to me again, 2) inform the person who told them this tale that nothing like it would be passed along to me in the future, and 3) coach that person to do the same with the person who told them. The person for whom this was an issue should either come to me directly or go to my manager. I then coached my manager and my product owners that if anyone were to approach them with a complaint like this, they should listen carefully to the person, acknowledge that they heard them, and encourage them to speak directly with me.

This should be the strategy for anyone with complaints that do not rise to the level of needing HR intervention. The goal of this approach is to develop behaviors around personal complaints such that everyone on the team knows they have a third person to talk to, and that the issue isn’t going to be resolved unless they talk directly to the person with whom they have an issue. It’s a good strategy for cutting the grapevines and short-circuiting triangulation (or, in my case, quadrangulation). To seal the strategy, I gave a blanket apology to each of my teams the following Monday and let them know what I had requested of my manager and product owners.

The objective was to establish a practice of resolving issues like this at the team level. It’s highly unlikely (and in my case 100% certain) that a person new to a job would have prior knowledge of sensitive words and purposely use language that upsets their new co-workers. The presupposition of malice, or the assumption that a new hire should know such things, suggested a number of systemic issues with the teams – something later revealed to be accurate. It wouldn’t be a stretch to say that in this organization the grapevine had supplanted instant messaging and email as the primary communication channel. With the cooperation of my manager and product owners, several sizable branches of the grapevine had been cut away. Indeed, there was a marked increase in the teams’ attention at stand-ups, and the retrospectives became more animated and productive in the weeks that followed.

Each situation is unique, but the intervention pattern is more broadly applicable: Reduce the number of node hops and associated lag time between the people directly involved with any issues around openness. This in and of itself may not resolve the issues. It didn’t in the example described above. But it does significantly reduce the barriers to applying subsequent techniques for working through the issues to a successful resolution. Removing the grapevine changes the conversation.

References

[1] Grenny, J. (2016, August 19). How to Make Feedback Feel Normal. Harvard Business Review, Retrieved from https://hbr.org/2016/08/how-to-make-feedback-feel-normal

[2] The “bad” word was “refinement.” The team had been using the word “grooming” to refer to backlog refinement, and I had suggested we use the more generally accepted word. Apparently, a previous scrum master for the team had been, shall we say, overly zealous in pressing this same recommendation, such that it was a rather traumatic experience for someone on one of the teams. It later became known that this event was grossly exaggerated – “crying PTSD” as a variation of “crying wolf” – and that the reporting scrum master was probably working to establish a superior position. It would have worked, had I simply cowered and accepted the report as complete and accurate. The strategy described in this article proved effective at preventing this type of behavior.

OnAgile 2017

Attended the Agile Alliance’s OnAgile 2017 Conference yesterday. This is always an excellent conference and you can’t beat the price!

I created a mind map throughout the course of the conference as a way to organize my thoughts and key points. This doesn’t capture all the points by any means. Just those that stood out as important or new for me. I missed most of the first session due to connectivity issues.

Clouds and Windmills

I recently resigned from the company I had been employed by for over 5 years. The reason? It was time.

During my tenure¹ I had the opportunity to re-define my career several times within the organization in a way that added value and kept life productive, challenging, and rewarding. Each re-definition involved a rather extensive mind mapping exercise with hundreds of nodes describing what was working, what wasn’t working, what needed fixing, and where I believed I could add the highest value.

This past spring, events prompted another iteration of this process. It began with the question “What wouldn’t happen if I didn’t go to work today?”² This is the flip of asking “What do I do at work?” The latter is a little self-serving. We all want to believe we are adding value and are earning our pay. The answer is highly filtered through biases, justifications, excuses, and rationalizations. But if in the midst of a meeting you ask yourself, “What would be different if I were not present or otherwise not participating?”, the answer can be a little unsettling.

This time around, in addition to mind mapping skills, I was equipped with the truly inspiring work of Tanmay Vora and his sketchnote project. Buy me a beer some day and I’ll let you in on a few of my discoveries. Suffice it to say, the overall picture wasn’t good. I was getting the feeling this re-definition cycle was going to include a new employer.

A cascade of follow-on questions flowed from this iteration’s initial question. At the top:

  1. Why am I staying?
  2. Is this work aligned with my purpose?
  3. Have my purpose and life goals changed?

The answers:

  1. The paycheck
  2. No.
  3. A little.

Of course, it wasn’t this simple. The organization changed, as did I, in a myriad of ways. While exploring these questions, I was reminded of a story my Aikido teacher, Gaku Homma, would tell when describing his school. He said it was like a rope. In the beginning, it had just a few threads that joined with him to form a simple string. Not very strong. Not very obvious. But very flexible. Over time, more and more students joined his school and wove their practice into Nippon Kan’s history. Each new thread subtly changed the character of the emerging rope. More threads, more strength, and more visibility. Eventually, an equilibrium emerges. Some of those threads stop after a few short weeks of classes, others (like mine) run 25 years before they stop, and for a few their thread ends in a much more significant way.

Homma Sensei has achieved something very difficult. The threads that form Nippon Kan’s history are very strong, very obvious, and yet remain very flexible. Even so, there came a time when the right decision for me was to leave, taking with me a powerful set of skills, many good memories, and friendships. The same was true with my previous employer. Their rope is bending in a way that is misaligned with my purpose and goals. Neither good nor bad. Just different. Better to leave with many friendships intact and a strong sense of having added value to the organization during my tenure.

The world is full of opportunities. And sometimes you have to deliberately and intentionally clear all the collected clutter from your mental workspace so those opportunities have a place to land. Be attentive to moments like this before your career is remembered only as someone who yells at clouds and tilts at windmills.


¹ By the numbers…

1,788 Stand-ups
1,441 Wiki/Knowledgebase Contributions
311 Sprint/Release Planning Sessions
279 Reviews
189 Retrospectives
101 Projects
31 Internal Meet-ups
22 Agile Cafés
10 Newsletters
5.5 Years
3 Distinct Job Titles
1 Wild Ride

² My thanks to colleague Lennie Noiles and his presentation on Powerful Questions. While Lennie didn’t ask me this particular question, it was inspired by his presentation.