The Emotional Bumblebee

I finished Lisa Feldman Barrett’s book, “How Emotions Are Made: The Secret Life of the Brain,” this past week. It’s the latest exploration in my decades-long journey to better understand myself and others. There’s a lot in this book and it’s been a paradigm shift for me personally. I expect the effects of Dr. Barrett’s insights on my professional life to be equally seismic.

As part of this exploration and effort to understand what Dr. Barrett and others are discovering, I’ve been experimenting with different ways to organize and assimilate information. For years I’ve used mind mapping and it’s served me well. I continue to use this approach almost daily. Ah, but the relentless advancement of technology has resulted in new tools. My current favorite (meaning the one that so far best matches how my brain seems to work) is a tool called Obsidian. It’s new and is evolving quickly. I’ve been using Obsidian to organize and study cognitive biases in a way similar to Buster Benson’s work. This past weekend I began a similar process with emotions based on Dr. Barrett’s work.

It’s early but it has already yielded many important insights and benefits. I began by collecting as many words as I could find (currently, 514) that are used to describe emotional states or patterns. I then entered them into Obsidian, each connected to a single node, “Words that express emotion.” Here’s a partial screen capture of the Obsidian graph:
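
For anyone curious about the mechanics, here is a minimal sketch of how a hub-and-spoke structure like this can be generated programmatically. Obsidian notes are plain Markdown files, and a [[wiki-link]] back to a common hub note is what produces the spokes in the graph view. The vault folder, hub title, and abbreviated word list below are illustrative only.

```python
from pathlib import Path

# Illustrative vault folder and hub note title -- adjust to your own vault.
VAULT = Path("EmotionVault")
HUB = "Words that express emotion"

# A tiny sample; the real list runs to 514 words.
emotion_words = ["anguish", "elation", "serenity", "schadenfreude"]

def build_notes(words):
    VAULT.mkdir(exist_ok=True)
    # The hub note itself; every inbound wiki-link becomes a spoke in the graph.
    (VAULT / f"{HUB}.md").write_text(f"# {HUB}\n")
    for word in words:
        # Each word gets its own note containing a single link to the hub.
        (VAULT / f"{word}.md").write_text(f"# {word}\n\n[[{HUB}]]\n")

if __name__ == "__main__":
    build_notes(emotion_words)
```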

The graph is too big to fit on a single screen and have the words show. And Obsidian does not yet have an export feature for graphs into a standard image file. So I’m limited by screen real estate. Where I take this next…I’m not sure, actually. Probably a cycle of refinement and deep dive into definitions and descriptions. I can foresee the creation of a real-time tool for assessing emotional states using a circumplex. Lots of experimentation ahead.

There is a dynamic quality to the graphs in Obsidian that is part of the fun and path-to-insights with the information. I’ve created a video to show the effect and set it to Nikolai Rimsky-Korsakov’s orchestral interlude “Flight of the Bumblebee.” If/when you read Dr. Barrett’s book, you will understand why I selected the bee theme. It’s a virtual emotional bee hive inside our heads and bodies. Be sure to expand the video to full screen for maximum effect. Enjoy!

Team Composition

When a potter begins to throw a pot, she picks up a lump of clay, shapes it into a rough sphere, and throws it onto the spinning potter’s wheel. It may land off-center, and she must carefully begin to shape it until it is a smooth cylinder. Then she works the clay, stretching and compressing it as it turns. First it is a tower, then it is like a squat mushroom. Only after bringing it up and down several times does she slowly squeeze the revolving clay until its walls rise from the wheel. She cannot go on too long, for the clay will begin to “tire” and then sag. She gives it the form she imagines, then sets it aside. The next day, the clay will be leather hard, and she can turn it over to shape the foot. Some decoration may be scratched into the surface. Eventually, the bowl will be fired, and then the only options are the colors applied to it; its shape cannot be changed.

This is how we shape all the situations in our lives. We must give them rough shape and then throw them down into the center of our lives. We must stretch and compress, testing the nature of things. As we shape the situation, we must be aware of what form we want things to take. The closer something comes to completion, the harder and more definite it becomes. Our options become fewer, until the full impact of our creation is all that there is. Beauty or ugliness, utility or failure, comes from the process of shaping.

– Deng Ming-Dao, '365 Tao - Daily Meditations'

Building a high-performance team from scratch is just as difficult as turning a low-performing team into a high-performing team. However, the reasons each of these scenarios is difficult are very different.

Like the potter beginning with a lump of clay, when forming a new team we must understand what we have to work with and have a clear idea of the outcomes we want. As we shape the team, we have to be mindful of how the individuals on the team are changing – or not – and whether those changes are moving toward the outcome. If not, we either need to change the desired outcome or alter the material we have to work with, that is, change out one or more people so that the shape of the team is better suited to reaching the desired outcome. It is also important to monitor the speed at which the team is formed or shaped. Too fast, and the team may not coalesce in a way that is healthy or productive. Too slow, and they may not coalesce at all; they may “tire” of the slow pace and disengage.

With existing teams, we may have a limited range of options to change the roster. This is more like an existing piece of pottery that has fully set.

Intuition and Effort Estimates

In his book, “Blink,” Malcolm Gladwell describes an interview between Gary Klein and a fire department commander. A lieutenant at the time, the commander and his men were attempting to put out a kitchen fire that didn’t “behave” like a kitchen fire should. The lieutenant ordered his men out of the house moments before the floor collapsed; the fire was in the basement, not the kitchen. Klein later deconstructed the event with the commander and revealed a surprisingly rich set of experience-based cues the commander had used to quickly evaluate the situation and respond. The lieutenant’s quick and well-calibrated-to-the-situation intuition undoubtedly saved them from serious injury or worse.

Intuition, however, is domain-specific. This same experience-based intuition most probably wouldn’t have served the commander well if he suddenly found himself in a different situation – at the helm of a sailboat in rough water, for example, assuming the commander had never been on a sailboat before.

In the context of a software development environment, a highly experienced individual may have very good intuition on the amount of work needed to complete a specific piece of work assigned to them. But that intuition breaks down when the work effort necessarily includes several people or an entire team. So while intuition can serve a useful role in estimating work effort, that value is generally over-estimated, particularly when it needs to be a team estimate.

Consider work effort estimates when framed by Daniel Kahneman’s work with System One and System Two thinking. System One is fast, based on experiences, and automatic. However, it isn’t very flexible and it’s difficult to train. This is the source of intuition. System Two, however, is analytical, methodical, intentional, deliberate, and slower. It is also more trainable. It’s when the things that are trained in System Two sink into System One that new behaviors become automatic. With work effort estimates, we must first train our System Two using a deliberate, methodical approach to estimating before we can comfortably rely on our System One abilities.

Once calibrated, any number of changes could signal the need to re-calibrate by employing the deliberate process. Change the team composition and the team will need some measure of re-training of System One via System Two. Change a team’s project and the same re-training will need to occur.

The trained intuition approach to estimating effort develops what Kahneman called “disciplined intuition.” Begin with a deliberate, statistical approach to thinking about work effort. Establish a base rate using the value ranges for the effort characteristics. With experience, the team can begin to integrate their intuition later in the project process. If teams lead with their intuition (as is the case with planning poker and t-shirt sizes), they will filter for things that confirm their System One evaluation. With experience and a track record of success from training their intuition, teams can eventually lead with an intuitive approach. But it isn’t a very effective way to begin.

This method also leverages the work of Anders Ericsson and deliberate practice. The key here is the notion of increasing feedback into the process of estimating work effort. The deliberate action of working through a conversation that evaluates each of the work effort characteristics introduces more and better feedback loops that help the team evaluate the quality of their decision. Over time, they get better and better at correcting course and internalizing the lessons.

It’s like learning to drive a car. A new driver will leverage System Two heavily before they can comfortably rely on System One while driving. This is good enough for most driving situations. However, it wouldn’t be good enough if that same driver who is competent at driving in city traffic was suddenly placed on a NASCAR track in a powerful machine going 200 miles per hour.

A NASCAR track might be where we would go look for expert drivers but not where we would look for competent delivery truck drivers. For work estimates on software projects, we’re looking for a level of good enough that’s a reasonable match for the project work at hand. And we’re looking for better than untrained intuitive guesses.

Strategies for Remote Interviews with Team Candidates

In a recent New York Times column, Adam Grant wrote:

Credentials are overrated, and motivation is underrated. It doesn’t matter how much experience people have if they lack the drive to think creatively, work collaboratively and keep on learning. We’re not just hiring people to do a job today — we’re hiring them to make their team and their organization better tomorrow.

Once upon a time – last century, actually – employers could rely on the conferring of a college degree as evidence of a certain level of competence in the degree subject. In some areas, this is probably still true. Generally speaking, this would apply to the scientific areas of study: chemistry, physics, mathematics, etc. Unfortunately, even these areas are becoming suspect as academic rigor is eroded in the interests of removing perceived barriers to this or that special interest group. To be very clear, I’m referring to the importance of thorough and complete understanding of the subject. The minefield that academia has become is indeed rife with self-inflicted and often insurmountable barriers to learning. The egregious rise in the cost of tuition, grade inflation, and credential dilution are but a few examples.

There are other factors in play. The speed at which society moves in the 21st Century is simply too fast for the four-year degree to have any hope of staying relevant, let alone keeping up. Almost every major university offers free courses in a wide variety of subjects, so it is possible for a high school graduate to craft the equivalent of a Bachelor’s or Master’s degree and complete it for a fraction of the cost and in half the time. Ah, but without having completed the paper chase, how can such an industrious individual establish for a potential employer that they have the requisite competence?

Adam Grant has it right. Credentials are overrated. So how can we assess the quality and potential of team candidates? Grant identifies three key mistakes interviewers make in the interview process.

  1. They ask the wrong kinds of questions.
  2. They focus on the wrong criteria.
  3. They’re overly influenced by the best talkers.

If, as Grant suggests, job interviews are broken, then conducting remote job interviews in the midst of a pandemic is significantly more challenging. In this post, I wish to speak to the second mistake identified by Grant and write about what we can do to identify our criteria, what we can do during an interview to elicit information about the candidate’s qualifications, and a strategy for improving the efficacy of remote job interviews.

Identify Important Criteria

For the sake of example, we’ll engage in a little time travel into the future and imagine having hired the perfect product owner candidate. What tasks encountered in your work day are no longer an issue with the new candidate on board? Is the product backlog now well-maintained and in a healthy state? Does the sprint runway extend out 4-5 (or more) sprints? Has a stable sprint velocity emerged (suggesting that the user stories are of higher quality and understood better by the team)? Do conflicts between areas of the business occur less frequently than in the past? Are stakeholders pleased with the results they see at sprint and increment reviews?

If our example were for a scrum master candidate, we would ask ourselves different questions for eliciting important criteria for the position. Is there less conflict among team members? Does the team understand the purpose and value for determining the effort involved to complete a user story? And again, has a stable sprint velocity emerged?

In addition to considering what hasn’t been working well (and therefore illuminating what skills you want a candidate to bring to the table) it is also important to include what has been working. It will not serve the organization if one set of problems is swapped for another. Perhaps, for example, the previous product owner was well liked by the team and helped the team maintain positive morale, but had a poorly maintained product backlog that prevented a good approximation for a release date. It wouldn’t be much of an improvement if the new product owner kept a healthy product backlog but did so by driving the team as a tyrant might.

Test for Matching Skills

With a good feel for the criteria needed to hire the best candidate, you can then craft a strategy for determining how well the candidate’s abilities satisfy your criteria. Prepare tasks for the candidate that will verify congruity between what a candidate says they can do and what they can actually do. One approach, which I use frequently, is to present the candidate with a series of scenarios, each designed to build on how the candidate responded to the previous scenario. While I may only present a candidate 3-4 scenarios, I have several dozen in the queue and present the sequence based on how well the candidate responds to each challenge.

For example, for a scrum master role – a high-touch role that requires consummate communication skills, flexibility, and the ability to solve people problems – I may present an initial scenario as follows:

“I’m going to give you several scenarios. You are free to ask any questions you wish about the scenario and state any assumptions you are making in your responses.

You are being considered for a position as scrum master for a team that is developing a healthcare related web application for use in hospitals. This team is responsible for developing the UI/UX components and works closely with another team responsible for much of the database and middle tier components. As a new scrum master, what questions would you ask of anyone in the organization to help you quickly understand what you need to do to become effective as a scrum master for your team?”

There are many things I would hope to hear in the candidate’s answer. To mention a few, I’d like to hear that they want to speak to the product owner, the stakeholders, and, of course, each of the team members. I’d like to hear that they plan to spend time in information gathering mode rather than work immediately to shape the team into some version of teams they’ve worked with at other jobs. I’d like to hear questions from them about what kinds of metrics the team uses and what those metrics have shown.

There are no right and wrong answers to a scenario like this. Just answers that are better than others. And I don’t expect the candidate to deliver an exhaustively thorough response.

From their responses, I might learn that they are a recipe follower or that they are flexible in adapting to the needs of the business while working to establish good scrum practices. I might learn that they really don’t know scrum at all and are only good at parroting text book examples and jargon. I might hear how they would attempt to leverage several things from previous experience while acknowledging those attempts would be experiments and subject to adaptation based on feedback.

Assuming the candidate responded to the first scenario in a way that scores high marks for satisfying my criteria, I might offer the next scenario as follows:

“Assume you have been serving successfully as scrum master for this team for six months now. The product owner calls the team together and says ‘I need to swap out some of the stories in the sprint for work that marketing wants done before the end of the week.’ As scrum master, how would you respond to this development?”

As with the previous scenario, the candidate’s response would be measured against the criteria I have established for the position. Depending on what I’ve heard, I may continue to offer additional scenarios that build on the candidate’s developing experience with the scenario scrum team.

This strategy is pursued until I’m satisfied the candidate knows what they claim to know or not. A short interview does not bode well for the candidate. A long interview does.

(I would be interested in hearing about any questions, comments, or creative ways you’ve applied this strategy.)

Friends, Guides, Coaches, and Mentors

The “conscious competence” model for learning is fairly well known. If not explicitly, then at least implicitly. Most people can recognize when someone is operating at a level of unconscious incompetence even if they can’t quite put their finger on why it is such a person makes the decisions they do. Recognizing when we ourselves are at the level of unconscious incompetence is a bit more problematic.

A robust suite of cognitive biases that normally help us navigate an increasingly complex world seem to conspire against us and keep us in the dark about our own shortcomings and weaknesses. Confirmation bias, selective perception, the observer bias, the availability heuristic, the Ostrich effect, the spotlight effect and many others all help us zero in on the shiny objects that confirm and support our existing memories and beliefs. Each of these tissue-thin cognitive biases layer up to form a dense curtain, perhaps even an impenetrable wall, between the feedback the world is sending and our ability to receive the information.

There is a direct relationship between the density of the barrier and the amount of energy needed to drive the feedback through the barrier. People who are introspective as well as receptive to external feedback generally do quite well when seeking to improve their competencies. For those with a dense barrier it may require an intense experience to deliver the message that there are things about themselves that need to change. For some a poorly received business presentation may be enough to send them on their way to finding out how to do better next time. For others it may take being passed over for a promotion. Still others may not get the message until they’ve been fired from their job.

However it happens, if you’ve received the message that there are some changes you’d like to make in your life and it’s time to do the work, an important question to ask yourself is “Am I searching for something or am I lost?”

If you are searching for something, the answer may be found in a conversation over coffee with a friend or peer who has demonstrated they know what you want to know. It may be that what you’re looking for – improving your presentation skills, for example – requires a deeper dive into a set of skills and it makes sense to find a guide to help you. Perhaps this involves taking a class or hiring a tutor.

If you are lost you’ll want to find someone with a much deeper set of skills, experience, and wisdom. A first time promotion into a management position is a frequent event that either exposes someone’s unconscious incompetence (i.e. the Peter Principle) or challenges someone to double their efforts at acquiring the skills to successfully manage people. Finding a coach or a mentor is the better approach to developing the necessary competencies for success when the stakes are higher and the consequences when failing are greater.

A couple of examples may help.

When I was first learning to program PCs I read many programming books cover to cover. It was a new world for me and I had very little sense of the terrain or what I was really interested in doing. So I studied everything. Over time I became more selective of the books I bought or read. Eventually, I stopped buying books altogether because there was often just a single chapter of interest. Today, I can’t remember the last time I picked up a software development book. This was a progression from being lost at the start – when I needed coaches and mentors in the form of books and experienced software developers – to needing simple guidance from articles and peers and eventually to needing little more than a hint or two toward the end of my software development career.

A more recent example is an emergent need to learn photography – something I don’t particularly enjoy. Yet for pragmatic reasons, it’s become worth my time to learn how to take a particular kind of photograph. I need a coach or a mentor because this is entirely new territory for me. So I hired a professional photographer with an established reputation for taking the type of photograph I’m interested in. My photography coach is teaching me what I need to know. (He is teaching me how to fish, in other words, rather than me paying him for a fish every time I need one.)

Unlike the experience of learning how to program – where I really didn’t know what I wanted to do – my goal with photography is very specific. The difference has a significant influence on who I choose for guides and mentors. For software development, I sought out everyone and anyone who knew more than I. For photography, I sought a very specific set of skills. I didn’t want to sit through hours of classes learning how to take pictures of barn owls 1,000 meters away in the dark. I didn’t want to suffer through a droning lecture on the history of camera shutters. Except in a very roundabout way, none of this serves my goal for learning how to use a camera for a very specific purpose.

Depending on what type of learner you are, working with a mentor who really, really knows their craft about a specific subject you want to learn can be immensely more satisfying and enjoyable. It is also less expensive and time consuming. If it expands into something more, then great. With this approach you will have the opportunity to discover a greater interest without a lot of upfront investment in time and money.

Time Out!

In Estimating Effort – An Explicitly Implicit Approach I stated that time cannot be one of the attributes the team uses to describe what they mean by “effort.” The importance of this warrants the need for a deeper dive into the rationale behind this rule and how excluding time can lead to better predictability for team performance.

The primary objective for coaching teams to think about effort independent of time constraints is so that they can improve their skills for thinking about the actual work involved. Certainly they will spend time completing the work. But the simple passage of time won’t get the work done. Someone has to actually DO something. That something is the effort.

For example, maybe someone on the team says the product backlog item requires a lot of documentation. It isn’t complex and there aren’t any dependencies, it’s just going to take a lot of time – 7 days, maybe. So they want to give that PBI an effort value of 5 or 8 (or 5 or 8 story points, if that’s what you’re using) because it’s going to take a lot of time.

Remember, the purpose of these criteria is to generate a conversation around what the actual effort is. The criteria are just a set of guideposts that help the team hold a meaningful conversation about the effort.  So when someone on a team insists that they estimate using time, I ask them “What are you doing as the time you’ve estimated is passing? Are you just sitting there, watching the seconds tick away?” Of course they aren’t just sitting there. I’m asking the questions to elicit a comment about the actual work they are doing. Maybe they answer with something a little less vague, like “typing words.” That’s good. “What’s the difference between typing those words in a word processor and typing code in Vim?”

Continuing down this line of inquiry usually leads to the realization that typing documentation has many similar traits to coding. It can be complex. It may have dependencies.  It may require research for accuracy and it certainly will need a lot of debugging (professional writers call this “editing.”) Coders typically don’t like writing documentation. To them it’s just about the tedium of banging something out that’s not as fun as code. Sussing out the effort like this will lead to better acceptance criteria and definition of done associated with the PBI.

The downside of time estimates is that they hide all manner of sins and rabbit holes. The planning fallacy, precision bias, availability heuristic, and survivorship bias are just a few of the mental obstacles guaranteed to reduce the accuracy of time estimates. Or you may have to deal with a team member who wants to estimate using time because they know full well it offers the opportunity to hide slow work. (Gamers gotta game.) When teams have run the gauntlet of effort criteria, they are more likely to end up with a better picture of how much work they are being asked to do when time is excluded from the conversation. Effort criteria force the team to be more explicit about the activities they are engaged with as the clock ticks.

The investment in identifying time-independent effort criteria yields further benefits in the retrospective. Was the team unable to complete a PBI in the sprint? Was all the work finished two days early? Have a look at the effort criteria and ask which of them were a factor in making the PBIs a bigger or smaller effort than initially estimated. This is how teams learn and improve their skill at estimating. The better they are at estimating, the more predictable their productivity.
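
As a rough sketch of that retrospective conversation, imagine comparing the score the team gave each effort attribute when estimating against how the work actually felt once it was done (the attribute names and numbers here are hypothetical). The attributes with the largest gaps are the ones worth talking about.

```python
# Hypothetical scores for one PBI on a shared 1-10 scale:
# what the team estimated before the sprint vs. the retrospective view.
estimated = {"complexity": 3, "dependencies": 2, "familiarity": 4, "information": 5}
actual = {"complexity": 6, "dependencies": 2, "familiarity": 4, "information": 8}

# Rank attributes by how far off the original conversation was.
deltas = sorted(
    ((name, actual[name] - estimated[name]) for name in estimated),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)

for name, delta in deltas:
    print(f"{name:12} {delta:+d}")
```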

OK, so let’s say you have a team doing a great job of determining the effort needed to complete a PBI and they do so without including time. No doubt, management will be unimpressed. They want time estimates. Good news! We can give them time estimates…in two week increments.

With the team focused on figuring out time-independent effort values for every PBI in the backlog and an ongoing experience of how much effort they can reliably complete in two week increments, product owners can provide a reasonable forecast for when the release or project will be complete. The team focuses on accurate, time-independent effort estimates. The scrum master and product owner worry about the performance metrics and time projections.
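
To make that forecast concrete, here is a minimal sketch, assuming a history of effort values completed in each past two-week sprint and a total of remaining effort in the release backlog. The numbers and the simple average are illustrative; a real forecast would likely use a range based on the best and worst recent sprints.

```python
import math

# Hypothetical history: total effort value completed in each past two-week sprint.
completed_per_sprint = [21, 18, 24, 20, 22]

# Hypothetical sum of effort values for everything left in the release backlog.
remaining_effort = 160

average_per_sprint = sum(completed_per_sprint) / len(completed_per_sprint)
sprints_remaining = math.ceil(remaining_effort / average_per_sprint)

print(f"~{sprints_remaining} sprints (~{sprints_remaining * 2} weeks) to finish the release")
```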

It’s surprising how hard of a sell this can be for teams. They are hard-wired to think in terms of time because that’s what traditional project management has hounded them for since before coding was a thing. I tell teams, “With Agile and scrum, you no longer have to worry about time. That’s the product owner’s job. But you do have to develop very good skills at estimating effort.” It’s common for them to have a hard time adjusting to the new paradigm.

Estimating Effort – An Explicitly Implicit Approach

It is difficult to make predictions, especially about the future.

– Unknown

Sage advice.

So why bother estimating the amount of work needed to complete a product backlog item? After all, since estimates are about the future the probability is high that they will be wrong. Actually, they may very well be guaranteed to be wrong. It’s just that some of the guesses will be more accurate than others. And if they happen to match what the effort ended up being, they just look like they were “right.”

I’ve written in the past expressing my thoughts about estimating the effort needed to complete product backlog items, particularly with respect to story points. I believe working to find a relative gauge of how well teams are estimating work is important. Without one, cognitive biases such as the optimism bias and planning fallacy can significantly distort a project delivery timeline. However, the phrase “story point” is burdened with a lot of baggage. It has been abused and misused such that invoking the phrase often causes more harm than good.

I’ve been experimenting recently with a different approach to estimating effort. The method I’ll describe in this post got a bit of a boost after listening to a recent interview with Psychologist and Nobel laureate Daniel Kahneman. In this interview, Kahneman describes an experience he had while serving in the Israeli army some sixty years ago. He was assigned the job of setting up an interview process that would determine how well a recruit would do as a combat soldier. For this process, he selected six traits and instructed the interviewers to ask questions designed to evaluate each trait independently and score them. The interviewers were not happy with this approach. As a compromise, Kahneman instructed the interviewers, when they were finished asking about the six traits, to close their eyes and just jot down a number they felt matched how good a soldier the recruit might be. What he discovered:

When we validated the results of the interview, it was a big improvement on what had gone on before. But the other surprise was that the final intuitive judgments added, it was good. It was as good as the average of the six traits, and not the same. It added information, so actually we ended up with a score that was half determined by the specific ratings, and the intuition got half the weight. That, by the way, stayed in the Israeli army for well over 50 years.

– Daniel Kahneman

This intuitive evaluation made by the interviewers is similar to what Agile methods ask of development teams when determining a value for “story points.” T-shirt sizes, planning poker, dot voting, affinity mapping and many similar techniques are all designed to elicit an intuitive sense of the effort involved. If there is a disagreement between team members, then a dialog follows to understand what the discrepancy is all about. This continues until there is alignment on what the team believes the effort to be. When it works, it works well.

So on to the details of the approach I’ve been experimenting with. (It doesn’t have a name yet.) The result of this approach is a number I call the “effort value.” The word “value” is a reference to the actual elementary mathematics value being derived. Much like the answer to the question “What value results from adding 2 and 2?” Answer: 4. The word “value” also suggests an intrinsic worth, something beyond a hard number. My theory is that this will help teams think beyond the mere number and think also about the value they are delivering to stakeholders. The word “point” correlates to a hard number and lacks any association to intrinsic worth or value.

Changing the words introduces a simple and small shift that nonetheless has a significant impact. With the change, teams are more open to considering a different approach to determining estimates.

So how is the effort value derived?

I begin by having the team define 4-5 characteristics or attributes that, to them, describe what they mean by “effort.” It is important for the team to define these attributes. By doing so, they own the definition and it becomes much harder for them to dismiss the attributes as “someone else’s” and thereby object to their use in deriving an effort value. These attributes can be anything that is meaningful to the team. Examples:

  • Complexity – Is the work straightforward (e.g. code a bubble sort function) or does it involve interrelated systems (e.g. code a predictive inventory control algorithm)?
  • Dependencies – How dependent is the product backlog item on other backlog items or other teams?
  • Familiarity – Is this work very similar to work the team has done in the past or something quite new? Tasking a coder with documenting a piece of straightforward code may actually be a difficult effort because the coding language they spend most of their day with is familiar whereas writing clear sentences that non-technical people can understand is unfamiliar.
  • Information – Is the detail in the product backlog item complete? Are the acceptance criteria and definition of done clear?
  • Technical Debt Risk – Does the PBI require any refactoring of related code? Is any technical debt being incurred with the PBI?
  • Design Stability – Is there a lot of discovery and exploration needed to complete the PBI?
  • Confidence for Completing a PBI within the Sprint – This category may roll up several categories.
  • Tedium – Perhaps the effort involves a lot of repetitive copy and paste that nonetheless requires careful attention to avoid simple mistakes.

The team can define any attribute they wish. However, there are a few criteria to consider:

  • Keep the list limited to 4-6 attributes. More than that risks turning the derivation of an effort value into the equivalent of a product backlog item navel-gazing exercise.
  • Time cannot be one of the attributes.
  • The attributes should be reasonable. Assessing a product backlog item’s effort value by evaluating its “aura” or the current position of the stars is generally not useful. On the other hand, I’ve listened to arguments against evaluating estimates in terms of “complexity” as being similarly useless. I see the point of those arguments, but my view is that the attributes must first and foremost be meaningful to the entire team. In the end, it’s an educated guess and arguments about the definition of terms like “complexity” are counterproductive to the overall intent of deriving an effort value.

Each of these attributes is then given a scale, the same scale for each attribute – 1 to 10, 1 to 15 – whatever the team feels is most appropriate. The team then goes through each attribute and scores the product backlog item against it on that scale. (NB: After nine months of Plan-Do-Check-Adapt, a better approach for scoring the attributes has been determined.) The low number on the scale represents very little impact. If dependency, for example, is one of the attributes then a 1 might mean that the product backlog item is entirely self-contained. A 10 might represent a case where the product backlog item is dependent on several other product backlog items or perhaps the output from other teams.

When this is done, ask the team where on the modified Fibonacci scale they think this particular product backlog item’s effort value should be. If they’re struggling you can do the math: find the average for all the attributes and match that number in the modified Fibonacci scale. If the average is a decimal, for example 3.1, match the value to the next highest modified Fibonacci scale number. In this case the value would be 5. Then ask the team if they feel that number is a good representation of the effort value for the product backlog item.
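
For the mathematically inclined, here is a minimal sketch of the arithmetic just described. The attribute names and scores are hypothetical; the only fixed pieces are the shared scale, the averaging step, and rounding up to the next number on the modified Fibonacci scale.

```python
# The modified Fibonacci scale commonly used for agile estimates.
MODIFIED_FIBONACCI = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def effort_value(scores):
    """Average the attribute scores and round up to the next
    modified Fibonacci number (e.g. an average of 3.1 maps to 5)."""
    average = sum(scores.values()) / len(scores)
    for value in MODIFIED_FIBONACCI:
        if value >= average:
            return value
    return MODIFIED_FIBONACCI[-1]

# Hypothetical attribute scores for one product backlog item on a 1-10 scale.
pbi_scores = {
    "complexity": 4,
    "dependencies": 2,
    "familiarity": 3,
    "information": 5,
    "tedium": 2,
}

print(effort_value(pbi_scores))  # 5 -- a starting point for the conversation, not the answer
```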

This may seem like a lot of unnecessary gyrations, but for technical people it’s a simple process they can understand. The bonus is a number they can calculate. The number isn’t what’s important here. What’s important is the conversation that happens around the attributes and what the team feels about the number that results from the conversation. This exercise is meant to develop their intuitive muscles for considering multiple aspects and dimensions behind the “effort” needed for them to get the work done.

Use this process enough times and eventually calculating the average can be dropped from the process. Continue using this process and eventually calculating the numbers for the individual attributes can be dropped from the process. I don’t know if it’s a good idea to drop the use of the attributes for generating the needed conversation around the effort needed, but it will certainly be valuable to reconsider the list of attributes from time to time so as to fine tune the list to match what the team feels is important.

With this approach I’m turning the estimation process on its head (or back on its feet, if Kahneman is right.) Rather than seek the intuitive response first (e.g. t-shirt size) and elicit details later if there is a mismatch between team members, this method seeks to better prime and develop the team’s intuition about the effort value by having them explicitly consider a list of self-selected attributes (or traits) for effort first and then include an intuitive evaluation for effort.

Don’t try to form an intuition quickly, which was what we normally do. Focus on the separate points, and then when you have the whole profile, then you can have an intuition and it’s going to be better. Because people form intuitions too quickly, and the rapid intuitions are not particularly good. If you delay intuition until you have more information, it’s going to be better.

– Daniel Kahneman

Update

See Time Out! and Determining Effort Value – Tactics for additional information on this technique.

How to Frame Team Development Challenges

When working with teams or organizations new to Agile and scrum, it’s common for scrum masters to face varying degrees of resistance to the new methods and processes. The resistance can take many forms ranging from passive-aggressive behaviors to overt aggression and even sabotage.

There are two things to consider when looking for ways to resolve this type of resistance.

  1. The specific issues are typically not Agile problems in the sense they won’t be solved by any specific Agile techniques, methods, or frameworks. Rather, they are people problems; issues with how people’s behavior is driven by their values and beliefs. We have to resolve the people problems in concert with implementing Agile or Agile will never be successfully implemented. We also have to be sure not to confuse the two.
  2. We need to look at these challenges as opportunities.

It’s the second point I want to focus on in this post.

To simply paint the often unpleasant experiences we have with coaching our teams in the ways of Agile and scrum as “opportunities” isn’t much of a solution. It’s weak tea and about as useful as “Let’s all just think positive thoughts and eventually it’ll get better.” Nor do I suggest we sugar coat the unpleasantness by sprinkling “It’s an opportunity!” language on our conversations. Losing your job or breaking your leg may be one of those “wonderful opportunities” born from adversity, but only after you’ve found that next better job or your leg has healed. Hustling for new work or sitting idle while in pain and healing is decidedly unpleasant.

I had something else in mind for thinking about the challenges we face as “opportunities.” It’s in the midst of the unpleasant phase that we find the opportunities that lead to success. Seth Godin speaks to this in his book “The Dip.”

The Dip is the long slog between starting and mastery. The Dip is the combination of bureaucracy and busywork you must deal with in order to get certified in scuba diving. The Dip is the difference between the easy “beginner” technique and the more useful “expert” approach in skiing or fashion design. The Dip is the long stretch between beginner’s luck and real accomplishment.

It’s the classic “things will get worse before they get better.” But as Zig Ziglar put it, “Anything worth doing is worth doing poorly–until you can learn to do it well.”

It’s important to recognize and acknowledge when you’re in The Dip. Not just as an individual scrum master on a particular team, but perhaps the entire organization as well. Solving the issues you’re encountering today is exactly what you need to do in order to be successful in the long term. The Dip is inevitable and unavoidable. Part of the scrum master’s purpose is to raise the awareness of this fact so that the underlying issues that need to be resolved can be amplified.

This is what can make serving in the scrum master role particularly unpleasant at times. It’s when you earn your pay. In general, people don’t like to look at themselves in the Agile mirror that scrum masters are charged with holding up in front of them.

The Dip is another way to describe Shalloway’s Corollary applied to teams and organizations. Unlike losing a job or breaking a leg, what we’re dealing with is actually something we most definitely should expect. The system was always going to push back. Now we’re discovering exactly how that’s going to happen. The system is showing us what needs to change in order to become a more Agile organization. No more guess work. It’s a gift. Knowing this should be cause for optimism and viewing the tasks ahead as an opportunity. The way is known. There is less ambiguity. Doesn’t mean the path ahead is easy, just better known. That alone is incredibly useful.

A final thought. “The System” that’s been in place at any organization is what it is. For better or worse, it’s been working, perhaps for decades. Anything that challenges the status quo is going to receive push back. It just happens that Agile is the current challenger. As scrum masters, we have to continually evaluate our own “system” in a way that prevents it from becoming the next version of the problem.

  • Is a particular tool, process, or method fit for purpose?
  • What problem are we trying to solve?
  • Are there aspects of the “old system” that actually make sense to keep in place?
  • Are the frustrations we’re experiencing due to the “old system” pushing back or are they the result of our own ossification around outdated or misapplied beliefs?

Improving the Signal to Noise Ratio – Coda

In a Scientific American column delightfully named “The Artful Amoeba” there is an article on a little critter called the “fire chaser” beetle: How a Half-Inch Beetle Finds Fires 80 Miles Away – Fire chaser beetles’ ability to sense heat borders on the spooky

Why a creature would choose to enter a situation from which all other forest creatures are enthusiastically attempting to exit is a compelling question of natural history. But it turns out the beetle has a very good reason. Freshly burnt trees are fire chaser beetle baby food. Their only baby food.

Fire chaser beetles are thus so hell bent on that objective that they have been known to bite firefighters, mistaking them, perhaps, for unusually squishy and unpleasant-smelling trees.

This part is interesting:

A flying fire chaser beetle appears to be trying to give itself up to the authorities. Its second set of legs reach for the sky at what appears to be an awkward and uncomfortable angle.

But the beetle has a good reason. It’s getting its legs out of the way of its heat eyes, pits filled with infrared sensors tucked just behind its legs.

A strategy suggested by the fire chaser beetle’s life cycle: if you want to maximize a signal-to-noise ratio, iterate through three simple things:

  1. Work to develop a super well defined signal/goal/objective.
  2. Remove every possible barrier to receiving information about that signal – mental, emotional, even physical – that you can think of or that you discover over time.
  3. Repeat

Also, the “Way of the Amoeba” is now the “Way of the Artful Amoeba.” Update your phrase books accordingly.