Concave, Convex, and Nonlinear Fragility

Nassim Nicholas Taleb’s book, “Antifragile,” is a wealth of information. I’ve returned to it often since first reading it several years ago. My latest revisit has been to better understand his ideas about representing the nonlinear and asymmetric aspects of fragile/antifragile in terms of “concave” and “convex.” My first read of this left me a bit confused, but I got the gist of it and moved on. Taleb is a very smart guy, so I need to understand this.

The first thing I needed to sort out on this revisit was Taleb’s use of language. The fragile/antifragile comparison is variously described in his book as:

  • Concave/Convex
  • Slumped solicitor/Humped solicitor
  • Curves inward/Curves outward
  • Frown/Smile
  • Negative convexity effects/Positive convexity effects
  • Pain more than gain/Gain more than pain
  • Doesn’t “like” volatility (presumably)/“Likes” volatility

Tracking his descriptions is made a little more challenging by reversals in reference when writing of both together (concave and convex, then convex and concave) and mismatches between the text and illustrations. For example:

Nonlinearity comes in two kinds: concave (curves inward), as in the case of the king and the stone, or its opposite, convex (curves outward). And of course, mixed, with concave and convex sections. (note the order: concave / convex) Figures 10 and 11 show the following simplifications of nonlinearity: the convex and the concave resemble a smile and a frown, respectively. (note the order: convex / concave)

Figure 10 shows:

So, “convex, curves outward” is illustrated as an upward curve and “concave, curves inward” is illustrated as a downward curve. Outward is upward and inward is downward. It reads like a yoga pose instruction or a play-by-play call for a game of Twister.

After this presentation, Taleb simplifies the ideas:

I use the term “convexity effect” for both, in order to simplify the vocabulary, saying “positive convexity effects” and “negative convexity effects.”

This was helpful. The big gain is when Taleb gets to the math and graphs what he’s talking about. Maybe the presentation to this point is helpful to non-math thinkers, but for me it was more obfuscating than illuminating. My adaptation of the graphs presented by Taleb:

With this picture, it’s easier for me to understand the non-linear relationship between a variable’s volatility and fragility vs antifragility. The rest of the chapter is easier to understand with this picture of the relationships in mind.
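
To make the convexity effects concrete in code, here is a minimal sketch assuming two made-up payoff functions (illustrative stand-ins, not examples from the book). It demonstrates Jensen’s inequality: under volatility, the convex payoff averages better than its steady-state value while the concave payoff averages worse.

    import random

    def convex(x):       # curves outward -- the "smile"
        return x ** 2

    def concave(x):      # curves inward -- the "frown"
        return -(x ** 2)

    def average_payoff(f, center=10.0, volatility=5.0, trials=100_000):
        """Average payoff when the input fluctuates uniformly around a center."""
        return sum(f(center + random.uniform(-volatility, volatility))
                   for _ in range(trials)) / trials

    for name, f in [("convex", convex), ("concave", concave)]:
        print(f"{name}: steady = {f(10.0):.1f}, volatile = {average_payoff(f):.1f}")

    # convex:  steady = 100.0,  volatile ~ 108.3  -> gains more than pain
    # concave: steady = -100.0, volatile ~ -108.3 -> pain more than gain

The convex payoff “likes” volatility (a positive convexity effect); the concave payoff doesn’t (a negative convexity effect).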

Estimating Effort – Adaptation

I’ve been running the informed intuition (or if you prefer, “disciplined intuition”) approach to estimating effort for close to nine months now. For the most part, it has gone very well. The primary objective – inspire and support a conversation around the effort needed to complete a story – has most definitely been realized. Along the way the process has shifted to better support both the conversation and the team’s ability to internalize the process.

Originally, it was proposed that teams rate each of the effort characteristics on a sliding scale – 1 to 10 or 1 to 15, or whatever the team decided was most useful. Feedback from the teams led to the discovery that it is easier to evaluate each effort characteristic using the modified Fibonacci scale rather than a sliding scale. This provides continuity across the method in that everything about a story’s effort value is considered using the same scale. It also reinforces the rationale behind the use of the Fibonacci scale and seems to facilitate the team’s ability to internalize the method. They are moving more quickly when deriving effort values.

A second adaptation is the use of several sets of characteristics, depending on the type of story, the predominant functional area represented by the team, and the nature of the work. For example, a story that involves the development of a computer board has a different set of criteria from stories that involve the creation of firmware for the board or the UI/UX features of the hardware product. The sets usually contain 3 or 4 common characteristics, such as “complexity” or “dependencies.” However, the hardware board may include something like “part sourcing” or “compliance testing.” This illustrates the importance of having the team deconstruct what “effort” means in the context of their world. When they determine the characteristics, the follow-on conversations about the effort are much more robust and meaningful.
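
A small sketch may help visualize how a team could record these sets. The story types and characteristic names below are illustrative assumptions (borrowing the examples above), not a prescribed list:

    # Hypothetical effort criteria keyed by story type. Each team should
    # deconstruct "effort" into the characteristics that fit their world.
    EFFORT_CRITERIA = {
        "hardware_board": ["complexity", "dependencies", "familiarity",
                           "part_sourcing", "compliance_testing"],
        "firmware":       ["complexity", "dependencies", "familiarity",
                           "technical_debt_risk"],
        "ui_ux":          ["complexity", "dependencies", "familiarity",
                           "design_stability"],
    }

    def criteria_for(story_type):
        """Return the characteristic set the team defined for this story type."""
        return EFFORT_CRITERIA[story_type]

Note the three or four characteristics common to every set; only the remainder varies with the nature of the work.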

In essence, this method is a reflection of the product owner’s responsibility for the “what” of the story and the team’s responsibility for figuring out the “how” of the story. “What I want,” says the product owner, “is an estimate of the effort involved to complete this story.” The team’s effort criteria demonstrate to the product owner how they arrive at any particular value.

Thinking Agile about the Pandemic

[Note: The fact that the SARS-CoV-2 virus pandemic is on everyone’s mind has given me an opportunity to expand the scope of topics I may consider on The Agile Fieldbook. Specifically, relating current events to Agile and systems thinking.]

I’ve been thinking a bit deeper on the frequent comparison of flu deaths with highway traffic deaths, total US deaths in the Vietnam War, or any variety of raw number comparisons. I’m working to get at something that feels like an underlying mismatch in such comparisons.

Part of the challenge is that self-proclaimed epidemiology experts are popping up like spring daffodils, presenting themselves as consummate authorities on statistics and government policy while appointing themselves as enforcement officers. And the Internet has been an amplifier for the echo chambers created by the rabble. In short, finding the signal in the noise has become much harder. I can’t recall a time when there has been this much manufacturing and shoveling of confirmation bias around the world. Alas, it’s one supply chain that has grown significantly more robust.

At the heart of the raw number comparisons is a category mistake. Stopping at an equivalence of mortality across all categories for cause of death gives rise to the category mistake. Not all causes of death should be considered equal when searching for a course of action that will affect millions – in the case of COVID-19, billions – of people. There are many differentiating factors that could be considered in the case of viral pandemics and traffic deaths. The principal one, in my view, is agency.

I can choose a robust and enjoyable lifestyle that significantly lowers my risk of death due to highway accidents (to use that number for my analysis.) In fact, I have done exactly that. A four mile commute to the office, all on local streets where the highest speed limit is 45 MPH…for exactly 3 blocks. To those who declare “But, many people can’t do this.” my reply is “Maybe.” There will certainly be outliers for a variety of reasons. But in many of these cases, the individuals are nonetheless making choices. Perhaps they don’t want to move or they don’t want to change jobs or they don’t want to up-skill or… There is likely a confluence of many choices in the mix that makes it appear they are stuck or trapped. Frequently, even in the outlier cases, when circumstances press hard enough, they “find” opportunities and make changes, perhaps even subsidized by local and federal governments. But that’s a topic I’ll leave for much more qualified bloggers to tackle.

I can make other choices in the form of the car I drive or the route I drive to my destination. I can choose the time of day I drive for errands or the frequency with which I need to run them. I can choose whether to use my smart phone while driving or engage in some other distraction. Or I can choose not to drive at all and take the bus, train, bike, walk, or a combination of any of those.

With a viral infection – as we are learning now – there is virtually no personal agency. The only way to avoid the adverse consequences is to severely curtail our lifestyle. Now. There’s no easing into it. No evening classes at the college annex to up-skill our ability to dodge the virus. No Ecopass that lets us leave the breathing up to someone else. Not much of any choice for replacing a stalled lifestyle with a different one because they’re all stalled.

Which gets me to the thinking behind “Mass transit kills.” It does so because it’s an efficient vector for transmitting biological infections. The early studies show how quickly COVID-19 spread due to air travel, followed by trains, taxis, and buses in crowded urban settings. A fatal car accident, however, is a local event. First responders and surrounding communities are not at risk of death due to the now static car accident. A viral or bacterial outbreak is dynamic and spreads just by virtue of people moving around. Globally, how long would humans have to drive cars before the death toll matched the number of deaths attributed to plagues and pandemics throughout history? And historically, plagues and pandemics moved at the speed of rats, mosquitoes, and ox carts. Today, they can move just shy of the speed of sound.

Having read close to a couple dozen COVID-19 related research papers (surprisingly, none of them authored by CNN/MSNBC/FOX/CBS/ABC/NBC/BBC et al.), I believe the chances are increasing that we’ll face a pandemic that won’t offer much of a lead time. The growth of the human population has greatly increased the adjacent possible for animal viruses to make the jump to humans. The probability of an asymptomatic contagious period combined with lethal morbidity increases as the adjacent possible horizon expands. If such a viral combination were to occur, mass transit will be that virus’s best friend.

My thinking is probably incomplete on this matter, so I welcome your comments.

Determining Effort Value – Tactics

While the concept and practice is straightforward, shifting a team from intuitive guesses about story points to a more deliberate approach for determining effort value (a.k.a. story points) can be a challenge at first. The following approach may help start the process.

  1. Begin by focusing on product backlog items (PBIs) that the team, using their previous approach, has estimated at 5 or greater. There isn’t much to be gained by applying this approach to PBIs estimated at 1 or 2. PBIs that the team knows are a bigger effort, but may not be able to articulate why, are good candidates for learning how to apply this technique.
  2. Ask the team how much time it may take to complete a PBI. While I have written before about the importance of excluding time criteria when determining effort values, this can be a good place to start. It is what teams are most familiar with – for better or worse. Teams usually have no problem throwing out a time: 8 hours, 16 hours, etc.
  3. With the time estimate in hand, ask the team:

“If you sit in front of your computer and start the clock, will the PBI be done if you do nothing and the estimated time elapses?”

I would hope the team would answer “No.”

  4. With the answer to the first question in hand, ask the following question:

If the passage of time alone won’t get the PBI work completed, what will you be doing (actions and behaviors) to complete the work?

The conversation that follows from this question is the basis for determining the effort criteria the team needs to better describe what they will be doing on their way to completing the PBI. The techniques around establishing effort criteria are described in an earlier post.

Transparency, Source Code Quality, and Metrics

I’ve been reading “Hello World: Being Human in the Age of Algorithms” by Hannah Fry. She relates this story:

In 2012, a number of disabled people in Idaho were informed that their Medicaid assistance was being cut. Although they all qualified for benefits, the state was slashing their financial support – without warning – by as much as 30 per cent, leaving them struggling to pay for their care. This wasn’t a political decision; it was the result of a new ‘budget tool’ that had been adopted by the Idaho Department of Health and Welfare – a piece of software that automatically calculated the level of support that each person should receive.

Unable to understand why their benefits had been reduced, or to effectively challenge the reduction, the residents turned to the American Civil Liberties Union (ACLU) for help.

[The ACLU] began by asking for details on how the algorithm worked, but the Medicaid team refused to explain their calculations. They argued that the software that assessed the cases was a ‘trade secret’ and couldn’t be shared. Fortunately, the judge presiding over the case disagreed. The budget tool that wielded so much power over the residents was then handed over, and revealed to be – not some sophisticated AI, not some beautifully crafted mathematical model, but an Excel spreadsheet.

Within the spreadsheet, the calculations were supposedly based on historical cases, but the data was so badly riddled with bugs and errors that it was, for the most part, entirely useless. Worse, once the ACLU team managed to unpick the equations, they discovered ‘fundamental statistical flaws in the way that the formula itself was structured’. The budget tool had effectively been producing random results for a huge number of people. The algorithm – if you can call it that – was of such poor quality that the court would eventually rule it unconstitutional.

My first thoughts were, “How bad a spreadsheet hack do you gotta be to have your work be declared unconstitutional? And just how many hacks does it take to build an unconstitutional spreadsheet?”

To be fair, math is hard. Government is complex. And I’m comfortable with the assumption that everyone who had a hand in building this spreadsheet had good intentions. Venturing a guess, the breakdown happened at the manager/politician/lawyer level.

It is probable that the complexity of the task quickly overtook the abilities of the spreadsheet author(s) and the capabilities of the tool. Eventually, no single person understood how the whole thing worked. Consequently, making a change in one place affected how the spreadsheet worked in n other places and no one was capable of regression testing the beast. But the manager/politician/lawyer types knew what to do: Hide behind the “trade secret” smoke.

There are many lessons from this story. Plenty of points of failure. What I’m interested in writing about is the importance of transparency and how a good set of performance metrics can help in maintaining transparency.

The externally facing opacity in this story is readily apparent. What we don’t (and probably never will) see is the lack of transparency prevalent internally to the Idaho Department of Health and Welfare and whoever designed and built the spreadsheet tool. I’d bet a round of drinks that neither has heard of Agile, much less employed its principles and practices. These by themselves – when actually practiced long term – go a long way toward establishing a culture of transparency. This is the key. Long term practice. A period of time is needed to change behaviors, mindsets, attitudes, beliefs, and when necessary, personnel. Even over the long term, implementing an Agile methodology isn’t improvisational theater. A strategy and a way to measure progress is needed.

Which gets me to metrics.

Selecting metrics and tuning them over time is critical to measuring team performance and developing improvement plans. Metrics that inform meaningful actions are the goal. Leave the vanity metrics that verify what managers want to hear or already “know” to the competition.

I’ve encountered my share of overly complex ways to measure the performance of individuals and teams. Often the metrics taken from machine-like task work (for example, assembly line work) are applied to creative or intellectual/knowledge tasks. This type of re-purposing results in, for example, counting lines of code or the number of source code check-ins as an indicator of software developer productivity. It never ends well.

When working to define a set of metrics to track an individual or team’s performance it is more effective to begin by asking several questions.

  • What problems are you trying to solve?
  • What questions will your chosen metrics answer?
  • What questions will your chosen metrics not answer?
  • How, specifically, will you know you can trust your metrics? How will you know when they are right and how will you know when they are wrong?
  • How well do your metrics complement each other? That is, by combining them do you end up with a much better picture of individual or team performance than you do by considering individual metrics?
  • Do your metrics support any planned actions for improvement? Are you collecting actionable metrics or vanity metrics?

Finally, it is important to understand the limits of performance metrics. Displaying velocity charts that have fractions of story points implies an accuracy that simply isn’t there. Significantly adjusting project timelines based on the first three sprints worth of velocity data can have adverse secondary effects on the project.

There is no perfect set of metrics, no divine set of measures that match an impossible standard of perfect objectivity and fairness. The best possible set of metrics is one that supports useful decisions rather than simply instructs managers where to apply the stick. They should help show the way to performance improvement rather than simply report results.

I work to have 3-5 metrics, depending on the individual, the team, and the project. Fewer than 3 and the picture starts to look rather flat. More than 5 and the task of performance monitoring can become overly complicated and cumbersome. Keep it lean and manageable. That way, it’s easier to tell when things aren’t working and your metrics are much less likely to violate your team’s constitutional rights.

Time Out!

In Estimating Effort – An Explicitly Implicit Approach I stated that time cannot be one of the attributes the team uses to describe what they mean by “effort.” The importance of this warrants the need for a deeper dive into the rationale behind this rule and how excluding time can lead to better predictability for team performance.

The primary objective for coaching teams to think about effort independent of time constraints is so that they can improve their skills for thinking about the actual work involved. Certainly they will spend time completing the work. But the simple passage of time won’t get the work done. Someone has to actually DO something. That something is the effort.

For example, maybe someone on the team says the product backlog item requires a lot of documentation. It isn’t complex and there aren’t any dependencies, it’s just going to take a lot of time – 7 days, maybe. So they want to give that PBI an effort value of 5 or 8 (or 5 or 8 story points, if that’s what you’re using) because it’s going to take a lot of time.

Remember, the purpose of these criteria is to generate a conversation around what the actual effort is. The criteria are just a set of guideposts that help the team hold a meaningful conversation about the effort.  So when someone on a team insists that they estimate using time, I ask them “What are you doing as the time you’ve estimated is passing? Are you just sitting there, watching the seconds tick away?” Of course they aren’t just sitting there. I’m asking the questions to elicit a comment about the actual work they are doing. Maybe they answer with something a little less vague, like “typing words.” That’s good. “What’s the difference between typing those words in a word processor and typing code in Vim?”

Continuing down this line of inquiry usually leads to the realization that typing documentation has many similar traits to coding. It can be complex. It may have dependencies.  It may require research for accuracy and it certainly will need a lot of debugging (professional writers call this “editing.”) Coders typically don’t like writing documentation. To them it’s just about the tedium of banging something out that’s not as fun as code. Sussing out the effort like this will lead to better acceptance criteria and definition of done associated with the PBI.

The downside of time estimates is that they hide all manner of sins and rabbit holes. The planning fallacy, precision bias, availability heuristic, and survivorship bias are just a few of the mental obstacles guaranteed to reduce the accuracy of time estimates. Or you may have to deal with a team member who wants to estimate using time because they know full well it offers the opportunity to hide slow work. (Gamers gotta game.) When teams have run the gauntlet of effort criteria, they are more likely to end up with a better picture of how much work they are being asked to do when time is excluded from the conversation. Effort criteria force the team to be more explicit about the activities they are engaged with as the clock ticks.

The investment in identifying time-independent effort criteria yields further benefits in the retrospective. Was the team unable to complete a PBI in the sprint? Was all the work finished two days early? Have a look at the effort criteria and ask which of them were a factor in making the PBIs a bigger or smaller effort than initially estimated. This is how teams learn and improve their skill at estimating. The better they are at estimating the more predictable their productivity.

OK, so let’s say you have a team doing a great job of determining the effort needed to complete a PBI and they do so without including time. No doubt, management will be unimpressed. They want time estimates. Good news! We can give them time estimates…in two week increments.

With the team focused on figuring out time independent effort values for every PBI in the backlog and an ongoing experience of how much effort they can reliably complete in two week increments, product owners can provide a reasonable forecast for when the release or project will be complete. The team focuses on accurate time independent effort estimates. The scrum master and product owner worry about the performance metrics and time projections.
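
As a sketch of the product owner’s side of that arrangement, the projection is simple division: the effort remaining in the backlog over the team’s average completed effort per two-week sprint. The numbers here are hypothetical:

    import math

    def forecast_sprints(remaining_effort, recent_velocities):
        """Sprints left = remaining effort / average effort completed per sprint."""
        avg_velocity = sum(recent_velocities) / len(recent_velocities)
        return math.ceil(remaining_effort / avg_velocity)

    # A backlog with 240 effort points left and recent sprints near 30.
    sprints = forecast_sprints(240, [28, 32, 30, 31])
    print(f"About {sprints} sprints, or {sprints * 2} weeks")  # About 8 sprints, or 16 weeks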

It’s surprising how hard of a sell this can be for teams. They are hardwired to think in terms of time because that’s what traditional project management has hounded them for since before coding was a thing. I tell teams, “With Agile and scrum, you no longer have to worry about time. That’s the product owner’s job. But you do have to develop very good skills at estimating effort.” It’s common for them to have a hard time adjusting to the new paradigm.

Root Causes

The sage business guru Willie Sutton might answer the question “Why must we work so hard at digging to find the root causes of our problems?” by observing “Because that’s where the roots are.”

Digging to find root causes is hard work. They’re rarely obvious and there’s never just one. Occasionally, you might get lucky and trip over an obvious root cause (obvious once you’ve tripped over it.) Most often, it’ll require some unknown amount of exploration and experimentation.

Even so, I’ve watched people work very hard to avoid the effort needed to find root causes, or fail to acknowledge them even when they are wrapped around their ankles. It’s an odd form of bikeshedding whereby the seemingly obvious major issues are ignored in favor of issues that are much easier to identify, explain, or understand.

One thing is certain: you’ll know you’ve found a root cause when one of two things happens. You implement a change meant to correct the issue and a whole lot of other things get fixed as a result, or there is noisy and aggressive resistance to change.

Poor morale, for example, is often a presenting symptom mistaken for a root cause. The inexperienced (or lazy) will throw fixes at poor morale like money, happy hours, or other trinkets. These work in the very short term and have their place in a manager’s toolbox, but eventually more money becomes the new low pay and more alcohol has its own very steep downside.

Morale is best understood as a signal for measuring the health of the underlying system. Poor morale is a signal that a whole lot of things are going wrong and that they’ve been going wrong for an extended period of time. By leveraging a system dynamics approach, it’s relatively easy to make some educated guesses about where the root causes may be. That’s the easy part.

The hard work lies with figuring out what interventions to implement and determining how to measure whether or not the changes are having the desired effect. A positive shift in morale would certainly be one of the indicators. But since it is a lagging indicator on the scale of months, it would be important to include several other measures that are more closely associated with the selected interventions.

There are other systemic symptoms that are relatively easy to identify and track. Workforce turnover, rework, and delays in delivery of high dependency work products are just a few examples. Each of these would suggest a different approach needed to resolve the underlying issues and restore balance to the system dynamics behind a team or organization’s performance.

Estimating Effort – An Explicitly Implicit Approach

It is difficult to make predictions, especially about the future. – Unknown

Sage advice.

So why bother estimating the amount of work needed to complete a product backlog item? After all, since estimates are about the future, the probability is high that they will be wrong. Actually, they may very well be guaranteed to be wrong. It’s just that some of the guesses will be more accurate than others. And if they happen to match what the effort ended up being, they just look like they were “right.”

I’ve written in the past expressing my thoughts about estimating the effort needed to complete product backlog items, particularly with respect to story points. I believe working to find a relative gauge of how well teams are estimating work is important. Without such a gauge, cognitive biases such as the optimism bias and planning fallacy can significantly distort a project delivery timeline. However, the phrase “story point” is burdened with a lot of baggage. It has been abused and misused such that invoking the phrase often causes more harm than good.

I’ve been experimenting recently with a different approach to estimating effort. The method I’ll describe in this post got a bit of a boost after listening to a recent interview with psychologist and Nobel laureate Daniel Kahneman. In this interview, Kahneman describes an experience he had while serving in the Israeli army some sixty years ago. He was assigned the job of setting up an interview process that would determine how well a recruit would do as a combat soldier. For this process, he selected six traits and instructed the interviewers to ask questions designed to evaluate each trait independently and score them. The interviewers were not happy with this approach. As a compromise, Kahneman instructed the interviewers, when they were finished asking about the six traits, to close their eyes and just jot down a number they felt matched how good a soldier the recruit might be. What he discovered:

When we validated the results of the interview, it was a big improvement on what had gone on before. But the other surprise was that the final intuitive judgments added, it was good. It was as good as the average of the six traits, and not the same. It added information, so actually we ended up with a score that was half determined by the specific ratings, and the intuition got half the weight. That, by the way, stayed in the Israeli army for well over 50 years. – Daniel Kahneman

This intuitive evaluation made by the interviewers is similar to what Agile methods ask of development teams when determining a value for “story points.” T-shirt sizes, planning poker, dot voting, affinity mapping and many similar techniques are all designed to elicit an intuitive sense of the effort involved. If there is a disagreement between team members, then a dialog follows to understand what the discrepancy is all about. This continues until there is alignment on what the team believes the effort to be. When it works, it works well.

So on to the details of the approach I’ve been experimenting with. (It doesn’t have a name yet.) The result of this approach is a number I call the “effort value.” The word “value” is a reference to the actual elementary mathematics value being derived. Much like the answer to the question “What value results from adding 2 and 2?” Answer: 4. The word “value” also suggests an intrinsic worth, something beyond a hard number. My theory is that this will help teams think beyond the mere number and think also about the value they are delivering to stakeholders. The word “point” correlates to a hard number and lacks any association to intrinsic worth or value.

Changing the words introduces a simple and small shift that nonetheless has a significant impact. With the change, teams are more open to considering a different approach to determining estimates.

So how is the effort value derived?

I begin by having the team define 4-5 characteristics or attributes that, to them, describe what they mean by “effort.” It is important for the team to define these attributes. By doing so, they own the definition and it becomes much harder for them to dismiss the attributes as “someone else’s” and thereby object to their use in deriving an effort value. These attributes can be anything that is meaningful to the team. Examples:

  • Complexity – Is the work straightforward (e.g. code a bubble sort function) or does it involve interrelated systems (e.g. code a predictive inventory control algorithm)?
  • Dependencies – How dependent is the product backlog item on other backlog items or other teams?
  • Familiarity – Is this work very similar to work the team has done in the past or something quite new? Tasking a coder with documenting a piece of straightforward code may actually be a difficult effort because the coding language they spend most of their day with is familiar whereas writing clear sentences that non-technical people can understand is unfamiliar.
  • Information – Is the detail in the product backlog item complete? Are the acceptance criteria and definition of done clear?
  • Technical Debt Risk – Does the PBI require any refactoring of related code? Is any technical debt being incurred with the PBI?
  • Design Stability – Is there a lot of discovery and exploration needed to complete the PBI?
  • Confidence for Completing a PBI within the Sprint – This category may roll up several categories.
  • Tedium – Perhaps the effort involves a lot of repetitive copy and paste that nonetheless requires careful attention to avoid simple mistakes.

The team can define any attribute they wish. However, there are a few criteria to consider:

  • Keep the list limited to 4-6 attributes. More than that risks turning the derivation of an effort value into the equivalent of a product backlog item navel-gazing exercise.
  • Time cannot be one of the attributes.
  • The attributes should be reasonable. Assessing a product backlog item’s effort value by evaluating its “aura” or the current position of the stars is generally not useful. On the other hand, I’ve listened to arguments against evaluating estimates in terms of “complexity” as being similarly useless. I see the point of those arguments, but my view is that the attributes must first and foremost be meaningful to the entire team. In the end, it’s an educated guess, and arguments about the definition of terms like “complexity” are counterproductive to the overall intent of deriving an effort value.

Each of these attributes is then given a scale – the same scale for each attribute – 1 to 10, 1 to 15, whatever the team feels is most appropriate. The team then goes through the attributes and evaluates the product backlog item against each one on the scale. (NB: After nine months of Plan-Do-Check-Adapt, a better approach for scoring the attributes has been determined.) The low number on the scale represents very little impact. If dependency, for example, is one of the attributes, then a 1 might mean that the product backlog item is entirely self-contained. A 10 might represent a case where the product backlog item is dependent on several other product backlog items or perhaps the output from other teams.

When this is done, ask the team where on the modified Fibonacci scale they think this particular product backlog item’s effort value should be. If they’re struggling you can do the math: find the average for all the attributes and match that number in the modified Fibonacci scale. If the average is a decimal, for example 3.1, match the value to the next highest modified Fibonacci scale number. In this case the value would be 5. Then ask the team if they feel that number is a good representation of the effort value for the product backlog item.
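
For the technically inclined, here is a minimal sketch of that “do the math” step. The attribute names and scores are hypothetical:

    import bisect

    MODIFIED_FIBONACCI = [1, 2, 3, 5, 8, 13, 20, 40, 100]

    def suggest_effort_value(attribute_scores):
        """Average the attribute scores, then match the average to the
        modified Fibonacci scale, rounding up when it falls between values."""
        average = sum(attribute_scores.values()) / len(attribute_scores)
        index = bisect.bisect_left(MODIFIED_FIBONACCI, average)
        return MODIFIED_FIBONACCI[min(index, len(MODIFIED_FIBONACCI) - 1)]

    # An average of 3.25 falls between 3 and 5, so it maps up to 5.
    scores = {"complexity": 5, "dependencies": 2, "familiarity": 3, "information": 3}
    print(suggest_effort_value(scores))  # 5

The calculation is only a conversation starter; the team’s judgment about whether the suggested number truly represents the effort is where the value lies.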

This may seem like a lot of unnecessary gyrations, but for technical people it’s a simple process they can understand. The bonus is a number they can calculate. The number isn’t what’s important here. What’s important is the conversation that happens around the attributes and what the team feels about the number that results from the conversation. This exercise is meant to develop their intuitive muscles for considering multiple aspects and dimensions behind the “effort” needed for them to get the work done.

Use this process enough times and eventually calculating the average can be dropped from the process. Continue using this process and eventually calculating the numbers for the individual attributes can be dropped from the process. I don’t know if it’s a good idea to drop the use of the attributes for generating the needed conversation around the effort needed, but it will certainly be valuable to reconsider the list of attributes from time to time so as to fine tune the list to match what the team feels is important.

With this approach I’m turning the estimation process on its head (or back on its feet, if Kahneman is right.) Rather than seek the intuitive response first (e.g. t-shirt size) and elicit details later if there is a mismatch between team members, this method seeks to better prime and develop the team’s intuition about the effort value by having them explicitly consider a list of self-selected attributes (or traits) for effort first and then include an intuitive evaluation for effort.

Don’t try to form an intuition quickly, which was what we normally do. Focus on the separate points, and then when you have the whole profile, then you can have an intuition and it’s going to be better. Because people form intuitions too quickly, and the rapid intuitions are not particularly good. If you delay intuition until you have more information, it’s going to be better. – Daniel Kahneman

Update

See Time Out! and Determining Effort Value – Tactics for additional information on this technique.

The Practice of Sizing Spikes with Story Points

Every once in a while it’s good to take a tool out of its box and find out if it’s still fit for purpose. Maybe even find out if it can be used in a new way. I recently did this with the practice of sizing spikes with story points. I’ve experienced a lot of different projects since last revisiting my thinking on this topic. So after doing a little research on current thinking, I updated an old set of slides and presented my position to a group of scrum masters to set the stage for a conversation. My position: Estimating spikes with story points is a vanity metric and teams are better served with time-boxed spikes that are unsized.

While several colleagues came with an abundance of material to support their particular position, no one addressed the points I raised. So it was a wash. My position hasn’t changed appreciably. But I did gain from hearing several arguments for how spikes could be used more effectively if they were to be sized with story points. And perhaps the feedback from this article will further evolve my thinking on the subject.

To begin, I’ll answer the question of “What is a spike?” by accepting the definition from agiledictionary.com:

Spike

A task aimed at answering a question or gathering information, rather than at producing shippable product. Sometimes a user story is generated that cannot be well estimated until the development team does some actual work to resolve a technical question or a design problem. The solution is to create a “spike,” which is some work whose purpose is to provide the answer or solution.

The phrase “cannot be well estimated” is suggestive. If the work cannot be well estimated, then what is the value of estimating it in the first place? Any number placed on the spike is likely to be for the most part arbitrary. Any number greater than zero will therefore arbitrarily inflate the sprint velocity and make it less representative of the value being delivered. It may make the team feel better about their performance, but it tells the stakeholders less about the work remaining. Nowhere can I find a stated purpose of Agile or scrum to be making the team “feel better.” In practice, by masking the amount of value being delivered, the opposite is probably true. The scrum framework ruthlessly exposes all the unhelpful and counterproductive practices and behaviors an unproductive team may be unconsciously perpetuating.

Forty points of genuine value delivered at the end of a sprint is 100% of rubber on the road. Forty points delivered of which 10 are points assigned to one or more spikes is 75% of rubber on the road. The spike points are slippage. If they are left unpointed then it is clear what is happening. A spike here and there isn’t likely to have a significant impact on the velocity trend over, for example, 8 or 10 sprints. One or more spikes per sprint will cause the velocity to sink and suggests a number of corrective actions – actions that may be missed if the velocity is falsely kept at a certain desired or expected value. In other words, pointing spikes hides important information that could very well impact the success of the project. Bad news can inspire better decisions and corrective action. Falsely positive news most often leads to failures of the epic variety.

Consider the following two scenarios.

Team A has decided to add story points to their spikes. Immediately they run into several significant challenges related to the design and the technology choices made. So they create a number of spikes to find the answers and make some informed decisions. The design and technology struggles continue for the next 10 sprints. Even with the challenges they faced, the team appears to have quickly established a stable velocity.

The burndown, however, looks like this:

If the scrum master were to use just the velocity numbers it would appear Team A is going to finish their work in about 14 sprints. This might be true if Team A were to have no more spikes in the remaining sprints. The trend, however, strongly suggests that’s not likely to happen. If a team has been struggling with design and technical issues for 10 sprints, it is unlikely those struggles will suddenly stop at sprint 11 and beyond unless there have been deliberate efforts to mitigate that potential. By pointing spikes and generating a nice-looking velocity chart it is more probable that Team A is unaware of the extent to which they may be underestimating the amount of time to complete items in the backlog.

Team B finds themselves in exactly the same situation as Team A. They immediately run into several significant challenges related to the design and the technology choices made and create a number of spikes to find the answers and make some informed decisions. However, they decided not to add story points to their spikes. The design and technology struggles continue for the next 10 sprints. The data show that Team B is clearly struggling to establish a stable velocity.

And the burndown looks like this, same as Team A after 10 sprints:

However, it looks like it’s going to take Team B 21 more sprints to complete the work. That they’re struggling isn’t good. That it’s clear they’re struggling is very good. This isn’t apparent with Team A’s velocity chart. Since it’s clear they are struggling, it is much easier to start asking questions, find the source of the agony, and make changes that will have a positive impact. It is also much more probable that the changes will be effective because they will have been based on solid information as to what the issues are. Less guesswork is involved with Team B than with Team A.

However, any scrum master worth their salt is going to notice that the product backlog burndown doesn’t align with the velocity chart. It isn’t burning down as fast as the velocity chart suggests it should be. So the savvy Team A scrum master starts tracking the burndown of value-add points vs spike points. Doing so might look like the following burndown:

Using the average from the parsed burndown, it is much more likely that Team A will need 21 additional sprints to complete the work. And for Team B?

The picture of the future based on the backlog burndown is a close match to the picture from the velocity data, about 22 sprints to complete the work.
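
The arithmetic behind those two forecasts is worth making explicit. The sketch below uses hypothetical numbers consistent with the scenario above: a 420-point backlog and a reported velocity of 30, of which 10 points per sprint were spikes.

    import math

    def sprints_remaining(backlog_points, velocity):
        """Naive forecast: value-add points left divided by points burned per sprint."""
        return math.ceil(backlog_points / velocity)

    backlog = 420                 # value-add points remaining (hypothetical)
    reported_velocity = 30.0      # velocity with spikes pointed
    spike_points = 10.0           # average spike points per sprint

    print(sprints_remaining(backlog, reported_velocity))                 # 14 sprints: the rosy forecast
    print(sprints_remaining(backlog, reported_velocity - spike_points))  # 21 sprints: the parsed forecast

Same backlog, same team; the only difference is whether spike points are allowed to masquerade as delivered value.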

If you were a product owner, responsible for keeping the customer informed of progress, which set of numbers would you want to base your report on? Would you rather surprise the customer with a “sudden” and extended delay or would you rather communicate openly and accurately?

Summary

Leaving spikes unpointed…

  • Increases the probability that performance metrics will reveal problems sooner and thus allows for corrective actions to be taken earlier in a project.
  • Keeps the team’s velocity and backlog burndown a more accurate reflection of the value actually being created for the customer, and therefore allows for greater confidence in any predictions based on those metrics.

I’m interested in hearing your position on whether or not spikes should be estimated with story points (or some other measure.) I’m particularly interested in hearing where my thinking described in this article is in need of updating.

[This article originally appeared on the Agile Alliance blog.]

Agile Money

In a recent conversation with colleagues we were debating the merits of using story point velocity as a metric for team performance and, more specifically, how it relates to determining a team’s predictability. That is to say, how reliable the team is at completing the work they have promised to complete. At one point, the question of what is a story point came up and we hit on the idea of story points not being “points” at all. Rather, they are more like currency. This solved a number of issues for us.

First, it interrupts the all too common assumption that story points (and by extension, velocities) can be compared between teams. Experienced scrum practitioners know this isn’t true and that nothing good can come from normalizing story points and sprint velocities between teams. And yet this is something non-agile savvy management types are wont to do. Thinking of a story’s effort in terms of currency carries with it the implicit assumption that one team’s “dollars” are not another team’s “rubles” or another team’s “euros.” At the very least, an exchange evaluation would need to occur. Nonetheless, dollars, rubles, and euros convey an agreement of value, a store of value that serves as a reliable predictor of exchange. X number of story points will deliver Y value from the product backlog.
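
A tiny sketch makes the cross-team comparison problem concrete. The “exchange rates” below are pure fiction, which is itself the lesson: the true rate is unknowable, so raw velocities cannot be meaningfully compared.

    # Hypothetical points-per-unit-of-value for two teams.
    POINTS_PER_VALUE_UNIT = {"team_a": 1.0, "team_b": 2.5}

    def value_delivered(team, velocity):
        """Convert a team-local velocity into a fictional common unit of value."""
        return velocity / POINTS_PER_VALUE_UNIT[team]

    print(value_delivered("team_a", 40))  # 40.0
    print(value_delivered("team_b", 40))  # 16.0 -- same velocity, different value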

The second thing thinking about effort as currency accomplishes is to clarify the consequences of populating the product backlog with a lot of busy work or non-value-adding tasks. Padding the backlog this way debases the story currency: the measured level of effort becomes inflated and the ability of the story currency to function as a store of value is diminished.

There are a host of other interesting economics derived thought experiments that can be played out with this frame around story effort. What’s the effect of supply and demand on available story currency (points)? What’s the state of the currency supply (resource availability)? Is there such a thing as counterfeit story currency? If so, what’s that look like? How might this mesh with the idea of technical or dark debt?

Try this out at your next backlog refinement session (or whenever it is you plan to size story efforts): Ask the team what you would have to pay them in order to complete the work. Choose whatever measure you wish – dollars, chickens, cookies – and use that as a basis for determining the effort needed to complete the story. You might also include in the conversation the consequences to the team – using the same measures – if they do not deliver on their promise.