Assessing and Tracking Team Performance – Part 2: Work, Work, Work…

…work, work, work. It’s what we do.

“I have to go to work.”

“What do you do for work?”

“OK, team. Let’s get to work.”

“Where do you work?”

“Is that working for you?”

“That’ll never work.”

“Let’s work together.”

“Time to roll up our sleeves and get to work.”

The notion of work is so pervasive it underpins my belief that Agile principles and practices can be applied to a variety of human endeavors beyond the narrow focus on software development. In fact, the case can be made that Agile principles and practices have been around for millennia and only very recently were codified for a software development context. Agile simply feels more natural, more aligned with how humans think and interact to solve problems. From the way we explore and learn as children to the way we solve problems at home as adults, it’s much easier to recognize Agile patterns than waterfall patterns. Somehow, when we go to work we’re subject to the behaviors and measures of machines and Taylorism.

It doesn’t have to be that way. Agile has been shown to be more effective at increasing productivity and decreasing costs in contexts beyond software. So why isn’t it practiced everywhere, all the time?

I can think of a couple of broad generalizations that answer this question. First, Agile isn’t a panacea. Nothing is. To paraphrase Winston Churchill, Agile is the worst form of project management, except for all the others. Second, in the light of Conway’s Law and Shalloway’s Corollary, the systemic monster pushing back on change is a formidable one.

I have no aspirations of making Agile a panacea and will never claim it to be one. But until something more promising comes along, I can work to improve the practices for applying Agile values and principles. As for the systemic monster, that’s what this series of articles is about.

Monsters are scary because we don’t know them, we can’t see them, they’re hidden from us, they’re “out there, somewhere.” We’ll begin the process of understanding the systemic workplace monster by shining a light on work. What is it? How do we define it?

With each new day, in one form or another, we face a newly filled box of Work to Do. On the far side of the day, there is an empty box of Work Done.

In a perfect world, by the end of the day, Work to Do is empty and Work Done is full.

This transition doesn’t happen by itself. Magic won’t get work moved to done. There’s effort involved. More effort means more progress. Less effort, less progress. On Agile software projects, Work to Do is described in the product backlog and Work Done manifests as a deliverable product or service.

Typically, there is some form of measure on progress toward the goal of getting work to done. In scrum, this might be story points completed or business value delivered.

But we don’t live in a perfect world. Whatever the endeavor, errors and mistakes are part of the work effort. Instructions were unclear or incomplete, time constraints caused the work to be rushed, the person doing the work was apathetic or otherwise unfocused – there are thousands of reasons why some of the work fails to meet expectations.

Since our efforts to complete work are always less than perfect by some percentage, part of the effort that creates progress is also an effort that generates errors. Anyone managing a project – especially a technical project – should expect that there is a box of Undiscovered Rework hiding somewhere. How big that box is or how fast it’s filling are unknown. All we know at this point is the box of Undiscovered Rework exists. In software development, the contents of this box are referred to as defects or bugs.

We know the box of Undiscovered Rework is there somewhere. So now we need a deliberate effort aimed at discovering that rework. This is the job of quality assurance and testing professionals. Their efforts at rework discovery bring the defects and errors to light so that they can be documented and added to the flow of Work to Do.

This is the work loop.1 Human interactions and behaviors aimed at achieving some larger goal provide the energy for driving this loop. The quality of those interactions determines how fast work moves through this loop.
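To make the loop concrete, here is a minimal sketch in Python. All of the numbers are assumptions chosen for illustration (the effort per day, error rate, and discovery rate are invented, not measurements from any real team), but the flow is the one described above: effort moves work toward done, a fraction of it lands in Undiscovered Rework, and discovery routes that rework back into Work to Do.

```python
# A minimal sketch of the work loop, with made-up parameters.
work_to_do = 100.0        # e.g., story points in the product backlog
work_done = 0.0
undiscovered_rework = 0.0

effort_per_day = 5.0      # how much work the team attempts each day
error_rate = 0.2          # fraction of attempted work that is flawed
discovery_rate = 0.3      # fraction of hidden rework QA surfaces each day

for _ in range(40):
    attempted = min(effort_per_day, work_to_do)
    work_to_do -= attempted
    work_done += attempted * (1 - error_rate)          # progress
    undiscovered_rework += attempted * error_rate      # hidden flaws
    discovered = undiscovered_rework * discovery_rate  # QA shines a light
    undiscovered_rework -= discovered
    work_to_do += discovered                           # rework rejoins the queue

print(f"Done: {work_done:.1f}, still to do: {work_to_do:.1f}, "
      f"hidden rework: {undiscovered_rework:.1f}")
```

Even this toy version shows the loop’s character: the higher the error rate, or the slower the discovery, the longer work circulates before it settles into Work Done.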

In subsequent posts, we’ll begin to explore several specific human interactions and behaviors that can either support or inhibit the flow of work through this loop. But first, a sidebar on how to read the diagrams that follow. We’ll cover that in the next post of this series.

Previous article in the series: Assessing and Tracking Team Performance – Part 1: The Revenge of Frankenagile

Next article in the series: Assessing and Tracking Team Performance – Part 3: System Dynamics and Causal Loop Diagrams 101

References

1The core of the model I use to assess team and organization health is based on the work of James Lyneis and David Ford: Lyneis, J.M., Ford, D.N. (2007). System dynamics applied to project management: a survey, assessment, and directions for future research. System Dynamics Review, 23(2/3), 157-189.

Assessing and Tracking Team Performance – Part 1: The Revenge of Frankenagile

The ubiquitous employee handbook is filled with rules, regulations, and descriptions of how employees are expected to behave. The larger the organization, the thicker the handbook. Handbooks and associated policies like this have been described as “corporate scar tissue.” Someone, somewhere, at some time made a serious mistake, whether intentional or not, and so a policy was created to prevent that something from ever happening again. The same effect can be seen with many consumer products that carry lengthy warning labels and manual pages stating things like “This toaster is not suitable as a flotation device in the event of a boating accident.”

In an effort to prevent a recurrence, these policies effectively limit – much like physical scar tissue – the ability of people within the organization to adapt, improvise, and innovate. They limit an employee’s range of motion within the organization’s solution space. In an effort to save the organization from human error and create the perfect business machine, excessive policies condemn the organization to a slow but certain death.

Anyone who has worked within a large organization recognizes this. For some, it’s a comfort: knowing the rules, knowing where the fences are, and knowing where to place blame. The less ambiguity around how a situation can be interpreted, the better. For others, maybe after an attempt to change things, the environment becomes too stifling and they leave for greener, wider pastures.

Given enough time, the policies become the document of record for the organization’s culture. Any attempts to change the way work gets done within an organization that has deep scar tissue will have to confront Shalloway’s Corollary:

When development groups change how their development staff are organized, their current application architecture will work against them.

I’ve learned this corollary is not limited to software companies. In every case I’ve experienced, whether in a software company or not, the system will push back. Hard. Every Agile practitioner needs to know and respect this. Riding into work on a unicorn with a bag of rainbows and pixie dust is a gig that will not end well. At best, the organization will have made an incomplete effort at implementing Agile and “Frankenagile” will be roaming the halls – a collection of project management methodological parts that by themselves served a valuable purpose in a larger or different context, but have been stitched together to form a monster in Agile form only.

In a small company, particularly one working to create a software product, the monster may be small. So performing corrective surgery, while still a lot of work, is quite possible in a relatively short amount of time. For larger organizations, particularly those with deep roots in traditional project management, it can be a scary-sized beast indeed: something not to be trifled with, something that needs a well-thought-out strategy and plan of action.

It is the latter scenario I’d like to address in this series of posts (this being Part 1, the introduction) over the next several weeks. What I’d like to present is a method I’ve used quite successfully over the past 10+ years for assessing the extent to which Conway’s Law and Shalloway’s Corollary are in play. It is a method for determining both team and organization health within the larger management context. The extent to which Agile can be successfully implemented in an organization depends on how aware management, the Agile coach, and scrum masters are of the system dynamics driving organizational behavior.

Next post in the series: Assessing and Tracking Team Performance – Part 2: Work, Work, Work…

What’s in YOUR manual?

You go to see a movie with a friend. You sit side-by-side and watch the same movie projected on the screen. Afterward, in discussing the movie, you both disagree on the motives of the lead character and even quibble over the sequence of events in the movie you just watched together.

How is it that two people having just watched the same movie could come to different conclusions and even disagree over the sequence of events that – objectively speaking – could have only happened in one way?

It’s what brains do. Memory is imperfect and every one of us has a unique set of filters and lenses through which we view the world. At best, we have a mostly useful but distorted model of the world around us. Not everyone understands this. Perhaps most people don’t understand this. It’s far more common for people – especially smart people – to believe and behave as if their model of the world is 1) accurate and 2) shared with everybody else on the planet.

Which gets me to the notion of the user manuals we all carry around in our heads about OTHER people.

Imagine a tall stack of books, some thin others very thick. On the spine of each book is the name of someone you know. The book with your partner’s name on it is particularly thick. The book with the name of your favorite barista on the spine is quite a bit thinner. Each of these books represents a manual that you have written on how the other person is supposed to behave. Your partner, for example, should know what they’re supposed to be doing to seamlessly match your model of the world. And when they don’t follow the manual, there can be hell to pay.

Same for your coworkers, other family members, even acquaintances. The manual is right there in plain sight in your head. How could they not know that they’re supposed to return your phone call within 30 minutes? It’s right there in the manual!

It seems cartoonish. But play with this point of view for a few days. Notice how many things – both positive and negative – you project onto others that are based on your version of how they should be behaving. What expectations do you have, based on the manual you wrote, for how they’re supposed to behave?

Now ask yourself, in that big stack of manuals you’ve authored for how others’ brains should work, where is your manual? If you want to improve all your relationships, toss out all of those manuals and keep only one. The one with your name on the spine. Now focus on improving that one manual.

The Value of “Good Enough for Now”

I’ve been giving some more thought to the idea of “good enough” as one of the criteria for defining minimum viable/valuable products. I still stand by everything I wrote in my original “The Value of ‘Good Enough’” article. What’s different is that I’ve started to use the phrase “good enough for now.” The reason: “good enough” seems to imply an end state. Early in a project, people generally have a problem with that. They hold some version of an end state that is a significant mismatch with the “good enough” product of today. The idea of settling for “good enough” at this point makes it difficult for them to know when to stop work on an interim phase and collect feedback.

“Good enough for now” implies there is more work to be done and the product isn’t in some sort of finished state that they’ll have to settle for. I’m finding that I can more easily gain agreement that a story is finished and get people to move forward to the next “good enough for now” by including the time qualifier.

The Practice of Sizing Spikes with Story Points

Every once in a while it’s good to take a tool out of its box and find out if it’s still fit for purpose. Maybe even find out if it can be used in a new way. I recently did this with the practice of sizing spikes with story points. I’ve experienced a lot of different projects since last revisiting my thinking on this topic. So after doing a little research on current thinking, I updated an old set of slides and presented my position to a group of scrum masters to set the stage for a conversation. My position: estimating spikes with story points is a vanity metric, and teams are better served with time-boxed spikes that are unsized.

While several colleagues came with an abundance of material to support their particular position, no one addressed the points I raised. So it was a wash. My position hasn’t changed appreciably. But I did gain from hearing several arguments for how spikes could be used more effectively if they were to be sized with story points. And perhaps the feedback from this article will further evolve my thinking on the subject.

To begin, I’ll answer the question of “What is a spike?” by accepting the definition from agiledictionary.com:

Spike

A task aimed at answering a question or gathering information, rather than at producing shippable product. Sometimes a user story is generated that cannot be well estimated until the development team does some actual work to resolve a technical question or a design problem. The solution is to create a “spike,” which is some work whose purpose is to provide the answer or solution.

The phrase “cannot be well estimated” is suggestive. If the work cannot be well estimated, then what is the value of estimating it in the first place? Any number placed on the spike is likely to be largely arbitrary. Any number greater than zero will therefore arbitrarily inflate the sprint velocity and make it less representative of the value being delivered. It may make the team feel better about their performance, but it tells the stakeholders less about the work remaining. Nowhere can I find a stated purpose of Agile or scrum to be making the team “feel better.” In practice, by masking the amount of value being delivered, the opposite is probably true. The scrum framework ruthlessly exposes all the unhelpful and counterproductive practices and behaviors an unproductive team may be unconsciously perpetuating.

Forty points of genuine value delivered at the end of a sprint is 100% of rubber on the road. Forty points delivered of which 10 are assigned to one or more spikes is 75% of rubber on the road. The spike points are slippage. If spikes are left unpointed, then it is clear what is happening. A spike here and there isn’t likely to have a significant impact on the velocity trend over, for example, 8 or 10 sprints. One or more spikes per sprint will cause the velocity to sink, which suggests a number of corrective actions – actions that may be missed if the velocity is falsely kept at a desired or expected value. In other words, pointing spikes hides important information that could very well impact the success of the project. Bad news can inspire better decisions and corrective action. Falsely positive news most often leads to failures of the epic variety.

Consider the following two scenarios.

Team A has decided to add story points to their spikes. Immediately they run into several significant challenges related to the design and the technology choices made. So they create a number of spikes to find the answers and make some informed decisions. The design and technology struggles continue for the next 10 sprints. Even with the challenges they faced, the team appears to have quickly established a stable velocity.

The burndown, however, looks like this:

If the scrum master were to use just the velocity numbers, it would appear Team A is going to finish their work in about 14 sprints. This might be true if Team A were to have no more spikes in the remaining sprints. The trend, however, strongly suggests that’s not likely to happen. If a team has been struggling with design and technical issues for 10 sprints, it is unlikely those struggles will suddenly stop at sprint 11 and beyond unless there have been deliberate efforts to mitigate that potential. By pointing spikes and generating a nice-looking velocity chart, Team A is more likely to be unaware of the extent to which they may be underestimating the time needed to complete items in the backlog.

Team B finds themselves in exactly the same situation as Team A. They immediately run into several significant challenges related to the design and the technology choices made and create a number of spikes to find the answers and make some informed decisions. However, they decided not to add story points to their spikes. The design and technology struggles continue for the next 10 sprints. The data show that Team B is clearly struggling to establish a stable velocity.

And the burndown looks like this, same as Team A after 10 sprints:

However, it looks like it’s going to take Team B 21 more sprints to complete the work. That they’re struggling isn’t good. That it’s clear they’re struggling is very good. This isn’t apparent from Team A’s velocity chart. Since it’s clear they are struggling, it is much easier to start asking questions, find the source of the agony, and make changes that will have a positive impact. It is also much more probable that the changes will be effective, because they will have been based on solid information as to what the issues are. There is less guesswork involved with Team B than with Team A.

However, any scrum master worth their salt is going to notice that the product backlog burndown doesn’t align with the velocity chart. It isn’t burning down as fast as the velocity chart suggests it should be. So the savvy Team A scrum master starts tracking the burndown of value-add points vs spike points. Doing so might look like the following burndown:

Using the average from the parsed burndown, it is much more likely that Team A will need 21 additional sprints to complete the work. And for Team B?

The picture of the future based on the backlog burndown is a close match to the picture from the velocity data, about 22 sprints to complete the work.
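The arithmetic behind these projections is simple enough to sketch in Python. The numbers below are hypothetical (an invented velocity and spike series, not Team A’s actual data), but they show how parsing spike points out of the reported velocity changes the forecast of sprints remaining:

```python
# Hypothetical sprint history: reported velocity includes spike points.
velocity_history = [40, 38, 42, 41, 39, 40, 41, 40, 39, 40]  # points/sprint
spike_history    = [10,  8, 12, 11,  9, 10, 11, 10,  9, 10]  # of which spikes

backlog_remaining = 600  # value-add points left in the product backlog

reported_avg = sum(velocity_history) / len(velocity_history)  # 40.0
value_avg = (sum(v - s for v, s in zip(velocity_history, spike_history))
             / len(velocity_history))                         # 30.0

print(f"Forecast from reported velocity: {backlog_remaining / reported_avg:.0f} sprints")
print(f"Forecast from value-add velocity: {backlog_remaining / value_avg:.0f} sprints")
# 15 sprints vs. 20 sprints: the spike points hide a third more schedule.
```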

If you were a product owner, responsible for keeping the customer informed of progress, which set of numbers would you want to base your report on? Would you rather surprise the customer with a “sudden” and extended delay or would you rather communicate openly and accurately?

Summary

Leaving spikes unpointed…

  • Increases the probability that performance metrics will reveal problems sooner, allowing corrective actions to be taken earlier in a project.
  • Makes the team’s velocity and backlog burndown a more accurate reflection of the value actually being created for the customer, allowing greater confidence in any predictions based on those metrics.

I’m interested in hearing your position on whether or not spikes should be estimated with story points (or some other measure). I’m particularly interested in hearing where my thinking described in this article is in need of updating.

[This article originally appeared on the Agile Alliance blog.]

Busting Assumptions

The video in this post is one I show when talking about the need to question assumptions while working to integrate Agile principles and practices into an organization. It was taken with the dash camera in my car. The drama seems to make it easier for people to see the different points of view and associated assumptions in play. (The embedded video is a lower resolution, adapted for the web, but it still shows most of what I wish to point out.)

First off, no one was injured in this event beyond a few sets of rattled nerves, including mine. It happened fast; however, there were signals immediately preceding the event that suggested something strange was about to happen. The key moment is replayed at the end of the video at 1/4 speed for a second chance to notice what happened.

  1. The truck ahead of me was slowing down. Unusual behavior when the expectation is that traffic would be flowing.
  2. The driver in the truck was signaling that they intended to move to the left, either to switch lanes or turn left.
  3. This activity was happening as we approached an intersection.

Something didn’t seem right to me, so I had started to slow down. That’s why the driver of the Jeep appears to be speeding up.

So what assumptions can we guess were in play?

An important piece of information is that the road in the video is a two-lane, one-way street. The driver of the Jeep clearly understood this and assumed everyone else on the road would be following the rules of the road. The driver of the truck appears to have assumed he was driving on a two-lane, two-way street and so prepared to turn left onto a side street; his signaling and subsequent behavior suggest this. The driver of the truck was also assuming everyone else on the road was operating under this incorrect understanding, so when he began his left-hand turn he wasn’t expecting the need to check the left-hand lane for cars coming up from behind him. One second’s difference, literally, in the timing and this could have ended badly for several people.

Assumptions are unconscious and everyone has them. By design they never represent the full picture. Yet we almost always act as if they do and, more importantly, as if they are shared by everyone around us. Events like those in the video clearly demonstrate that this is not the case. If it were, there would be far fewer road accidents.

Organizations that are seeking to implement Agile principles and practices are guaranteed to be operating under a mountain of assumptions for how work can or “should” be done. They’re easy to spot based on how strongly people react when someone fails to follow the rules. It’s important to examine these assumptions so they can be either validated, updated, or retired.

Agile Money

In a recent conversation with colleagues we were debating the merits of using story point velocity as a metric for team performance and, more specifically, how it relates to determining a team’s predictability. That is to say, how reliable the team is at completing the work they have promised to complete. At one point, the question of what is a story point came up and we hit on the idea of story points not being “points” at all. Rather, they are more like currency. This solved a number of issues for us.

First, it interrupts the all too common assumption that story points (and by extension, velocities) can be compared between teams. Experienced scrum practitioners know this isn’t true and that nothing good can come from normalizing story points and sprint velocities between teams. And yet this is something non-Agile-savvy management types are wont to do. Thinking of a story’s effort in terms of currency carries with it the implicit assumption that one team’s “dollars” are not another team’s “rubles” or another team’s “euros.” At the very least, an exchange rate would need to be established. Nonetheless, dollars, rubles, and euros convey an agreement of value, a store of value that serves as a reliable predictor of exchange: X number of story points will deliver Y value from the product backlog.
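To play the metaphor out, here is a toy Python sketch (every number invented) of what a legitimate cross-team comparison would actually require: an exchange rate based on the value each team delivers per point, rather than the raw velocities themselves.

```python
# Hypothetical teams: same value delivered per sprint, different point scales.
team_a = {"velocity": 40, "value_delivered": 80}  # arbitrary value units/sprint
team_b = {"velocity": 25, "value_delivered": 80}

value_per_point_a = team_a["value_delivered"] / team_a["velocity"]  # 2.0
value_per_point_b = team_b["value_delivered"] / team_b["velocity"]  # 3.2

# One of Team B's points buys as much value as 1.6 of Team A's points.
exchange_rate = value_per_point_b / value_per_point_a
print(f"1 Team B point is worth {exchange_rate:.1f} Team A points")
```

Comparing the raw velocities (40 vs. 25) says nothing until that exchange rate is established, which is precisely the step the normalizers skip.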

The second thing the currency frame accomplished was to clarify the consequences of populating the product backlog with a lot of busy work or non-value-adding tasks. Such work debases the story currency: the measure of the level of effort becomes inflated, and the ability of the story currency to function as a store of value is diminished.

There are a host of other interesting economics-derived thought experiments that can be played out with this frame around story effort. What’s the effect of supply and demand on available story currency (points)? What’s the state of the currency supply (resource availability)? Is there such a thing as counterfeit story currency? If so, what does it look like? How might this mesh with the idea of technical or dark debt?

Try this out at your next backlog refinement session (or whenever it is you plan to size story efforts): Ask the team what you would have to pay them in order to complete the work. Choose whatever measure you wish – dollars, chickens, cookies – and use that as a basis for determining the effort needed to complete the story. You might also include in the conversation the consequences to the team – using the same measures – if they do not deliver on their promise.

How To Run an Agile Death March

Found on the Internet…

An experienced scrum master describes their work cycles as going “from being very busy during sprint end/start weeks to be [sic] very bored.” While this scrum master works very hard to fill in the gaps with 1:1’s with the team members and providing regular training opportunities, they nonetheless ask the question, “Does anyone have any suggestions of things I am maybe not doing that I should be doing?” One response included the following:

“Now, it could be that you have worked to create a hyper-performing team and there is no further room for improvement. A measure of this is that velocity (or similar metric) has increased by an order of magnitude in the last year.

However, the most likely scenario is that you and your team have become ‘comfortable’ and velocity has not increased significantly in the last few Sprints and/or there is a high variance in velocity.”

This reflects a common misunderstanding of “velocity” and its confusion with “acceleration.” (It also reflects the “more is better” and “winners vs. losers” thinking derived from the scrum sports metaphor and points as a way of keeping score. I’ve written about that elsewhere.) Nor does the commenter seem to understand what “order of magnitude” implies. A velocity that increases by an order of magnitude in a year isn’t a velocity; it’s an acceleration. That’s a bad thing. This wouldn’t be a “hyper-performing” team. This would be a team headed for a crash, as a continual acceleration in story points completed is untenable. More and more points each sprint isn’t the goal of scrum. A product owner cannot predict when their team might complete a feature or a project if the delivery of work is accelerating throughout the project.

Assuming a typical project, something that continues for a year or more, the team and the project will eventually crash as they’re pressured to work more and more hours and cut more and more corners in the interest of completing more and more points. The accumulation of bugs, small and large, will slow progress. Team fatigue will increase and morale will decrease, resulting in turnover and further delays. In common parlance, this is referred to as a “death march.”

Strictly speaking, velocity is some displacement over time. In the case of scrum, it is the number of story points completed in a sprint: we’ve “displaced” some number of story points from “not done” to “done.” By itself, a single sprint’s velocity isn’t particularly useful. Looking at the velocity of a number of successive sprints, however, is useful. There are two pieces of information from successive sprint velocities that, when considered together, can reveal useful aspects of how well a team is performing. The first is the rolling average over the previous 5 to 8 sprints. As a yardstick, this can provide a measure of predictability. Using this average, a product owner can make a rough calculation of how many sprints remain before completing components or the project, based on the story point information in the product backlog.

The measure of confidence for this prediction would come from an analysis of the variance demonstrated in the sprint velocity values over time. Figures 1 and 2 show the distinction between the value provided by a rolling average and the value provided by the variance in values over time.

Figure 1

Figure 2

In both cases the respective teams have an average velocity of 21 points per sprint. However, the variability in the values over time shows that the team in Figure 1 would have a much higher level of confidence in any predictions based on their past performance than the team shown in Figure 2.
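A minimal Python sketch of these two signals, using contrived velocity series (both invented so that each averages the 21 points per sprint described above):

```python
from statistics import mean, stdev

team_1 = [20, 22, 21, 20, 22, 21, 21, 21]  # steady, like Figure 1
team_2 = [10, 35, 14, 30,  8, 33, 15, 23]  # erratic, like Figure 2

for name, velocities in [("Team 1", team_1), ("Team 2", team_2)]:
    avg = mean(velocities)      # the yardstick for "sprints remaining" forecasts
    spread = stdev(velocities)  # the basis for confidence in that forecast
    print(f"{name}: average {avg:.1f} points/sprint, std dev {spread:.1f}")
```

Both series average 21.0 points per sprint, but the standard deviations (roughly 0.8 vs. 10.7 here) tell the product owner how much trust to place in a forecast built on that average.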

What matters is the trend, each sprint’s velocity over a number of sprints. The steady completion of story points (i.e. work) sprint to sprint is the desirable goal. Another way to say this is that a steady velocity makes it possible to predict project delivery dates. In real life, there will be a variance (up and down) of sprint velocity over time and the goal is to guide the project such that this variance is within a manageable range.

If a team were to set as its goal an increase in the number of story points completed from sprint to sprint then their performance chart might initially look like Figure 3.

Figure 3

Such a pace is unsustainable and eventually the team burns out. Fatigue, decreased morale, and overall dissatisfaction with the project cause team members to quit, and progress grinds to a halt. The fallout of such a collapse is likely to include the buildup of significant technical debt and code errors, as the run-up to the crescendo forced team members to cut corners, take shortcuts, and otherwise compromise the quality of their effort. [1] The resulting performance chart would look something like Figure 4.

Figure 4

All that said, I grant that there is merit in coaching teams to make reasonable improvements in their overall sprint performance. An increase in the overall average velocity might be one way to measure this. However, to press a team into achieving an order of magnitude increase in performance is a fool’s errand, more than likely to end in disaster for the team and the project.

References

[1] Lyneis, J.M., Ford, D.N. (2007). System dynamics applied to project management: a survey, assessment, and directions for future research. System Dynamics Review, 23(2/3), 157-189.