Drive for Teams

I recently re-read Daniel Pink’s book, “Drive: The Surprising Truth About What Motivates Us.” I read it when it was first published and I was still managing technical teams. Super brief summary: The central idea of the book is that people are mostly driven by intrinsic motivation based on three aspects:

  • Autonomy — The desire to be self-directed.
  • Mastery — The urge to improve skills.
  • Purpose — The desire to engage with work that has meaning and purpose.

I find this holds true for individuals. However, when applied to teams, optimizing for these three aspects can be problematic. If an individual on a team seeks to maximize autonomy, they are likely to come into conflict with the objectives of the team. Consider, for example, a software team tasked with developing a component that is expected to interact with several other components developed by other teams. If a single developer, in the interest of maximizing their individual autonomy, decides to develop the component according to standards, design principles, and tools that differ from those of teammates and other teams (essentially, a local optimization), then the result is likely to be sub-optimal overall.

Some individual autonomy must necessarily be sacrificed in the interests of effective collaboration. It’s possible, even desirable, that individual pursuits of mastery and purpose can be maintained. However, it may be necessary for an individual to focus on mundane tasks and the objectives of the team for periods of time. Finding ways to maintain a healthy balance between the intrinsic motivators and the purpose of the team is no small task and, when found, requires constant attention to maintain.

Perhaps it is possible to attach the team’s or organization’s purpose to the interests of the individual. Or to screen for hires whose personal purpose is in line with the organization’s purpose.

The Changeability Decision Matrix

“Responding to change over following a plan” – The Agile Manifesto

That’s one of the four values of the Agile Manifesto. It’s also one of the values commonly plucked from the context of the other three values and twelve principles. Once isolated, it’s exaggerated and inflated into some form of “We can’t define scope before we start work! There’s too much discovery work to be done first! We don’t know what we don’t know! Scope (and requirements) are emergent!” That bends the intent of the Manifesto and disregards the context from which a single value has been extracted.

I don’t believe Agile practices were ever meant to make software development a free-for-all, a never-ending saga of finding and implementing better and better ways to code something before a product can be released. Projects run like this never see the light of day, let alone a shelf to languish on while waiting for a long-since-departed market opportunity.

What isn’t in the Agile Manifesto, but is implicit in the Agile methodologies I’ve worked with, is the notion of decision points. These are the points around which change, to a small or large degree, is not allowed – at least not for a while. Decision points bring stability to the development process, giving Agile teams a stable set of assumptions from which to move forward. If subsequent discoveries inform the team that they need to revisit a decision, then they must do so. The key element is that the work subsequent to the decision is what generates the need to revisit it. It isn’t done arbitrarily, on a hunch, or with minimal information.

There are numerous decision points that exist within Scrum and SAFe, for example. Stories are decisions. “We need to create this thing.” Acceptance criteria, definitions of ready and done, sprint duration, feature and epic definitions, milestones, minimum viable/valuable products are also examples of decisions. Some of these can be quite changeable. Stories, for example, can be refined many times prior to and during sprint planning. The description, acceptance criteria, definition of done, and effort estimation can change many times before a story is committed to a sprint. And there’s the decision point. When the team agrees that a story can be brought into a sprint and they commit to completing it before the sprint is over, they have made a decision and the story shouldn’t change on its way to being completed by the team. (As noted previously, the work on the story may reveal a need to change something about the story – maybe even indicate that work on the story should stop – but that should be an edge case and not part of common practice.)

To help teams understand these distinctions, I’ve developed a 2×2 matrix called the Changeability Decision Matrix. Its purpose is to help teams evaluate the effects of changing work in the queue. The horizontal axis goes from “Small Impact” to “Big Impact.” The vertical axis goes from “Few Changes” to “Many Changes.”

The two questions the team needs to ask when thinking about changing a decision they’ve made (acceptance criteria, story description, MVP, etc.) are:

  • Will this change have a small or big impact? They may consider any number of variables: cost, time, productivity, effort, etc.
  • Will this change require a few or many changes (lines of code, documentation updates, other components that consume the code, budgets, release dates, etc.)? (A rough sketch of this evaluation follows below.)
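To make these two questions concrete, here is a minimal sketch of how a team might score a proposed change against the matrix. This is my own illustration; the score scales, thresholds, and field names are assumptions, not part of any formal method.

    from dataclasses import dataclass

    @dataclass
    class ProposedChange:
        impact: int        # rough 1-10 score across cost, time, productivity, effort, ...
        touch_points: int  # count of things that must change: code, docs, budgets, release dates, ...

    def quadrant(change: ProposedChange, impact_threshold: int = 5, touch_threshold: int = 5) -> str:
        """Place a proposed change in one of the four quadrants of the matrix."""
        impact = "Big Impact" if change.impact > impact_threshold else "Small Impact"
        count = "Many Changes" if change.touch_points > touch_threshold else "Few Changes"
        return f"{impact} / {count}"

    # Tweaking acceptance criteria on a not-yet-committed story vs. redefining an in-flight epic.
    print(quadrant(ProposedChange(impact=2, touch_points=1)))    # Small Impact / Few Changes
    print(quadrant(ProposedChange(impact=8, touch_points=12)))   # Big Impact / Many Changes

The point isn’t the numbers; it’s that the team has to talk about impact and touch points before undoing a decision.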

Where the proposed change resides on the grid may depend on where the team is on the project timeline. Consider the epic, feature, and story hierarchy: early in the project – during the design phase, for example – there may be little more than features in the backlog. As placeholders for ideas, they may be quite volatile as new marketing information enters the conversation or obvious technical issues become apparent. So changing an epic or a feature may have a relatively small impact on the project and involve few changes. Most probably there won’t be any code involved at this point.

As the project progresses and backlog refinement continues, epics and features will be broken up into large stories. More detail is added to the backlog, and more time and money has been invested in the design, so the epics and features are less changeable. If any changes are needed, it is probable that the impact of those changes, and the number of things that need to change, will be greater than it would have been during the design phase.

Eventually, as the project moves into high gear, the backlog will become populated with more and more small stories that can be easily estimated and planned into sprints and increments.

For the duration of the project, it’s likely most of the stories in the backlog can and should be responsive to multiple changes…right up to the point the decision is made to drop the story into a sprint.

The Changeability Decision Matrix is an easy way to evaluate whether an Agile team is pondering undoing a small or a large decision, because it forces the conversation around the consequences of making the change. If either of these two axes is not a good fit for your organization, or for what you consider important, then change it to something that makes more sense for your project.


Update 2020.11.07

Here is a representation of these phases on a hypothetical project timeline.

Concave, Convex, and Nonlinear Fragility

Nassim Nicholas Taleb’s book, “Antifragile,” is a wealth of information. I’ve returned to it often since first reading it several years ago. My latest revisit has been to better understand his ideas about representing the nonlinear and asymmetric aspects of fragile/antifragile in terms of “concave” and “convex.” My first read of this left me a bit confused, but I got the gist of it and moved on. Taleb is a very smart guy so I need to understand this.

The first thing I needed to sort out on this revisit was Taleb’s use of language. The fragile/antifragile comparison is variously described in his book as:

  • Concave/Convex
  • Slumped solicitor/Humped solicitor
  • Curves inward/Curves outward
  • Frown/Smile
  • Negative convexity effects/Positive convexity effects
  • Pain more than gain/Gain more than pain
  • Doesn’t “like” volatility (presumably)/”Likes” volatility

Tracking his descriptions is made a little more challenging by reversals in reference when writing of both together (concave and convex, then convex and concave) and mismatches between the text and illustrations. For example:

Nonlinearity comes in two kinds: concave (curves inward), as in the case of the king and the stone, or its opposite, convex (curves outward). And of course, mixed, with concave and convex sections. (note the order: concave / convex) Figures 10 and 11 show the following simplifications of nonlinearity: the convex and the concave resemble a smile and a frown, respectively. (note the order: convex / concave)

Figure 10 shows:

So, “convex, curves outward” is illustrated as an upward curve and “concave, curves inward” is illustrated as a downward curve. Outward is upward and inward is downward. It reads like a yoga pose instruction or a play-by-play call for a game of Twister.

After this presentation, Taleb simplifies the ideas:

I use the term “convexity effect” for both, in order to simplify the vocabulary, saying “positive convexity effects” and “negative convexity effects.”

This was helpful. The big gain is when Taleb gets to the math and graphs what he’s talking about. Maybe the presentation to this point is helpful to non-math thinkers, but for me it was more obfuscating than illuminating. My adaptation of the graphs presented by Taleb:

With this picture, it’s easier for me to understand the non-linear relationship between a variable’s volatility and fragility vs antifragility. The rest of the chapter is easier to understand with this picture of the relationships in mind.
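To convince myself of the effect with numbers, here is a small sketch of my own (not Taleb’s notation): holding the mean of a variable constant, adding volatility raises the average outcome of a convex payoff and lowers the average outcome of a concave one. The specific functions and values are just illustrative assumptions.

    import math

    def average_payoff(payoff, outcomes):
        """Average payoff over equally likely outcomes of a volatile variable."""
        return sum(payoff(x) for x in outcomes) / len(outcomes)

    convex = lambda x: x ** 2   # "smile": curves outward, gains more than it loses from swings
    concave = math.sqrt         # "frown": curves inward, loses more than it gains from swings

    steady = [10, 10]     # no volatility, mean = 10
    volatile = [5, 15]    # same mean, more volatility

    print(average_payoff(convex, volatile), average_payoff(convex, steady))    # 125.0 vs 100.0 -> gain
    print(average_payoff(concave, volatile), average_payoff(concave, steady))  # ~3.05 vs ~3.16 -> pain

The convex payoff gains more from the good swing than it loses from the bad one; the concave payoff does the opposite. That asymmetry is what the graphs are showing.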

Friends, Guides, Coaches, and Mentors

The “conscious competence” model for learning is fairly well known – if not explicitly, then at least implicitly. Most people can recognize when someone is operating at a level of unconscious incompetence, even if they can’t quite put their finger on why such a person makes the decisions they do. Recognizing when we ourselves are at the level of unconscious incompetence is a bit more problematic.

The robust suite of cognitive biases that normally helps us navigate an increasingly complex world seems to conspire against us and keep us in the dark about our own shortcomings and weaknesses. Confirmation bias, selective perception, the observer bias, the availability heuristic, the ostrich effect, the spotlight effect, and many others all help us zero in on the shiny objects that confirm and support our existing memories and beliefs. Each of these tissue-thin cognitive biases layers up to form a dense curtain, perhaps even an impenetrable wall, between the feedback the world is sending and our ability to receive the information.

There is a direct relationship between the density of the barrier and the amount of energy needed to drive the feedback through the barrier. People who are introspective as well as receptive to external feedback generally do quite well when seeking to improve their competencies. For those with a dense barrier, it may require an intense experience to deliver the message that there are things about themselves that need to change. For some, a poorly received business presentation may be enough to send them on their way to finding out how to do better next time. For others, it may take being passed over for a promotion. Still others may not get the message until they’ve been fired from their job.

However it happens, if you’ve received the message that there are some changes you’d like to make in your life and it’s time to do the work, an important question to ask yourself is “Am I searching for something or am I lost?”

If you are searching for something, the answer may be found in a conversation over coffee with a friend or peer who has demonstrated they know what you want to know. It may be that what you’re looking for – improving your presentation skills, for example – requires a deeper dive into a set of skills, and it makes sense to find a guide to help you. Perhaps this involves taking a class or hiring a tutor.

If you are lost, you’ll want to find someone with a much deeper set of skills, experience, and wisdom. A first-time promotion into a management position is a frequent event that either exposes someone’s unconscious incompetence (i.e., the Peter Principle) or challenges them to double their efforts at acquiring the skills to successfully manage people. Finding a coach or a mentor is the better approach to developing the necessary competencies for success when the stakes are higher and the consequences of failing are greater.

A couple of examples may help.

When I was first learning to program PCs I read many programming books cover to cover. It was a new world for me and I had very little sense of the terrain or what I was really interested in doing. So I studied everything. Over time I became more selective of the books I bought or read. Eventually, I stopped buying books altogether because there was often just a single chapter of interest. Today, I can’t remember the last time I picked up a software development book. This was a progression from being lost at the start – when I needed coaches and mentors in the form of books and experienced software developers – to needing simple guidance from articles and peers and eventually to needing little more than a hint or two toward the end of my software development career.

A more recent example is an emergent need to learn photography – something I don’t particularly enjoy. Yet for pragmatic reasons, it’s become worth my time to learn how to take a particular kind of photograph. I need a coach or a mentor because this is entirely new territory for me. So I hired a professional photographer with an established reputation for taking the type of photograph I’m interested in. My photography coach is teaching me what I need to know. (He is teaching me how to fish, in other words, rather than me paying him for a fish every time I need one.)

Unlike the experience of learning how to program – where I really didn’t know what I wanted to do – my goal with photography is very specific. The difference has a significant influence on who I choose for guides and mentors. For software development, I sought out everyone and anyone who knew more than I. For photography, I sought a very specific set of skills. I didn’t want to sit through hours of classes learning how to take pictures of barn owls 1,000 meters away in the dark. I didn’t want to suffer through a droning lecture on the history of camera shutters. Except in a very roundabout way, none of this serves my goal for learning how to use a camera for a very specific purpose.

Depending on what type of learner you are, working with a mentor who really, really knows their craft in the specific subject you want to learn can be immensely more satisfying and enjoyable – and less expensive and time-consuming. If it expands into something more, then great. With this approach you will have the opportunity to discover a greater interest without a lot of upfront investment in time and money.

The Pull of Well-Crafted Product Visions and Release Goals

There was even a trace of mild exhilaration in their attitude. At least, they had a clear-cut task ahead of them. The nine months of indecision, of speculation about what might happen, of aimless drifting with the pack were over. Now they simply had to get themselves out, however appallingly difficult that might be. [1]

In the early 20th century, Sir Ernest Shackleton led an expedition attempting to cross the Antarctic continent on foot. He was unsuccessful in that attempt. What he succeeded at, however, was something far more impressive. After nearly two years of battling conditions south of the Antarctic Circle, Shackleton saw to it that all 27 men of his crew made it safely home. As Alfred Lansing notes, “Though they had failed dismally even to come close to the expedition’s original objective, they knew now that somehow they had done much, much more than ever they set out to do.”

There is much I could write about the lessons from Shackleton, his crew, and the Endurance that apply to our own individual endeavors – personal and professional. For the moment, I wish to reflect on the sheer clarity of the goal 28 men had in 1915-1916: To survive, by any means and nothing short of complete dedicated effort.

To be sure, their goal was self-serving – no one can judge them for that – and no product team is ever likely to be placed in a situation of delivering in the face of such high stakes. Indeed, the lessons from Endurance are striking in how they contrast with the feeble drama so often brought into product delivery schedules. We call them “death marches,” but we know not of what we speak.

One of the things we can learn from Endurance is the power of a clearly defined objective. Do or die. That’s pretty damn clear. Time and time again, Shackleton’s crew were faced with completing seemingly impossible tasks under the harshest of conditions with the barest of resources and vanishingly small chances for success.

What kept them going? Certainly, the will and desire to live. There were many other factors, too. What interests me in this post is reflected in the opening quote. The emergence of a well-defined task that cleared away the fog of speculation, indecision, and uncertainty. Episodes like this are described multiple times in Lansing’s book.

Why this is important to something like a product vision is that it clearly illustrates a phenomenon I learned about recently called “The Goal Gradient Hypothesis,” which basically says our efforts increase as we get closer to our goals. But here’s the rub. We have to know and understand what the goal is. “Do or die” is clear and leaves little room for misunderstanding. “Let’s go build a killer app,” not so much.

From the research:

We found that members of a café RP accelerated their coffee purchases as they progressed toward earning a free coffee. The goal-gradient effect also generalized to a very different incentive system, in which shorter goal distance led members to visit a song-rating Web site more frequently, rate more songs during each visit, and persist longer in the rating effort. Importantly, in both incentive systems, we observed the phenomenon of postreward resetting, whereby customers who accelerated toward their first reward exhibited a slowdown in their efforts when they began work (and subsequently accelerated) toward their second reward. [2]

Far away goals, like a product vision, are much less motivating than near-term goals, such as sprint goals. And yet it is the product vision that can, if well-crafted and well-communicated, pull a team forward during a postreward resetting period.

But perhaps the most important lesson from the research – as far as product development is concerned – is that incentives matter. How an organization structures them is important. Since most people fail the Marshmallow Test, rewarding success on smaller goals that lead to a larger goal is likely to help teams stay focused and dedicated in the long run. Rather than one large post-product-release celebration, smaller rewards after each successful sprint are more likely to keep teams engaged and productive.

References

[1] Lansing, A. (1957) Endurance: Shackleton’s Incredible Voyage, pg. 80

[2] Kivetz, R., Urminsky, O., & Zheng, Y. (2006). The Goal-Gradient Hypothesis Resurrected: Purchase Acceleration, Illusionary Goal Progress, and Customer Retention. Journal of Marketing Research, Vol. XLIII (February 2006), 39–58.

Improving the Signal to Noise Ratio – Coda

In a Scientific American column delightfully named “The Artful Amoeba,” there is an article on a little critter called the “fire chaser” beetle: “How a Half-Inch Beetle Finds Fires 80 Miles Away – Fire chaser beetles’ ability to sense heat borders on the spooky.”

Why a creature would choose to enter a situation from which all other forest creatures are enthusiastically attempting to exit is a compelling question of natural history. But it turns out the beetle has a very good reason. Freshly burnt trees are fire chaser beetle baby food. Their only baby food.

Fire chaser beetles are thus so hell bent on that objective that they have been known to bite firefighters, mistaking them, perhaps, for unusually squishy and unpleasant-smelling trees.

This part is interesting:

A flying fire chaser beetle appears to be trying to give itself up to the authorities. Its second set of legs reach for the sky at what appears to be an awkward and uncomfortable angle.

But the beetle has a good reason. It’s getting its legs out of the way of its heat eyes, pits filled with infrared sensors tucked just behind its legs.

A strategy suggested by the fire chaser beetle’s life cycle is this: if you want to maximize a signal to noise ratio, iterate through three simple steps:

  1. Work to develop a super well-defined signal/goal/objective.
  2. Remove every possible barrier to receiving information about that signal – mental, emotional, even physical – that you can think of or that you discover over time.
  3. Repeat.

Also, the “Way of the Amoeba” is now the “Way of the Artful Amoeba.” Update your phrase books accordingly.

Improving the Signal to Noise Ratio – Revisited

Additional thoughts about signals and noise that have been rattling around in my brain since first posting on this topic.

At the risk of becoming too ethereal about all this: before there is signal and before there is noise, there is data. Cold, harsh, cruelly indifferent data. It is only after raw data encounters some sort of filter or boundary – something that triggers a calculation to evaluate what that data means or whether it is relevant to whoever is on the other side of the filter – that it begins to be characterized as “signal” or “noise.”

Since we’re talking about humans in this series of posts, that filter is an amazingly complex system built from both physiological and psychological elements. The small amount of physical data that hits our senses and actually makes it to our brains is then filtered by beliefs, values, biases, attitudes, emotions, and those pesky unicorns that can’t seem to stop talking while I’m trying to think! It’s after all this processing that data has now been sorted according to “signal” (what’s relevant) and “noise” (what’s irrelevant) for any particular individual. Our individual systems of filters impart value judgments on the data such that each of us, essentially, creates “signal” and “noise” from the raw data.

That’s a long winded way to say:

data -> [filter] -> signal, noise

Now apply this to everyone on the planet.

data -> [filter 1] -> signal 1, noise 1

data -> [filter 2] -> signal 2, noise 2

data -> [filter n] -> signal n, noise n
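Here is a minimal sketch of that model in code – my own illustration, with made-up data and filters – showing how the same raw data splits into different signal/noise sets depending on whose filter it passes through.

    def partition(data, is_relevant):
        """Split raw data into (signal, noise) according to one person's filter."""
        signal = [item for item in data if is_relevant(item)]
        noise = [item for item in data if not is_relevant(item)]
        return signal, noise

    raw = ["healthy keto recipe", "quieting the unicorns in my head", "celebrity gossip"]

    # Two different filters carve the same raw data into different signal/noise sets.
    bob_signal, bob_noise = partition(raw, lambda item: "keto" in item)
    my_signal, my_noise = partition(raw, lambda item: "unicorns" in item)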

As an example, Google – itself a filter – is a useful one. Let’s assume for a moment that Google is some naturally occurring phenomenon and not a filter created by humans with their own set of filters driving what it means to create a let’s-be-evil good search engine. To retrieve 1,000,000 pieces of information, my friend Bob entered search criteria of interest to him, i.e., “filter 1.” Maybe he searched for “healthy keto diet recipes”. Scanning those search results, I determine (using my “filter 2”) that 100% of the search results are useless because my filter is “how do i force the noisy unicorns in my head to shut the hell up”. The Venn diagram of those two search results is likely to show a vanishingly small set of relationships between the two. (Disclaimer: I have no knowledge of the carbohydrate content of unicorns nor how tasty they may be when served with capers and a lemon dill sauce.)

Google may return 1,000,000 search results. But only a small subset is viewable at a time. What of the rest of the result set that I know nothing about? Is it signal? Is it noise? Is it just data that has yet to be subjected to anyone’s system of filters? Because Google found stuff, does that make it signal? Accepting all 1,000,000 search results as signal seems to require a willingness to believe that Google knows best when it comes to determining what’s important to me. This would apply to any filter not our own.

All systems for distinguishing signal from noise are imperfect, and some of us on the Intertubes are seeking ways to better tune our particular systems. The system I use lets non-relevant data fall through the sieve so that the gold nuggets are easier to find. Perhaps at some future date I’ll unwittingly re-pan the same chunk of data through an experience-refined sieve and a newly relevant gem will emerge from the dirt. But until that time, I’ll trust my filters, let the dirt go as noise, and lurch forward.

Improving the Signal to Noise Ratio – In Defense of Noise

[This post follows from Improving the Signal to Noise Ratio.]

All signal all the time may not be a good thing. So I’d like to offer a defense for noise: It’s needed.

Signal is signal because there is noise. Without the presence of noise we risk living in the proverbial echo chamber. When we know what’s bad, we are better equipped to recognize what’s good. I deliberately tune into the noise on occasion for no other reason than to subject my ideas to a bit of rough and tumble. It’s why I blog. It’s why I participate in several select forums. “Here’s what I think, world. What say you?”

Of course, noise is noise because there is signal. Once we’ve had an experience of “better” we are more skilled at recognizing what’s bad. I remember the food I grew up on as being good, but today I view some of it as poison (Wonder Bread, anyone?). And there are subjects for which I no longer check out the noise. The exposure is too harmful.

There are subjects for which I seem to be swimming in noise and casting around for any sort of signal that suggests “better.” I’m recalling a joke about the two young fish who swim past an older fish. The older fish says to the younger fish, “The water sure is nice today.” A little further on, one of the young fish asks the other, “What’s water?” I’m hoping to catch that older fish in my net. He knows something I don’t.

To understand what I mean by noise being necessary, it is important to understand the metaphor I’m using – where it applies and where it doesn’t.

Taking the metaphor literally, in the domain of electrical engineering, for example, the signal to noise ratio is indeed an established measure with clear unit definitions as to what is reflected by the ratio – decibels, for example. In this domain the goal is to push always for maximum signal and minimum noise.

In the world of biological systems, however, noise is most definitely needed. One of many examples I can think of relates to an underlying driver of evolution: mutations. In an evolving organism, anything that would potentially upset the genetic status quo is a threat to survival. Indeed, most mutations are at best benign or at worst lethal, such that the organism or its progeny never survive and the mutation is selected against as evolutionary “noise.”

However, some mutations are a net benefit to survival and add to the evolutionary “signal.” We, as 21st Century homo sapiens, are who we are because of an uncountable number of noisy mutations that we’ll never know about because they didn’t survive. Even so, surviving mutations are not automatically “pure” signal. There are “noisy” mutations, such as that related to sickle cell anemia. Biological systems can’t recognize a mutation as “noise” or “signal” before the mutation occurs, only after, when they’ve been tested by the rough and tumble of life. This is why I speak in terms of “net benefit.”

For humans trying to find our way in the messy, sloppy world of human interactions and thought, pure signal can be just as undesirable as pure noise. I’ll defer to John Cook, who I think expresses more succinctly the idea I was clumsily trying to convey:

If you have a crackly recording, you want to remove the crackling and leave the music. If you do it well, you can remove most of the crackling effect and reveal the music, but the music signal will be slightly diminished. If you filter too aggressively, you’ll get rid of more noise, but create a dull version of the music. In the extreme, you get a single hum that’s the average of the entire recording.

This is a metaphor for life. If you only value your own opinion, you’re an idiot in the oldest sense of the word, someone in his or her own world. Your work may have a strong signal, but it also has a lot of noise. Getting even one outside opinion greatly cuts down on the noise. But it also cuts down on the signal to some extent. If you get too many opinions, the noise may be gone and the signal with it. Trying to please too many people leads to work that is offensively bland.

The goal in human systems is NOT to push always for maximum signal and minimum noise. For example, this is reflected in Justice Brandeis’s comment: “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the process of education, the remedy to be applied is more speech, not enforced silence.” So my amended thesis is: In the domain of human interactions and thought, noise is needed by anyone seeking to both evaluate and improve the quality of the signal they are following.

A final thought…

If we were to press for eliminating as much “noise” as possible from human systems much like the goal for electrical noise, I’m left with the question “Who decides what qualifies as noise?”

Improving the Signal to Noise Ratio

A question was posed, “Why not be an information sponge?”

I’d have to characterize myself as more of an information amoeba – (IIRC, the amoeba is, by weight, the most vicious life form on earth) – on the hunt for information and after internalizing it, going into rest mode while I decompose and reassemble it into something of use to my understanding of the world. Yum.

More generally, to be an effective and successful consumer of information these days, the Way of the Sponge (WotS, passive, information washes through them and they absorb everything) is no longer tenable and the Way of the Amoeba (WotA, active, information washes over them and they hunt down what they need) is likely to be the more successful strategy. The WotA requires considerable energy but the rewards are commensurate with the effort. WotS…well, there’s your obsessive processed food eating TV binge-watcher right there. Mr. Square Bob Sponge Pants.

What’s implied by the WotA vs the WotS is that the former has a more active role in optimizing the informational signal to noise ratio than the latter. So a few thoughts to begin with on signals and noise.

Depending on the moment and the context, one person’s signal is another person’s noise. Across the moments that make up a lifetime, one person’s noise may become the same person’s signal and vice versa. When I was in high school, I found Frank Sinatra’s voice annoying and not something to be mingled with my collection of Mozart, Bach, and Vivaldi. Today…well, to disparage the Chairman of the Board is fightin’ words in my house. Over time, at least, noise can become signal and signal become noise.

But I’m speaking here of the signal’s quality and not its quantity (i.e., volume).

Some years ago I came across Stuart Kauffman’s idea of the adjacent possible:

It may be that biospheres, as a secular trend, maximize the rate of exploration of the adjacent possible. If they did it too fast, they would destroy their own internal organization, so there may be internal gating mechanisms. This is why I call this an average secular trend, since they explore the adjacent possible as fast as they can get away with it.

This has been interpreted in a variety of ways. I carry this around in my head as a distillation from several of the more faithful versions: Expand the edge of what I know by studying the things that are close by. Over time, there is an accumulation of loosely coupled ideas and facts that begin to coalesce into a deeper meaning, a signal, if you will, relevant to my life.

With this insight, I’ve been able to be more deliberate and directed about what I want or need to know. I’ve learned to be a good custodian of the edge and of what I allow to occupy space on that edge. These are my “internal gating mechanisms.” It isn’t an easy task, but there are some easy wins. For starters, learning to unplug completely – especially from social media and what tragically passes for “news reporting” or “journalism” these days.

The task is largely one of filtering. I very rarely directly visit information sources. Rather, I leverage RSS feeds and employ filtering rules. I pull information of interest rather than have it pushed at me by “news” web sites, cable or TV channels, or newspapers. While this means I will occasionally miss some cool stuff, it’s more than compensated by the boost in signal quality achieved by excluding all the sludge from the edge. I suspect I still get the cool stuff, just in a slightly different form or revealed by a different source that makes it through the filter. In this way, it’s a matter of modulating the quantity such that the signal is easier to find.
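For what it’s worth, here is a minimal sketch of the pull-plus-filter idea, assuming Python’s third-party feedparser package. The feed URLs and keyword rules are placeholders, not my actual sources or rules.

    import feedparser  # third-party package: pip install feedparser

    # Placeholder feeds and rules -- the point is pull-plus-filter, not these particulars.
    FEEDS = ["https://example.com/essays.rss", "https://example.org/research.atom"]
    KEEP = ("complexity", "agile", "antifragile")   # likely signal
    DROP = ("celebrity", "outrage")                 # likely noise

    def looks_interesting(entry) -> bool:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        return any(word in text for word in KEEP) and not any(word in text for word in DROP)

    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            if looks_interesting(entry):
                print(entry.title, "->", entry.link)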

There is a caution to consider while optimizing a signal-to-noise ratio, something reflected in Kauffman’s comments around the rate of exploration for new ideas: “If they did it too fast, they would destroy their own internal organization…”

Before the Internet, before PCs were a commodity, before television was popular, it was much, much easier to find time to think. In fact, it was never something that had to be looked for or sought out. I think that’s what is different today. It takes WORK to find a quiet space and time to think. While my humble little RSS filters do a great job of keeping a high signal-to-noise ratio with all things Internet, accomplishing the same thing in the physical world is becoming more and more difficult.

The “attention economy,” or whatever it’s being called today, is reaching a truly disturbing level of invasion. It seems I’ve used more electrician’s tape to cover up camera lenses and microphones in the past year than I’ve used on actual electrical wires. The appliances and gadgets in the home with glowing screens crying out for Bluetooth or wifi access, like leeches seeking blood, are their own source of noise. This is my current battleground for finding the signal within the noise.

Enough about filtering. What about boundaries? Fences make for good neighbors, said someone wise and experienced. And there’s a good chance that applies to information organization, too. Keeping the spiritual information in my head separate from my shopping list probably helps me stop short of forming some sort of cult around Costco. (“All praise ‘Bulk,’ the God of Stuff!”)

An amoeba has a much more developed boundary between self and other than a sponge does, and that’s probably a net gain even with the drawback of the extra energy required to fuel that arrangement. Intellectually, we have our beliefs and values that mark where those edges between self and other are defined.

So I’ll stop for now with the question, “What are the strategies and mental models that promote permeability for desired or needed information while keeping, as much as possible, the garbage ‘out there?’”

 

Agile and Changing Requirements or Design

I hear this (or some version) more frequently in recent years than in past:

Agile is all about changing requirements at anytime during a project, even at the very end.

I attribute the increased frequency to the increased popularity of Agile methods and practices.

That the “Responding to change over following a plan” Agile Manifesto value is cherry-picked so frequently is probably due to a couple of factors:

  • It’s human nature for a person to resist being cornered into doing something they don’t want to do. So this value gets them out of performing a task.
  • The person doesn’t understand the problem or doesn’t have a solution. So this value buys them time to figure out how to solve the problem. Once they do have a solution, well, it’s time to change the design or the requirements to fit the solution. This reason isn’t necessarily bad unless it’s the de facto solution strategy.

The intent behind the “Responding to change” value, and the way successful Agile is practiced, does not allow for constant and unending change. Taken to its logical conclusion, nothing would ever be completed and certainly nothing would ever be released to the market.

I’m not going to rehash the importance of the preposition in the value statement. Any need to explain the relativity implied by its use has become a useful signal for me to spend my energies elsewhere. But for those who are not challenged by the grammar, I’d like to say a few things about how to know when change is appropriate and when it’s important to follow a plan.

The key is recognizing and tracking decision points. With traditional project management, decisions are built into the project plan. Every possible bit of work is defined and laid out on a Gantt chart, like the steel rails of a train track. Deviation from this path would be actively discouraged, if it were considered at all.

Using an Agile process, decision points that consider possible changes in direction are built into the process – daily scrums, sprint planning, backlog refinement, reviews and demonstrations at the end of sprints and releases, retrospectives, acceptance criteria, definitions of done, continuous integration – these all reflect deliberate opportunities in the process to evaluate progress and determine whether any changes need to be made. These are all activities that represent decisions or agreements to lock in work definitions for short periods of time.

For example, at sprint planning, a decision is made to complete a block of work in a specified period of time – often two weeks. After that, the work is reviewed and decisions are made as to whether or not that work satisfies the sprint goal and, by extension, the product vision. At this point, the product definition is specifically opened up for feedback from the stakeholders and any proposed changes are discussed. Except under unique circumstances, changes are not introduced mid-sprint and the teams stick to the plan.

Undoing decisions or agreements only happens if there is supporting information, such as technical infeasibility or a significant market shift. Undoing decisions and agreements doesn’t happen just because “Agile is all about changing requirements.” Agile supports changing requirements when there is good reason to do so, irrespective of the original plan. With traditional project management, it’s all about following the plan and change at any point is resisted.

This is the difference. With traditional project management, decisions are built into the project plan. With Agile, they are adapted in.