Thursday, April 30, 2009

The Agility Cycle - Part 3

In Part 1 of this series we discussed the Business Agility Cycle and then in Part 2 we derived the Software Agility Cycle from that by applying "the people factor" of Agile development to the business-agility cycle.

That "people factor" of Agile development essentially boils down to the notion of emergent behavior/structure through self-organization of collaborative "agents." The resulting discussion used a lot of jargon from complexity science and wasn't particularly easy to follow. Feedback from one reader even suggested the resulting "steps" in the agility-cycle came across as so much Zen/Yoga mumbo-jumbo.

To make matters worse, I wasn't exactly ultra-consistent in how I characterized the cycle:
  • one description had six steps (sense, see, socialize, swarm, show, share)
  • another "condensed" the most closely-related steps together (sense + see, socialize + swarm, show + share)
  • then the summary appeared to have four steps (evaluate, collaborate, validate, learn), which looks suspiciously similar to the Shewhart cycle of Plan-Do-Check-Act (though to be fair the differences between the two are important in meaning despite being only slight in appearance)
So perhaps the "jury" is still out on authoritatively characterizing those steps in the software agility cycle. Perhaps "strategize" is still better than "see from the perspective of the whole", even though doing the latter is really a prerequisite for being able to do the former correctly. And perhaps "swarming" and "socializing" are too readily misunderstood by those who don't already "grok" the whole notion of "emergence through self-organization" (and maybe don't really care to either).

The basic idea remains the same though: being "agile" means minimizing the response-time to some event or "stimulus" while maximizing the overall value of that response (and hence the efficiency and effectiveness of our behavior). This implies two basic things:
  1. We must have some regular means of becoming aware of the "significant" events in the first place, or else the "cycle" never even starts up.

  2. There are multiple such "cycles" going on, each of which forms a feedback-loop at its own level of "scale."

So how do we "sense and make sense of" these events that indicate the presence of a need/opportunity for change? The answer is feedback loops! We have to mindfully and intentionally set them up ourselves and make sure they happen regularly, and at the right frequency and level of scale.

This is in fact how the software-agility cycle fits into the larger business-agility cycle. If we think of the business-agility cycle(s) as something that takes place at the level of entire portfolios, product-lines, markets, programs and their governance, then ultimately:
  1. The very software project/product we're trying to be "agile" for came about in response to some higher-level business-need.

  2. And putting an "agile" project+team in place was really the "act" step of the business-agility cycle.

  3. The act of putting that agile project into motion is what prompted us to set-up the feedback-loops for software agility for that product or service.

What then do these feedback-loops look like and how do we put them into place? Well, they typically need to validate our knowledge and understanding of a need/opportunity against that of the user or consumer in some systematic fashion. For software agility, these feedback-loops are manifested by some of the agile software development practices we have become quite familiar with:
    Iterations: An iteration is one feedback cycle we use, and one of the larger-grained ones. At the end of the iteration, we demonstrate the results to the customer and get feedback about what we did right/wrong and what to focus on next. We also have Retrospectives to get feedback about our process so we can learn to improve it by inspecting & adapting.

    Stand-up Meetings: This is another feedback cycle, one that lets us hear from the workers in the trenches what problems and impediments there are. It is typically set up to happen daily.

    Continuous Integration: This is a feedback cycle that gives developers feedback on not just whether or not what they just coded works, but how well it does/doesn’t fit with what everyone else just implemented. It happens at least a couple of times per day per developer (if they are practicing CI properly, and not just doing daily/nightly builds).
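To make that loop concrete, here is a minimal, hypothetical sketch of what a CI check does on every commit. The two commands below are just stand-ins for a real project's build and test steps; the point is the shape of the loop: run everything, fail fast, report back.

```python
import subprocess
import sys

# Hypothetical stand-ins for a project's real build and test commands.
STEPS = [
    [sys.executable, "-c", "print('build ok')"],   # stand-in for the build
    [sys.executable, "-c", "print('tests ok')"],   # stand-in for the test suite
]

def run_ci():
    """Run every CI step in order; stop at the first failure."""
    for cmd in STEPS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # Fast feedback: fail the commit immediately and tell the team.
            return False
    return True
```

A real CI server wraps exactly this loop with triggers (run on each push) and notification (email, build lights), but the feedback cycle itself is no more than this.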

    Test-Driven Development: This feedback cycle forces you to validate your thinking about the test first (even watching it fail) before writing the code that passes it. It pushes the question of what you’re supposed to be doing to the very front of the programming task, requiring you to understand the requirements precisely and to design for testability:

    • When done by the programmer at the unit-level, TDD forces the granularity of this feedback cycle to be pretty darn small (often just hours, or less).

    • At a higher-level of granularity is Acceptance Test Driven Development (ATDD) where customer-acceptance criteria are represented as readable yet "executable requirements" in the form of automated tests. (These, in turn, drive the TDD cycle at the unit-level.)
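The red/green rhythm at the unit level can be sketched in a few lines. The `slug` helper and its test below are purely hypothetical, invented for illustration: in real TDD you would write the test first, run it, watch it fail, and only then write the implementation.

```python
import unittest

def slug(title):
    # Just enough code to make the test below pass -- no speculative
    # features, which is exactly the discipline TDD enforces.
    return title.strip().lower().replace(" ", "-")

class TestSlug(unittest.TestCase):
    # In TDD this test exists BEFORE slug() does; its first run fails (red),
    # and the implementation is written only to turn it green.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slug("The Agility Cycle"), "the-agility-cycle")

if __name__ == "__main__":
    unittest.main()
```

The same shape scales up to ATDD: there the test expresses a customer acceptance criterion rather than a unit-level behavior, and in turn drives several of these smaller red/green cycles.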

    Pair Programming: This is the most fine-grained of all the feedback loops mentioned above. It gives that second pair of eyes whose purpose is not to try and co-design the code so much as to ask questions about the clarity, correctness, and necessity of what the programmer is writing, and to maintain the strategic direction of that effort.
One picture that is particularly good at depicting several of these feedback-loops all working together is the agile development poster from VersionOne:
Unfortunately, to see the picture at larger size you'll need to request it from VersionOne (it is free, but you have to fill in a web-form to obtain it).

Every single one of the above practices establishes a systematic feedback-loop whose purpose is to “sense” some form of problem/opportunity by validating our current understanding of something against that of the consumer.
  • Each loop progresses through the software-agility cycle at its own level-of-scale.
  • And each one of them requires being able to “make sense” of the problem/opportunity after you’ve sensed it, by “seeing the problem in the context of the whole”
  • This requires us to think strategically about the end-goal before adding that unneeded abstraction, anticipating that not-yet-high-priority requirement, or fixing that urgent build-breakage with a band-aid that actually makes things harder for the next “consumer” downstream in the process.
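One way to see how these loops nest is to line them up from finest- to coarsest-grained. The cadences below are only typical values drawn from the practices discussed above, not prescriptions, and the business-cycle cadence at the end is my own rough assumption:

```python
# The nested feedback loops from this post, ordered finest to coarsest.
# Cadences are typical values, not rules; the last entry is an assumption.
FEEDBACK_LOOPS = [
    ("pair programming",               "seconds to minutes"),
    ("test-driven development",        "minutes to hours"),
    ("continuous integration",         "a few times per day per developer"),
    ("stand-up meeting",               "daily"),
    ("iteration demo + retrospective", "every iteration"),
    ("business-agility cycle",         "per release/portfolio review"),
]

def loops_finer_than(practice):
    """Return the names of the loops that cycle faster than `practice`."""
    names = [name for name, _ in FEEDBACK_LOOPS]
    return names[:names.index(practice)]
```

For example, everything finer-grained than the daily stand-up (pairing, TDD, CI) has already cycled several times before the team ever convenes to compare notes.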
So if the "secret sauce" of software agility comes from the "people factor" of creating emergent results through close collaboration, then the "secret recipe" for applying that sauce is the nested feedback-loops: they integrate the collaboration and resulting "emergent behavior" into adaptive cycles of activity that let us incrementally learn and evolve our understanding as we iterate through each cycle at each level of scale.
