Sunday, July 30, 2006

The New Rules: Agile beats Big

The July 24 issue of Fortune Magazine has "The New Rules" as its cover story, with the cover saying "Sorry Jack! Welch's Rules for Winning Don't Work Anymore (But We've Got 7 New Ones That Do)".

I think the new rules it discusses are very much about "Agile is better than Bigger!" and "Bigger isn't necessarily better!" The list of new rules is:
Old Rule: Big Dog Owns the Street.
New Rule: Being Agile is Best; Being Big can Bite You!

Old Rule: Be #1 or #2 in Your Market.
New Rule: Find a Niche, Create Something New.

Old Rule: Shareholders Rule.
New Rule: The Customer is King.

Old Rule: Be Lean and Mean.
New Rule: Look Out, Not In.

Old Rule: Rank your Players; Go with the A's.
New Rule: Hire Passionate People.

Old Rule: Hire a Charismatic CEO.
New Rule: Hire a Courageous CEO.

Old Rule: Admire my Might.
New Rule: Admire my Soul.
All in all I thought it was pretty fair-minded. There was even a sidebar to the article that gave Welch a chance to respond to the criticisms. You'll probably need to read the article for further insight into what exactly is meant by each of the "new rules" above. There was plenty of commentary across the industry on the article! (Just Google on "The New Rules" +Fortune +"Sorry Jack" and look through the results)

Monday, July 24, 2006

Agile SCM Principles - From OOD to TBD+CBV+POB

I finally finished a set of articles I'd been working on, off and on, for almost 10 years on the subject of "translating" principles of OOD into principles of SCM. See the following:
The principles of OOD translated into principles of Task-Based Development (TBD), Container-based Versioning (CBV), and Project-Oriented Branching (POB).

Here are the principles that I translated. Most of them are from Robert Martin's book Agile Software Development: Principles, Patterns, and Practices, but a couple of them are from The Pragmatic Programmers:


Here is what I ended up translating them into. Note that some of the principles translated into more than one principle for version control because they applied to more than one of changes/workspaces, baselines, and codelines. I'm not really thrilled about the names & acronyms for several of them and am open to alternative names & acronyms:

    General Principles of Container-Based Versioning
    The Content Encapsulation Principle (CEP) All version-control knowledge should have a single authoritative, unambiguous representation within the system that is its "container". In all other contexts, the container should be referenced instead of duplicating or referencing its content.
    The Container-Based Dependency Principle (CBDP) Depend upon named containers, not upon their specific contents or context. More specifically, the contents of changes and workspaces should depend upon named configurations/codelines.
    The Identification Insulation Principle (IDIP) A unique name should not identify any parts of its context, nor of its related containers (parent, child, or sibling), that are subject to evolutionary change.
    The Acyclic Dependencies Principle (ADP) The dependency graph of changes, configurations, and codelines should have no cycles.
    Principles of Task-Based Development
    The Single-Threaded Workspace Principle (STWP) A private workspace should be used for one and only one development change at a time.
    The Change Identification Principle (CHIP) A change should clearly correspond to one, and only one, development task.
    The Change Auditability Principle (CHAP) A change should be made auditably visible within its resulting configuration.
    The Change/Task Transaction Principle (CHTP) The granule of work is the transaction of change.
    Principles of Baseline Management
    The Baseline Integrity Principle (BLIP) A baseline's historical integrity must be preserved - it must always accurately correspond to what its content was at the time it was baselined.
    The Promotion Leveling Principle (PLP) Define fine-grained promotion-levels that are consumer/role-specific.
    The Integration/Promotion Principle (IPP) The scope of promotion is the unit of integration & baselining.
    Principles of Codeline Management
    The Serial Commit Principle (SCP) A codeline, or workspace, should receive changes (commits/updates) to a component from only one source at a time.
    The Codeline Flow Principle (CLFP) A codeline's flow of value must be maintained - it should be open for evolution, but closed against disruption of the progress/collaboration of its users.
    The Codeline Integrity Principle (CLIP) Newly committed versions of a codeline should consistently be no less correct or complete than the previous version of the codeline.
    The Collaboration/Flow Integration Principle (CFLIP) The throughput of collaboration is the cumulative flow of integrated changes.
    The Incremental Integration Principle (IIP) Define frequent integration milestones that are client-valued.
    Principles of Branching & Merging
    The Codeline Nesting Principle (CLNP) Child codelines should merge and converge back to (and be shorter-lived than) their base/parent codeline.
    The Progressive-Synchronization Principle (PSP) Synchronizing change should flow in the direction of historical progress (from past to present, or from present to future): more conservative codelines should not sync-up with more progressive codelines; more progressive codelines should sync-up with more conservative codelines.
    The Codeline Branching Principle (CLBP) Create child branches for value-streams that cannot "go with the flow" of the parent.
    The Stable Promotion Principle (SPP) Changes and configurations should be promoted in the direction of increasing stability.
    The Stable History Principle (SHIP) A codeline should be as stable as it is "historical": The less evolved it is (and hence more mature/conservative), the more stable it must be.

You can read the 2nd article to see which version-control principles were derived from which OOD principles. As I mentioned above, I'm not really thrilled about the names & acronyms for several of them and am open to alternative names & acronyms. So please share your feedback on that (or on any of the principles, and how they were "derived").
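To make one of these concrete: the Acyclic Dependencies Principle (ADP) can be checked mechanically. Here's a minimal Python sketch (the codeline names and the `has_cycle` helper are purely illustrative, not part of any real tool) that flags a cycle in a dependency graph of changes, configurations, and codelines:

```python
def has_cycle(deps):
    """Return True if the dependency graph contains a cycle.
    `deps` maps each node to the list of nodes it depends on."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / in-progress / done
    color = {n: WHITE for n in deps}

    def visit(n):
        color[n] = GRAY
        for m in deps.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True        # back-edge found: a cycle
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in deps)

# A hypothetical promotion hierarchy: each codeline depends on its parent.
ok  = {"task-branch": ["dev-line"], "dev-line": ["mainline"], "mainline": []}
bad = {"dev-line": ["mainline"], "mainline": ["dev-line"]}  # mutual dependency
print(has_cycle(ok), has_cycle(bad))  # → False True
```

The same depth-first check works whether the nodes are codelines, baselines, or change-packages; ADP just says the answer should always be False.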

Saturday, July 22, 2006

Agile SCM Principles - Design Smells

I finally finished a set of articles I'd been working on for almost 10 years on and off on the subject of "translating" principles of OOD into principles of SCM. See the following:
In an August 2005 blog-entry on SCM design smells, I tried to translate the design smells that Robert Martin wrote up in his book Agile Software Development: Principles, Patterns, and Practices. In the first of the two articles above, I think I was much more successful at translating them into the version-control domain. Here is what I came up with ...

Symptoms of Poor Version Control
Rigidity/Inertia:
The software is difficult to integrate and deploy/upgrade because every update impacts, or is impacted by, dependencies upon other parts of the development, integration, or deployment environment.

Fragility/Inconsistency:
Builds are easily “broken” because integrating new changes impacts other fixes/features/enhancements that are seemingly unrelated to the change that was just made, or changes keep disappearing/reappearing or are difficult to identify/reproduce.

Immobility/Inactivity:
New fixes/features/enhancements take too long to develop because configurations and their updates take a long time to integrate/propagate or build & test.

Viscosity/Friction:
The “friction” of software process against the development flow of client-valued changes is too high because the process has an inappropriate degree of ceremony or control.

Needless Complexity/Tedium:
Procedures and tasks are overly onerous and/or error-prone because of too many procedural roles/steps/artifacts, too fine-grained “micro” tracking/status-accounting, or overly strict enforcement of a rigid and inflexible workflow.

Needless Repetition/Redundancy:
The version-control process exhibits excessive branching/merging, workspaces or baselining in order to copy the contents of files, changes and versions to maintain multiple views of the codebase.

Opacity/Incomprehensibility:
It is difficult to understand and disentangle the branching/merging and baselining schemes into a simple and transparent flow of changes for multiple tasks and features developed by multiple collaborating teams working on multiple projects from multiple locations for multiple customers.

In my next entry I'll start describing the actual principles and their translations.

Thursday, July 20, 2006

Codeline Flow, Availability and Throughput

There has been an interesting discussion on codeline build+commit contention on the XP yahoogroup initiated by Jay Flowers' post about a proposed build contention equation ...

The basic problem is that there have been some commit contention issues where someone is ready to commit their changes, but someone else is already in the process of committing changes and is still building/testing the result to guarantee that they didn't break the codeline. So the issue isn't that they are trying to merge changes to the codeline at the same time; the issue is that there is overlap in the time-window it takes to merge+build+test (the overall integration process for "accepting" a change to the codeline).

Jay is being very agile to the extent that he wants to promote and sustain "flow" on the codeline (see my previous blog-entry on the 5 Cs of Agile SCM Codelines). He is looking at the [average] number of change-packages committed in a day, and taking into account build+test time, as well as some preparation and buffer time. Here the "buffer time" is to help reduce contention. It makes me think of the "buffer" in the Drum-Buffer-Rope strategy of critical-chain project management (CCPM) and theory-of-constraints (TOC).
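For illustration, here's a rough back-of-the-envelope sketch in Python of how commit rate and the merge+build+test window combine into contention. This is my own simplification with made-up numbers, not Jay's actual equation:

```python
def contention(commits_per_day, merge_build_test_minutes, workday_hours=8):
    """Rough odds that a developer who is ready to commit finds the
    codeline already occupied by someone else's merge+build+test window.
    Assumes commits arrive independently throughout the workday."""
    rate_per_min = commits_per_day / (workday_hours * 60.0)
    utilization = rate_per_min * merge_build_test_minutes
    return min(utilization, 1.0)   # fraction of the day the codeline is "busy"

# e.g. 10 commits/day with a 20-minute verify window:
print(round(contention(10, 20), 3))  # → 0.417
```

Even this crude model shows why buffer time matters: at 10 commits a day and a 20-minute window, the codeline is "busy" over 40% of the time, so collisions are routine rather than rare.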

Several interesting concepts were mentioned that seem to be closely related (and useful):
If we regard a codeline as a production system, its availability to the team is a critical resource. If the codeline is unavailable, it represents a "network outage" and critical block/bottleneck of the flow of value through the system. This relates to the above as follows:
  • Throughput of the codeline is the [average] number of change "transactions" per unit of time. In this case we'll use hours or days. So the number of change-tasks committed per day or per hour is the throughput (note that the "value" associated with each change is not part of the equation, just the rate at which changes flow thru the system).

  • Process Batch-size is all the changes made for a single change-task to "commit" and ...
  • Transfer Batch-size would be the number of change-tasks we allow to be queued-up (submitted) prior to merging+building+testing the result. In this case, Jay is targeting a one change-task per commit (which is basically attempting single-piece flow).

  • Processing-time is the average duration of a development-task from the time it begins up until it is ready-to-commit. And ...
  • Transfer-time is the time it takes to transfer (merge) and then verify (build+test) the result.

  • Takt time in this case would regard the developers as the "customers" and would be (if I understand it correctly) the [average] number of changes the team can complete during a given day/hour if they didn't have to wait around for someone else's changes to be committed.

  • System outage would occur if the codeline/build is broken. It could also be unavailable for other reasons, like if the network, hardware, or version-control tool was "down", but for now let's just assume that outages are due to failure of the codeline to build and/or pass its tests (we can call these "breakages" rather than "outages" :-)

  • MTTR (Mean-time to repair) is the average time to fix codeline "breakage," and ...
  • MTBF (Mean-time before failure) is the average time between "breakages" of the codeline
Note that if full builds (rather than incremental builds) are used for verifying commits, then build-time is independent of the number of changes. Also note that it might be useful to capture the [average] number of people blocked by a "breakage," as well as the number of people recruited (and total effort expended) to fix it. That will help us determine the severity (cost) of the breakage, and whether or not we're better off trying to have the whole team try to fix it, or just one person (ostensibly the person who broke it), or somewhere in between (maybe just the set of folks who are blocked).
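A quick sketch of the arithmetic (Python, with hypothetical numbers of my own choosing): the classic availability formula applied to the codeline, plus a crude person-hours cost per breakage:

```python
def codeline_availability(mtbf_hours, mttr_hours):
    """Classic availability: the fraction of time the codeline is 'green'."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def breakage_cost(mttr_hours, people_blocked, people_fixing):
    """Person-hours lost per breakage: those blocked plus those repairing."""
    return mttr_hours * (people_blocked + people_fixing)

# Hypothetical: a breakage every 40 working hours, 2 hours to repair.
avail = codeline_availability(mtbf_hours=40, mttr_hours=2)
print(round(avail, 3))                                      # → 0.952
print(breakage_cost(2, people_blocked=3, people_fixing=1))  # → 8
```

Multiplying availability by the raw commit rate then gives an effective throughput, which is one way to quantify the "service level" of a codeline.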

Anyway, it's an interesting service-model of codeline availability and reliability for optimizing the throughput of codeline changes and maximizing collaborative "flow."

Has anyone ever captured these kinds of measures and calculations before? How did you decide the desired commit-frequency and how did you minimize build+test times? Did you resort to using incremental builds or testing?

I think that giving developers a repeatable way of doing a private development build in their workspace, even if it's only incremental building+testing, gives them a safe way to fail early+fast prior to committing their changes, while sustaining flow.

I don't particularly care for the practice "build-breaker fixes build-breakage." At the very least I think everybody who is blocked should probably try to help (unless the number of people blocked is more than recommended size of a single team), and I'm sure the person who broke the build probably feels bad enough for causing the blockage (maybe even more so if multiple people help fix it). I think the build-breaker should certainly be a primary contributor to fixing the build and may be most familiar with the offending code, but they may need some help too, as they might not be as familiar with why/how the breakage happened in the first place since it slipped past them (unless of course it was "the stupid stuff" - which I suppose happens quite a bit :-)

So is anyone out there measuring the serviceability, availability, and reliability of their codelines? Are any of you using these other concepts to help balance the load on the codeline and maximize its throughput? I think that some of the more recent build automation tools (BuildForge, Maven, MS Build + TeamSystem, ParaBuild, etc.) on the market these days could help capture this kind of data fairly unobtrusively (except for maybe MTTR, and the number of people blocked and the people+effort needed to effect the repair).

Saturday, July 15, 2006

The 5C's of Agile SCM

Lean has its 5S technique. And while I'm certain there's a way to translate those into SCM terms (which I may try to do someday, if someone hasn't already), I'm thinking about five important C-words for Agile SCM:
  • Correctness -- the property that the current configuration of the codeline executes correctly and passes its tests.

  • Consistency -- the property that the current configuration of the codeline builds correctly.

  • Completeness -- the property that the current configuration contains all that it should, builds all that it should, and tests all that it should.

  • Cadence -- the property that the codeline has a regular rhythm/heartbeat/pulse that represents a healthy flow of progress (and creates a new resulting "current configuration" every time a new change is committed, integrated, and "promoted").

  • Collaboration -- the property that the balance of the above achieves a productive and sustainable degree of collaboration that serves as the source of value-generation.
I think that the above represents all the properties that need to be present to a significant degree in order for the codeline to achieve smooth flow and accumulate increasing business value at a steady rate.

Am I missing anything? What about Concordance (via audits or with stakeholders)? Or Customer? Content? Context? (dare I use the word "Control"?)

Monday, July 10, 2006

Agile CMMI and Dancing Elephants

[updated June 1, 2007]

On the surface, CMMI is definitely not very inviting to Agile. However, CMMI can be done in an agile fashion. If CMMI is something you have a need for, then for secrets of how to do it "Agile-style," and details of success stories and lessons learned, take a look at the following links:

Also see "Integrating Agile Methods", and "Teaching the Elephant to Dance: Agility Meets Systems of Systems Engineering and Acquisition" (and others) from the CSE 2005 Annual Research Review.

Friday, July 07, 2006

Trustworthy Transparency over Tiresome Traceability

If there were an Agile CM Manifesto, then this statement should be part of it!
Trustworthy Transparency over Tiresome Traceability

Note that my position on traceability is more "lean" than "agile," I suspect. I base this on the XP & Scrum centric views that were expressed in the March 2004 YahooGroup discussion thread Why Traceability? Can it be Agile? I think "tests over traceability" is probably a valid summary of the XP/Scrum perspective from that thread.

I think David Anderson and I would probably say something more along the lines of "transparency over traceability," where we acknowledge the important goals that traceability is trying to fulfill (I'm not sure the XP community embraces all of the "8 reasons" and "6 facets" that I identified in my paper on traceability dissected). David in particular has written in the past about "trustworthy transparency" and "naked projects" (projects that are so transparent and visible in their status/accounting that they seem "naked").

I also differ strongly with the many vocal opinions expressed in the XP community when it comes to the use of tools for tracking requests/changes: I'm strongly in favor of using a "good" tracking tool. I think index cards are a great and valuable "tool" for eliciting dialogue and interaction with the "customer" (and I use them for this purpose, along with post-it notes). But I believe index cards simply do not "cut it" as a serious means of storing, tracking, sorting, searching, and slicing & dicing development/change requests.

I do believe an extent of traceability is necessary, and that it's not necessarily "agile", but that it can be, and should be, "lean" and streamlined, and should serve the purpose of transparency, visibility and status-accounting rather than being a goal in itself. And I think there are several strategies and tactics that can be employed to achieve "lean" traceability in service to "trustworthy transparency and friction-free metrics."

I think that a "lean" approach of traceability would focus on the following:
  1. Flow: If one uses "single piece flow" and does changes at the granularity that TDD mandates, then software-level requirements, design, coding, and testing are all part of the same task, and tracking them to a single record-id in the change-tracking system and version-control tool would actually go a long way toward traceability (it's much more work & intermediate artifacts when those activities are all separated over time (different lifecycle phases), space (different artifacts), and people (different roles/organizations)). It's when traceability efforts noticeably interfere with "flow" that agilists will start screaming.

  2. Minimizing intermediate artifacts and other perceived forms of "waste" (overspecifying requirements or too much requirements "up front") because fewer artifacts means fewer things to trace.

  3. Collocating both people & artifacts (the former for communication, the latter for "locality of reference") for those artifacts that are deemed necessary.

  4. Coarse-Granularity and Modularity/Factoring of what is traced: tracing at the highest practical level of granularity (e.g., is it practical to trace to the individual requirement or to the use-case? To the line of code, or to the method/subroutine, or to the class/module?) - this would be about "simple design" and "(re)factoring" as it applies to the structure of the traced entities and their relationships.

  5. Transparent, frictionless automation of the terribly taxing and tiresome tedium of traceability: focus on taking the tedium out of manual traceability and have it streamlined and automated as much as possible, ideally happening seamlessly behind the scenes (like with Jane Huang's event-based traceability (EBT), or thru the use of a common environment "event" catcher within Eclipse or MS Team System server), probably using a task-based, test-driven (TDD), or feature-driven (FDD) approach.
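As a toy illustration of point 1 (and of the kind of frictionless automation point 5 calls for), here's a Python sketch that recovers task-to-file traceability straight from a commit log. The log entries, the TASK-nn naming convention, and the `trace_by_task` helper are all hypothetical, just to show how little machinery a task-based checkin policy needs:

```python
import re

# Hypothetical commit-log entries: (revision, message, changed files),
# with the task id embedded in the message, as a task-based
# checkin policy would require.
commits = [
    ("r101", "TASK-42: add login form validation", ["login.py", "test_login.py"]),
    ("r102", "TASK-42: fix validation edge case",  ["login.py"]),
    ("r103", "TASK-57: new report layout",         ["report.py"]),
]

def trace_by_task(log):
    """Group the changed files under the task id named in each commit message."""
    trace = {}
    for rev, msg, files in log:
        m = re.match(r"(TASK-\d+)", msg)
        if m:
            trace.setdefault(m.group(1), set()).update(files)
    return trace

print(trace_by_task(commits))
```

The point is that when change-tasks and commits are one-to-one, the trace falls out of data you already have, instead of being a separately maintained artifact.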
Many of these concepts and more are embodied in Sam Guckenheimer's recent book on Software Engineering with Microsoft Visual Studio Team System. I found this book to be surprisingly good (outstanding even), and not at all what I was expecting given the apparent tool/vendor-specific nature suggested by the title. The value-up paradigm and most of the other concepts and values in the book are very well aligned with agility while still meeting the needs of more rigorous ceremony in their software and systems engineering efforts.

I'll close with a description of a recent presentation by David Anderson on Changing the Software Engineering Culture with Trustworthy Transparency:
"Modern software tooling innovation allows the tracking of work performed by engineers and transparent reporting of that work in various formats suitable for everything from day-to-day management and team organization to monthly and quarterly senior executive reporting. Modern work item tracking is coupled to version control systems and aware of analysis, design, coding and testing transitions. This makes it not only transparent but trustworthy. Not only can a tool tell you the health of a project based on the state of completion of every work item, but this information is reliable and trustworthy because it is tightly coupled to the system of software engineering and the artifacts produced by it.

The age of trustworthy transparency in software engineering is upon us. Trustworthy transparency changes the culture in an organization and enables change that unleashes significant gains in productivity and initial quality. However, transparency and managing based on objective study of reality strains existing software engineering culture as all the old rules, obfuscation, economies of truth, wishful thinking and subjective decision making must be cast aside. What can you expect, how will you cope and how can you harness the power of trustworthy transparency in your organization?"
As someone with a strong Unix and Open-Source heritage, I've long regarded Microsoft as "the evil empire" and loathed their operating system and browser and ALM tools. But in the last 3 years or so they've acquired a number of people in the Agile and ALM community that I highly respect (Brian White, Sam Guckenheimer, David Anderson, Ward Cunningham, James Newkirk) and the products these folks have worked on look incredibly impressive to me (even tho not all of them are still with Microsoft), plus I'm quite impressed with the whole of their Software Factories vision and approach ...

I actually may have to start liking them (or at least part of them :-). Don't get me wrong! I'm still a big fan of Unix (and Mac OS/X), Open-Source, and more recently Eclipse, ALF and Corona; but the competing stuff from the folks in Redmond is looking more and more impressive to me. Working on those kinds of things with those people would be an incredible experience I think (now if only I could do that without having to relocate from Chicago or spend 25% or more of my time traveling ;-).

Wednesday, July 05, 2006

Leadership/EQ Rites of Passage and the Mythical Manager Month

A bit of a follow-up on my previous blog-entry about Matthew Edwards and his recently published book, Creating Globally Competitive Software: The Fundamentals for Regular People.

I wrote:
I have a lot of respect for Matt, he and I went thru a lot of "stuff" together over a very short+intense period (more on that in a moment) and managed to come through it while spreading a little bit of light. During that time I also pointed Matt in the direction of Agile development as a possible "way out of the madness", and he did his part to help make that a reality.
Here's the story on that ... I worked with Matt back in 1999-2002 on what was then a hideously dysfunctional "death march" project that we were trying to pull out of its own self-created and self-perpetuated hole. The product was an internal one, and Matt, a former testing guru, was one of my key customer reps. The project suffered from just about everything under the sun:
  • Bad management (failure to set+manage expectations & appropriate interfaces)
  • Dysfunctional customer & internal organization (warring tribes, turf wars, political silos, and a severe lack of trusting/trustworthy mgmt leadership),
  • Management that felt senior architects/designers aren't supposed to get their hands dirty in "coding"
  • A tech-lead with great technical & project knowledge/skill/experience and strong passion for quality design but with an equally great reluctance to lead, overly trusting and possessing piss-poor leadership & communication skills at that time (me)
  • Managers that had great communication skills, but no clue about successful software development, and no interest in learning it
  • A highly talented team of young, promising developers, but with a total lack of software development experience/maturity (which wouldn't necessarily be a bad thing if not combined with all of the above)
And so much more ... in fact that project managed to take two of the best-known worst practices ("the mythical man-month", and "too many chiefs/generals, not enough indians/soldiers") and combine them into an even worse one that I dubbed "The Mythical Manager-Month":
The Mythical Manager Month -- adding more management to a late & failing project just makes everything worse and everyone more miserable.
I have to say, that project really taught me a lot about leadership and communication, particularly ...
  • how leadership differs from management, and from cheerleading
  • the importance of planning your communication and having a communication plan
  • the huge impact of really good managers versus really bad ones,
  • the difference between credibility and trust
  • the difference between power/influence and authority
  • how incredibly selfish, two-faced, and despicably unethical some folks can be
  • how to recognize malevolent manipulators who appear to "befriend" you to gain your trust, but will betray and backstab to get what they want
  • and how to recognize (and handle) a demagogue masquerading as a "heroic manager."
The first two years of that project were both a painfully magnificent failure and a painfully magnificent teacher. It was definitely a leadership "rite of passage" for me, and leading the successful turnaround of the project (in which agility played a large part) was a deeply educational and visceral personal experience that has largely shaped my career & objectives since.

The books by Patrick Lencioni on team dysfunctions and how to overcome them, as well as organizational silos, politics & turf-wars, would have done me a world of good back then if they'd been available (and if I'd had enough previous appreciation of those problems to have read up on them and other works related to discovering and raising my Emotional Intelligence).

That project marked my transition from "unconscious incompetence" about leadership & communication to "conscious incompetence" and really motivated me to navigate the path to "conscious competence." I yearn for the day when it becomes unconscious competence.

I'm not quite there yet. It's been a long leadership journey (much longer in experience and learning than in actual years) since that project, and I still have a long ways to go. But these days my bookshelf at home is replete with just as many books about leadership, EQ, influence, and communication as my technical bookshelf at work is with books on software development, and I think about a lot more than just the technical strategies/techniques/practices and lessons learned in my day-to-day work.

Monday, July 03, 2006

Creating Globally Competitive Software

A friend of mine, Matthew Edwards, recently published a book on Creating Globally Competitive Software: The Fundamentals for Regular People. I can't wait to get my copy and start reading through it.

I have a lot of respect for Matt, he and I went thru a lot of "stuff" together over a very short+intense period (more on that in a later blog-entry) and managed to come through it while spreading a little bit of light. During that time I also pointed Matt in the direction of Agile development as a possible "way out of the madness," and he did his part to help make that a reality.


Since then Matt has had a few other "gigs" that have advanced his experience and insights into software development (in a very Gerry Weinberg-esque fashion). He later co-founded Ajilus, which works and consults in global software development with a strong socio-technical perspective, having embraced the ideas of Agility, Scrum, Theory of Constraints, and systems thinking about the organizational/social roots of most seemingly technical problems.

So I'm really looking forward to reading what Matt has to say, as someone who has seen all of that from many perspectives, and has seen the light regarding agility, collaboration, organization, globalization and how to convey those lessons to "regular people." As part of his bio, Matt writes:
"I consult, teach, speak, write and deliver in the software solution delivery space with a focus on helping teams simplify the software delivery lifecycle - and deliver. Time, cost, team solidarity and structures, organizational behavior, ability to deliver, pulling projects out of the hole ... everything is interdependent and is usually social, not technical."
-- Matthew Edwards,
http://www.ajilus.com/
Like I said, I'm definitely looking forward to reading through this one and seeing how it can help folks like me "connect" with "regular people."

Sunday, June 25, 2006

Nested Synchronization and Harmonic Cadences

I was reading David Anderson's weblog and his recent entry on good versus bad variation (which references an earlier blog-entry of mine on the same subject). Apparently this was a recurring theme at the recent Lean Summit in Chicago, and the consensus there was:
  • Organizing for routine work: Drive out variation (and automate profusely)
  • Organizing for innovative work: Encourage variation (and collaborate profusely)

One of the links was to Don Reinertsen's website (he is the author of Managing the Design Factory), and at the top of the page was the "tip of the month" for June 2006 on the subject of Synchronization:
The practical economics of different processes may demand different batch sizes and different cadences. Whenever we operate coupled processes using different cadences it is best to synchronize these cadences as harmonic multiples of the slowest cadence. You can see this if you consider how you would synchronize the arrival of frequent commuter flights with less frequent long haul flights at an airline hub.

Also, Mary Poppendieck was mentioning "Nested Synchronization" in the Lean Development YahooGroup while she was working on her latest book Implementing Lean Software Development: From Concept to Cash where she advised to use continuous integration and nested synchronization instead of infrequent, big-bang integration.

I think both of these apply directly to "Lean" SCM!
  • Harmonic cadences address nested synchronization of integration/build frequencies, both in the case of
    1. different types of builds (private build, integration build, release build), and ...
    2. different levels of builds (component builds, product-builds)
    3. and also in the case of connected supplier/consumer "queues" where builds or components are received from an (internal or external) supplier and incorporated into our own product/components builds.

  • Harmonic cadences would also address release-cycle planning for a product-line whose product releases are assembled from multiple (internal & external) component releases.

  • Nested synchronization would seem to apply to branching structures where development branches feed into integration/release branches and their relation to mainline branches, and the direction and frequency with which changes get merged or propagated across codelines.
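
Reinertsen's airline analogy can be sketched in a few lines of Python (the cadence numbers below are hypothetical, not from either source): cadences that are whole multiples of one another realign at every slower cycle, while non-harmonic cadences drift and rarely coincide.

```python
def sync_points(cadences, horizon):
    """Days within the horizon on which every build cycle completes together."""
    return [day for day in range(1, horizon + 1)
            if all(day % cadence == 0 for cadence in cadences)]

# Harmonic cadences: each slower build is a whole multiple of the faster ones
# (private build daily, integration build every 5 days, release build every 20).
print(sync_points([1, 5, 20], horizon=40))   # [20, 40] -- everything realigns at each release

# Non-harmonic cadences (every 5 vs. every 7 days) drift apart and
# only realign every lcm(5, 7) = 35 days.
print(sync_points([5, 7], horizon=40))       # [35]
```

With harmonic cadences, every release build absorbs a whole number of integration cycles; with the 5/7 pair, most integration points land mid-cycle for the other process.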

Of course, when you can manage without the "nesting", that is ideal for continuous integration. Continuous integration together with test-driven development seems to approximate what Lean calls one piece flow. An article from Strategos discusses when one-piece flow is and isn't applicable.

In the context of SCM, particularly continuous integration and TDD, one piece flow would correspond to developing the smallest possible testable behavior, then integrating it once it is working, and then doing the next elementary "piece", and so on. This is typically bounded by:
  1. the time it takes to [correctly] code the test and the behavior
  2. the time it takes to sync-up (merge) your code with the codeline prior to building+testing it, and ...
  3. the time it takes to verify (build + test) the result
Working in such extremely fine-grained increments might not always work well if the one-piece-flow cycle-time was dominated by the time to sync-merge or to build+test, and/or if it always had a substantially disruptive/destabilizing effect on the codeline.

In those two cases, if the time/cost "hit" is more or less the same regardless of the size/duration of the change, then the penalty per "batch" is roughly the same for a batch size of one piece as for a larger batch, and it makes sense to develop in larger increments before integrating and committing your code to the codeline.
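
That trade-off is essentially a batch-size economics problem. Here is a minimal sketch using made-up effort units (the cost model and numbers are mine, purely for illustration): the fixed sync-merge/build/test "hit" is amortized across the batch, while delay/destabilization cost grows with batch size.

```python
def cost_per_change(batch_size, transaction_cost, holding_cost):
    """Average cost of one change: the fixed sync-merge/build/test 'hit' is
    amortized across the batch, while delay/destabilization cost grows
    linearly with batch size (all numbers are hypothetical effort units)."""
    return transaction_cost / batch_size + holding_cost * batch_size

# A heavy 90-unit build+test hit favors batching up changes before committing;
# a cheap 2-unit hit makes one-piece flow economical.
for hit in (90, 2):
    best = min(range(1, 21), key=lambda n: cost_per_change(n, hit, holding_cost=1))
    print(f"transaction cost {hit}: best batch size = {best}")
```

The cheaper you make integration (fast builds, fast tests, small merges), the smaller the economic batch size, which is exactly why continuous integration approximates one-piece flow.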

Monday, June 19, 2006

Agile Metrics in the Agile Journal

The June issue of the Agile Journal is devoted to the subject of Agile Metrics. Check it out!

There is also a review of David Anderson's book Agile Management for Software Engineering. Little did I know that while I was working on the review, David would be honoring me with praise at his own weblog.

I swear I knew nothing of it when I wrote my review, and that David had no knowledge that I was writing the review of his book (much less what I would say in it). We simply share a very deep admiration and respect for each other's work and ideas.

Wednesday, June 14, 2006

Agile Ideation and Lean Innovation?

More on "agile futures" from some of my earlier posts on globalization 3.0 and extreme competition and how the only way to stay competitive will be to innovate faster and more frequently than the competition ...

So does that mean that the "most valuable features" to implement first will be the ones that are considered "innovative"? Before we can execute on doing agile development for innovative features, we have to have some kind of initial "innovation clearinghouse" in the organization where we have a buffer of potential innovation idea-candidates. Those "gestational" ideas will need to undergo some kind of evaluation to decide which ones to toss, which ones to develop a business case for, which ones to do some early prototyping of, which ones to "incubate," etc.

Eventually, I see two queues, one feeding the other. The "Candidate Innovations" queue will need to "feed" the agile/lean software development queue. Things on the candidate innovations queue will have to go thru some equivalent of iterating, test-first, pairing/brainstorming, refactoring, and continuous innovation integration so that the queue can take "raw" and "half-baked" ideas in the front and churn out fully-baked, concrete, actionable ideas to then feed the development queue.

So the one queue will exist to create "actionable knowledge" (ideation) and will then feed into the queue that cranks out "executable knowledge" in the form of working software. Given this two-queued system, how does this work when the "software queue" has both a request (product) backlog and a sprint (iteration) backlog? Lots of things on the product backlog might be viewed as waste. And yet if they have made it thru the "ideation" backlog to produce an actionable concept and business case, then that will indeed have value (but it will be perishable value).

What would Lean+TOC say about how to remove constraints, eliminate waste, and maximize throughput of the innovation flow that feeds the agile development flow? (I'm assuming the innovation flow would be a bigger bottleneck than the software development flow)

Friday, June 09, 2006

Extreme Economic Gloom and Doom

According to a number of different sources, the US economy is going to have its bottom fall out somewhere around 2010 due to a variety of reasons that are converging all around that same time:
  • Massive trade deficit, soaring personal and government debt, a housing bubble, runaway military expenditures, and skyrocketing healthcare costs with employers' insurance plans covering less and less these days (the usual)

  • Globalization 3.0 and the commoditization of knowledge-work & knowledge-workers

  • Peak oil supply will have been breached (some say it has already, others say it will happen anywhere between 2004 and 2010), resulting in soaring oil prices (far more than they are today) and the race for efficient mass production & distribution of low-cost alternative energy sources

  • Retirement of the "baby-boom" generation (starting in 2007 and peaking between 2010-2020) and its impact upon social security reserves (because of ERISA) and US supply of knowledge-workers

  • Global warming and depletion of the environment will reach the point of no return sometime between 2010 and 2020 (if you believe Al Gore in the recent documentary "An Inconvenient Truth")

  • Likelihood of global pandemic flu (possibly bird-flu, but possibly any other kind of flu) happening within the next 5-10 years, and its global impact on medical and industrial/business supply chains (how far away will we be from harnessing nano-biotechnology when it hits?)

I gleaned all of this just from browsing a bunch of books on amazon.com, like the following:

There are LOTS more saying the same things. On the other hand, a few authors hold out hope that we will finally focus on some of the right things (like the environment and alternative energy sources, and turning to nature itself for innovation):

These things are all converging together (coming to a "head") within the next 10 years. I wonder what the state of Agility will be like then ...
  • Will little/no inventory be desirable amidst the threat of global supply chain disruptions due to pandemic health crisis?
  • Or will agile business partnerships and the resulting "agile business ecosystems" somehow be "autonomic" by that time?
  • As for oil and transportation, might not the threat of pandemic flu end up fueling "virtual" travel and telecommuting?
  • Or will that just give people more time to use their cars for non-work reasons?
  • Who will want to go to the mall or the grocery store if they're worried about contracting life-threatening illnesses?
  • What about emerging markets that are going to "boom" but haven't yet? (Many say nanotechnology and biotechnology will do this eventually - but when?)

I won't be eligible for retirement for ~30 years, and within ~15 years I want to be able to finance a college education for my two children (less than 2 years apart in age). All this sort of makes me want to say "Beam me up, Scotty!", or "Why oh why didn't I take the blue pill!"

Monday, June 05, 2006

Vexed by Variation: Eliminate or Encapsulate (with TOC+Lean+GoF)

I had some positive feedback on my previous entry about Six Sigma and Good vs. Bad Variation.

The Six Sigma methodology is largely about eliminating or reducing [destructive] process variation. In the case of software design (and what should be the case in software process design, but all too often is not), we recognize that change and uncertainty are inevitable, and we use techniques to minimize and localize the impacts of changes. Three of the most common techniques come straight out of the classic Design Patterns book from the "Gang of Four":
  • Identify what changes and Encapsulate the thing that varies
  • Program to an interface, not to an implementation
  • Prefer composition over inheritance
These principles could just as easily apply to process design and the design of process-families (a Product-Family or Product-Line for a family of processes). I attempted this in an earlier blog-entry entitled CM to an interface, not to an implementation.
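
As a rough illustration of applying those three principles to process design (all class and method names here are invented for the example, not taken from any published source), a codeline could treat its commit policy as a pluggable interface rather than hard-wiring it:

```python
from typing import Protocol

class IntegrationPolicy(Protocol):   # program to an interface...
    def ready_to_commit(self, tests_passed: bool, peer_reviewed: bool) -> bool: ...

class ContinuousIntegration:         # ...not to an implementation
    def ready_to_commit(self, tests_passed, peer_reviewed):
        return tests_passed          # commit as soon as the build is green

class GatedIntegration:
    def ready_to_commit(self, tests_passed, peer_reviewed):
        return tests_passed and peer_reviewed

class Codeline:
    """The codeline *has* a policy (composition over inheritance), so the
    thing that varies -- the commit rule -- is encapsulated in one place."""
    def __init__(self, policy: IntegrationPolicy):
        self.policy = policy
    def try_commit(self, tests_passed, peer_reviewed):
        return self.policy.ready_to_commit(tests_passed, peer_reviewed)

print(Codeline(ContinuousIntegration()).try_commit(True, False))  # True
print(Codeline(GatedIntegration()).try_commit(True, False))       # False
```

Swapping the commit rule then means swapping one object, not rewriting (or re-documenting) the whole process.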

So how do we find this variation, and how do we know what to do with it? Both Lean and TOC give us some tools to do this (as does Six Sigma). Six Sigma's process-maps are similar to (and quite possibly borrowed from) Lean's Value-stream maps. These are one way to form the "current reality tree" of TOC's Thinking Process and then look for things like waste, non-conformance, or conflict (e.g. a "process" impedance mismatch).

When we find something, what do we do? ...
  • If it is waste, we attempt to eliminate it or reduce it using Lean.

  • If it is variation, we should ask if it is destructive variation (causing poor quality). Is the variation the cause of the problem? Or is it the inherent uncertainty and/or our inability to adapt in the face of change?

  • If the variation is destructive, seek to eliminate it. Sometimes automation helps to eliminate variation as long as there is enough certainty in the procedural details and outputs of what to automate.

  • If the variation is not destructive, or if the problem is uncertainty and/or our need to be resilient and adapt to change, then try to isolate and localize the variation by encapsulating it. Use the GoF principles and patterns to separate policy (process interface) from mechanism (procedural implementation).

  • If the problem is conflict, then look to see if it is destructive or constructive interference, or impedance mismatch.

  • Impedance mismatch needs to be corrected, and destructive interference (friction) needs to be eliminated or minimized.

  • Constructive interference, and possibly other forms of conflict may need to be retained, in which case we would again look to encapsulate the conflict by trying to separate policy from mechanism, or interface from implementation, and discovering the higher-level rules/forces that they still have in common.

In all the above cases, TOC's Five Focusing Steps can help with the identification of the appropriate patterns/practices to apply.

Comments? Did that make sense? Did it seem like a reasonable application of using OOD patterns and principles in conjunction with TOC, Lean and Six Sigma?

Tuesday, May 30, 2006

Simple ain't Easy: Myths and Misunderstandings about Simplicity

Obviously not all of us have the same idea of what Simple or Simplicity actually mean, specifically in the context of system design (including software, and processes). Here are some common misunderstandings that I frequently encounter about the meaning of "simple design":
"Simple" is not the same thing as "easy to do/understand."

Sometimes something that is "simple" is easy to do or easy to understand. Whether or not it is easy, is often more closely related to how familiar or intuitive it is. Eventually, it may be quite simple to do or understand. But initial attempts to do or understand it may be anything but easy!

The simpler solution may require us to learn something new, and think about something in a way that hasn't occurred to us before. Closed-minds will often close-doors on new ideas (simple or otherwise) because they simply don't want to entertain changing their current views and beliefs.


"Simple design" is not the same thing as "simple to develop/deploy."

If it's simple from the get-go, then it may perhaps be simple to develop/deploy. If the solution is already in place, then making it simpler may involve changing a lot of minds/behaviors as well as a lot of the system, and both of those may be anything but easy to do (especially changing minds/behaviors).


"Simple" is not the same thing as "good enough!"

Put another way, Simplicity != Sufficiency. "Good enough" has more to do with whether something is sufficiently workable "for now" while still fostering subsequent improvement or refinement for later. That doesn't mean the deployed result is simple/simpler; it just means we may be better served by getting to that point in an incremental and evolutionary (emergent) fashion.

In order for that to be true however, it means that the partial/incremental solution must be easy to subsequently change in order to refine and improve!!!

If I install something that is incomplete with the intent of making it more complete, and it is very hard/painful to change, then I may end up with the current state of affairs for a number of IT solutions and business processes: short-sighted, insufficient solutions that the organization defends and imposes on others, because it doesn't want to suffer the impact of the change that could bring relief from the suffering it has become accustomed to living with.


"Simple" is not the same thing as "simplistic!"

A simplistic solution often does not work! One definition of "simplistic" might be the false appearance of simplicity. It's not enough to seem/appear simple. It also has to work (successfully) in order to actually be simple!

Many times someone will discard or exclude a suggestion because it introduces something additional or new into the current design, and they don't want to add anything more/new in the name of simplicity; but that may be simplistic instead. If the new/added thing is a rightful part of the problem that needs to be solved, then its introduction is correcting an act of omission in the solution design that neglected something essential in the problem-domain.

Sometimes we exclude things that we don't want to see (in the name of simplicity) which are nonetheless a real-world element of the problem we need to solve. Dismissing them when ignoring them has failed to solve the problem is not simplicity; it is ignorance. It is okay to want something that is "stupidly simple," but not at the expense of being simply stupid!

If the result doesn't do what it's supposed to do when it's supposed to do it, it may seem simple, but, as Gerry Weinberg points out, it's likely that something crucial to the problem statement was omitted or misunderstood either in the design, or in the problem statement itself.


What is "simple" for one request may not be "simple" for the whole!

When faced with a single, seemingly simple request to enhance a system, the requestor may want the specific solution to be simple for their particular situation and perspective (this is sometimes called "point-based thinking"). But what usually needs to be simple is the resulting overall system. Making it simple from one view or situation may just compromise other parts (and stakeholders) of the system. That's not eliminating complexity; it's not even "sweeping it under the rug"; it's just sweeping it onto someone else's doorstep.

Note how a lot of these myths/misunderstandings are more about resistance to changing our thinking/behavior than about being simple.

The Agile Manifesto defines simplicity as "maximizing the amount of work not done." But I think that's a more accurate characterization of Lean than of simplicity.

Recently, I looked through a number of sources of information about the meaning of "simplicity" and its principles, and I came across a number of interesting resources:
After mulling-over all of those, I think it's fair to say that while "Simplicity" may be, well, "simple", truly understanding "simplicity" is in fact quite hard!
  • Simplicity involves being able to see the whole from a systems thinking perspective while at the same time being able to focus in on what is relevant and essential and how it impacts the rest of the system.
  • Sustainable simplicity often has to evolve or emerge on its own from a set of simple guiding rules.
  • The opposite of simplicity is complexity (as opposed to "hard" or "difficult" or "time-consuming" or "labor-intensive")
  • In mathematics, simplicity is often "elegance" and is more than just the intersection of "what is necessary" and "what is sufficient"
  • In architecture, "simplicity" is often synonymous with "beauty"
  • Hiding complexity isn't the same as removing complexity.
  • Many of the tools we use to manage complexity in systems design may in fact add more/new objects to hide complexity or separate concerns
  • Minimizing dependencies throughout a system is more critical to simplicity than minimizing the number/types of objects
  • Occam's Razor does not in fact say that the "simpler explanation is better" ... it says that the explanation that makes the fewest assumptions and poses the fewest hypotheticals (i.e., minimizes the number of "given" and "unproven" conditions) is preferable, because it is easier to test comprehensively in order to prove/disprove.

I think that true simplicity is about minimizing and managing overall complexity. Complexity in software design and development comes from the sheer size and complexity of the problem we are being asked to solve, and the richness and vastness of the interconnected maze of interactions within the system and between the system and its environment.
  • The overall complexity of the system is dominated far more by the interactions between the parts that make up the whole than it is by the parts alone.
  • For any non-trivial system, simplicity often has less to do with the number and kind of different things involved and more to do with the number and kind of interdependencies between them.
So achieving simplicity is less about managing "parts" of "things" or individual point-solutions, and is more about managing rules and relationships between the parts and things and solution-sets.

When dealing with large or complex systems (like most software, and software processes) the number of things (scale) and different types of things (diversity) that need to be managed is overwhelming. If we can come up with a modicum of modest, simple rules & principles to govern our design decisions in ways that help us minimize and manage interdependencies, eliminate constraints, and remove waste, then we are on the path to solving the real problem and meeting stakeholder needs in a way that is both simple and sustainable.

I'll close with my collection of favorite quotes on simplicity and simple design from the sources I culled above.


Everything should be made as simple as possible, but not simpler.
-- Albert Einstein
Three Rules of Work: Out of clutter find simplicity; From discord find harmony; In the middle of difficulty lies opportunity.
-- Albert Einstein
For every problem there is a solution which is simple, clean and wrong.
-- Henry Louis Mencken
Think simple as my old master used to say - meaning reduce the whole of its parts into the simplest terms, getting back to first principles.
-- Frank Lloyd Wright
Beauty of style and harmony and grace and good rhythm depend on simplicity.
-- Plato
The ability to simplify means to eliminate the unnecessary so that the necessary may speak.
-- Hans Hofmann
Making the simple complicated is commonplace; making the complicated simple, awesomely simple, that's creativity.
-- Charles Mingus
Everything is both simpler than we can imagine, and more complicated than we can conceive.
-- Goethe
The whole is simpler than the sum of its parts.
-- Willard Gibbs
The pure and simple truth is rarely pure, and never simple.
-- Oscar Wilde
Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius--and a lot of courage--to move in the opposite direction.
-- E. F. Schumacher
Besides the noble art of getting things done, there is the noble art of leaving things undone. The wisdom of life consists in the elimination of nonessentials.
-- Lin Yu Tang
Very often, people confuse simple with simplistic. The nuance is lost on most.
-- Clement Mok
You can't force simplicity; but you can invite it in by finding as much richness as possible in the few things at hand. Simplicity doesn't mean meagerness but rather a certain kind of richness, the fullness that appears when we stop stuffing the world with things.
-- Thomas Moore
The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.
-- Bertrand Russell
The aspects of things that are most important to us are hidden because of their simplicity and familiarity.
-- Ludwig Wittgenstein
Manifest plainness, Embrace simplicity, Reduce selfishness, Have few desires.
-- Lao-Tzu, Tao Te Ching
Simple things should be simple and complex things should be possible.
-- Alan Kay
The key to performance is elegance, not battalions of special cases. The terrible temptation to tweak should be resisted unless the payoff is really noticeable.
-- Jon Bentley and Doug McIlroy
... the purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.
-- Edsger W. Dijkstra
Simplicity and elegance are unpopular because they require hard work and discipline to achieve and education to be appreciated.
-- Edsger W. Dijkstra
Beauty is more important in computing than anywhere else in technology because software is so complicated. Beauty is the ultimate defense against complexity.
-- David Gelernter
Fools ignore complexity; pragmatists suffer it; experts avoid it; geniuses remove it.
-- Alan Perlis
Technical skill is mastery of complexity, while creativity is mastery of simplicity.
-- E. Christopher Zeeman
Architect: Someone who knows the difference between that which could be done and that which should be done.
-- Larry McVoy
One of the great enemies of design is when systems or objects become more complex than a person - or even a team of people - can keep in their heads. This is why software is generally beneath contempt.
-- Bran Ferren
A complex system that works is invariably found to have evolved from a simple system that worked.
-- John Gall
The most powerful designs are always the result of a continuous process of simplification and refinement.
-- Kevin Mullet
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
-- C.A.R. Hoare
Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away.
-- Antoine de Saint-Exupéry
Simple, clear purpose and principles give rise to complex intelligent behavior. Complex rules and regulations give rise to simple stupid behavior.
-- Dee Hock

Wednesday, May 24, 2006

Six Sigma and Good vs Bad Variation

I briefly touched on Six Sigma (the methodology, not the number/measure) in my previous blog-entry on Cost, Cruft and Constraints, and how Six Sigma methods are about reducing what I called destructive variation:
Six Sigma is about eliminating process variation, but not just any kind of process variation. It's about eliminating "destructive" variation. Destructive variation is value-subtracting variation (rather than value-adding).
Some go too far with Six Sigma and interpret it to mean that ALL process variation is "bad." I don't subscribe to that belief. I think there is intentional variation that adds value, as well as "essential" variation that is either unavoidable, or else emerges naturally (and beneficially) out of the creative & learning processes.

Many have tried for repeatability & reproducibility where it isn't always appropriate. The knowledge-creating aspects of software development aren't always appropriate places for procedural repeatability and removing variation; the feedback/validating aspects often are. I think software CM and testing are appropriate places for this: anything that is appropriate and desirable to automate (like building and testing) would be an appropriate candidate for removing variation and for Six Sigma techniques.

So just like not all conflict is bad, not all variation is bad! And Six Sigma efforts should (I think) limit their focus to destructive variation in the portions of their processes where it makes sense to have both repeatable procedures and reproducible results.

I think I'm actually starting to come up with a combination of Agile, Lean, TOC and Six Sigma that is both cohesive and coherent. I credit David Anderson with starting me down that path. (Look out! CMMI might be next :-)

Tuesday, May 23, 2006

Business Agility Defined

The last few blog entries on agile definitions, and Agile + Lean + TOC, have inspired me to proffer up this alliterative definition of Agility in a business context.

Business Agility is ...
Rapid Response to Change with Optimal Efficiency in Motion, Economy of Effort, Energy in Execution, and Efficacy of Impact!

Is that too verbose? Is it enough to say simply:
Rapid response with optimal efficiency, economy, energy and efficacy!

Note that I don't say anything above about a keen sense of timing & awareness of change in one's Environment. Should it? What about Entrusting and Empowering others?

Monday, May 22, 2006

Cost, Cruft and Constraints

My earlier blog-entry on "Feedback, Flow, and Friction" met with mixed reviews.

Many commented that agile development is ultimately about lowering the cost of change. Others noted that feedback is important, but unless you respond to it and take appropriate action, it's not the be-all and end-all it may seem. A few felt that "friction" wasn't quite close enough to "constraints."

It seems when it comes right down to it, Agile, Lean and TOC are all about trying to eliminate and/or remove the following:
  • Cost-of-change: Agile is about reducing the cost of change by making change easy and catching the need to change much earlier thru continuous feedback and close, frequent stakeholder interaction.

  • Cruft: It (cruft) is waste! And Lean is all about eliminating waste in order to maximize flow.

  • Constraints: TOC is all about eliminating constraints, and using the five focusing steps to do it.
While we're at it - Six Sigma is also about eliminating something: variation (but not just any kind of variation). It's about eliminating "destructive" variation. Destructive variation is value-subtracting variation (rather than value-adding).

The phrase "Cost, Cruft and Constraints" doesnt sound as attractive as "Feedback, Flow and Friction." A large part of that may be due to its nonconstructive focus on what to eliminate rather than on what to create.

For each thing I'm allegedly eliminating, I'm gaining something else:
  • Reducing the cost-of-change makes it easier to accommodate change and be adaptive/responsive

  • Eliminating waste helps maximize flow of the production pipeline.

  • Eliminating constraints helps maximize throughput

  • Eliminating destructive variation helps maximize quality in terms of correctness, reliability, availability, operability, maintainability, etc.

Monday, May 15, 2006

Pragmatic Multi-Variant Management

I had a rather lengthy post on this subject on the Pragmatic Programming Yahoo group ...

My advice as a CM guy is you do NOT want to use branches or branching to solve this problem. If you have to manage/support multiple "configurations" of functionality for multiple markets/customers, branching is just about the worst way to do it (and this is coming from someone who is an advocate of branching, when done the right way for the right reasons).

The issue is likely to be one of "Binding Time". In your case, you would prefer the binding-time to be either at build-time (such as conditional compilation or linking), or release-engineering time (packaging up the "right" sets of files, including configuration files, for distributing to a customer), install/upgrade-time, or run-time.

One of the preferred ways to do this is with business rules, particularly for decisions involving "variable" policies and/or functionality. With GUIs, depending on the kind of variation, you might resort to either conditional compilation or simply creating separate source files (instead of branching the same file).
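
As a minimal sketch of what a business-rules approach to run-time binding might look like (the rules table, tier names, and percentages below are all invented for illustration): one codebase, with per-customer behavior selected from a rules table instead of per-customer branches.

```python
# One codebase; per-customer behavior is bound at run time by a rules table
# instead of per-customer branches. (Tier names and rules are hypothetical.)
RULES = {
    "default":    {"discount_pct": 5,  "show_beta_ui": False},
    "enterprise": {"discount_pct": 15, "show_beta_ui": True},
}

def rules_for(customer_tier):
    # Unknown tiers fall back to the default policy.
    return RULES.get(customer_tier, RULES["default"])

def quote(price, customer_tier):
    return price * (100 - rules_for(customer_tier)["discount_pct"]) // 100

print(quote(100, "default"))     # 95
print(quote(100, "enterprise"))  # 85
```

Adding a new customer variant then means adding a row to the table (or a config file), not merging a branch.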

There were some substantial discussions on this topic on the CMCrossroads forums that yielded many practical insights and further resources. I recommend them. fantamango77 wrote:
I can say I tried out different strategies in earlier projects already. But not with such a big project. And none of the ways I took made me happy. It always ended up in a kind of hell: Configuration hell, branching hell, inheritance hell.
Yes - and some of those "circles" of h*ll are far worse than others when you look at the overall costs and effort. The bottom-line is that if you have to support multiple "variants" rather than a single evolving "line", you are adding complexity, and there is no way you are going to be able to sweep it under the rug or make it go away.

So it all comes down to which techniques and strategies in which "dimensions" of the project/product will prove most effective at minimizing and managing dependencies, effort and impact by most effectively allowing you to leverage proven principles such as encapsulation, cohesion, modularity, abstraction, etc. in the most efficient ways.

Think about what aspects or dimensions of your project/product are required to be "variable" in order for you to support multiple variants. Do your "variants" need to differ ...
  • in functional/behavioral dimensions
  • along organizational boundaries
  • along multiple platforms/environments
  • along temporal (evolution) dimensions
  • along project dimensions
Figure out which one or two of these are the primary "dimensions" of variability you need to support. Then find the set of mechanisms and techniques that apply to it.

For example, if you need to allow variability primarily along functional and environmental "dimensions", then which of those "dimensions" does version branching operate within? Version branching operates primarily within the space of evolution/time (concurrent, parallel, and distributed development).

So temporally-based techniques are not going to be best suited to handling variation in the behavioral or environmental dimensions, as the isolation/insulation they provide does not most effectively minimize, localize or encapsulate the kinds of dependencies in the non-time-based aspects of the system.

Differences in policy and/or mechanism are typically best handled using a business-rules approach to deliver a single codebase with multiple possible configurations of rules and rule-settings.

Differences in platforms are best handled by well known design and architecture patterns like Wrapper-Facade, and numerous patterns from the Gang-of-Four design patterns book.

Differences in behavior may be best handled by functional/feature selection and deselection "design patterns" like Configurator, to enable or disable features and/or services at post-development binding-times.
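
A Configurator-style mechanism might be sketched like this (a toy illustration under my own assumptions, not the published pattern's actual interface; all names are invented): feature implementations are registered once, and configuration read at install/run time decides which ones are live.

```python
class Configurator:
    """Feature implementations are registered once; which ones are live is
    decided by configuration read at install/run time, not at build time."""
    def __init__(self, enabled_features):
        self._enabled = set(enabled_features)
        self._features = {}

    def register(self, name, implementation, fallback=lambda *args: None):
        # Disabled features get a benign fallback instead of the real code path.
        self._features[name] = implementation if name in self._enabled else fallback

    def call(self, name, *args):
        return self._features[name](*args)

# e.g. the enabled set would come from an install-time or deploy-time config file
cfg = Configurator(enabled_features={"export"})
cfg.register("export", lambda data: f"exported {len(data)} rows")
cfg.register("audit",  lambda data: "audited", fallback=lambda data: "audit disabled")
print(cfg.call("export", [1, 2, 3]))  # exported 3 rows
print(cfg.call("audit", []))          # audit disabled
```

The same binaries ship to every market/customer; only the enabled-feature set varies per deployment.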

Inheritance may be useful in some cases, if the type of configuration needed really does fit a single hierarchical model of increasing specialization. In other cases, an aspect-oriented approach might be better.

Also think about the following in terms of what needs to "vary" and what needs to stay the same:
  • Interface -vs- Implementation -vs- Integration
  • Container -vs- Content -vs- Context
This kind of commonality and variability analysis helps isolate the fundamental dimensions or aspects of variation that need to apply to your project. If something needs to be able to vary while something else doesn't, then "encapsulate the thing that varies" using techniques that separate interface from implementation (or implementation from integration, etc.) in ways that "keep it [structure] shy, DRY, and tell the other guy."

You might end up using a combination of strategies depending on the different aspects of variation you require and the "dimension" in which each one operates.