Friday, January 27, 2006

Kandt's SCM Best-Practices: Tool Practices

More from the paper Software Configuration Management Principles and Best Practices, by Ronald Kirk Kandt, appearing in the Proceedings of PROFES 2002, the 4th International Conference on Product-Focused Software Process Improvement, Rovaniemi, Finland, December 2002.

In this article, Ronald Kandt describes ten basic principles that support configuration management activities, and then goes on to describe twenty-three "fundamental" practices, which are split across four categories: Management Practices, Quality Practices, Protection Practices, and Tool Practices.

In this entry, I'll enumerate Kandt's Tool Best-Practices for SCM (in priority order, with the most important ones listed first):
Practice 19:
Check code in often

Kandt goes on to explain "This practice should be constrained when checking in code on the primary development branch. That is, developers should only check in working versions of code to a [primary] development branch."

The main reason given is to guard against loss of changes by making sure the code is in the repository. So although initially it might seem this practice is about committing in small tasks (which I'm sure Kandt would approve of), that doesn't appear to be what he's emphasizing. This really seems to be about frequent use of "Private Versions" and then, before doing a Task-Level Commit, making sure one does a Private Build and runs Unit Tests and Smoke Tests as part of the Codeline Policy.

In other words: "If it ain't checked-in, it don't exist!"
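
For what it's worth, the gate implied by Kandt's constraint can be sketched in a few lines of shell. This is only a sketch: `run_private_build` and `run_smoke_tests` are hypothetical stand-ins for whatever commands your project's Codeline Policy actually requires.

```shell
# Hypothetical pre-commit gate for the primary development branch.
# run_private_build and run_smoke_tests are placeholders for your
# project's actual private-build and smoke-test commands.
safe_commit() {
  run_private_build || { echo "private build failed - not committing"; return 1; }
  run_smoke_tests   || { echo "smoke tests failed - not committing"; return 1; }
  echo "all checks passed - OK to commit to the codeline"
}
```

Frequent check-ins to a private version/branch need no such gate; the gate applies only when promoting work to the shared codeline.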

Practice 20:
Configuration management tools should provide patch utilities

The stated benefit here is to support incremental release/deployment and also remotely submitted changes over slow telecommunication connections.

Practice 21:
Do not work outside of managed workspaces

Seems like the "Private Workspace" pattern to me.

Practice 22:
Do not share workspaces

This is sort of reinforcing that the workspace should be private rather than shared. Having said that, while I agree shared workspaces (not including Pair-Programming) shouldn't usually be the norm, I have seen cases where they can work quite well.

In order for this to work, there must be an explicit "protocol" agreed upon up-front by all the contributors to decide who is doing what, when, and to what (not just who is modifying which files when, but who is building what else and when) and/or how to rapidly communicate who is doing what when. See also work by Andre van der Hoek et al. on Continuous Coordination: A New Paradigm for Collaborative Software Engineering Tools and related work on the "Palantir" project.

Practice 23:
When developing software on a branch other than the primary branch, regularly synchronize development with the development branch

This seems like mainlining (the Mainline pattern) and rebasing (a.k.a. Workspace Update). Only I would say that regularly synchronizing should apply to private workspaces/sandboxes even when they aren't using a private or task branch. The more general rule would seem to be the Continuous Update pattern.
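
The Continuous Update rhythm can be sketched as a tiny shell routine. The names `scm_update`, `run_build`, and `run_tests` are hypothetical stand-ins for your tool's update/rebase command and your project's build and test commands:

```shell
# Continuous Update, sketched: pull the codeline's latest changes into
# the private workspace often, then rebuild and retest so integration
# breakage surfaces while it is still small.
sync_workspace() {
  scm_update || { echo "update failed"; return 1; }
  run_build  || { echo "build broken after update"; return 1; }
  run_tests  || { echo "tests broken after update"; return 1; }
  echo "workspace is current and still working"
}
```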
Th-th-th-that's all from Mr. Kandt's list of 10 SCM principles and 23 SCM best-practices!

Thursday, January 26, 2006

Kandt's SCM Best-Practices: Protection Practices

More from the paper Software Configuration Management Principles and Best Practices, by Ronald Kirk Kandt, appearing in the Proceedings of PROFES 2002, the 4th International Conference on Product-Focused Software Process Improvement, Rovaniemi, Finland, December 2002.

In this article, Ronald Kandt describes ten basic principles that support configuration management activities, and then goes on to describe twenty-three "fundamental" practices, which are split across four categories: Management Practices, Quality Practices, Protection Practices, and Tool Practices.


In this entry, I'll enumerate Kandt's Protection Best-Practices for SCM (in priority order, with the most important ones listed first):
Practice 15:
Use a software system to perform configuration management functions

Also known as: "Use an SCM tool!"

Practice 16:
Repositories should exist on reliable physical storage elements

Practice 17:
Configuration management repositories should be periodically backed-up to non-volatile storage and purged of redundant or useless information

Practice 18:
Test and confirm the backup process

Is there really anything here to argue with? (I didn't think so :-)

Oh, and that #18 is one that is often overlooked. I can't tell you the number of times I've come across folks who think their stuff is safe "cuz it's backed-up," but whose backup process never confirmed that they were able to successfully read/extract the data that was archived.

Next up, we'll look at what Kandt identifies as Tool practices in his list of 23 SCM best-practices.

Wednesday, January 25, 2006

Kandt's SCM Best-Practices: Quality Practices

More from the paper Software Configuration Management Principles and Best Practices, by Ronald Kirk Kandt, appearing in the Proceedings of PROFES 2002, the 4th International Conference on Product-Focused Software Process Improvement, Rovaniemi, Finland, December 2002.

In this article, Ronald Kandt describes ten basic principles that support configuration management activities, and then goes on to describe twenty-three "fundamental" practices, which are split across four categories: Management Practices, Quality Practices, Protection Practices, and Tool Practices.


In this entry, I'll enumerate Kandt's Quality Best-Practices for SCM (in priority order, with the most important ones listed first):
Practice 8:
All source artifacts should be under configuration control

Practice 9:
Use a Change-Control Board (CCB)

Practice 10:
Build software on a regular, preferably daily, basis, followed by invocations of regression test suites

Practice 11:
Document identified software defects

Practice 12:
Software artifacts that comprise a release should adhere to defined acceptance criteria

Practice 13:
Each software release should be regression tested before the test organization receives it

Practice 14:
Apply defect repairs to every applicable release or ongoing development effort

Most folks in the "Agile camp" and in the "CM camp" would probably consider most of these to be "no-brainers." The ones that Agilists might pick nits with are 8, 9, 11, and 14.
Control Contrarians and Wooden "Boards"

For 8 and 9, I don't think you'll find any agile folks recommending against the use of a version control system/repository. They would likely take issue with the appropriate meaning of "control." The words "control" and "change control" often set off sirens and red-flags for many agilists. And with good reason - they all too often see control attempted in a way that stifles developers and imposes highly restrictive waiting and redundancy for authorization/permission to do things; and they often see it executed as a means of preventing change instead of embracing it (see The Unchangeable Rules of Software Change).

Agilists also don't like the term CCB. Not only does the word "control" raise hackles (as already mentioned), but the term "Board" doesn't mesh very well with Agile values and perspectives on the whole concept of "team": a "board" conjures images of people isolated from the day-to-day workings and workers of the project. In reality, iteration planning meetings basically fill the CCB function for most agile projects: the iteration is usually small enough that there just isn't time to introduce new requirements within the iteration's timebox, so they are usually deferred to the next iteration.

Bugbases and Human Faces

There are many agilists who will rail against using a change/defect-tracking tool. They say things like: there shouldn't be enough defects to warrant a bugbase (as opposed to a spreadsheet); rather than documenting and describing them in a database, we should just go fix 'em instead; index cards are better for recording the bare essentials and promoting conversational dialogue and two-way face-to-face communication; tracking systems dehumanize communication and are too easily misused to the effect of hindering rather than facilitating dialogue and collaboration.

I understand all of these remarks and concerns. And while they all raise valid points, my own position is different. I consider a basic defect/issue/enhancement tracking (DIET) system to be every bit as essential as a version control tool. I still use index cards as a means of initiating dialogue and eliciting/capturing resulting needs and constraints. But then I put them into the tracking tool. Maybe if my handwriting were more legible the cards would be comprehensible enough to others, but I still think it's much more useful and convenient for tracking, organizing, sorting, searching, status accounting, and generating reports (even if they just get posted to a local "information radiator").

I also think information about defects should be captured in order to identify and understand possible trends and root-causes, as well as provide evidence and history to consumers and customers (especially those requiring formal support/service-level agreements).

I do think a lot of change control implementations (and resulting tool customizations) make things harder on developers than needed, for the sake of convenience to other roles. I think that's what leaves the bitter taste in many agilists' mouths and why they dislike it so much - because it holds them back and/or diverts them away from what they feel is most important: delivering value to the customer in the form of tangible, tested results. I think that's a shame because it doesn't have to be that way. A few tracking tools are gaining popularity in the Agile arena, like VersionOne and Jira.

The Monstrosity of Multiple Maintenance

Okay - I don't think any agilists will say that a defect shouldn't be fixed in all applicable releases. What they will say is that they would first do everything in their power to have one and only one mainline of development. Branching a new mainline to support a legacy release or a market/platform/project-specific variant is anathema for most agilists: it creates redundancy among code-streams (fixes have to be applied and integrated more than once) and splits the value-delivery stream into isolated streams/teams of diminished capacity.

Agilists would prefer to see a solution at a later binding time that uses design patterns of software architecture, building/releasing, licensing/distribution, & install/upgrade rather than patterns of version branching. For the most part I agree with them, but I am less wary of branching in the case of parallel development (because I know how to do it really effectively), and I am perhaps more accepting of the case of multiple releases (but I would still fight like heck against multiple variants :-)

Next up, we'll look at what Kandt identifies as Protection practices in his list of 23 SCM best-practices.

Tuesday, January 24, 2006

Kandt's SCM Best-Practices: Management Practices

More from the paper Software Configuration Management Principles and Best Practices, by Ronald Kirk Kandt, appearing in the Proceedings of PROFES 2002, the 4th International Conference on Product-Focused Software Process Improvement, Rovaniemi, Finland, December 2002.

In this article, Ronald Kandt describes ten basic principles that support configuration management activities, and then goes on to describe twenty-three "fundamental" practices, which are split across four categories: Management Practices, Quality Practices, Protection Practices, and Tool Practices.


In this entry, I'll enumerate Kandt's Management Best-Practices for SCM (in priority order, with the most important ones listed first):
Practice 1:
Maintain a unique, read-only copy of each release

Also known as: create an immutable release label
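
One low-tech way to illustrate the idea (outside of any particular SCM tool) is to snapshot the built release and strip write permission, so nobody can silently alter it after the fact. The helper name and paths below are made up for illustration:

```shell
# Hypothetical helper: archive a built release as a unique, read-only copy.
# src is the build output directory; dest is the per-release archive.
freeze_release() {
  src=$1; dest=$2
  cp -R "$src" "$dest"     # a unique copy for this release
  chmod -R a-w "$dest"     # read-only: accidental edits now fail
}
```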

Practice 2:
Control the creation, modification, and deletion of software artifacts following a defined procedure

Also known as: use version control, and agree on how you'll be doing it -- for example, by identifying which SCM patterns you'll be using and their specific implementation in your project's context

Practice 3:
Create a formal approval process for requesting and approving changes

Also known as: manage change/scope, preferably by managing expectations rather than by trying to prevent change

Practice 4:
Use Change Packages

This one is more than just task-level commit; it builds on task-level commit to provide task-based development.

Practice 5:
Use shared build processes and tools

We wrote about this in our October 2003 CM Journal article on Agile Build Management and its March 2004 successor article on Continuous Staging.

Practice 6:
A version manifest should describe each software release

This is more than just a self-identifying configuration listing of files and versions. Kandt also intends it to mean identifying the set of features, fixes, and enhancements too, as well as all open problems and issues. This is often included in the release notes for a new release. It also relates to the information necessary to satisfy a configuration audit.
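
A version manifest along those lines might look something like this sketch (the product, labels, and item identifiers are invented for illustration):

```
Release: WidgetApp 2.1   (built from label REL-2.1, 2006-01-15)

Configuration:
  src/                 @ label REL-2.1
  third-party/xml-lib  version 1.4

Features & Enhancements:
  FTR-103  Export reports to PDF
Fixes:
  DEF-217  Crash on empty input file
Known Open Problems:
  DEF-231  Slow startup on network drives
```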

Practice 7:
Segregate derived artifacts from source artifacts

Also known as: know your sources from your targets! Oftentimes, the versioning/storage strategies used for the two may differ. (Of course, that's not the only reason to segregate them.)
Next up, we'll look at what Kandt identifies as Quality practices in his list of 23 SCM best-practices.

Monday, January 23, 2006

Kandt's SCM Principles

From the paper Software Configuration Management Principles and Best Practices, by Ronald Kirk Kandt, appearing in the Proceedings of PROFES 2002, the 4th International Conference on Product-Focused Software Process Improvement, Rovaniemi, Finland, December 2002.

In this article, Ronald Kandt describes ten basic principles that support configuration management activities, and then goes on to describe twenty-three "fundamental" practices, which are split across four categories: Management Practices, Quality Practices, Protection Practices, and Tool Practices.


In this entry, I'll simply enumerate the principles from the article. In subsequent entries I'll list the 4 different sets of best-practices.

Kandt's Ten Basic Principles of SCM are:
Principle 1:
Protect critical data and other resources

Principle 2:
Monitor and control software development procedures and processes

Principle 3:
Automate processes and procedures when cost-effective

Principle 4:
Provide value to customers

Principle 5:
Software artifacts should have high quality

Principle 6:
Software systems should be reliable

Principle 7:
Products should provide only necessary features, or those having high value

Principle 8:
Software systems should be maintainable

Principle 9:
Use critical resources efficiently

Principle 10:
Minimize development effort

Nothing particularly earth-shattering here. A few interesting things to note:
  • All of them are certainly well aligned with agility, or any sound engineering practices for that matter. But #7 and #10 seem especially well aligned with agility, and are often not emphasized enough in many CM circles.
  • #10 in particular might surprise some folks, because I'm sure many developers perceive CM as trying to do anything but minimize development effort, and may feel #10 is often treated as secondary and subordinate to #2.
That's all well and good. Those things are easy to say "should" about. What's harder is to successfully do them all, and balance them all effectively when facing an SCM problem. It will be more interesting to see what Kandt's 23 SCM best-practices are, and how they manage to uphold these principles.

I'll also note that these principles seem somewhat different from the kind of SCM Principles I've been trying to compile. The things I'm looking for are less about SCM process goals and more about SCM solution design (e.g., design principles for how to design or select an SCM best-practice that most effectively preserves the above goals). In this regard, I might consider most of the above to be goals more than principles (with a few exceptions).

Tuesday, January 17, 2006

The Unchangeable Rules of Software Change

I don't think I've ever written this down before, but I commonly say it to many a developer and development team. I often come across teams in the early stages of learning to deal with changing requirements. They typically run into two pitfalls, in the following order:
Pitfall #1: No scope change-management
The very first pitfall they fall into is not having any kind of change-management for the project/requirements. They allow any and all changes in a very reactive fashion without thinking about trying to renegotiate the scope, schedule, or resources for the project/release.

Pitfall #2: Preventing scope changes
The very next time around, they run into the second pitfall: they overcompensate for being burned/bitten by the first pitfall by trying to prevent any and all changes to the scope of their next product release.
They keep insisting that the fundamental problem is they don't have completely stable, detailed requirements. If the requirements were detailed enough up-front, they think their estimates would be more accurate; and if only the requirements would stop changing before a single line of code is written, then the schedule wouldn't keep fluctuating so much. It's those darn users/customers/analysts that don't know exactly what they want at the outset. It's all their fault!

The irony is that the more they try to get stable detailed requirements up-front, the more the schedule becomes protracted: first to get all the gory-details analyzed and specified up-front; and second because so many of the gory-details were either incorrect or still not detailed enough. It becomes this vicious cycle of trying to prevent change with more up-front detail, and yet things keep getting worse instead of better.

The first thing I commonly do here is explain the following:
    There is a very fancy technical term that biologists use to describe completely stable systems. This highly sophisticated technical term is the word "DEAD!"
I then try to explain that we meager humans (including ourselves and our customers) are imperfect, and we have imperfect and incomplete knowledge: We don't know things, and we don't know that we don't know things, and we don't know how to find out many of those things earlier.

Then I tend to mention Phil Armour's description of the Five Orders of Ignorance and how Software is not a Product, and that software development is therefore a knowledge-creation activity which involves reducing our ignorance over time through learning and discovery about the domain (our requirements) and ourselves (our process, culture, and skills/capabilities).

At this point I then introduce them to my "tried and true, battle-proven and industry-acknowledged, Unchangeable Rules of Software Change":
Rule #0: Change is Inevitable!
The Requirements/Plans ARE going to change!

Rule #1: Resistance is Futile!
There isn’t a darn thing you can do to prevent Rule #0.

Rule #2: Change is like Quicksand -- Fighting it only makes it worse!
The more you try to deny and defy rule #1 by attempting to prevent rule #0, the worse things will get.

Rule #3: Change is like Quicksilver -- Tightening your grip makes it slip from your grasp!
The more you try to deny and defy rule #2 by attempting to precisely predict, or rigidly control change, the more erratic and onerous the result will be.

Rule #4: Embrace change to control change
The more flexible and adaptive you are at accommodating change, the more control you will have over it.
Recently I was talking to a group that was struggling with rule #2. They thought if they could only do even more detailed specification up-front (they already do a reasonable amount of up-front detail), that it would somehow eliminate problems with estimation accuracy, which in turn would alleviate problems with "conformance to plan" and prevent the details from being discovered later (because they would instead get them all "right" up-front).

Despite having plenty of historical evidence/data in this particular product to support the "inescapable truth" laid out by these rules, there still seemed to be that desire to cling to the illusion of control that we can somehow prevent such changes if only we spend more time+effort getting a more complete+perfect+detailed spec up-front.

I was searching for external validation ("not invented here") and then came across the following three things that I liked a lot:

Tuesday, January 10, 2006

Lean Principles for Branching

A recent thread on the scrumdevelopment YahooGroup about "Scrum releases and SCM" got me thinking about a set of "Agile SCM" slides I prepared, one of which tried to apply principles of lean thinking to branching and merging for version control and their relationship to some of the SCM Patterns.

That was using an earlier version of the principles, when Tom and Mary had 10 or so of them. Now they've narrowed it down to seven, so I figured I'd take another stab at it:
  1. Eliminate Waste – Eliminate avoidable merge-propagation (multiple maintenance), duplication (long-lived variant branches), and stale code in infrequently synchronized workspaces (partially completed work)

  2. Build Quality In – Maintain codeline integrity with (preferably automated) unit & integration tests and a Codeline Policy to establish a set of invariant conditions that all checkins/commits to the codeline must preserve (e.g., running and passing all the tests :-)

  3. Amplify Learning – Facilitate frequent feedback via frequent/continuous integration and workspace update

  4. Defer Commitment (Decide as late as possible) -- Branch as late as possible! Create a label to take a "snapshot" of where you MIGHT have to branch off from, but don't actually create the branch until parallelism is needed.

  5. Deliver Fast (Deliver as fast as possible) -- complete and commit change-tasks and short-lived branches (such as task-branches, private-branches, and release-prep branches) as early as possible

  6. Respect People (Decide as low as possible) -- let developers reconcile merges and commit their own changes (as opposed to some "dedicated integrator/builder")

  7. Optimize the "Whole" -- when/if branches are created, use the Mainline pattern to maintain a "leaner" and more manageable branching structure


Did I get it right? Did I miss anything?

Friday, January 06, 2006

Big 'A', the three pillars, and the three 'F's

In the past I've asked "What are Form and Fit for Software CM?" and gotten some very interesting answers. Configuration auditing for physical, functional, and process "integrity" (correct+consistent+complete) is a commonly recurring phrase in many classical CM documents and standards. And I was curious to understand how "form, fit and function" mapped from the physical world of hardware into the virtual world of software.

I assumed the "function" part was easy to map (functionality) and that it was the other two, form and fit, that were hard. I also wondered where the three 'F's of form+fit+function originated.

That made me wonder if it had anything to do with the three pillars of Vitruvius from classical architecture. This goes back to an earlier blog posting about Commodity, Integrity and Simplicity that also discussed the Big 'A' (Architecture) and the three 'F's.

The classical Greco-Roman architect Vitruvius described the three pillars of architecture as Utilitas, Firmitas, and Venustas: Utilitas is usually translated as utility, need, or function; Firmitas as firmness, durability, or stability of structure; and Venustas as beauty, aesthetics, or having pleasing/attractive form.

I can see how beauty or aesthetics could be translated as "form", and certainly see how "utility" could be translated as function. I'm not sure if I see a direct translation between "firmness" and "fit" (perhaps the better the "fit" the more durable the structure?)

I am wondering if form, fit, and function evolved on their own, separate from form, function, and durability ... or if they are related and "durability" somehow got translated into "fit" in CM circles. What is the difference between the three pillars of architecture and form + fit + function for configuration auditing of product integrity?

Friday, December 30, 2005

Software CM and Version Control Blogs

I've been looking around for other blogs that are primarily (or at least regularly) devoted to the subject of Software CM and/or Version Control. I did some searching thru blogsearch.google.com but mostly my own surfing turned up good results. I chose to omit blogs that don't seem to be updated anymore (like Brian White's Team Foundation blog - especially since Brian left Microsoft).

Anyway, here is what I found. If you know of others, please drop me a line.

Blogs about Software CM or Version Control:

Blogs frequently discussing Software CM or Version Control:

I found a few others, but they didn't seem to be active (like a ClearCase-centric SCM blog and a Continuous Integration 'book' blog -- not to be confused with Jason Yip's fairly active continuousintegration YahooGroup).

Do you know of any that I might have missed?

Happy New Year everybody!

Thursday, December 22, 2005

Agile SCM 2005 Book Reflections and Recommendations

I just finished writing my article for the December 2005 CMCrossroads Journal entitled Agile SCM 2005 - Reflecting back on the year in books. An excerpt follows ...
Hello! My name is Brad Appleton, and I'm a book-a-holic! Hear my serenity prayer:
Lord, please grant me ...
the serenity to accept that I can't read everything,
the time to read and understand everything that I can,
the wisdom to know the difference
[so I won't have to leave my estate to Amazon.com],
and a sufficiently well-read network of friends
[to tell me all about the books they've read].
We thought 2005 was a pretty gosh darn great year for Agile and Software CM alike. We wanted to share what we feel are some of the timeless classics that we have most looked to throughout the year, as well as the new books in the last year that we have been most impressed with.

Those of you reading this are encouraged to read the article to see what we had to say about some of the following books (as well as several others):
Happy Holidays and Hopeful New Years!
A Very Happy Merry ChristmaHannaValiRamaKwanzaakah (or non-denominational solstice celebration) to all in 2005! And looking forward to what 2006 will bring to all of us in the coming year!

Sunday, December 18, 2005

4+2 Views of SCM Principles?

In my last blog-entry I wondered if the interface segregation principle (ISP) translated into something about baselines/configurations, or codelines, or workspaces, or build-management. Then I asked if it might possibly relate to all of them.

Here's a somewhat scary thought (or "cool" depending on your perspective), what if the majority of Robert Martin's (Uncle Bob's) Principles of OOD each have a sensible, but different "translation" for each of the architectural views in my 4+2 Views Model of SCM/ALM Solution Architecture? (See the figure below for a quick visual refresher.)




Thus far, the SCM principles I've "mapped" from the object-oriented domain revolve around baselines and configurations, tho I did have one foray into codeline packaging. What if each "view" defined a handful of object-types that we want to minimize and manage dependencies for? And what if those principles manifested themselves differently in each of the different SCM/ALM subdomains of:
  • change control (project-view)
  • version control (evolution view)
  • artifact (requirements, models, code, tests, docs) hierarchy and build management (product view)
  • workspace/repository/site management and application integration & synchronization (environment view)
  • workflow and process design (process view)
  • teaming, inter-group coordination and interfaces/expectations (organization view)
What might the principles translate into in each of those views, and how would the interplay between those principles give rise to the patterns already captured today regarding recurring best-practices for the use of baselines, codelines, workspaces, repositories, sites, change requests & tasks, etc.?

Thursday, December 15, 2005

Interface Segregation and Configuration Promotion

I've been thinking more about the Interface Segregation Principle (abbreviated as "ISP") from (Uncle) Bob Martin's Principles of Object-Oriented Design.

The "short version" of ISP in the initial article states that:
=> "Clients should NOT be forced to depend on interfaces that they do not use."

The summary of ISP in Uncle Bob's website says it differently:
=> "Make fine grained interfaces that are client specific."

In previous blog-entries, I've wondered how this might correctly translate into an SCM principle (if at all).
  • In Change-Packaging Principles, I wondered if maybe it corresponds to change-segregation or incremental integration: Make fine-grained incremental changes that are behavior-specific. (i.e., partition your task into separately verifiable/testable yet minimal increments of behavior.)

  • On the scm-patterns list I wondered if maybe it corresponds to composite baselines: composing baselines of other, more fine-grained baselines

  • Now I'm thinking maybe it corresponds to promotion lifecycle modeling and defining the promotion-levels in a promotion-lifecycle of a configuration-item (e.g., a build).
Why am I thinking this?

I guess I'm trying to go back to the basis of my means of comparison: configurations (and hence baselines) as "objects." If a configuration is an object, then what is an interface of a configuration, and what is a fine-grained interface (or "service")?

If I am thinking in terms of configuration building, then the interface for building the object (configuration) is the equivalent of Make/ANT "methods" and targets for a given item: (e.g., standard make targets like "clean", "all", "doc", "dist", and certain standard conventions for makeflags). That is certainly a plausible translation.
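
For instance, a configuration's "build interface" might be nothing more than a conventional set of make targets that every item answers to, whatever it does internally. A sketch (not any particular project's makefile; the commands are placeholders):

```make
# Sketch: a conventional "build interface" of standard target names.
all:                  # build everything
	@echo "building all components..."
clean:                # remove all derived artifacts
	rm -rf build/
doc:                  # generate documentation
	@echo "generating docs..."
dist: all             # package a distribution
	tar czf dist.tar.gz build/
```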

But if I am thinking in terms of baselining and supporting CM-mandated needs for things like reproducibility, repeatability, and traceability, from the perspective of the folks who "consume" the baseline (its clients), then maybe the different consumers of a baseline need different interfaces.

If those consumers end up each "consuming" the baseline at different times in the development lifecycle (e.g., coding, building, testing, etc.) then perhaps that defines what the promotion model and promotion levels should be for that configuration.
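
If so, the promotion model might read like this sketch (the level names and consumer roles here are invented for illustration):

```
Promotion levels for a build (one possible lifecycle):
  BUILT      -> consumed by developers coding against it
  INTEGRATED -> consumed by the integration/build role
  TESTED     -> consumed by the test organization
  RELEASED   -> consumed by customers and deployment
```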

    What if they aren't at different times in the lifecycle? What if they are at the same time?
Then I guess it matters whether the different consumers are interested in the same elements of the baseline. If they're not, maybe that identifies a need for a composite baseline.

    What if they aren't at different times and aren't for different elements, but rather the same sets of elements?
Then maybe that identifies different purposes (and services) needed by different consumers for the same configuration at the same time. Building -versus- Coding might be one such example. Would branching -versus- labeling be another? (i.e. "services" provided by a configuration as represented by a "label" as opposed to by a "codeline", or a "workspace"?)

    What if no one of these is the "right" interpretation? What if it's ALL of them?
Then that would be very interesting indeed. If the result encompassed the interfaces/services provided by different Promotion-Levels, Make/ANT-targets, branch -vs- label -vs- workspace, then I don't even know what I would call such a principle. I might have to call it something like the Configuration ISP, or the Representation separation principle, or the manifestation segregation principle, or ....

What, if anything, do YOU think the ISP might mean when applied to Software CM and software configurations as represented by a build/label/codeline/workspace?

Sunday, December 11, 2005

Polarion for Subversion

A quick follow-on to my previous blog-entry on Subversion plus Trac gives SubTraction and an even earlier one asking Can I have just one repository please? ...

I just heard of a new product called Polarion which allegedly appears to do almost exactly what I envisioned in my "just one repository" blog-entry, and there appears to be a "Polarion for Subversion" offering (which also claims to support Ant, Maven, and Eclipse):
"In classic software development tool environments, many different point solutions are used for software life-cycle management. There are requirements management tools, bug trackers, change management, version and configuration management tools, audit and metrics engines, etc. The problem: your development artifacts are scattered, making it difficult to derive useful, timely management information. POLARION® ... keeps all artifacts of the entire software life-cycle in one single place ... gives organizations both tools (for requirements, tasks, change requests, etc.) AND project transparency through real-time aggregated management information ... combines all tools and information along the Software lifecycle in one platform. No tool islands, no interface problems, no difficult, potentially fragile integrations anymore."
However, it does NOT appear to be opensource.

I'd LOVE to see a mixed commercial offering of, say, AccuRev, Jira and Confluence be able to provide this all in one package (just as I described in the blog-entry). [And with AccuRev's and Atlassian's roots in and commitment to opensource (the folks at AccuRev had previously developed the open-source CM system "ODE" for the OSF), they might even consider making it freely available for opensource projects (like Atlassian currently does for both Jira and Confluence)]

Hey! I can dream - can't I? :-)

Friday, December 09, 2005

Subversion plus Trac gives SubTraction

Here's a bit of a "plug" for some open source SCM tool offerings ...

For those CVS users who don't already know about Subversion I urge you to take a look. Subversion was designed to be a next-generation replacement for CVS that has a lot of the same basic syntax and development model while fixing or updating most of its well known shortcomings.

Another spiffy open-source project is Trac, which provides simple but powerful defect/issue/enhancement tracking (DIET) through a Wiki-web interface. It readily integrates with both CVS and Subversion, adding collaborative, low-friction request/activity tracking to your version control, and it can track change-sets in the version-control tool and associate them with change-tasks/requests in the tracking tool.

Using Trac with Subversion can help "subtract" a lot of the tedium of traceability from your day-to-day work and give more "traction" to your development efforts. So, in a way, Subversion plus Trac gives SubTraction :-)
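Trac ships its own post-commit hook script for this kind of integration; purely as a sketch of the underlying idea (not Trac's actual code), a hook can link a change-set to tracking-tool tickets simply by scanning the commit log message for references like "#42":

```python
import re

# Minimal sketch (not Trac's real hook) of commit-to-ticket traceability:
# a post-commit hook scans the log message for ticket references of the
# form "#<number>" and records the association for each one found.
TICKET_RE = re.compile(r"#(\d+)")

def tickets_referenced(log_message: str) -> list:
    """Return the ticket numbers mentioned in a commit log message."""
    return [int(n) for n in TICKET_RE.findall(log_message)]
```

For example, a commit logged as "Fix null check, closes #42 and refs #7" would yield tickets 42 and 7, each of which the hook could then annotate with the change-set's revision number.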

Saturday, December 03, 2005

Agile Six Sigma - Holistic, Organic, Lean, Emergent

I've been reading bits and pieces about "Lean Six Sigma" for the past couple years. It seems a reasonable mix of Lean Production and the Principles of Lean Thinking with Six Sigma methods and the SEI's description of Six Sigma. Lately it seems to be getting abbreviated to "Lean Sigma."

More recently, I've been hearing about "Design For Six Sigma (DFSS)" and "convergences" between "Lean" and Goldratt's "Theory of Constraints" (TOC), and techniques like the "The 5 Focusing Steps", "Throughput Accounting" and "Drum-Buffer-Rope." (There was a nice ASQ article comparing Lean, Six Sigma, and TOC awhile back.)

So I wanted to be the first to try and coin the phrase "Agile Six Sigma" - except I'm not real fond of the resulting acronym, plus someone else might have come up with it already (if only in passing). So I wanted to embellish it a bit to create an even better acronym before I commence the marketing madness for my new "cash cow" idea. Thus I have decided upon:
    "Agile Six Sigma - Holistic, Organic, Lean, Emergent."
Seriously tho! I actually think there is a lot of GREAT stuff in and synergies between Agile, Lean, TOC, and Systems Thinking. I think DFSS has some useful tools in its toolbox. I'm less sure of the overall methodology for SixSigma being compatible with Agile methods -- tho I admit David J. Anderson has some GREAT articles that seem to show a connection, particularly the one on Variation in Software Engineering.

I am getting weary of lots of hype that simply throws these buzzwords together (hence my marketing slogan and acronym above :-) but I think they have a lot to offer, and I would be interested in applying them to CM.

I'm particularly curious about using the Lean tools of value-stream mapping along with TOC in analyzing anti-patterns and bottlenecks that often occur in building, baselining and branching & merging (since there seems to be a fairly direct correlation between "code streams" or "change flows" and a "value stream" or "value chain"). Has anyone already done this for CM? (I wonder if something like this could better substantiate the "goodness" of the Mainline pattern.)

Tuesday, November 29, 2005

John Vlissides

I just learned from Martin Fowler's Bliki that John Vlissides passed away on Nov 24, 2005 after a long-term battle with cancer.

John was probably best known as one of the "Gang of Four" who authored the book Design Patterns, which was the seminal work on the subject of patterns if not on all of O-O software design, and one of the best selling computer-science books of all time. A wiki-page created in John’s memory is available for all to read, and to contribute to for those who remember him or have been influenced by him. I'll be posting the following memory there in a couple of days...

My first encounter with John was in 1995 on the "patterns" and "patterns-discussion" mailing lists. I was just a lurker on those lists at the time, and didn't feel "weighty" or "worthy" enough to post anything to them.

Then after having lunch (Pizza actually) with Robert Martin ("Uncle Bob") who encouraged me to do so, I ventured a posting to the patterns-list and described the Pizza Inversion pattern. I was actually quite nervous about it - me being a complete unknown and "daring" to post something that poked a little fun at patterns. John and Richard Gabriel were among the first to respond, and the response was very positive. I felt I had been officially "warmly welcomed" into the software patterns community.

A couple years later I attended the PLoP'97 conference and got to meet John in person for the first time at one of the lunches. Like many others, I was in awe of how unpretentious and humble he was. Again he made me feel very welcome amidst himself and others at the table of "rock star status" in the patterns community: he apparently recognized my name and included me in the running conversation, mentioning that when he first read my Pizza Inversion pattern, he "thought it was brilliant!"

Later, at PLoP'98 and PLoP'99, John encouraged me to get together with Steve Berczuk and write a book on Software CM Patterns for the Addison-Wesley Software Patterns Series of books, for which he was the series editor. And during 1999 I actually became editor for the Patterns++ section of the C++ Report, including John's "Pattern Hatching" column and Jim Coplien's "Column Without a Name."

It was both an exciting and humbling experience for me to serve as editor for the contributions of two people so famous and revered in the patterns and object-oriented design communities. They both mentored and taught me so much (as did Bob Martin) during the "hey day" of patterns and OOD.

During the years between 1998 and 2002, John personally shared with me a great deal of insight and sage advice about writing, authoring and editing, as well as lending loads of encouragement and support. I truly feel like I have lost one of my mentors in the software engineering community. John's humor, insight, humility and clarity will be sorely missed.

Thursday, November 24, 2005

Pragmatic Book Reviews

HAPPY THANKSGIVING EVERYONE! (Even if you're not in the US :-)

As I mentioned in my previous blog-entry, I'll be attempting to post reviews of several books in the next month or two, mostly from the Pragmatic Programmers and from Addison-Wesley Professional. The ones I currently have are the following:

Saturday, November 19, 2005

Book Review: JUnit Recipes

I have a whole bunch of reviewer-copies of books that I've been intending to review for several months. So I'll be doing a number of book reviews throughout the remainder of this year, particularly titles from The Pragmatic Programmers and from Addison-Wesley Professional (who were nice enough to give me copies of the books).

Today, however, I'll be posting a review of a book from a different publisher. I did a review of the book JUnit Recipes for StickyMinds a few months ago. My summary of my review was:
JUnit Recipes should probably be mandatory reading for anyone using Java, J2EE and JUnit in the real-world. This comprehensive and eminently pragmatic guide not only conveys a great deal of highly practical wisdom but also clearly demonstrates and explains the code to accomplish and apply the techniques it describes.
The full review is featured this month on the StickyMinds front page and is available from their website at http://www.stickyminds.com/s.asp?F=S767_BOOK_4

Saturday, November 12, 2005

Commodity, Integrity, Simplicity

In a previous blog-entry on the subject of perishable -vs- durable value, I wrote about how business value is depreciable and therefore the business value of a feature is a perishable commodity. I then went on to describe what I thought were more durable forms of value: Integrity and Simplicity.
  • I defined Integrity as a triplet of related properties: {Correctness, Consistency, Completeness}. Integrity is a property of a deliverable item such as a feature, a configuration or a configuration item. So a feature or item has "integrity" if it is correct, consistent and complete.

  • I also defined Simplicity as a triplet of related properties: {Clarity, Cohesiveness (Coherency), Conciseness}. So a feature, item, or logical entity is "simple" if it is clear, cohesive and concise.
I then asked the question:
What about "form, fit and function"? Are "form" and "fit" also components of perishable value?
What I've been thinking since then is that the perishable form of value is the extrinsic value that it is given by the customer. From the end-consumer's perspective, what they perceive as the form, fit, and function of the deliverable is what makes it valuable or not. We might call this type of value "Commodity" or "Marketability". [Note: There are several things I both like and dislike about both those possible names, so please comment if you have a preference for one over the other (or for something else) and let me know why.]

I suggested this in a posting to the continuousintegration YahooGroup entitled "Commodity, Integrity, Simplicity (was Re: Extreme Frequency and Extreme Integrity)". Some relevant excerpts from the discussion:
Commodity is customer-desired Form, Fit and Function. ... Commodity has to do with what requirements are most valued by the customer at a given time. I think maybe those requirements are in terms of "Form, Fit, and Function". Which requirements those are and how much they are valued is most definitely time-sensitive. When I add "commodity"-based value to a codebase, I am adding time-sensitive perishable value that can depreciate or greatly fluctuate over time.
...
[from Ron Jeffries]:
A thing, to me, has integrity and simplicity but is a commodity.
I thought about this. And I completely agree - that probably is the main thing that makes the word "commodity" stand-out apart from the other two like "one of those things that just doesn't belong" with them.

Then I think about it some more, and I think, maybe the thing that makes it seem so "wrong" when listed with the other two is perhaps what is so "right" about it after all. Maybe it's a good thing to think that a feature (or "story") is a commodity.

Maybe that's what it is first and foremost (a commodity) that we should always keep in mind, and where the most direct value to the customer is perceived. And maybe those other two things (integrity and simplicity) are the "secret sauce" that make all the difference in how we do it:
  • Maybe the integrity is the "first derivative" that gives us velocity AND continuity at a sustainable pace.

  • And maybe when we throw in simplicity, that is the second derivative of value, and it may be harder for the customer to see directly, but when we do it right, that gives us more than just continuity+sustainability, it also gives us the acceleration to adaptiveness and responsiveness and "agility" to overcome that cost-of-change curve.
...
[follow-up from Ron Jeffries]:
However, a bit further insight (or what I use in place of insight) for why it troubles me. A "commodity" is a kind of product with value, but it is a fungible one. A commodity is a product usually sold in bulk at a price per item or per carload. One potato is like every other potato. A story/feature, in an important sense, isn't like every other story/feature.

Thanks Ron for all the thoughtful feedback. You are spot-on of course. And that notion of a commodity as a bulk shipment or mass purchase of units definitely "kills" the notion of value I'm trying to get at.

I'm still at a loss for a word/term that I like better. Marketability perhaps? It's more syllables than I'd like, although there is a precedent set for it in the book Software By Numbers in its use of an "Incremental Funding Method" (IFM) with "Minimal, Marketable Features" (MMFs).

So to my readers that have read this far ... what is your take on all of this talk about commodity/marketability and "perishable value"? Are commodity, integrity, and simplicity each just different perspectives of form, fit, and function, where:
  • "commodity/marketability" would be the customer view
  • "integrity" would be the view of requirements analysts/engineers, V&V/QA, and CM
  • "simplicity" would be the view of the developers and architects
What do form, fit and function mean for software anyway?
  • Is it container, context and content?
  • Is it interface, integration and implementation?
And how should that all trace back to our discussion about value and whether that value is extrinsic or intrinsic, and whether it is perishable, durable, or latent/emergent?

I admit I don't have a lot of coherent thoughts here, just a lot of incoherent ramblings and inconsistent questions. Let me know how you think this should all make sense (or if it shouldn't).

Saturday, November 05, 2005

Agile Lifecycle Collapses the V-model upon itself

Many are familiar with the waterfall lifecycle model for software development. Fewer are familiar with the 'V' lifecycle model for software development.

These two lifecycle models are very similar. The main difference is that the 'V' model makes a deliberate attempt to "engage" stakeholders located on the back-end of the 'V' during the corresponding front-end phase:
  • During Requirements/Analysis, system testers are engaged to not only review the requirements (which they will have to test against), but also to begin developing the tests.

  • During architectural and high-level design, integrators and integration testers are engaged to review the design and the interface control specs, as well as to begin developing plans and test-cases for integration and integration-testing

  • at this point, hopefully you get the main idea ... at a given phase where deliverables are produced, the folks who are responsible for validating conformance to those specs are engaged to review the result and to begin development of their V&V plans and artifacts
When used in conjunction with Test-Driven Development (TDD), and especially with a lean focus on minimizing intermediate artifacts, the agile lifecycle in a very real sense makes the two sides of the 'V' converge to create almost a single line (instead of two lines forming a 'V'):
  • TDD attempts to use tests as the requirements themselves to the greatest extent possible

  • emphasis on lean, readable/maintainable code often leads to a literate programming style (e.g., JavaDocs) and/or a verbose naming convention style such that detailed design and source code are one and the same.

  • focus on simplicity and eliminating redundancy increases this trend via principles and practices such as those mentioned in the Principle of Locality of Reference Documentation and Single-Source Information

  • Use of iterative development with short iterations makes the 'V' (re)start and then converge over and over again throughout the development of a release.
The result: using cross-lifecycle collaboration in combination with tests as requirements and self-documenting code as detailed design and writing tests before the code makes the ends of the 'V' model converge together so that each end practically collapses against the other in a thick, almost single line. Plus successive short iterations serve to increase the frequency of this trend.

The agile lifecycle tries to eliminate (or at least create a tesseract for) the distance between the symmetric points at each end of the V-model by making the stakeholders come together and collaborate on the same artifacts (rather than separate ones) while also working in many small vertical slices on a feature-by-feature (or story-by-story) basis. There are no separately opposing streams of workflow: just a single stream of work and workers that collaborate to deliver business value down this single stream as lean + agile as possible.
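A tiny (purely hypothetical) illustration of tests-as-requirements: the test below is written first and serves as the specification, the verification, and the documentation of one small behavior all at once -- the two ends of the 'V' collapsed into a single artifact:

```python
import unittest

# The test IS the requirement: "a withdrawal may not overdraw the account."
# Written before the code, it specifies the behavior, verifies it, and
# documents it -- spec and test are a single artifact.

class Account:
    def __init__(self, balance=0):
        self.balance = balance
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class WithdrawalRequirement(unittest.TestCase):
    def test_cannot_overdraw(self):
        acct = Account(balance=100)
        with self.assertRaises(ValueError):
            acct.withdraw(150)
        self.assertEqual(acct.balance, 100)  # balance untouched on failure
```

(The Account class and its method names are invented for illustration; the shape -- requirement expressed as an executable test, written first -- is the point.)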

Saturday, October 29, 2005

Codelines as Code Portals

I've been thinking a bit about the evolution of branching capability in version control tools.
  • First we had no branching support

  • Then we had very primitive branching support at the physical level of individual files using funky looking numbers like 1.1.1.1 that were basically 4-level revision numbers

  • Then we had better branching support, but still file-based, and it allowed us to use some reasonably readable-looking symbolic names to identify a branch

  • Then we had support for branching at the project/product level across the entire configuration item

  • Nowadays the better tools (such as AccuRev, ClearCase/UCM, Subversion, OurayCM, SpectrumSCM, Neuma CM+, and others) have "streams"
Among the differences between streams and project-oriented branches: a project-oriented branch still contained only the changes that took place on that branch, whereas a stream gives me a dynamically evolving "current configuration" of the entire item (not just the changes). And in many cases "streams" are first-class entities which can have other attributes as well.

Streams are, in a sense, giving a view of a codeline that is similar to a web portal. They are a "code portal" that pulls the right sets of elements and their versions into the "view" of the stream and eases the burden of configuration specification and selection by providing us this nice "portal."

So what might be next in the evolution of branches and branching after this notion of "code portal"?
  • Will it be in the area of distribution across multiple sites and teams?

  • Will it be in the area of coordination, collaboration and workflow?

  • Will it be in the area of increasing scale? What would a "stream of streams" look like?
Maybe it will be all three! Maybe a stream of streams is a composite stream where the parent stream gives a virtual view across several (possibly remotely distributed) streams and repositories, but via a dynamic reference (rather than a copy), so that its current configuration is a view of the combined current configuration of each constituent stream (somewhat reminiscent of how composite baselines work in ClearCase/UCM)?
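Here's a toy Python sketch (all names invented, no real tool implied) of that idea: the composite holds references to its constituent streams, so its "current configuration" is assembled on demand rather than copied:

```python
# Toy sketch of a "stream of streams": the composite keeps references to
# its constituent streams, so its current configuration is always computed
# dynamically from their latest contents -- a view, not a snapshot.

class Stream:
    def __init__(self, name):
        self.name = name
        self._config = {}            # element -> version
    def commit(self, element, version):
        self._config[element] = version
    def current_configuration(self):
        return dict(self._config)

class CompositeStream:
    def __init__(self, *streams):
        self.streams = streams       # dynamic references, not copies
    def current_configuration(self):
        view = {}
        for s in self.streams:       # later streams override earlier ones
            view.update(s.current_configuration())
        return view
```

Because the composite never copies, a commit to any constituent stream is immediately visible in the parent's view -- which is the "dynamic reference rather than a copy" behavior described above.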

What do you think will be the next steps in the evolution of branching beyond "streams" and what do you think are the trends that will fuel the move in that direction?

Saturday, October 22, 2005

Bugs versus Enhancements

On the SCRUM Development Yahoo Group, Stephen Bobick initiated a discussion about Bugs versus Enhancements:
Here's something I've run into agile and non-agile projects alike: the blurring of distinction between bugs and enhancement requests. To me a bug is erroneous operation of the software based on the customer's requirements. That's fine when both sides agree to what the requirements are. Sometimes a bug can also be caused by a misunderstanding of the requirements by the team, however, and yes I'll still call this a bug. Often, however, customers will dub "missing" functionality (which was never discussed initially) or "nice-to-have" features, shortcuts and so on as "bugs"....

When I have tried to make the distinction between bugs and enhancements clearer to the PO or customer, sometimes through a SM, the customer thinks we are nit-picking, or trying to "play the blame game", rather than properly categorize and identify their feedback. One approach is to keep trying to educate and convince them anyways (on a case by case basis, if necessary). Another approach is just to let them call anything they want a "bug". Of course this can screw up your metrics (incidence of bugs) - something we are interested in at my current job (i.e. reducing the rate of new bugs and fixing bugs in the backlog).

Any words from the wise out in the trenches on how to best approach this? Obviously, with unit testing and other XP practices there is a claim that bug rates will be low. But if anything can be declared a bug, it becomes more difficult to make management and the customer believe the claims you make about your software development process and practices. And when this happens, the typical response is to revert to "old ways" (heavy-handed, waterfall-type approaches with formal QA).

-- Stephen
I've actually had a lot of personal experience in this for the past several years. Here are some of the things I have learned...



1. DON'T ASSUME ALL DEFECTS ARE BUGS!

The term "bug" and the term "defect" don't always mean the same thing:
  • Bug tends to refer to something "wrong" in the code (either due to nonconformance with design or requirements).

  • Defect often means something that is "wrong" in any work-product (including the requirements).

  • Hence, many consider ALL of incorrect, inconsistent, incomplete, or unclear requirements to be "defects": if they believe a requirement is "missing" or incorrectly interpreted, it's still a "bug" in their eyes.

  • I've also seen some folks define "bug" as: anything that requires changing ONLY the code to make it work "as expected". If it requires a change to docs, they consider it a "change request" (and the issue of whether or not it is still a "defect" isn't really addressed)

  • Also, many folk's metrics (particularly waterfall-ish metrics for phase containment and/or screening, but I think also orthogonal-defect classification -- ODC) explicitly identify "missing requirements" as a kind of defect

2. DO YOU TREAT BUGS DIFFERENTLY FROM ENHANCEMENTS?

If so, then be prepared to battle over the differences. Very often, the difference between them is just a matter of opinion, and the resolution will almost always boil down to a matter of which process (the bugfix process or the enhancement process) is most strongly desired for the particular issue, or else will become an SLA/contractual dispute. Then you can bid farewell to the validity of your defect metrics.

If your development process/practice is to treat "bugs" differently than "enhancements" (particularly if there is some contractual agreement/SLA on how soon/fast "bugs" are to be fixed and whether or not enhancements cost more $$$ but bugfixes are "free"), then definitions of what a bug/defect is will matter only to the extent outlined in the contract/SLA, and it will be in the customer's interest to regard any unmet expectation as a "bug".

If, on the other hand, you treat all customer-reported "bugs" and "enhancements" sufficiently similarly, then you will find that many of the previous battles you used to have over what is a "bug" and what isn't will go away, and won't be as big of an issue. And you can instead focus on getting appropriate prioritization and scheduling of all such issues using the same methods.

If the customer learns that the way to get the thing they want when they want it is a matter of prioritization by them, and if the "cost" for enhancements versus bugfixes is the same or else isn't an issue, then they will learn that in order to get what they want, they don't have to claim it's a bug, they just need to tell you how important it is to them with respect to everything else they have to prioritize for you.


3. IT'S ALL ABOUT SETTING AND MANAGING EXPECTATIONS!

None of the above (or any other) dickering over definitions is what really matters. What really matters is managing and meeting expectations. Sometimes business/organizational conditions mandate some contractual definition of defects versus enhancements and how each must be treated and their associated costs. If your project is under such conditions, then you may need to clearly define "bug" and "enhancement" and the expectations for each, as well as any agreed-upon areas of "latitude".

Other times, we don't have to have such formal contractual definitions. And in such cases, maybe you can treat enhancements and defects/bugs the same way (as noted earlier above).

Lastly, and most important of all, never forget that ...


4. EVERYONE JUST WANTS TO FEEL HEARD, UNDERSTOOD, AND VALUED!


If you can truly listen empathically and non-defensively (which isn't always easy), connecting with their needs at an emotional as well as intellectual level, and demonstrate that it is important to you, then EVERYONE becomes a whole lot easier to work with and that makes everything a whole lot easier to do.

Then it's no longer about what's a bug or what's an enhancement; and not even a matter of treating bugs all that differently from enhancements ... it simply becomes a matter of hearing, heeding and attending to their needs in a win-win fashion.


I'm sure there are lots of other lessons learned. Those were the ones that stuck with me the most. I've become pretty good at the first two, and have become competent at the third. I still need a LOT of work on that fourth one!!!

Sunday, October 16, 2005

TDD/BDD + TBD + IDE = EBT 4 Free?

I've been thinking a bit more about inter-relationships between Test-Driven Development (TDD), Task-Based Development (TBD), a spiffy interactive development environment (IDE) such as Eclipse, and the trouble with traceability ...

One thing that occurs to me that might actually make traceability be easier for agile methods is that some agile methods work in extremely fine-grained functional increments. I'm talking about more than just iterations or features. I mean individually testable behaviors/requirements:
    If one is following TDD, or its recent offshoot Behavior-Driven Development (BDD), then one starts developing a feature by taking the smallest possible requirement/behavior that can be tested, writing a test for it, then making the code pass the test, then refactoring, then going on to develop the next testable behavior etc., until the feature is done.
This means, with TDD/BDD, a single engineering task takes a single requirement through the entire lifecycle: specification (writing the test for the behavior), implementation (coding the behavior), verification (passing the test for the behavior), and design (refactoring).

That doesn't happen with waterfall or V-model development lifecycles. With the waterfall and V models, I do much of the requirements up front. By the time I do design for a particular requirement it might be months later and many tasks and engineers later. Ditto for when the code for the requirement actually gets written.

So traceability for a single requirement thru to specs, design, code, and test seems much harder to establish and maintain if those things are all splintered and fragmented across many disjointed tasks and engineers over many weeks or months.

But if the same engineering task focused on taking just that one single requirement thru its full lifecycle, and if I am doing task-based development in my version control tool, then ...
    The change-set that I commit to the repository at the end of my change-task represents all of that work across the entire lifecycle of the realization of just that one requirement, then the ID of that one task or requirement can be associated with the change-set as a result of the commit operation/event taking place.
And voila! I've automatically taken care of much of the traceability burden for that requirement!
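A minimal (purely hypothetical, every name invented) sketch of that commit-time association -- the commit operation itself records the link between the task/requirement ID and the change-set, so the traceability happens as a side effect:

```python
# Hypothetical sketch of event-based traceability: committing a change-set
# under a task ID records the requirement-to-change-set link automatically,
# rather than as a separate manual traceability step.

trace_links = {}       # task_id -> list of change-set records
_next_id = [0]         # simple change-set id counter

def commit(task_id, files):
    """Commit a change-set and record its traceability link as a side effect."""
    _next_id[0] += 1
    changeset = {"id": _next_id[0], "files": sorted(files)}
    trace_links.setdefault(task_id, []).append(changeset)
    return changeset
```

Since the task took the requirement through spec, code, and test in one go, a single lookup of trace_links["REQ-17"] (say) would recover every artifact touched on behalf of that one requirement.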

If I had a spiffy IDE that gave me a more seamless development environment integration and event/message passing with my change/task tracking tool, and my version-control tool, and the interface I use to edit code, models, requirements, etc., then it would seem to me that:
  • The IDE could easily know what kind of artifact I'm working on (requirement, design, code, test)

  • Operations in the IDE and the version-control tool would be able to broadcast "events" that know my current context (my task, my artifact type, my operation) and could automatically create a "traceability link" in the appropriate place.
I realize things like CASE tools and protocols like Sun's ToolTalk and HP's SoftBench tried to do this over a decade ago, but we didn't have agile methods quite so formalized then and weren't necessarily working in a TDD/TBD fashion. I think this is what Event-Based Traceability (EBT) is trying to help achieve.

If I had (and/or created) the appropriate Eclipse plug-ins, and were able to develop all my artifacts using just one repository, then if I used TDD/BDD with TBD in this IDE, I might just be able to get EBT for free! (Or at least come pretty darn close)

Wouldn't I?

Tuesday, October 11, 2005

XP as an overreaction?

Response to Damon Poole's blog-entry asking "Is XP an overreaction?" ...

I believe Extreme Programming (XP) and other Agile Methods are indeed a strong counter-reaction to some prevailing management and industry trends from around 1985-1995. [Note I said counter-reaction rather than over-reaction]

I think the issue ultimately revolves around empowerment and control. During 1985-1995 two very significant things became very trendy, and management and organizations bought into their ideas: the SEI Software Capability Maturity Model (CMM), and Computer-Aided Software Engineering (CASE).

During this same time, programming and design methods were all caught up in the hype of object-oriented programming+design, and iterative+incremental development.

Many a large organization (and small ones too) tried to latch-on to one or more of these things as a "silver bullet." Many misinterpreted and misimplemented CMM and CASE as a magic formula for creating successful software with plug-and-play replaceable developers/engineers:
  • Lots of process documentation was created
  • Lots of procedures and CASE tools were deployed with lots of contraints regarding what they may and may not do
  • and "compliance/conformance" to documented process was audited against.

Many felt that the importance of "the people factor" had been dismissed, and that creativity and innovation were stifled by such things. And many felt disempowered from being able to do their best work and do the things that they knew were required to be successful, because "big process" and "big tools" were getting in their way and being forced upon them.

(Some would liken this to the classic debate between the Hamiltonian and Jeffersonian philosophies of "big government" and heavy regulation versus "that government is best which governs least.")

I think this is the "crucible" in which Agile methods like XP were forged. They wanted to free themselves from the ball and chain of restrictive processes and disabling tools.

So of course, what do we do when the pendulum swings so far out of balance in a particular direction that it really makes us say "we're mad as h-ll and we're not gonna take it any more!" ??

Answer: we do what we always do, we react with so much countering force that instead of putting the pendulum back in the middle where it belongs and is "balanced", we kick it as far as we can in the other direction. And we keep kicking as hard as we can until we feel "empowered" and "in control of our own destiny" again.

Then we don't look back and see when the pendulum (or the industry) starts self-correcting about every 10 years or so and starts to swing back and bite us again :)

XP started around 1995, and this year marks its 10th anniversary. Agile methods were officially embraced by industry buzz somewhere around 2002, and for the last couple of years there has been some work on how to balance agility with large organizations and sophisticated technology.

Among the main things coming out of it that are generating a goodly dose of much-deserved attention are:
  • testing and integration/building are getting emphasized much earlier in the lifecycle, and by developers (not just testers and builders)

  • the "people factor" and teaming and communication is getting "equal time"

  • iterative development is being heavily emphasized up the management hierarchy - and not just iterative but HIGHLY iterative (e.g., weeks instead of months)
These are all good things!

There are some folks out there who never lost sight of these things to begin with. They never treated CASE or CMM as a silver bullet and took a balanced approach from the start. And they didn't treat "agile" as yet another silver bullet either. They have been quietly delivering successful systems all along - and we didn't hear much about them because they weren't being noisy.

Unfortunately, some other things may seem like "babies" being "thrown out with the bathwater." Agile puts so much emphasis on the development team and the project that practitioners of some of the methods seem to neglect other important disciplines and roles across the organization (including, and perhaps even especially, SCM).

Saturday, October 08, 2005

When to Commit: Perishable Value and Durable Value

We had a recent (and interesting) discussion on the scm-patterns YahooGroup about the notion of "value" and Frank Schophuizen got me thinking about what is the "value" associated with a configuration or a codeline: how does value increase or decrease when a configuration is "promoted" or when/if the codeline is branched/split?

Agile methods often talk about business value. They work on features in order of most business value. They eschew activities and artifacts that don't directly contribute to delivering business value. etc...

David Anderson, in several of his articles and blogs at agilemanagement.net, notes that the value of a feature (or other "piece" of functionality) is not dependent upon the cost to produce it, but upon what a customer is willing to pay for it. Therefore the value of a feature is perishable and depreciates over time:
  • The longer it takes to receive delivery of a feature, the less a customer may begin to value it.

  • If it doesn't get shipped in the appropriate market-window of opportunity, the value may be significantly lost.

  • If the lead-time to market for the feature is too long, then competitive advantage may be lost and your competitor may be able to offer it sooner than you can, resulting in possible price competition, loss of the sale, or loss of the business
So business value is depreciable; and the value of a feature is a perishable commodity.

Might there be certain aspects to business value that are not perishable? Might there be certain aspects that are of durable value? Is it only the functionality associated with the feature that is of perishable value? Might the associated "quality" be of more durable value?

I've seen the argument arise in Agile/XP forums about whether or not one should "commit" one's changes every time the code passes the tests, or if one should wait until after refactoring, or even until more functionality is implemented (to make it "worth" the time/effort to update/rebase, reconcile merge conflicts and then commit).

Granted, I can always use the Private Versions pattern to check in my changes at any time (certainly any time they are correct+consistent) without also committing them to the codeline for the rest of the team to see and use. So, assuming the issue is not merely having them secured in the repository (private versions), when is it appropriate to commit my changes to the codeline for the rest of the team to (re)use?

If refactoring is a "behavior preserving transformation" of the structure of the code, and if it improves the design and makes it "simpler", then is "good design" or "simplicity" something that adds durable value to the implementation of a running, tested feature? Kent Beck's initial criteria for "simple code" (and how to know when you are done refactoring your latest change) was described in an XPMagazine article by Ron Jeffries as the following, in order of importance:
  1. it passes all the tests (correctly :-)

  2. it contains no redundancy (the DRY principle: Don't Repeat Yourself)

  3. it expresses every thought we intended it to convey about the program (i.e. reveals all our intent, and intends all that it reveals)

  4. it minimizes the size and number of classes and methods
If I squint a little when I read through the above, it almost looks like it's saying the same thing that writing instructors and editors say about good writing! It should be: correct, consistent, complete, clear, and concise!
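To make rules 2-4 a bit more concrete, here is a tiny sketch of my own (the example and names are purely illustrative, not from Beck's or Jeffries' writing) showing a change that passes its tests both before and after, but only satisfies the "no redundancy" and "reveals intent" rules after refactoring:

```python
# Before: two near-identical functions; the duplication hides the one
# thought that actually varies (the tax rate) -- violates rules 2 and 3.
def total_with_tax_us(prices):
    total = 0
    for p in prices:
        total += p
    return total * 1.07

def total_with_tax_ca(prices):
    total = 0
    for p in prices:
        total += p
    return total * 1.13

# After refactoring: one intention-revealing function, no repetition,
# and fewer/smaller methods (rules 2, 3, and 4) -- behavior preserved.
TAX_RATES = {"us": 1.07, "ca": 1.13}

def total_with_tax(prices, region):
    return sum(prices) * TAX_RATES[region]
```

The point of committing only after this step is that the *second* version is the one my teammates will build on top of.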

I have often heard "correct, consistent and complete" used as a definition of product integrity. So maybe integrity is an aspect of durable value! And I have sometimes heard simplicity defined as "clear and concise" or "clear, concise and coherent/cohesive" (where "concise" would be interpreted as having very ruthlessly rooted out all unnecessary/extraneous or repeated verbiage and thoughts). So maybe simplicity is another aspect of durable value.

And maybe integrity is not enough, and simplicity is needed too! That could possibly explain why it might make more sense to wait until after a small change has been refactored (simplified) before committing it instead of waiting only until it is correct+consistent+complete.

Perhaps the question "when should I commit my changes?" might be answered by saying "whenever I can assure that I am adding more value than I might otherwise be subtracting by introducing a change into a 'stable' configuration/codeline!"
  • If my functionality isn't even working, then it's subtracting a lot of value, even if I did get it into the customer's hands sooner. It causes problems (and costs) for my organization and team to fix it, has less value to the customer if it doesn't work, and can damage the trust I've built (or am attempting to build) in my relationship with that customer

  • If my functionality is working but the code isn't sufficiently simple, the resulting lack of clarity, presence of redundancy, or unnecessary dependency can make it a lot harder (and more costly) for my teammates to add their changes on top of mine

  • If I wait too long, and/or don't decompose my features into small enough working, testable increments of change, then the business value of the functionality I am waiting to commit is depreciating!
Now I just have to figure out some easy and objective means of figuring out the "amount" of value I have added or subtracted :-)
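Half-seriously, the three bullets above could be folded into a commit checklist. Here's a sketch (the function, its parameters, and the two-day cutoff are all my own invented illustration, not a real metric):

```python
def ready_to_commit(tests_pass, refactored, days_uncommitted,
                    max_batch_days=2):
    """Illustrative checklist: commit when the change adds more value
    (working + simple) than it risks subtracting, and don't let working
    code sit uncommitted long enough for its value to depreciate."""
    if not tests_pass:
        return False           # broken code subtracts value outright
    if days_uncommitted >= max_batch_days:
        return True            # perishable value: stop hoarding changes
    return refactored          # otherwise, wait until it's simple too
```

The interesting part is the middle branch: even un-refactored (but working) code should eventually go in, because its functional value is perishable while its lack of simplicity can still be fixed later.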

So are "integrity" (correct + consistent + complete) and "simplicity" (clear + concise + coherent/cohesive) components of durable value? Is functionality the only form of perishable value?

What about "form, fit and function"? Are "form" and "fit" also components of perishable value? Am I onto something or just spinning around in circles?

Saturday, October 01, 2005

The Single Configuration Principle

I'm wondering if I tried to bite off too much at once with my Baseline Immutability Principle. Maybe there needed to be another step before that on the way from the Baseline Identification Principle ...

The baseline identification principle said that I need to be able to identify what I have to be able to reproduce. The baseline immutability principle said that the definition of a baselined configuration needs to be timesafe: once baselined, the identified set of elements and versions associated with that baseline must always be the same set of elements and versions, no matter how that baseline evolves in the form of subsequent changes and their resulting configurations.

Maybe somewhere in between the baseline identification principle and the baseline immutability principle should be the single configuration principle:
    The Single Configuration Principle would say that a baseline should correspond to one, and only one, configuration.
Of course the baseline itself might be an assembly of other baselined configurations, but then it still corresponds to the one configuration that represents that assembly of configurations. So the same baseline "identification" shouldn't be trying to represent multiple configurations; just one configuration.

What does that mean? It means don't try to make a tag or label serve "double-duty" for more than one configuration. This could have several ramifications:
  • maybe it implies that "floating" or "dynamic" configurations, which are merely "references", should have a separate identifier, even when they reference the same configuration as what was just labeled. So maybe identifiers like "LATEST" or "LAST_GOOD_BUILD" should be different from the one that identifies the current latest build-label (e.g., "PROD-BUILD-x.y.z-a.b")

  • maybe it also implies that when we use a single label to capture a combination of component versions, we really want true "composite" labeling support. This would literally let me define "PROD_V1.2" as "Component-One_V1.1" plus "Component-Two_V1.0" without requiring the label to explicitly tag all the same elements already tagged by the component labels

  • maybe it implies something similar for the notion of a "composite current configuration" or even a "composite codeline" where a product-wide "virtual" codeline could be defined in terms of multiple component codelines
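As a sketch of what "true composite labeling" might look like (all the label names, file paths, and the registry structure here are hypothetical illustrations, not any real tool's behavior), a composite label would be defined purely by reference to other labels, and only resolved to an element set on demand:

```python
# Hypothetical label registry: component labels tag concrete elements...
COMPONENT_LABELS = {
    "Component-One_V1.1": ["src/one/a.c", "src/one/b.c"],
    "Component-Two_V1.0": ["src/two/x.c"],
}

# ...while a composite label only references other labels, never
# re-tagging the elements those labels already identify.
COMPOSITE_LABELS = {
    "PROD_V1.2": ["Component-One_V1.1", "Component-Two_V1.0"],
}

def resolve(label):
    """Expand a label (composite or component) into the one element
    set -- the single configuration -- it identifies."""
    if label in COMPOSITE_LABELS:
        elements = []
        for part in COMPOSITE_LABELS[label]:
            elements.extend(resolve(part))
        return elements
    return COMPONENT_LABELS[label]
```

Note that "PROD_V1.2" still identifies exactly one configuration (the union produced by resolving it), which is what the single configuration principle would require of an assembly.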
What do you think? Is the single configuration principle a "keeper" or not?