Saturday, October 29, 2005

Codelines as Code Portals

I've been thinking a bit about the evolution of branching capability in version control tools.
  • First we had no branching support

  • Then we had very primitive branching support at the physical level of individual files using funky looking numbers like 1.1.1.1 that were basically 4-level revision numbers

  • Then we had better branching support, but still file-based, and it allowed us to use some reasonably readable-looking symbolic names to identify a branch

  • Then we had support for branching at the project/product level across the entire configuration item

  • Nowadays the better tools (such as AccuRev, ClearCase/UCM, Subversion, OurayCM, SpectrumSCM, Neuma CM+, and others) have "streams"
Among the differences between streams and project-oriented branches: a project-oriented branch still contains only the changes that took place on that branch, whereas a stream gives me a dynamically evolving "current configuration" of the entire item (not just the changes). And in many cases "streams" are first-class entities that can carry other attributes as well.

A stream, in a sense, gives a view of a codeline that is similar to a web portal. It is a "code portal" that pulls the right set of elements and their versions into the "view" of the stream, easing the burden of configuration specification and selection by providing us with this nice "portal."
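
To make the idea a bit more concrete, here is a minimal (and purely hypothetical) sketch of a stream as a "code portal" -- it isn't any particular tool's API, just an illustration of how a stream could resolve its entire current configuration dynamically by overlaying its own changes on top of whatever its backing stream currently selects:

    # Hypothetical sketch (not any real tool's API): a stream resolves its
    # entire current configuration dynamically, by overlaying its own changed
    # versions on top of whatever its backing (parent) stream currently selects.

    class Stream:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent      # backing stream, or None for the mainline
            self.overrides = {}       # element -> version changed in THIS stream

        def promote(self, element, version):
            """Record a new version of an element in this stream."""
            self.overrides[element] = version

        def current_configuration(self):
            """The full element->version map, not just this stream's own changes."""
            config = self.parent.current_configuration() if self.parent else {}
            config.update(self.overrides)   # this stream's changes win
            return config

    mainline = Stream("mainline")
    mainline.promote("foo.c", "7")
    mainline.promote("bar.c", "3")

    feature = Stream("feature-x", parent=mainline)
    feature.promote("foo.c", "8")

    # The stream acts as a "portal": it selects the whole configuration for me.
    print(feature.current_configuration())   # {'foo.c': '8', 'bar.c': '3'}

The point of the sketch is the last line: asking the stream for its configuration always gives the whole picture, not just the delta.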

So what might be next in the evolution of branches and branching after this notion of "code portal"?
  • Will it be in the area of distribution across multiple sites and teams?

  • Will it be in the area of coordination, collaboration and workflow?

  • Will it be in the area of increasing scale? What would a "stream of streams" look like?
Maybe it will be all three! Maybe a stream of streams is a composite stream, where the parent stream gives a virtual view across several (possibly remotely distributed) streams and repositories, but via a dynamic reference (rather than a copy), so that its current configuration is a view of the combined current configurations of each constituent stream (somewhat reminiscent of how composite baselines work in ClearCase/UCM)?
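
Here is an equally hypothetical sketch of what such a "stream of streams" might look like -- the constituent streams are held by reference (modeled below as callables that re-resolve their configurations on demand), so the composite's current configuration is always the combined current configuration rather than a copy:

    # Hypothetical sketch of a "stream of streams": the composite holds
    # *references* to its constituent streams (which could live in different
    # repositories or sites), so its configuration is re-resolved on demand,
    # never copied.

    def component_one_stream():
        # Stand-in for a remote stream's dynamically resolved configuration.
        return {"gui/main.c": "12", "gui/menu.c": "4"}

    def component_two_stream():
        return {"db/schema.sql": "7"}

    class CompositeStream:
        def __init__(self, name, constituents):
            self.name = name
            self.constituents = constituents      # references, not copies

        def current_configuration(self):
            config = {}
            for resolve in self.constituents:
                config.update(resolve())          # re-resolved every time it's asked
            return config

    product = CompositeStream("product-1.0", [component_one_stream, component_two_stream])
    print(product.current_configuration())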

What do you think will be the next steps in the evolution of branching beyond "streams" and what do you think are the trends that will fuel the move in that direction?

Saturday, October 22, 2005

Bugs versus Enhancements

On the Scrum Development Yahoo Group, Stephen Bobick initiated a discussion about Bugs versus Enhancements:
Here's something I've run into on agile and non-agile projects alike: the blurring of the distinction between bugs and enhancement requests. To me a bug is erroneous operation of the software based on the customer's requirements. That's fine when both sides agree to what the requirements are. Sometimes a bug can also be caused by a misunderstanding of the requirements by the team, however, and yes I'll still call this a bug. Often, however, customers will dub "missing" functionality (which was never discussed initially) or "nice-to-have" features, shortcuts and so on as "bugs"....

When I have tried to make the distinction between bugs and enhancements clearer to the PO or customer, sometimes through a SM, the customer thinks we are nit-picking, or trying to "play the blame game", rather than properly categorize and identify their feedback. One approach is to keep trying to educate and convince them anyways (on a case by case basis, if necessary). Another approach is just to let them call anything they want a "bug". Of course this can screw up your metrics (incidence of bugs) - something we are interested in at my current job (i.e. reducing the rate of new bugs and fixing bugs in the backlog).

Any words from the wise out in the trenches on how to best approach this? Obviously, with unit testing and other XP practices there is a claim that bug rates will be low. But if anything can be declared a bug, it becomes more difficult to make management and the customer believe the claims you make about your software development process and practices. And when this happens, the
typical response is to revert to "old ways" (heavy-handed, waterfall-type approaches with formal QA).

-- Stephen
I've actually had a lot of personal experience with this over the past several years. Here are some of the things I have learned...



1. DON'T ASSUME ALL DEFECTS ARE BUGS!

The term "bug" and the term "defect" don't always mean the same thing:
  • "Bug" tends to refer to something "wrong" in the code (due to nonconformance with either the design or the requirements).

  • "Defect" often means something that is "wrong" in any work-product (including the requirements).

  • Hence, many consider ALL incorrect, inconsistent, incomplete, or unclear requirements to be "defects": if they believe a requirement is "missing" or was incorrectly interpreted, it's still a "bug" in their eyes.

  • I've also seen some folks define "bug" as anything that requires changing ONLY the code to make it work "as expected". If it requires a change to the docs, they consider it a "change request" (and the issue of whether or not it is still a "defect" isn't really addressed).

  • Also, many folks' metrics (particularly waterfall-ish metrics for phase containment and/or screening, but I think also orthogonal defect classification -- ODC) explicitly identify "missing requirements" as a kind of defect.

2. DO YOU TREAT BUGS DIFFERENTLY FROM ENHANCEMENTS?

If so, then be prepared to battle over the differences. Very often, the difference between them is just a matter of opinion, and the resolution will almost always boil down to a matter of which process (the bugfix process or the enhancement process) is most strongly desired for the particular issue, or else will become an SLA/contractual dispute. Then you can bid farewell to the validity of your defect metrics.

If your development process/practice is to treat "bugs" differently than "enhancements" (particularly if there is some contractual agreement/SLA on how soon/fast "bugs" are to be fixed and whether or not enhancements cost more $$$ but bugfixes are "free"), then definitions of what a bug/defect is will matter only to the extent outlined in the contract/SLA, and it will be in the customer's interest to regard any unmet expectation as a "bug".

If, on the other hand, you treat all customer-reported "bugs" and "enhancements" sufficiently similarly, then you will find that many of the battles you used to have over what is a "bug" and what isn't simply go away, and it won't be as big an issue. You can instead focus on getting appropriate prioritization and scheduling of all such issues using the same methods.

If the customer learns that getting what they want, when they want it, is a matter of their own prioritization, and if the "cost" of an enhancement versus a bugfix is the same or isn't an issue, then they don't have to claim it's a bug; they just need to tell you how important it is to them relative to everything else they have prioritized for you.
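
One (purely illustrative) way to picture this is a single backlog where the bug/enhancement distinction is kept only as a classification attribute for metrics, while scheduling is driven entirely by the customer's priority -- the names and fields below are made up, not any real tracker's schema:

    # Illustrative sketch: one backlog for everything the customer reports.
    # The bug/enhancement distinction survives only as a classification
    # attribute (so the metrics still work); scheduling is driven purely by
    # priority, so there is nothing to "win" by calling an item a bug.

    from dataclasses import dataclass, field
    from enum import Enum

    class Kind(Enum):
        DEFECT = "defect"              # wrong behavior vs. agreed requirements
        ENHANCEMENT = "enhancement"    # new or "nice to have" behavior

    @dataclass(order=True)
    class Issue:
        priority: int                  # set by the customer/PO (1 = highest)
        title: str = field(compare=False)
        kind: Kind = field(compare=False, default=Kind.ENHANCEMENT)

    backlog = [
        Issue(2, "Export report as CSV", Kind.ENHANCEMENT),
        Issue(1, "Crash when saving empty file", Kind.DEFECT),
        Issue(3, "Keyboard shortcut for search", Kind.ENHANCEMENT),
    ]

    for issue in sorted(backlog):      # one queue, ordered only by priority
        print(issue.priority, issue.kind.value, issue.title)

    defect_share = sum(i.kind is Kind.DEFECT for i in backlog) / len(backlog)
    print(f"defect share of backlog: {defect_share:.0%}")   # metrics still possible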


3. IT'S ALL ABOUT SETTING AND MANAGING EXPECTATIONS!

None of the above (or any other) dickering over definitions is what really matters. What really matters is managing and meeting expectations. Sometimes business/organizational conditions mandate some contractual definition of defects versus enhancements and how each must be treated and their associated costs. If your project is under such conditions, then you may need to clearly define "bug" and "enhancement" and the expectations for each, as well as any agreed-upon areas of "latitude."

Other times, we don't have to have such formal contractual definitions. And in such cases, maybe you can treat enhancements and defects/bugs the same way (as noted earlier above).

Lastly, and most important of all, never forget that ...


4. EVERYONE JUST WANTS TO FEEL HEARD, UNDERSTOOD, AND VALUED!


If you can truly listen empathically and non-defensively (which isn't always easy), connect with people's needs at an emotional as well as an intellectual level, and demonstrate that those needs are important to you, then EVERYONE becomes a whole lot easier to work with, and that makes everything a whole lot easier to do.

Then it's no longer about what's a bug or what's an enhancement; and not even a matter of treating bugs all that differently from enhancements ... it simply becomes a matter of hearing, heeding and attending to their needs in a win-win fashion.


I'm sure there are lots of other lessons learned. Those were the ones that stuck with me the most. I've become pretty good at the first two, and have become competent at the third. I still need a LOT of work on that fourth one!!!

Sunday, October 16, 2005

TDD/BDD + TBD + IDE = EBT 4 Free?

I've been thinking a bit more about the inter-relationships between Test-Driven Development (TDD), Task-Based Development (TBD), a spiffy integrated development environment (IDE) such as Eclipse, and the trouble with traceability ...

One thing that occurs to me that might actually make traceability easier for agile methods is that some agile methods work in extremely fine-grained functional increments. I'm talking about more than just iterations or features. I mean individually testable behaviors/requirements:
    If one is following TDD, or its recent offshoot Behavior-Driven Development (BDD), then one starts developing a feature by taking the smallest possible requirement/behavior that can be tested, writing a test for it, then making the code pass the test, then refactoring, then going on to develop the next testable behavior etc., until the feature is done.
This means, with TDD/BDD, a single engineering task takes a single requirement through the entire lifecycle: specification (writing the test for the behavior), implementation (coding the behavior), verification (passing the test for the behavior), and design (refactoring the result).
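
Here is a tiny, hypothetical illustration of one such TDD/BDD micro-iteration in Python -- one testable behavior, one test that specifies it, and just enough code to pass it:

    # A minimal TDD/BDD micro-iteration for ONE testable behavior (made-up
    # example).  Step 1: write the test for the smallest behavior.  Step 2:
    # write just enough code to pass it.  Step 3: refactor, then move on to
    # the next behavior.

    import unittest

    # Step 2 (written *after* the test below was failing): just enough code to pass.
    def normalize_username(raw):
        """Behavior under test: usernames are stored lower-cased and trimmed."""
        return raw.strip().lower()

    class NormalizeUsernameBehavior(unittest.TestCase):
        # Step 1: this test *is* the specification of the behavior.
        def test_username_is_lowercased_and_trimmed(self):
            self.assertEqual(normalize_username("  Alice "), "alice")

    if __name__ == "__main__":
        unittest.main()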

That doesn't happen with waterfall or V-model development lifecycles. With the waterfall and V models, I do much of the requirements up front. By the time I do design for a particular requirement, it might be months, many tasks, and many engineers later. Ditto for when the code for the requirement actually gets written.

So traceability for a single requirement thru to specs, design, code, and test seems much harder to establish and maintain if those things are all splintered and fragmented across many disjointed tasks and engineers over many weeks or months.

But if the same engineering task focuses on taking just that one single requirement thru its full lifecycle, and if I am doing task-based development in my version-control tool, then ...
    The change-set that I commit to the repository at the end of my change-task represents all of that work across the entire lifecycle of realizing just that one requirement, so the ID of that one task or requirement can be associated with the change-set as a result of the commit operation/event taking place.
And voila! I've automatically taken care of much of the traceability burden for that requirement!
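
A rough sketch of the idea (not any real version-control tool's API): if the tool already knows which task I'm working on, the commit event itself can record the requirement-to-change-set link with no extra effort on my part:

    # Hypothetical sketch: in task-based development the tool knows my current
    # task, so the commit event can record the requirement->change-set trace
    # link automatically.

    trace_links = []   # stand-in for a traceability database

    def commit(change_set_id, files, current_task_id):
        """Commit a change-set; the active task ID rides along automatically."""
        trace_links.append({
            "requirement/task": current_task_id,
            "change_set": change_set_id,
            "artifacts": files,    # test, code, design notes touched by the task
        })

    commit("cs-1042",
           ["tests/test_login.py", "src/login.py", "docs/login-design.md"],
           current_task_id="REQ-317")

    print(trace_links[0])   # the traceability "link" fell out of the commit event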

If I had a spiffy IDE that gave me seamless integration and event/message passing between my change/task-tracking tool, my version-control tool, and the interface I use to edit code, models, requirements, etc., then it would seem to me that:
  • The IDE could easily know what kind of artifact I'm working on (requirement, design, code, test, etc.)

  • Operations in the IDE and the version-control tool would be able to broadcast "events" that know my current context (my task, my artifact type, my operation) and could automatically create a "traceability link" in the appropriate place.
I realize things like CASE tools and protocols like Sun's ToolTalk and HP's SoftBench tried to do this over a decade ago, but we didn't have agile methods quite so formalized then and weren't necessarily working in a TDD/TBD fashion. I think this is what Event-Based Traceability (EBT) is trying to help achieve.
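
Here is a hypothetical sketch of that event-based idea -- tools broadcast events carrying the current context (task, artifact type, operation), and a subscriber turns those events into traceability links; every name below is made up for illustration:

    # Hypothetical sketch of the event-based-traceability (EBT) idea: the IDE
    # and version-control tool broadcast events carrying context, and a
    # subscriber turns those events into trace links.

    from collections import defaultdict

    subscribers = []

    def subscribe(handler):
        subscribers.append(handler)

    def broadcast(event):
        for handler in subscribers:
            handler(event)

    # One subscriber: group every artifact touched under its task ID.
    links = defaultdict(list)

    def trace_link_builder(event):
        links[event["task"]].append((event["artifact_type"], event["artifact"]))

    subscribe(trace_link_builder)

    # Events as they might be emitted while I work on a single task:
    broadcast({"task": "REQ-317", "operation": "edit", "artifact_type": "test", "artifact": "test_login.py"})
    broadcast({"task": "REQ-317", "operation": "edit", "artifact_type": "code", "artifact": "login.py"})
    broadcast({"task": "REQ-317", "operation": "commit", "artifact_type": "change-set", "artifact": "cs-1042"})

    print(dict(links))   # requirement -> everything that realized it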

If I had (and/or created) the appropriate Eclipse plug-ins, and were able to develop all my artifacts using just one repository, then if I used TDD/BDD with TBD in this IDE, I might just be able to get EBT for free! (Or at least come pretty darn close)

Wouldn't I?

Tuesday, October 11, 2005

XP as an overreaction?

Response to Damon Poole's blog-entry asking "Is XP an overreaction?" ...

I believe Extreme Programming (XP) and other Agile Methods are indeed a strong counter-reaction to some prevailing management and industry trends from around 1985-1995. [Note that I said counter-reaction rather than over-reaction.]

I think the issue ultimately revolves around empowerment and control. During 1985-1995, two very significant things became very trendy, and management and organizations bought into their ideas: the SEI Software Capability Maturity Model (CMM) and Computer-Aided Software Engineering (CASE).

During this same time, programming and design methods were all caught up in the hype of object-oriented programming+design, and iterative+incremental development.

Many a large organization (and small ones too) tried to latch on to one or more of these things as a "silver bullet." Many misinterpreted and misimplemented CMM and CASE as a magic formula for creating successful software with plug-and-play replaceable developers/engineers:
  • Lots of process documentation was created
  • Lots of procedures and CASE tools were deployed, with lots of constraints on what developers could and could not do
  • and "compliance/conformance" with the documented process was audited.

Many felt that the importance of "the people factor" had been dismissed, and that creativity and innovation were stifled by such things. And many felt disempowered from doing their best work and the things they knew were required to be successful, because "big process" and "big tools" were getting in their way and being forced upon them.

(Some would liken this to the classic debate between the Hamiltonian philosophy of "big government" and heavy regulation versus the Jeffersonian view that "that government is best which governs least.")

I think this is the "crucible" in which Agile methods like XP were forged. Their practitioners wanted to free themselves from the ball and chain of restrictive processes and disabling tools.

So of course, what do we do when the pendulum swings so far out of balance in a particular direction that it really makes us say "we're mad as h-ll and we're not gonna take it any more!"?

Answer: we do what we always do, we react with so much countering force that instead of putting the pendulum back in the middle where it belongs and is "balanced", we kick it as far as we can in the other direction. And we keep kicking as hard as we can until we feel "empowered" and "in control of our own destiny" again.

Then we don't look back and see when the pendulum (or the industry) starts self-correcting about every 10 years or so and starts to swing back and bite us again :)

XP started around 1995, and this year marks its 10th anniversary. Agile methods were officially embraced by industry buzz somewhere around 2002, and for the last couple of years there has been some work on how to balance agility with large organizations and sophisticated technology.

Among the main things coming out of this work that are generating a goodly dose of much-deserved attention are:
  • testing and integration/building are getting emphasized much earlier in the lifecycle, and by developers (not just testers and builders)

  • the "people factor" and teaming and communication is getting "equal time"

  • iterative development is being heavily emphasized up the management hierarchy - and not just iterative but HIGHLY iterative (e.g., weeks instead of months)
These are all good things!

There are some folks out there who never forgot these things to begin with. They never treated CASE or CMM as a silver bullet and took a balanced approach from the start. And they didn't treat "agile" as yet another silver bullet either. And they have been quietly delivering successful systems without a lot of noise - and we didn't hear much about them because they weren't being noisy.

Unfortunately, some other things may seem like "babies" being "thrown out with the bathwater." Agile puts so much emphasis on the development team and the project that practitioners of some of the methods seem to do so at the expense of other important disciplines and roles across the organization (including, and perhaps even especially, SCM).

Saturday, October 08, 2005

When to Commit: Perishable Value and Durable Value

We had a recent (and interesting) discussion on the scm-patterns YahooGroup about the notion of "value," and Frank Schophuizen got me thinking about what the "value" associated with a configuration or a codeline really is: how does value increase or decrease when a configuration is "promoted," or when/if the codeline is branched/split?

Agile methods often talk about business value. They work on features in order of greatest business value. They eschew activities and artifacts that don't directly contribute to delivering business value. And so on.

David Anderson, in several of his articles and blogs at agilemanagement.net, notes that the value of a feature (or other "piece" of functionality) is not dependent upon the cost to produce it, but upon what a customer is willing to pay for it. Therefore the value of a feature is perishable and depreciates over time:
  • The longer it takes to receive delivery of a feature, the less a customer may begin to value it.

  • If it doesn't get shipped in the appropriate market-window of opportunity, the value may be significantly lost.

  • If the lead-time to market for the feature is too long, then competitive advantage may be lost and your competitor may be able to offer it sooner than you can, resulting in possible price competition, or loss of the sale or the business.
So business value is depreciable; and the value of a feature is a perishable commodity.
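
Just to make the depreciation idea concrete, here is one crude, hypothetical model of perishable feature value (my own illustration, not a formula from David Anderson's articles):

    # One made-up way to model perishable feature value: value decays with
    # lead time, and drops sharply once the market window closes.

    def perishable_value(initial_value, weeks_of_lead_time,
                         weekly_decay=0.05, market_window_weeks=26):
        """Crude illustration: 5% of remaining value lost per week of delay,
        and only a residual 20% left if we miss the market window entirely."""
        value = initial_value * (1 - weekly_decay) ** weeks_of_lead_time
        if weeks_of_lead_time > market_window_weeks:
            value *= 0.20
        return value

    for weeks in (2, 13, 30):
        print(weeks, "weeks of lead time ->", round(perishable_value(100_000, weeks)))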

Might there be certain aspects to business value that are not perishable? Might there be certain aspects that are of durable value? Is it only the functionality associated with the feature that is of perishable value? Might the associated "quality" be of more durable value?

I've seen the argument arise in Agile/XP forums about whether or not one should "commit" one's changes every time the code passes the tests, or if one should wait until after refactoring, or even until more functionality is implemented (to make it "worth" the time/effort to update/rebase, reconcile merge conflicts and then commit).

Granted, I can always use the Private Versions pattern to check in my changes at any time (certainly any time they are correct+consistent) without also committing them to the codeline for the rest of the team to see and use. So, assuming that the issue is not merely having it secured in the repository (private versions), when is it appropriate to commit my changes to the codeline for the rest of the team to (re)use?

If refactoring is a "behavior-preserving transformation" of the structure of the code, and if it improves the design and makes it "simpler", then is "good design" or "simplicity" something that adds durable value to the implementation of a running, tested feature? Kent Beck's initial criteria for "simple code" (and how to know when you are done refactoring your latest change) were described in an XP Magazine article by Ron Jeffries as the following, in order of importance:
  1. it passes all the tests (correctly :-)

  2. it contains no redundancy (the DRY principle: Don't Repeat Yourself)

  3. it expresses every thought we intended it to convey about the program (i.e. reveals all our intent, and intends all that it reveals)

  4. it minimizes the size and number of classes and methods
If I squint a little when I read thru the above, it almost looks like it's saying the same thing that writing instructors and editors say about good writing! It should be: correct, consistent, complete, clear, and concise!
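
Here is a tiny, made-up before-and-after that illustrates criteria 2 and 3 above -- remove the repetition, and make the code say what we meant, without changing its behavior:

    # Before: passes the tests, but repeats itself and hides its intent.
    def total_before(items):
        t = 0
        for i in items:
            t = t + i["price"] * i["qty"] + i["price"] * i["qty"] * 0.07
        return t

    # After refactoring: still passes the same tests, no redundancy, intent revealed.
    SALES_TAX_RATE = 0.07

    def line_total(item):
        return item["price"] * item["qty"] * (1 + SALES_TAX_RATE)

    def total_after(items):
        return sum(line_total(item) for item in items)

    items = [{"price": 10.0, "qty": 2}, {"price": 4.0, "qty": 5}]
    assert abs(total_before(items) - total_after(items)) < 1e-9   # behavior preserved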

I have often heard "correct, consistent and complete" used as a definition of product integrity. So maybe integrity is an aspect of durable value! And I have sometimes heard simplicity defined as "clear and concise" or "clear, concise and coherent/cohesive" (where "concise" would be interpreted as having very ruthlessly rooted out all unnecessary/extraneous or repeated verbiage and thoughts). So maybe simplicity is another aspect of durable value.

And maybe integrity is not enough, and simplicity is needed too! That could possibly explain why it might make more sense to wait until after a small change has been refactored (simplified) before committing it instead of waiting only until it is correct+consistent+complete.

Perhaps the question "when should I commit my changes?" might be answered by saying "whenever I can assure that I am adding more value than I might otherwise be subtracting by introducing a change into a 'stable' configuration/codeline!"
  • If my functionality isn't even working, then it's subtracting a lot of value, even if I did get it into the customer's hands sooner. It causes problems (and costs) for my organization and team to fix it, has less value to the customer if it doesn't work, and can damage the trust I've built (or am attempting to build) in my relationship with that customer.

  • If my functionality is working, but the code isn't sufficiently simple, the resulting lack of clarity, presence of redundancy, or unnecessary dependencies can make it a lot harder (and more costly) for my teammates to add their changes on top of mine.

  • If I wait too long, and/or don't decompose my features into small enough working, testable increments of change, then the business value of the functionality I am waiting to commit is depreciating!
Now I just have to find some easy and objective means of figuring out the "amount" of value I have added or subtracted :-)
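
In the meantime, here is a deliberately crude sketch of how that trade-off might be made explicit (a made-up checklist, not an objective measure of value):

    # Crude sketch of the trade-off above: commit when the change adds durable
    # value (it works and it's simple) before the perishable value of waiting
    # any longer is lost.

    def ready_to_commit(tests_pass, refactored_to_simple, days_since_last_commit,
                        max_days_between_commits=2):
        if not tests_pass:
            return False            # broken code subtracts value for everyone
        if refactored_to_simple:
            return True             # correct AND simple: durable value added
        # Correct but not yet simplified: don't let the increment go stale.
        return days_since_last_commit >= max_days_between_commits

    print(ready_to_commit(tests_pass=True, refactored_to_simple=False, days_since_last_commit=3))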

So are "integrity" (correct + consistent + complete) and "simplicity" (clear + concise + coherent/cohesive) components of durable value? Is functionality the only form of perishable value?

What about "form, fit and function"? Are "form" and "fit" also components of perishable value? Am I onto something or just spinning around in circles?

Saturday, October 01, 2005

The Single Configuration Principle

I'm wondering if I tried to bite off too much at once with my Baseline Immutability Principle. Maybe there needed to be another step before that on the way from the Baseline Identification Principle ...

The baseline identification principle said that I need to be able to identify what I have to be able to reproduce. The baseline immutability principle said that the definition of a baselined configuration needs to be timesafe: once baselined, the identified set of elements and versions associated with that baseline must always be the same set of elements and versions, no matter how that baseline evolves in the form of subsequent changes and their resulting configurations.

Maybe somewhere in between the baseline identification principle and the baseline immutability principle should be the single configuration principle:
    The Single Configuration Principle would say that a baseline should correspond to one, and only one, configuration.
Of course the baseline itself might be an assembly of other baselined configurations, but then it still corresponds to the one configuration that represents that assembly of configurations. So the same baseline "identification" shouldn't be trying to represent multiple configurations; just one configuration.

What does that mean? It means don't try to make a tag or label serve "double-duty" for more than one configuration. This could have several ramifications:
  • maybe it implies that "floating" or "dynamic" configurations, which are merely "references," should have a separate identifier, even when they reference the same configuration as what was just labeled. So maybe identifiers like "LATEST" or "LAST_GOOD_BUILD" should be different from the one that identifies the current latest build-label (e.g., "PROD-BUILD-x.y.z-a.b")

  • maybe it might also imply that when we use a single label to capture a combination of component versions, we really want true "composite" labeling support. This would literally let me define "PROD_V1.2" as "Component-One_V1.1" plus "Component-Two_V1.0" without requiring the label to explicitly tag all the same elements already tagged by the component labels (see the sketch after this list)

  • maybe it implies something similar for the notion of a "composite current configuration" or even a "composite codeline" where a product-wide "virtual" codeline could be defined in terms of multiple component codelines
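
Here is a small, hypothetical sketch of that kind of composite labeling -- the product baseline is defined purely in terms of the component baselines, with no re-tagging of elements, and it still resolves to one, and only one, configuration:

    # Hypothetical sketch of "composite" labeling: the product baseline is
    # defined in terms of component baselines, without re-tagging every
    # element, and it still resolves to exactly one configuration.

    component_baselines = {
        "Component-One_V1.1": {"one/foo.c": "5", "one/bar.c": "9"},
        "Component-Two_V1.0": {"two/baz.c": "2"},
    }

    composite_baselines = {
        # The product label just *references* the component labels.
        "PROD_V1.2": ["Component-One_V1.1", "Component-Two_V1.0"],
    }

    def resolve(baseline):
        """Expand a baseline into the single element->version configuration it names."""
        if baseline in component_baselines:
            return dict(component_baselines[baseline])
        config = {}
        for part in composite_baselines[baseline]:
            config.update(resolve(part))
        return config

    print(resolve("PROD_V1.2"))   # one, and only one, configuration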
What do you think? Is the single configuration principle a "keeper" or not?