Wednesday, March 08, 2006

The World is 5-Dimensional: Globalization and Teleportation

This will be the first of several blog posts on the subject of globalization and my thoughts on what it portends for CM and Agility ...

For those of you who haven't read Thomas Friedman's The World is Flat, here are a few resources to familiarize yourself with the basic gist of the material in short order: The book is about globalization and how technology (and other "levelers") have "leveled the playing field," enabling so many others in (formerly) third-world countries to compete with "the big boys."

I think the book is, in part, saying that the mass availability of technologies like the internet and wireless connectivity has provided us with the "inverse" of teleportation. The accessibility of the internet and its pervasiveness in the everyday life of businesses and individuals is essentially the virtual inverse of teleportation: instead of being able to send ourselves anywhere, instantly, we have the ability to virtually summon anything from anywhere to wherever we are!

Okay, so we've only been able to do that digitally, with the mass commoditization of the internet and wireless technologies. We can do it with information and data, but not physical objects. But we can still connect with people anywhere, instantly!

In terms of business, the world is "flat", but in terms of technology, we're basically saying The World is now 5-Dimensional!

The traditional 4 dimensions are the three spatial dimensions (height/length, width, depth), and time. Friedman is saying that these "flatteners" have occurred. Of course the world isn't really flat (we know it's round), but what I perceive him to be saying is that a new "dimension" has emerged that now allows us to come pretty darn close to bypassing the traditional constraints of time and space.

This 5th dimension is technology plus mass accessibility! Each of the "flatteners" Friedman describes is a form of technology that allows information, communication, and collaboration to transcend the traditional time-and-space boundaries of the physical world. And the internet (the "virtual world") is a big part of that 5th dimension. But we didn't make it to that 5th dimension right away. It wasn't until everyone and their mother in this country, and in all the other countries we economically compete and partner with, had access to that technology (and used it!) that we got there.

Friedman calls this Globalization 3.0. Whereas Globalization 2.0 was centered around business, the increased pervasiveness of the internet into daily lives and gadgets has created Globalization 3.0, which centers around individuals. I wonder if Globalization 4.0 might be achieved when we finally perfect virtual teleportation and have the ability to project our own virtual presence (not just a "holographic image" but many of our five senses) to interact with that of others. (That sounds a little too much like the Matrix for my taste.)

Here are some interesting reviews and commentary on Friedman's book: Over the coming weeks, I'll be musing more on what this might mean for the future of CM and Agility, and I'll be commenting on several related books on the topic:

Tuesday, February 28, 2006

Unchangeable Rules of Software Change - Redux

I put together a couple of my earlier blog-entries on the topic of software change and iterative development and developed them into an article in the February 2006 issue of CMCrossroads Journal. The article is entitled The Unchangeable Rules of Software Change (just like the earlier blog-entry) and updates some of what I had blogged about earlier.

In addition to the first three commonly recurring pitfalls encountered when first faced with the reality of these "unchangeable rules", I identified three additional pitfalls that typically occur when first attempting iterative development. The article also has a few more iterative development resources than my previous follow-up blog-entry. Lastly, I expanded the rule-set by one, adding the "quicksilver" rule to the "quicksand" rule as noted below:

The Unchangeable Rules of Software Change
Rule #0: Change is Inevitable!
The Requirements/Plans ARE going to change!

Rule #1: Resistance is Futile!
There isn't a darn thing you can do to prevent Rule #0.

Rule #2: Change is like Quicksand -- Fighting it only makes it worse!
The more you try to deny and defy rule #1 by attempting to prevent rule #0, the worse things will get.

Rule #3: Change is like Quicksilver -- Tightening your grip makes it slip from your grasp!
The more you try to deny and defy rule #2 by attempting to precisely predict or rigidly control change, the more erratic and onerous the result will be.

Rule #4: Embrace change to control change
The more flexible and adaptive you are at accommodating change, the more control you will have over your outcomes.
You can read the whole article here!

Thursday, February 23, 2006

More SCM Blogs

In my last blog-entry of 2005, I posted a list of Software CM and Version-Control Blogs and asked for any others you'd recommend. I know of a few more now:
Austin Hastings now has a blog! Check it out at Doing Better!

Austin is incredibly knowledgeable about CM and architecture. Not only is he much more concise than I am [I'm (in)famous for being verbose], he's also a lot more insightful: he sees the 'whole' system much more quickly and gets right at the crux of the matter. I'm expecting great and inspiring things from this blog, and judging from his entries on Defining Baseline and Table Data Gateway, I won't be disappointed!

Kevin Lee has a forthcoming blog!

Okay - so it's not quite a blog yet. But it supposedly will be very soon. Kevin has a forthcoming book on Continuous Integration using ClearCase, ANT, and CruiseControl that looks to be pretty good. And he has some nice articles and downloads from his "buildmeister" website.

Rob Caron writes that Robert Horvick started a blog about Team System's version control features and API

Sunday, February 19, 2006

Agile IT Organization Refactoring?

On the agilenterprise YahooGroup, someone asked for advice about how to structure the whole enterprise/organization, including core competencies for development, support, testing/QA/V&V, business/marketing analysts, systems engineering/architecture, deployment, PMO, CM, IT, etc...

I asked if he was looking for things like the following:
Mishkin Berteig wrote that he thinks that "this is one of those places where Lean and Agile really start to blur into one-another via queuing theory." I mentioned that I think Theory of Constraints (TOC) also blurs-in with Agile and Lean via queuing theory as well, as evidenced by the work of David J. Anderson.

Mishkin also wrote:
"The answer is that in a lean portfolio management situation this is a mandated constraint for projects. Projects simply are not approved unless they are able to fit inside that timebox. If you have a larger project, you must break it into two.... and you must not make it fit by making a larger team.... which leads to the other side: all teams should be roughly the same size, team composition should change very slowly, and people should be dedicated to a single team at a time."
I replied that, rather than the above, the practice of "pairing" and "refactoring" might actually scale up by refactoring people across projects and teams every 3-6 months. I'm thinking about the case of an IT department that supports several so-called "products": in any 3-6 month period, they of course get requests against those products, as well as requests for new projects.

Now, not every request and/or project has the exact same priority. So having each project or product prioritize its backlog and then work on whatever "fits" into the next iteration sort of assumes that each project has the same priority (if all the teams are more-or-less the same size and experience mix).

Instead of each project/product separately prioritizing its own backlog, they might do something like:
  • Form a combined backlog list across the entire department
  • Have representatives [governance] from each customer organization in the enterprise meet, and prioritize the department-wide backlog list
  • And whatever shows up as the topmost requests that can be handled in the next financial quarter with the available staffing is what gets worked.
If that means that some projects or products get more of their requests worked on in that time-frame, then so be it. And people might be "refactored" across teams and projects within the department, adding more staff to "feed" the projects that have the "lion's share" of the most highly prioritized requests from the backlog.

Wouldn't that be essentially creating a "pull" system for "allocating resources" to projects?
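That department-wide "pull" is simple enough to sketch in code. Here's a toy model (the request names, sizes, and capacity figure are all invented) in which the topmost prioritized requests across the whole portfolio get staffed until the quarter's capacity runs out:

```python
# Toy sketch of department-wide "pull" allocation: one combined backlog,
# prioritized across all products, with staff pulled toward whatever
# floats to the top each quarter. All names and numbers are made up.

def allocate(backlog, capacity):
    """backlog: list of (priority, product, size) tuples, where a lower
    priority number means more valued. capacity: total staff-iterations
    available this quarter. Returns staffing per product for the topmost
    requests that fit."""
    allocation = {}
    remaining = capacity
    for priority, product, size in sorted(backlog):
        if size <= remaining:                 # pull the request if it fits
            allocation[product] = allocation.get(product, 0) + size
            remaining -= size
    return allocation

backlog = [
    (1, "billing", 3),   # most valued request this quarter
    (2, "portal", 4),
    (3, "billing", 2),
    (4, "reports", 5),   # won't fit within a capacity of 9
]
print(allocate(backlog, capacity=9))   # billing gets the lion's share
```

Whichever products own the most highly valued requests naturally end up with the most staff, which is the "refactoring people across teams" effect described above.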

If pairing were used, it would help the "refactored" folks come up to speed more quickly on the new product or project. And after a while, I'd think most folks in the department would have a reasonably high knowledge level and awareness (and appreciation) of all the "important" projects going on in the department, and understand the overall "business" big-picture a little better (at least for that department).

That would still seem agile to me. It looks similar to some matrixed approaches, but I think it's a bit different because it is more fine-grained and incremental. I'm thinking it would help "scale" a single agile project and team into an overall "agile" department servicing an entire portfolio of projects, making sure that the projects most valued for the given quarter get the appropriate amount of resources relative to how the "Customer" prioritized the backlog across the entire portfolio.

Wouldn't it? Or would team-dynamics and other people-issues make it too darn hard to incrementally rotate/refactor people in that manner?

Isn't this also an example of using the Five Focusing Steps of TOC? (Would this be an example of a team/project constraint elevated to the level of the department and using dynamic allocation of the entire department's staff to subordinate to the constraint and place the most staff on the projects with the most valued requests?)

Friday, February 10, 2006

Agile vs MDE: XP, AMDD, FDD and Color Modeling

The February 2006 issue of IEEE Computer is devoted to Model-Driven Engineering (MDE). MDE is actually a bit broader than MDA/MDD, because MDE (allegedly) covers more of the lifecycle, and corresponding process and analysis. Doug Schmidt's Guest Editor's Introduction to MDE is a pretty good overview of the current theory and practice and the obstacles to overcome.

A co-worker of mine is very interested in combining Agile methods with Model-Driven Engineering. He feels that the benefits of agility and of model-driven code-generation show tremendous promise as a breakthrough combination in productivity and quality, and he is stymied that there aren't a lot more folks out there trying to do it.

He attended UML World in June 2005 and had some discussions with Scott Ambler (AgileModeling), Steve Mellor (co-creator of the Shlaer-Mellor OOAD method, and co-author of "Agile MDA", Executable UML and MDA Distilled), and Jon Kern, Agile MDA Evangelist (who helped Peter Coad launch TogetherSoft). He found most of what they had to say supported the possible synergy between Agility and MDA, but was very surprised to see AMDD folks and XP/Scrum folks throwing away their models once they had the code for them.

Upon hearing the above, I noted that Peter Coad is quite possibly the missing link between MDE and Agility:
The potential mismatch between MDA and AMDD or XP-like Agile methods is that:
  • Full/pure MDA strives for 100% generation of all code and executables directly from the models.

  • Ambler's AMDD, and "domain modeling" espoused by the likes of Robert Martin, Martin Fowler, and others in the XP community strives for "minimal, meaningful models", where they model only as needed, as a means of gaining more understanding, and then embed the knowledge gained into the code and/or tests.
I believe FDD has the potential to bridge the gap. It strives for a comprehensive domain model, but from that point the code is written by hand (using coding practices that are traditionally non-Agile in nature, including strict code-ownership and formal code reviews/inspections). FDD doesn't say anything about using MDA/MDD techniques to auto-generate code, but the method is extremely amenable to doing exactly that.

Furthermore, doing so would remove a lot of the manual parts and practices of FDD that many consider to be the least "Agile". And much of the FDD "Color Modeling" patterns and techniques are very much the equivalent of refactoring and design-patterns that are currently used for code. See the end of this message for some more resources on Color Modeling.

In my own humble opinion, I think the "sweet spot" is somewhere in between 100% code generation and "hand-crafted" code. I realize that 100% is the ideal, but I'm thinking about the 80/20 rule here, and whether trying to eliminate that last 20% is perhaps not always practical.

I think the biggest barrier to doing that today is tools:
  • The modeling tools are good at handling the structure, but not so much the behavior.

  • High-level programming languages like Java and C# and their IDEs are more convenient for specifying behavior (which is still largely textual in UML 2 and Action-Syntax Languages).

  • It is extremely difficult to maintain the non-interface code for a "class" or "package" unless it is either 100% manually coded or else 100% auto-generated. If it is 50-50, or even 80-20, then the "nirvana" of seamless and reversible round-trip design to code and back just isn't quite there yet.
What would get us there and help close that gap? I think what's needed is a "melding" of the IDE with the modeling tool. It would have to allow specifying code in a language such as Java or C# rather than only ASL "code" (most of which looks pretty darn close to Java and C# anyway :-), as well as a means of indicating whether a chunk of code was auto-generated or was hand-crafted but "navigable" and editable via the model.
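One long-standing answer to the "part generated, part hand-crafted" problem is the "Generation Gap" pattern (John Vlissides): the tool owns a generated base class it can regenerate at will, while hand-crafted behavior lives in a subclass the generator never touches. A minimal sketch, with invented class and attribute names:

```python
# "Generation Gap" pattern sketch: the modeling tool regenerates the base
# class whenever the model changes; hand-written behavior lives in a
# subclass the tool never touches. All names here are invented.

class CustomerBase:
    """GENERATED -- structure comes from the model; do not edit by hand."""
    def __init__(self, name, credit_limit):
        self.name = name
        self.credit_limit = credit_limit

    def can_order(self, amount):
        # default generated behavior: a bare structural check
        return amount <= self.credit_limit


class Customer(CustomerBase):
    """HAND-WRITTEN -- behavior too awkward to express in the model."""
    def can_order(self, amount):
        # corporate customers get 10% headroom; survives regeneration
        if self.name.endswith(" Inc."):
            return amount <= self.credit_limit * 1.1
        return super().can_order(amount)


c = Customer("Acme Inc.", 1000)
print(c.can_order(1050))  # True: the hand-written rule overrides the default
```

The point is that regenerating CustomerBase from the model never clobbers the hand-crafted subclass, which is exactly the separation a merged IDE/modeling tool would need to keep track of.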

The Eclipse framework shows a lot of promise in helping us get to that point, and has a lot of the groundwork and building blocks already in place, but still has a lot more work to be done.

I hear some of you saying, "Okay Brad, I see what this has to do with Agility. But what does this have to do with CM?" Well, in my January 2005 Agile SCM column, among other "crystal-ball gazing" predictions, I talked a little about "Model-Driven CM" and how it would resurrect the once popular SCM research-area of Software/System Configuration Modeling:
  • MDE/MDA would readily lend itself to allowing the developer to focus on the logical structure of the code, letting the physical structure (files and directories) be dictated by the code-generation tool with some configuration scripting+profiles.

  • This in turn would allow models and modeling to be easily used to analyze and design the physical structure of the code, including build & configuration dependencies.
Of course, we have a ways to go until we get there, but I do believe the trend is on the rise and it's only a matter of time.

Some other resources related to Agility and MDE:

Wednesday, February 08, 2006

Book Review: Practical Development Environments

Matthew Doar's Practical Development Environments (PDE) looks to be a pretty AMAZING book. It really does cover the entire lifecycle of development environment tools for version control, build management, test tools, change/defect tracking, and more. My previous favorite work on this topic was the Pragmatic Programmer's Pragmatic Project Automation (PPA), but no more.

The PPA book is still a GREAT book! And it focuses a lot more on programming and automating tasks and good ways to go about doing it. It goes into some of the details of particular tools and setting them up, especially JUnit.

But the PDE book is far more comprehensive in the range of development environment practices and tools that it covers, including not just the automation aspects, but evaluating them, setup and administration, integrating them together (and issues and challenges encountered), and many more aspects of testing, building, project tracking, version controlling, and just generally helping the development team get work done with maximal support and minimal hindrance from the tools they use.

If you want to be a toolsmith, and learn more about scripting and automating tasks and some of the common tools that already exist, then I'd still recommend Mike Clark's Pragmatic Project Automation.

If your focus is less on how/when/why to automate and more on evaluating, setting up, and maintaining a practical development environment for your team, then Matthew Doar's Practical Development Environments is definitely my top pick nowadays!

Sunday, February 05, 2006

Book Review: Perl Best Practices

As far as I'm concerned, Damian Conway's Perl Best Practices book should be required reading for any serious Perl programmer, and should be mandatory for any team that does any serious Perl development. These best-practices and conventions are exactly the sort of thing that programming teams need to come to grips with, and establish shared norms for, in order to make their codebase clear and "clean."

Next time I come across a team of Perl scripters that needs to develop a set of team standards and strategies for how to do these kinds of things, I'm simply going to tell them to get this book: read it together, discuss it, learn it, understand it, and then do it!

Friday, February 03, 2006

O'Reilly Book Reviews

I received a whole slew of books from O'Reilly to review, so I'll be writing about them in subsequent reviews either on this blog or in separate articles. The ones I'll be reading through are: Watch this space! I've already been making my way through Perl Best Practices, and it looks quite good. The other one I'll be doing soon is Practical Development Environments, which looks like it might give the Pragmatic Programmer's Pragmatic Project Automation more than a run for its money.

Saturday, January 28, 2006

Iterative Development Resources

As a follow-up to my earlier entry on The Unchangeable Rules of Software Change, one of the things that particular team figured out it needed was to try and use iterative development as a means of managing change and being more responsive to customer feedback while still controlling scope.

Of course, saying "do iterative development" is one thing. Figuring out how to actually do it for a group in an organization that isn't accustomed to it is another thing entirely. So here is a list of resources on the subject of adopting, planning/managing, and doing iterative software development -- particularly for those coming from a background of phased-sequential (waterfall, V) models of planning.

Iterative Development Resources:

Friday, January 27, 2006

Kandt's SCM Best-Practices: Tool Practices

More from the paper Software Configuration Management Principles and Best Practices, by Ronald Kirk Kandt, appearing in the Proceedings of PROFES2002, the 4th International Conference on Product-Focused Software Process Improvement, Rovaniemi, Finland, December 2002.

In this article, Ronald Kandt describes ten basic principles that support configuration management activities, and then goes on to describe twenty-three "fundamental" practices, which are split across the categories of: Management Practices, Quality Practices, Protection Practices, and Tool Practices.

In this entry, I'll enumerate Kandt's Tool Best-Practices for SCM (in priority order, with the most important ones listed first):
Practice 19:
Check code in often

Kandt goes on to explain "This practice should be constrained when checking in code on the primary development branch. That is, developers should only check in working versions of code to a [primary] development branch."

The main reason given is to ensure against loss of change by making sure the code is in the repository. So although initially it might seem this practice is about committing in small tasks (which I'm sure Kandt would approve of), that doesn't appear to be what he's emphasizing. This really seems to be about frequent use of "Private Versions" and then, before doing a Task-Level Commit, making sure one does a Private Build and runs Unit Tests and Smoke Tests as part of the Codeline Policy.

In other words: "If it ain't checked-in, it don't exist!"
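That codeline policy amounts to a gate in front of the task-level commit. Here's a minimal sketch; the three check functions are placeholders for whatever private-build and test commands your actual tools provide:

```python
# Sketch of a codeline policy as a commit gate: a task-level commit to the
# main codeline is allowed only after the private build, unit tests, and
# smoke tests all pass. The check functions stand in for real tools.

def private_build():
    return True   # stand-in: invoke the build tool, check its exit status

def unit_tests():
    return True   # stand-in: run the unit-test suite

def smoke_tests():
    return True   # stand-in: run a quick end-to-end sanity check

def gated_commit(commit, checks):
    """Run each codeline-policy check in order; commit only if all pass."""
    for name, check in checks:
        if not check():
            return "blocked at " + name
    commit()
    return "committed"

policy = [("private build", private_build),
          ("unit tests", unit_tests),
          ("smoke tests", smoke_tests)]

committed = []
print(gated_commit(lambda: committed.append("task-123"), policy))  # committed
```

The "blocked at ..." result is the codeline policy doing its job: the change stays a private version until the private build and tests vouch for it.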

Practice 20:
Configuration management tools should provide patch utilities

The stated benefit here is to support incremental release/deployment and also remotely submitted changes over slow telecommunication connections.
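The "ship only the delta" motivation is easy to demonstrate with Python's standard difflib module: compute a compact delta between the old and new versions, transmit just the delta, and reconstruct the new version at the far end (the file contents below are made up):

```python
# Illustrating why patch (delta) support matters: over a slow link you
# transmit only the differences, and the far end reconstructs the new
# version from its old copy plus the delta. File contents are made up.
import difflib

old = ["line one\n", "line two\n", "line three\n"]
new = ["line one\n", "line 2\n", "line three\n", "line four\n"]

delta = list(difflib.ndiff(old, new))       # the "patch" you'd transmit
rebuilt = list(difflib.restore(delta, 2))   # which=2 recovers the new side

print(rebuilt == new)   # True: the receiver now has the new version
```

Real SCM patch utilities (and diff/patch themselves) work the same way, just with sturdier formats and conflict handling.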

Practice 21:
Do not work outside of managed workspaces

Seems like the "Private Workspace" pattern to me.

Practice 22:
Do not share workspaces

This is sort of reinforcing that the workspace should be private rather than shared. Having said that, while I agree shared workspaces (not including Pair-Programming) shouldn't usually be the norm, I have seen cases where they can work quite well.

In order for this to work, there must be an explicit "protocol" agreed upon up-front by all the contributors to decide who is doing what, when, and to what (not just who is modifying which files when, but who is building what else and when) and/or how to rapidly communicate who is doing what when. See also work by Andre van der Hoek et al. on Continuous Coordination: A New Paradigm for Collaborative Software Engineering Tools and related work on the "Palantir" project.

Practice 23:
When developing software on a branch other than the primary branch, regularly synchronize development with the development branch

This seems like mainlining (the Mainline pattern) and rebasing (a.k.a. Workspace Update). Only I would say that regularly synchronizing should apply to private workspaces/sandboxes even when they aren't using a private or task branch. The more general rule would seem to be the Continuous Update pattern.
Th-th-th-that's all from Mr. Kandt's list of 10 SCM principles and 23 SCM best-practices!

Thursday, January 26, 2006

Kandt's SCM Best-Practices: Protection Practices

More from the paper Software Configuration Management Principles and Best Practices, by Ronald Kirk Kandt, appearing in the Proceedings of PROFES2002, the 4th International Conference on Product-Focused Software Process Improvement, Rovaniemi, Finland, December 2002.

In this article, Ronald Kandt describes ten basic principles that support configuration management activities, and then goes on to describe twenty-three "fundamental" practices, which are split across the categories of: Management Practices, Quality Practices, Protection Practices, and Tool Practices.


In this entry, I'll enumerate Kandt's Protection Best-Practices for SCM (in priority order, with the most important ones listed first):
Practice 15:
Use a software system to perform configuration management functions

Also known as: "Use an SCM tool!"

Practice 16:
Repositories should exist on reliable physical storage elements

Practice 17:
Configuration management repositories should be periodically backed-up to non-volatile storage and purged of redundant or useless information

Practice 18:
Test and confirm the backup process

Is there really anything here to argue with? (I didn't think so :-)

Oh, and that #18 is one that is often overlooked. I can't tell you the number of times I've come across folks who think their stuff is safe "'cuz it's backed-up," but whose backup process didn't confirm that they were able to successfully read/extract the data that was archived.
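That read-back step is cheap to automate. Here's a sketch using only the Python standard library (the file names and contents are invented): write the backup, then actually read every member back and compare checksums before trusting it.

```python
# Sketch of practice 18: don't just take the backup -- read it back and
# verify the contents match before declaring the data safe.
import hashlib, io, os, tarfile, tempfile

def sha256(data):
    return hashlib.sha256(data).hexdigest()

def backup_and_verify(files, archive_path):
    """files: {name: bytes}. Write a gzipped tar backup, then re-read the
    archive and checksum every member against the original data."""
    with tarfile.open(archive_path, "w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    # the step most shops skip: confirm the archive is actually readable
    with tarfile.open(archive_path, "r:gz") as tar:
        for name, data in files.items():
            restored = tar.extractfile(name).read()
            if sha256(restored) != sha256(data):
                return False
    return True

with tempfile.TemporaryDirectory() as d:
    ok = backup_and_verify({"repo.db": b"precious history"},
                           os.path.join(d, "backup.tgz"))
    print(ok)  # True
```

A real repository backup would stream from disk rather than memory, but the principle is the same: a backup you haven't restored from is only a hope.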

Next up, we'll look at what Kandt identifies as Tool practices in his list of 23 SCM best-practices.

Wednesday, January 25, 2006

Kandt's SCM Best-Practices: Quality Practices

More from the paper Software Configuration Management Principles and Best Practices, by Ronald Kirk Kandt, appearing in the Proceedings of PROFES2002, the 4th International Conference on Product-Focused Software Process Improvement, Rovaniemi, Finland, December 2002.

In this article, Ronald Kandt describes ten basic principles that support configuration management activities, and then goes on to describe twenty-three "fundamental" practices, which are split across the categories of: Management Practices, Quality Practices, Protection Practices, and Tool Practices.


In this entry, I'll enumerate Kandt's Quality Best-Practices for SCM (in priority order, with the most important ones listed first):
Practice 8:
All source artifacts should be under configuration control

Practice 9:
Use a Change-Control Board (CCB)

Practice 10:
Build software on a regular, preferably daily, basis, followed by invocations of regression test suites

Practice 11:
Document identified software defects

Practice 12:
Software artifacts that comprise a release should adhere to defined acceptance criteria

Practice 13:
Each software release should be regression tested before the test organization receives it

Practice 14:
Apply defect repairs to every applicable release or ongoing development effort

Most folks in the "Agile camp" and in the "CM camp" would probably consider most of these to be "no-brainers." The ones that Agilists might pick nits with are 8, 9, 11, and 14.
Control Contrarians and Wooden "Boards"

For 8 and 9, I don't think you'll find any agile folks recommending against the use of a version control system/repository. They would likely take issue with the appropriate meaning of "control." The words "control" and "change control" often set off sirens and red-flags for many agilists. And with good reason - they all too often see control attempted in a way that stifles developers and imposes highly restrictive waiting and redundancy for authorization/permission to do things; and they often see it executed as a means of preventing change instead of embracing it (see The Unchangeable Rules of Software Change).

Agilists also don't like the term CCB. Not only does the word "control" raise hackles (as already mentioned), but the term "Board" doesn't mesh very well with Agile values and perspectives on the whole concept of "team": a "board" conjures images of people isolated from the day-to-day workings and workers of the project. In reality, iteration planning meetings basically fill the CCB function for most agile projects: the iteration is usually small enough that there just isn't time to introduce new requirements within its timebox, so they're usually deferred to the next iteration.

Bugbases and Human Faces

There are many agilists who will rail against using a change/defect tracking tool. They say things like: there shouldn't be enough defects to warrant a bugbase (as opposed to a spreadsheet); rather than documenting and describing them in a database, we should just go fix 'em instead; index cards are better for recording the bare essentials and promoting conversational dialogue and two-way face-to-face communication; tracking systems dehumanize communication and are too easily misused to the effect of hindering rather than facilitating dialogue and collaboration.

I understand all of these remarks and concerns. And while they all raise valid points, my own position is different. I consider a basic defect/issue/enhancement tracking (DIET) system to be every bit as essential as a version control tool. I still use index cards as a means of initiating dialogue and eliciting/capturing the resulting needs and constraints. But then I put them into the tracking tool. Maybe if my handwriting were more legible the cards would be comprehensible enough to others, but I still think the tool is much more useful and convenient for tracking, organizing, sorting, searching, status accounting, and generating reports (even if they just get posted to a local "information radiator").

I also think information about defects should be captured in order to identify and understand possible trends and root-causes, as well as provide evidence and history to consumers and customers (especially those requiring formal support/service-level agreements).

I do think a lot of change control implementations (and resulting tool customizations) make things harder on developers than needed, for the sake of convenience to other roles. I think that's what leaves the bitter taste in many agilists' mouths and why they dislike it so much: it holds them back and/or diverts them away from what they feel is most important, delivering value to the customer in the form of tangible, tested results. I think that's a shame, because it doesn't have to be that way. A few tracking tools are gaining popularity in the Agile arena, like VersionOne and Jira.

The Monstrosity of Multiple Maintenance

Okay - I don't think any agilists will say that a defect shouldn't be fixed in all applicable releases. What they will say is that they would first do everything in their power to have one and only one mainline of development. Branching a new mainline to support a legacy release or a market/platform/project-specific variant is anathema to most agilists: it creates redundancy among code-streams (fixes have to be applied and integrated more than once) and splits the value-delivery stream into isolated streams/teams of diminished capacity.

Agilists would prefer to see a solution at a later binding time that uses design patterns of software architecture, building/releasing, licensing/distribution, & install/upgrade rather than patterns of version branching. For the most part I agree with them, but I am less wary of branching in the case of parallel development (because I know how to do it really effectively), and I am perhaps more accepting of the case of multiple releases (but I would still fight like heck against multiple variants :-)

Next up, we'll look at what Kandt identifies as Protection practices in his list of 23 SCM best-practices.

Tuesday, January 24, 2006

Kandt's SCM Best-Practices: Management Practices

More from the paper Software Configuration Management Principles and Best Practices, by Ronald Kirk Kandt, appearing in the Proceedings of PROFES2002, the 4th International Conference on Product-Focused Software Process Improvement, Rovaniemi, Finland, December 2002.

In this article, Ronald Kandt describes ten basic principles that support configuration management activities, and then goes on to describe twenty-three "fundamental" practices, which are split across the categories of: Management Practices, Quality Practices, Protection Practices, and Tool Practices.


In this entry, I'll enumerate Kandt's Management Best-Practices for SCM (in priority order, with the most important ones listed first):
Practice 1:
Maintain a unique, read-only copy of each release

Also known as: create an immutable release label

Practice 2:
Control the creation, modification, and deletion of software artifacts following a defined procedure

Also known as: use version control, and agree on how you'll be doing it -- for example, by identifying which SCM patterns you'll be using and their specific implementation in your project's context

Practice 3:
Create a formal approval process for requesting and approving changes

Also known as: manage change/scope, preferably by managing expectations rather than by trying to prevent change

Practice 4:
Use Change Packages

This one is more than just task-level commit; it builds on task-level commit to provide task-based development

Practice 5:
Use shared build processes and tools

We wrote about this in our October 2003 CM Journal article on Agile Build Management and its March 2004 successor article on Continuous Staging

Practice 6:
A version manifest should describe each software release

This is more than just a self-identifying configuration listing of files and versions. Kandt also intends it to mean identifying the set of features, fixes, and enhancements too, as well as all open problems and issues. This is often included in the release notes for a new release. It also relates to the information necessary to satisfy a configuration audit.
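Kandt's broader sense of a manifest is really just structured data plus a report. A toy sketch, with all field names and entries invented:

```python
# Toy sketch of a version manifest in Kandt's broader sense: not just the
# files-and-versions configuration, but the features, fixes, and open
# issues that make up the release. All names and entries are invented.
manifest = {
    "release": "2.1.0",
    "configuration": {"src/app.c": "1.14", "src/util.c": "1.9"},
    "features": ["FEAT-101: incremental export"],
    "fixes": ["BUG-207: crash on empty input"],
    "open_issues": ["BUG-311: slow startup on large repositories"],
}

def release_notes(m):
    """Render the human-readable part that usually lands in release notes."""
    lines = ["Release " + m["release"]]
    lines += ["  new:   " + f for f in m["features"]]
    lines += ["  fixed: " + f for f in m["fixes"]]
    lines += ["  known: " + i for i in m["open_issues"]]
    return "\n".join(lines)

print(release_notes(manifest))
```

The same structure also serves a configuration audit: the "configuration" map answers "exactly which versions shipped?", while the other fields answer "what did we claim shipped?"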

Practice 7:
Segregate derived artifacts from source artifacts

Also known as: know your sources from your targets! Oftentimes the versioning/storage strategies used for the two differ. (Of course, that's not the only reason to segregate them.)
Next up, we'll look at what Kandt identifies as Quality practices in his list of 23 SCM best-practices.

Monday, January 23, 2006

Kandt's SCM Principles

From the paper Software Configuration Management Principles and Best Practices, by Ronald Kirk Kandt, appearing in the Proceedings of PROFES2002, the 4th International Conference on Product-Focused Software Process Improvement, Rovaniemi, Finland, December 2002.

In this article, Ronald Kandt describes ten basic principles that support configuration management activities, and then goes on to describe twenty-three "fundamental" practices, split across four categories: Management Practices, Quality Practices, Protection Practices, and Tool Practices.


In this entry, I'll simply enumerate the principles from the article. In subsequent entries I'll list the 4 different sets of best-practices.

Kandt's Ten Basic Principles of SCM are:
Principle 1:
Protect critical data and other resources

Principle 2:
Monitor and control software development procedures and processes

Principle 3:
Automate processes and procedures when cost-effective

Principle 4:
Provide value to customers

Principle 5:
Software artifacts should have high quality

Principle 6:
Software systems should be reliable

Principle 7:
Products should provide only necessary features, or those having high value

Principle 8:
Software systems should be maintainable

Principle 9:
Use critical resources efficiently

Principle 10:
Minimize development effort

Nothing particularly earth shattering here. A few interesting things to note:
  • All of them are certainly well aligned with agility, or any sound engineering practices for that matter. But #7 and #10 seem especially well aligned with agility, and are often not emphasized enough in many CM circles.
  • #10 in particular might surprise some folks, because I'm sure many developers might perceive CM as trying to do anything but minimize development effort, and may feel #10 is often treated as secondary and subordinate to #2.
That's all well and good. Those things are easy to say "should" about. What's harder is to successfully do them all, and balance them all effectively when facing an SCM problem. It will be more interesting to see what Kandt's 23 SCM best-practices are, and how they manage to uphold these principles.

I'll also note that these principles seem somewhat different from the kind of SCM Principles I've been trying to compile. The things I'm looking for are less about SCM process goals, and more about SCM solution design (e.g., design principles for how to design or select an SCM best-practice that most effectively preserves the above goals). In this regard, I might consider most of the above to be goals more than principles (with a few exceptions).

Tuesday, January 17, 2006

The Unchangeable Rules of Software Change

I don't think I've ever written this down before, but I commonly say this to many a developer and development team. I often come across teams in the early stages of learning to deal with changing requirements. They typically run into two pitfalls, in the following order:
Pitfall #1: No scope change-management
The very first pitfall they fall into is not having any kind of change-management for the project/requirements. They allow any and all changes in a very reactive fashion without thinking about trying to renegotiate the scope, schedule, or resources for the project/release.

Pitfall #2: Preventing scope changes
The very next time around, they run into the second pitfall: they overcompensate for being burned/bitten by the first pitfall by trying to prevent any and all changes to the scope of their next product release.
They keep insisting that the fundamental problem is they don't have completely stable, detailed requirements. If the requirements were detailed enough up-front, they think their estimates would be more accurate; and if only the requirements would stop changing before a single line of code is written, then the schedule wouldn't keep fluctuating so much. It's those darn users/customers/analysts who don't know exactly what they want at the outset. It's all their fault!

The irony is that the more they try to get stable detailed requirements up-front, the more the schedule becomes protracted: first to get all the gory-details analyzed and specified up-front; and second because so many of the gory-details were either incorrect or still not detailed enough. It becomes this vicious cycle of trying to prevent change with more up-front detail, and yet things keep getting worse instead of better.

The first thing I commonly do here is explain the following:
    There is a very fancy technical term that biologists use to describe completely stable systems. This highly sophisticated technical term is the word "DEAD!"
I then try to explain that we meager humans (including ourselves and our customers) are imperfect, and we have imperfect and incomplete knowledge: We don't know things, and we don't know that we don't know things, and we don't know how to find out many of those things earlier.

Then I tend to mention Phil Armour's description of the Five Orders of Ignorance and how Software is not a Product, and that software development is therefore a knowledge-creation activity which involves reducing our ignorance over time through learning and discovery about the domain (our requirements) and ourselves (our process, culture, and skills/capabilities).

At this point I then introduce them to my "tried and true, battle-proven and industry-acknowledged, Unchangeable Rules of Software Change":
Rule #0: Change is Inevitable!
The Requirements/Plans ARE going to change!

Rule #1: Resistance is Futile!
There isn’t a darn thing you can do to prevent Rule #0.

Rule #2: Change is like Quicksand -- Fighting it only makes it worse!
The more you try to deny and defy rule #1 by attempting to prevent rule #0, the worse things will get.

Rule #3: Change is like Quicksilver -- Tightening your grip makes it slip from your grasp!
The more you try to deny and defy rule #2 by attempting to precisely predict, or rigidly control change, the more erratic and onerous the result will be.

Rule #4: Embrace change to control change
The more flexible and adaptive you are at accommodating change, the more control you will have over it.
Recently I was talking to a group that was struggling with rule #2. They thought if they could only do even more detailed specification up-front (they already do a reasonable amount of up-front detail), that it would somehow eliminate problems with estimation accuracy, which in turn would alleviate problems with "conformance to plan" and prevent the details from being discovered later (because they would instead get them all "right" up-front).

Despite having plenty of historical evidence/data in this particular product to support the "inescapable truth" laid out by these rules, there still seemed to be that desire to cling to the illusion of control that we can somehow prevent such changes if only we spend more time+effort getting a more complete+perfect+detailed spec up-front.

I was searching for external validation ("not invented here") and then came across the following three things that I liked a lot:

Tuesday, January 10, 2006

Lean Principles for Branching

A recent thread on the scrumdevelopment YahooGroup about "Scrum releases and SCM" got me thinking about a set of "Agile SCM" slides I prepared, one of which tried to apply principles of lean thinking to branching and merging for version control and their relationship to some of the SCM Patterns.

That was using an earlier version of the principles, when Tom and Mary had 10 or so of them. Now they've narrowed it down to seven, so I figured I'd take another stab at it:
  1. Eliminate Waste – Eliminate avoidable merge-propagation (multiple maintenance), duplication (long-lived variant branches), and stale code in infrequently synchronized workspaces (partially completed work)

  2. Build Quality In – Maintain codeline integrity with (preferably automated) unit & integration tests and a Codeline Policy to establish a set of invariant conditions that all checkins/commits to the codeline must preserve (e.g., running and passing all the tests :-)

  3. Amplify Learning – Facilitate frequent feedback via frequent/continuous integration and workspace update

  4. Defer Commitment (Decide as late as possible) -- Branch as late as possible! Create a label to take a "snapshot" of where you MIGHT have to branch off from, but don't actually create the branch until parallelism is needed.

  5. Deliver Fast (Deliver as fast as possible) -- complete and commit change-tasks and short-lived branches (such as task-branches, private-branches, and release-prep branches) as early as possible

  6. Respect People (Decide as low as possible) -- let developers reconcile merges and commit their own changes (as opposed to some "dedicated integrator/builder")

  7. Optimize the "Whole" -- when/if branches are created, use the Mainline pattern to maintain a "leaner" and more manageable branching structure
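Principle 2 above ("Build Quality In" via a Codeline Policy) can be sketched as code: the policy becomes a set of executable invariants that every commit must preserve. This is a hypothetical Python sketch of the idea, not any particular SCM tool's hook API; the check names are illustrative:

```python
def codeline_policy_ok(checks):
    """Evaluate a codeline policy expressed as named invariant checks.
    Every check must pass before a change may be committed to the codeline.
    Returns (ok, list-of-failed-check-names)."""
    results = {name: check() for name, check in checks.items()}
    failed = [name for name, passed in results.items() if not passed]
    return (len(failed) == 0, failed)


# Illustrative invariants for a mainline codeline policy; in practice each
# lambda would invoke a real build or test run:
policy = {
    "compiles": lambda: True,
    "unit_tests_pass": lambda: True,
    "integration_tests_pass": lambda: True,
}

ok, failed = codeline_policy_ok(policy)
```

Wiring something like this into a pre-commit or pre-integration gate is what turns the codeline policy from a document people agree to into an invariant the codeline actually maintains.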


Did I get it right? Did I miss anything?

Friday, January 06, 2006

Big 'A', the three pillars, and the three 'F's

In the past I've asked "What are Form and Fit for Software CM?" and gotten some very interesting answers. Configuration auditing for physical, functional, and process "integrity" (correct+consistent+complete) is a commonly recurring phrase in many classical CM documents and standards. And I was curious to understand how "form, fit and function" mapped from the physical world of hardware into the virtual world of software.

I assumed the "function" part was easy to map (functionality) and that it was the other two, form and fit, that were hard. I also wondered where the three 'F's of "form+fit+function" originated from.

That made me wonder if it had anything to do with the three pillars of Vitruvius from classical architecture. This goes back to an earlier blog posting about Commodity, Integrity and Simplicity that also discussed the Big 'A' (Architecture) and the three 'F's.

The classical Roman architect Vitruvius described the three pillars of architecture as Utilitas, Firmitas, and Venustas: Utilitas is usually translated as utility, need, or function; Firmitas as firmness, durability, or stability of structure; and Venustas as beauty, aesthetics, or having pleasing/attractive form.

I can see how beauty or aesthetics could be translated as "form", and certainly see how "utility" could be translated as function. I'm not sure if I see a direct translation between "firmness" and "fit" (perhaps the better the "fit" the more durable the structure?)

I am wondering if form, fit, and function evolved on their own, separate from form, function, and durability ... or if they are related and "durability" somehow got translated into "fit" in CM circles. What is the difference between the three pillars of architecture and form + fit + function for configuration auditing of product integrity?

Friday, December 30, 2005

Software CM and Version Control Blogs

I've been looking around for other blogs that are primarily (or at least regularly) devoted to the subject of Software CM and/or Version Control. I did some searching thru blogsearch.google.com but mostly my own surfing turned up good results. I chose to omit blogs that don't seem to be updated anymore (like Brian White's Team Foundation blog - especially since Brian left Microsoft).

Anyway, here is what I found. If you know of others, please drop me a line.

Blogs about Software CM or Version Control:

Blogs frequently discussing Software CM or Version Control:

I found a few others, but they didn't seem to be active (like a ClearCase-centric SCM blog and a Continuous Integration 'book' blog -- not to be confused with Jason Yip's fairly active continuousintegration YahooGroup).

Do you know of any that I might have missed?

Happy New Year everybody!

Thursday, December 22, 2005

Agile SCM 2005 Book Reflections and Recommendations

I just finished writing my article for the December 2005 CMCrossroads Journal entitled Agile SCM 2005 - Reflecting back on the year in books. An excerpt follows ...
Hello! My name is Brad Appleton, and I'm a book-a-holic! Hear my serenity prayer:
Lord, please grant me ...
the serenity to accept that I can't read everything,
the time to read and understand everything that I can,
the wisdom to know the difference
[so I won't have to leave my estate to Amazon.com],
and a sufficiently well-read network of friends
[to tell me all about the books they've read].
We thought 2005 was a pretty gosh darn great year for Agile and Software CM alike. We wanted to share what we feel are some of the timeless classics that we have most looked to throughout the year, as well as the new books in the last year that we have been most impressed with.

Those of you reading this are encouraged to read the article to see what we had to say about some of the following books (as well as several others):
Happy Holidays and Hopeful New Years!
A Very Happy Merry ChristmaHannaValiRamaKwanzaakah (or non-denominational solstice celebration) to all in 2005! And looking forward to what 2006 will bring to all of us in the coming year!

Sunday, December 18, 2005

4+2 Views of SCM Principles?

In my last blog-entry I wondered if the interface segregation principle (ISP) translated into something about baselines/configurations, or codelines, or workspaces, or build-management. Then I asked if it might possibly relate to all of them.

Here's a somewhat scary thought (or "cool" depending on your perspective), what if the majority of Robert Martin's (Uncle Bob's) Principles of OOD each have a sensible, but different "translation" for each of the architectural views in my 4+2 Views Model of SCM/ALM Solution Architecture? (See the figure below for a quick visual refresher.)

[figure: the 4+2 Views Model of SCM/ALM Solution Architecture]
Thus far, the SCM principles I've "mapped" from the object-oriented domain revolve around baselines and configurations, tho I did have one foray into codeline packaging. What if each "view" defined a handful of object-types that we want to minimize and manage dependencies for? And what if those principles manifested themselves differently in each of the different SCM/ALM subdomains of:
  • change control (project-view)
  • version control (evolution view)
  • artifact (requirements, models, code, tests, docs) hierarchy and build management (product view)
  • workspace/repository/site management and application integration & synchronization (environment view)
  • workflow and process design (process view)
  • teaming, inter-group coordination and interfaces/expectations (organization view)
What might the principles translate into in each of those views, and how would the interplay between those principles give rise to the patterns already captured today regarding recurring best-practices for the use of baselines, codelines, workspaces, repositories, sites, change requests & tasks, etc.?

Thursday, December 15, 2005

Interface Segregation and Configuration Promotion

I've been thinking more about the Interface Segregation Principle (abbreviated as "ISP") from (Uncle) Bob Martin's Principles of Object-Oriented Design.

The "short version" of ISP in the initial article states that:
=> "Clients should NOT be forced to depend on interfaces that they do not use."

The summary of ISP in Uncle Bob's website says it differently:
=> "Make fine grained interfaces that are client specific."

In previous blog-entries, I've wondered how this might correctly translate into an SCM principle (if at all).
  • In Change-Packaging Principles, I wondered if maybe it corresponds to change-segregation or incremental integration: Make fine-grained incremental changes that are behavior-specific. (i.e., partition your task into separately verifiable/testable yet minimal increments of behavior.)

  • On the scm-patterns list I wondered if maybe it corresponds to composite baselines: composing baselines of other, more fine-grained baselines

  • Now I'm thinking maybe it corresponds to promotion lifecycle modeling and defining the promotion-levels in a promotion-lifecycle of a configuration-item (e.g., a build).
Why am I thinking this?

I guess I'm trying to go back to the basis of my means of comparison: configurations (and hence baselines) as "objects." If a configuration is an object, then what is an interface of a configuration, and what is a fine-grained interface (or "service")?

If I am thinking in terms of configuration building, then the interface for building the object (configuration) is the equivalent of Make/ANT "methods" and targets for a given item: (e.g., standard make targets like "clean", "all", "doc", "dist", and certain standard conventions for makeflags). That is certainly a plausible translation.

But if I am thinking in terms of baselining and supporting CM-mandated needs for things like reproducibility, repeatability, and traceability, from the perspective of the folks who "consume" the baseline (its clients), then maybe the different consumers of a baseline need different interfaces.

If those consumers end up each "consuming" the baseline at different times in the development lifecycle (e.g., coding, building, testing, etc.) then perhaps that defines what the promotion model and promotion levels should be for that configuration.
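That promotion-level idea can be sketched as a simple state machine. The level names below are hypothetical (roughly one per "consumer" of the baseline), not taken from any particular CM tool:

```python
# Hypothetical promotion levels, one per consumer/lifecycle stage:
PROMOTION_LEVELS = ["coded", "built", "tested", "released"]

def promote(current):
    """Advance a configuration to the next promotion level, rejecting
    skips: each consumer signs off before the next one sees the baseline."""
    i = PROMOTION_LEVELS.index(current)   # raises ValueError if unknown
    if i + 1 >= len(PROMOTION_LEVELS):
        raise ValueError("already at final promotion level")
    return PROMOTION_LEVELS[i + 1]

level = "coded"
level = promote(level)   # -> "built"
level = promote(level)   # -> "tested"
```

Each promotion level then corresponds to one client-specific "interface" onto the same configuration: the testers' view of the baseline is simply the configuration at the "tested" level.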

    What if they aren't at different times in the lifecycle? What if they are at the same time?
Then I guess it matters whether the different consumers are interested in the same elements of the baseline. If they're not, maybe that identifies a need for a composite baseline.

    What if they aren't at different times and aren't for different elements, but rather the same sets of elements?
Then maybe that identifies different purposes (and services) needed by different consumers for the same configuration at the same time. Building -versus- Coding might be one such example. Would branching -versus- labeling be another? (i.e. "services" provided by a configuration as represented by a "label" as opposed to by a "codeline", or a "workspace"?)

    What if no one of these is the "right" interpretation? What if it's ALL of them?
Then that would be very interesting indeed. If the result encompassed the interfaces/services provided by different Promotion-Levels, Make/ANT-targets, branch -vs- label -vs- workspace, then I don't even know what I would call such a principle. I might have to call it something like the Configuration ISP, or the Representation separation principle, or the manifestation segregation principle, or ....

What, if anything, do YOU think the ISP might mean when applied to Software CM and software configurations as represented by a build/label/codeline/workspace?

Sunday, December 11, 2005

Polarion for Subversion

A quick follow-on to my previous blog-entry on Subversion plus Trac gives SubTraction and an even earlier one asking Can I have just one repository please? ...

I just heard of a new product called Polarion which allegedly appears to do almost exactly what I envisioned in my "just one repository" blog-entry, and there appears to be a "Polarion for Subversion" offering (which also claims to support Ant, Maven, and Eclipse):
"In classic software development tool environments, many different point solutions are used for software life-cycle management. There are requirements management tools, bug trackers, change management, version and configuration management tools, audit and metrics engines, etc. The problem: your development artifacts are scattered, making it difficult to derive useful, timely management information. POLARION® ... keeps all artifacts of the entire software life-cycle in one single place ... gives organizations both tools (for requirements, tasks, change requests, etc.) AND project transparency through real-time aggregated management information ... combines all tools and information along the Software lifecycle in one platform. No tool islands, no interface problems, no difficult, potentially fragile integrations anymore."
However, it does NOT appear to be opensource.

I'd LOVE to see a mixed commercial offering of, say, AccuRev, Jira and Confluence be able to provide this all in one package (just as I described in the blog-entry). [And with AccuRev's and Atlassian's roots in and commitment to opensource (the folks at AccuRev had previously developed the open-source CM system "ODE" for the OSF), they might even consider making it freely available for opensource projects (like Atlassian currently does for both Jira and Confluence)]

Hey! I can dream - can't I? :-)

Friday, December 09, 2005

Subversion plus Trac gives SubTraction

Here's a bit of a "plug" for some open source SCM tool offerings ...

For those CVS users who don't already know about Subversion I urge you to take a look. Subversion was designed to be a next-generation replacement for CVS that has a lot of the same basic syntax and development model while fixing or updating most of its well known shortcomings.

Another spiffy open-source project is Trac, which provides simple but powerful defect/issue/enhancement tracking (DIET) using a Wiki-web interface. It readily integrates with both CVS and Subversion to add collaborative, low-friction request/activity tracking to your version control, and can be used to associate change-sets in the version control tool with change-tasks/requests in the tracking tool.

Using Trac with Subversion can help "subtract" a lot of the tedium of traceability from your day-to-day work and give more "traction" to your development efforts. So, in a way, Subversion plus Trac gives SubTraction :-)
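Trac's commit-message integration works roughly by scanning log messages for ticket references. Here's a simplified Python sketch of that kind of scan; the exact keywords and patterns Trac recognizes may differ, so treat this as illustrative:

```python
import re

# Find ticket references like "refs #123" or "closes #45" in a commit
# message -- a simplified version of what a Trac commit hook does.
TICKET_RE = re.compile(r"\b(refs|closes|fixes)\s+#(\d+)", re.IGNORECASE)

def ticket_refs(commit_message):
    """Return (action, ticket_number) pairs found in a commit message."""
    return [(m.group(1).lower(), int(m.group(2)))
            for m in TICKET_RE.finditer(commit_message)]

msg = "Fix null-pointer crash in parser. Closes #482, refs #510."
refs = ticket_refs(msg)   # -> [("closes", 482), ("refs", 510)]
```

That one convention -- a ticket number in every commit message -- is where most of the "subtracted" traceability tedium comes from: the cross-links between change-sets and change-requests get built automatically.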

Saturday, December 03, 2005

Agile Six Sigma - Holistic, Organic, Lean, Emergent

I've been reading bits and pieces about "Lean Six Sigma" for the past couple of years. It seems a reasonable mix of Lean Production and the Principles of Lean Thinking with Six Sigma methods and the SEI's description of Six Sigma. Lately it seems to be getting abbreviated to "Lean Sigma."

More recently, I've been hearing about "Design For Six Sigma (DFSS)" and "convergences" between "Lean" and Goldratt's "Theory of Constraints" (TOC), and techniques like the "The 5 Focusing Steps", "Throughput Accounting" and "Drum-Buffer-Rope." (There was a nice ASQ article comparing Lean, Six Sigma, and TOC awhile back.)

So I wanted to be the first to try and coin the phrase "Agile Six Sigma" - except I'm not real fond of the resulting acronym, plus someone else might have come up with it already (if only in passing). So I wanted to embellish it a bit to create an even better acronym before I commence the marketing madness for my new "cash cow" idea. Thus I have decided upon:
    "Agile Six Sigma - Holistic, Organic, Lean, Emergent."
Seriously tho! I actually think there is a lot of GREAT stuff in and synergies between Agile, Lean, TOC, and Systems Thinking. I think DFSS has some useful tools in its toolbox. I'm less sure of the overall methodology for SixSigma being compatible with Agile methods -- tho I admit David J. Anderson has some GREAT articles that seem to show a connection, particularly the one on Variation in Software Engineering.

I am getting weary of lots of hype that simply throws these buzzwords together (hence my marketing slogan and acronym above :-) but I think they have a lot to offer, and I would be interested in applying them to CM.

I'm particularly curious about using the Lean tools of value-stream mapping along with TOC in analyzing anti-patterns and bottlenecks that often occur in building, baselining, and branching & merging (since there seems to be a fairly direct correlation between "code streams" or "change flows" and a "value stream" or "value chain"). Has anyone already done this for CM? (I wonder if something like this could better substantiate the "goodness" of the Mainline pattern.)