Friday, July 22, 2005

CM to an Interface, not to an Implementation

It's been a very busy and heated week or so for "Agile CM" discussion on CMCrossroads.com. First, we published the debate-provoking article about some fictitious "newly discovered" National Treasures of Agile Development. Then there was, and still is, the ensuing (and quite heated) discussion on the Balancing CM and Agility discussion thread.

The gist of the article was that it purported to have discovered some historical artifact whose wording resembled the July 1776 Declaration of Independence, but was referring to agile developers complaining about, and wanting freedom from, the so-called "tyrannies" of many recurring heavyweight software CM process implementations. Of course, since it was posted in a forum for CM professionals, the article sparked some very strong reactions from some people (and you can read them in the "Balancing CM and Agility" thread if you like).

One thing regarding the article and discussion thread that struck me is very much related to one of the SCM Principles from GoF Design Patterns that I blogged about last week:
  • Program to an Interface, Not an Implementation (a.k.a. "Separate Interface from Implementation")

Some regarded the "National Treasures" article as an attack against CM and CM principles in general, instead of taking it for what it was: a grievance with some commonly recurring characteristics of some heavyweight SCM process implementations.

That got me thinking: maybe these could be recast as instances of violating an important SCM principle! Previously, I had blogged about "separating interface from implementation" as it applies to designing build scripts/recipes as well as CM processes and workflow. But I didn't really talk about how it applies to the roles involved in defining the CM activities and in executing them.

In the case of CM and development, I think it is frequently the case in many heavyweight SCM implementations that the developers did not get much say in defining the process and procedures they must follow to meet the needs of CM. It was instead defined primarily by configuration/build managers to meet needs like reproducibility, repeatability, traceability, and auditability, without enough consideration of development's needs for:
  • high quality+velocity achieved through rapid and frequent feedback that comes from very frequent integration (including being able to do their own merges+builds on their own "Active Development Line")

  • unit, integration, and regression testing with the latest development state of the codeline

  • effecting fixes/changes quickly and efficiently, without long wait/approval periods for authorization of changes they are likely to know need to be made ASAP (like most bugfixes and maintainability/refactoring changes found after the corresponding task was committed to the development codeline)
This leads me to think an important manifestation of separating interface from implementation is to:
      Configuration Manage to an Interface, Not to an Implementation!
What I mean by this is that when defining CM policies, processes and procedures, the impacted stakeholders (particularly the ones who will execute the procedures) not only need to be involved, but as the "consuming end-user" of the process and procedures, they need to play a key role in defining the process implementation.

  • CM (and other downstream stakeholders) should define their needs/requirements in terms of interfaces: policies, invariants, constraints, and entry criteria for what is given to configuration/build managers.

  • Development should be empowered to attempt to define the implementation: the processes/procedures and conventions for how they will meet or conform to the "interface" needed by CM.
This seems very "Agile" to me in that it fosters collaboration between CM and development. It lets CM be the "customer" of agile developers: CM gives development its process requirements, and drives and fine-tunes the implementation's "acceptance tests" to ensure it meets CM's needs. It also allows development, the folks who are executing the development activities, to collaboratively define their own processes (in accordance with lean development/manufacturing principles).

Does that mean every different development group that delivers to CM should be allowed to come up with its own process? What about having consistent and repeatable processes? If the requirements/interfaces are defined and repeatedly met, why and when do we need to care? Each development group is defining and repeatably executing its process to meet consistent CM entry-criteria. And doesn't the CMM/CMMI allow for project-specific tailoring of standard organizational processes?

Still, there should be some mechanism for larger-grained collaboration, such as a so-called "community of practice" for all the development projects to share their successful development practices, so that knowledge can be reused throughout the organization. And every time CM collaborates with yet another project, they can point to an existing body of development processes being successfully used, which the project they are engaging might want to consider adopting/adapting.

I do think that if they (development + CM) choose to use a different process or significantly adapt an existing one, it would be good to know the differences and WHY they were necessary. That seems to me to match the description of what an SCM pattern is: something that defines a solution to an SCM problem in a context, and captures the forces, their resolution, and the rationale behind it.

Then when CM and development work together the next time for the next project, they simply look at the set of SCM patterns they have been growing (into a pattern language perhaps) and decide which combination of patterns to apply, and how to apply them, to balance and resolve the needs of both CM and development, collaboratively!

Thursday, July 14, 2005

SCM Principles from GoF Design Patterns

I was reading Leading Edge Java online and saw an article on Design Principles from Design Patterns. The article is part III of a conversation with Erich Gamma, one of the famous Gang of Four who authored the now legendary book Design Patterns: Elements of Reusable Object-Oriented Software.

In this third installment, Gamma discusses two design principles highlighted in the GoF book:
  • program to an interface, not an implementation (a.k.a. "separate interface from implementation")

  • favor object composition over class inheritance
Another recurring theme echoed throughout the book is:
  • encapsulate the thing that varies (separate the things that change from the things that stay the same during a certain moment/interval)
I think these three GoF Design Principles have pretty clear translations into the SCM domain:
  • Separate Interface from Implementation - this applies not only to code, but to Make/ANT scripts when dealing with multi-platform issues like building for different operating environments, windowing systems, or component libraries from different vendors for the same functionality. We are often able to use variables to represent the target platform and corresponding set of build options/libraries: the rules for building the targets operate at the abstract level, independent of the particular platform. This can also apply to defining the process itself, trying to ensure the high-level workflow roles, states & actions are mostly independent of any particular vendor tool.

  • Prefer Composition & Configuration over Branching & Merging - This is one of my favorites, because it touches on one of my pet peeves: inappropriate use of branching to solve a problem that is better solved by using either compile-time, install-time, or run-time configuration options to "switch on" the desired combinations of variant behavior. Why deal with the pain of branching, and then having to merge a subsequent fix/enhancement to multiple variant-branches, if you can code it once in the same directory structure with patterns like Strategy, Decorator, Wrapper-Facade, or other product-line practices?

  • Isolate Variation - this one is almost too obvious. Private Workspaces and Private Branches isolate variation/change, as does just about any codeline. And we do the same things in the build-scripts too.
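The second principle above ("Prefer Composition & Configuration over Branching & Merging") can be sketched in a few lines. Instead of maintaining one variant-branch per platform and merging every fix into each of them, a single codeline can select the variant behavior at run time via a Strategy. The class and platform names here are purely illustrative:

```python
# One "interface" (path joining), with variant behavior composed in at
# run time instead of maintained on parallel variant branches.

class PosixPathStyle:
    def join(self, *parts):
        return "/".join(parts)

class WindowsPathStyle:
    def join(self, *parts):
        return "\\".join(parts)

# The variation lives in configuration data, not in the branch structure.
PATH_STYLES = {"posix": PosixPathStyle, "windows": WindowsPathStyle}

def make_path_style(platform):
    """Select the variant strategy from configuration."""
    return PATH_STYLES[platform]()

style = make_path_style("posix")
print(style.join("usr", "local", "bin"))   # usr/local/bin
```

A fix to the shared logic is made once, on one codeline; no retrofitting merges to variant branches are needed.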

Can you think of any other valid interpretations of the above design rules in terms of how they translate into the SCM domain?

Tuesday, July 05, 2005

Whitgift's Principles of Effective SCM

In an effort to try and deduce/derive The Principles of SCM, I'm going through the SCM literature to see what other published SCM experts have identified as SCM Principles.

Among the best "oldies but goodies" are the books by David Whitgift (Methods and Tools for SCM, Wiley 1991) and Wayne Babich (SCM: Coordination for Team Productivity, Addison-Wesley 1986). These books are 15-20 years old, but most of what they say still seems relevant for software development and well-aligned with agile and iterative development methodologies.

Today I'll focus on David Whitgift's writings. In the very first chapter ("Introduction") he says that CM is more concerned with the relationships between items than with the contents of the items themselves:
  • This is because CM needs to understand the decomposition + dependencies among and between all the things that are necessary for a change to result in a correct + consistent version of the system that is usable to the rest of the team (or its stakeholders).

  • He also states that most of the problems that arise from poor CM and which CM is meant to resolve are issues of coordination/communication and control. And he gives a shopping list of common problems encountered in each of five different areas (change-management, version-management, build-management, repository-management, item identification/relationship management).
At the end of section 1.2 in the first chapter, Whitgift writes:
In the course of the book, five principles become apparent which are the keys to effective CM. They are that CM should be:
  • Proactive. CM should be viewed not so much as a solution to the problems listed in the previous section but as a collection of procedures which ensure the problems do not arise. All too often CM procedures are instituted in response to problems rather than to forestall them. CM must be carefully planned.

  • Flexible. CM controls must be sensitive to the context in which they operate. Within a single project an element of code which is under development should not be subject to restrictive change control; once it has been tested and approved, change control needs to be formalized. Different projects may have very different CM requirements.

  • Automated. All aspects of CM can benefit from the use of software tools; for some aspects, CM tools are all but essential. Much of this book is concerned with describing how CM tools can help. Beware, however, that no CM tool is a panacea for all CM problems.

  • Integrated. CM should not be an administrative overhead with which engineers periodically have to contend. CM should be the linchpin which integrates everything an engineer does; it provides much more than a repository where completed items are deposited. Only if an engineer attempts to subvert CM controls should he or she be conscious of the restrictions which CM imposes.

  • Visible. Many of the issues raised in the previous section stem from ignorance of the content of items, the relationships between items, and the way items change. CM requires that any activity which affects items should be conducted according to clearly defined procedures which leave a visible and auditable record of the activity.

Whitgift's "proactivity principle" might seem a bit "Big Planning/Design Up Front" rather than responsive, adaptive, or emergent. And his "visibility principle" may seem like heavyweight traceability and documentation. However, in light of being flexible, automated, and integrated, they might not be as bad as they sound.

Each of the above five Principles of Effective CM seem (to me) to be potentially competing objectives ("forces") that need to be balanced when resolving a particular SCM problem. I wouldn't regard them as principles in the same sense as Robert Martin's "Principles of Object-Oriented Design."

Each of the subsequent chapters in Whitgift's text delves into various principles for the various sub-areas of CM. I'll write about those in subsequent blog-entries.

Monday, June 27, 2005

FDD - an agile alternative to XP

I think there are a lot of folks out there who judge all of "Agile" by what they know of Extreme Programming (XP), quite possibly because that is all or most of what they've heard about agile development.

I think folks (especially agile skeptics) should take a close look at Feature-Driven Development (FDD), if for no other reason than that it is an example of an agile method that is very different from XP. FDD is quite agile while still employing many of the traditional practices that agile skeptics are probably more accustomed to seeing.

For example, FDD has the more traditional progression of waterfall-ish phases in its iterations (while still being highly iterative and collaborative). FDD does conduct up-front planning, design and documentation and relies very heavily upon domain modeling. FDD also uses feature-teams with chief architects/programmers and traditional code-ownership and code-review (as opposed to pair-programming and refactoring).

To that end, here are some more resources about FDD so folks can learn more about it and become more aware of the fact that XP isn't the only agile "game" in town when it comes to development practices (SCRUM and DSDM are focused primarily on planning rather than development practices):

Sunday, June 19, 2005

Language Workbenches

Wouldn't ya know it! On the very same day that I blogged about Customer-Oriented Requirements Architecture (CORA) as The Next Big Thing, it turns out Martin Fowler wrote an article about Language Workbenches which seems to be getting at exactly the same core idea:
  • Language Workbenches utilize things like Meta-Programming Systems and Domain-Specific Languages (DSLs) to let the developer work more closely in the conceptual domain of the various subject-matter "spaces" of the requirements and the design.

  • They provide all sorts of IDE and refactoring support for the language domain they are created to support.

  • Language Workbenches seem a bit more focused on the design end, whereas my CORA idea is a bit more focused on applying architectural principles and design patterns to the expression and maintenance of the requirements. I believe the end-result is the same, however.

The more enabled we become at formally expressing the requirements in a language and framework more closely bound to the problem domain, the more important it will be to apply principles, patterns, and practices of refactoring, encapsulation, modularity, etc. to the groupings of requirements we develop and the relationships within and between them. And Language Workbenches become part of the environment that supports, maintains, and automates requirements dependency management and traceability.

Feature-Driven Development (FDD) has an interesting way of trying to do some of this with its feature-sets and color modeling patterns. See recent discussions on the agilemanagement YahooGroup and the newly created colormodeling YahooGroup for more details.

I may blog in the future about the relationship between FDD, Color-modeling, "grammar rules" for domain-modeling, DSLs, and the Law of Demeter.

Sunday, June 12, 2005

Customer-Oriented Requirements Architecture - The Next Big Thing?

Robert Martin writes about what is "The Next Big Thing." From structured programming, to modular programming, object-oriented, extreme, aspect-oriented and service-oriented... all have been heavily hyped in their heyday.

Bob notes almost all of those started off as just "programming" and then each "trifurcated", having "design" and later "analysis" come afterward - completing the waterfall "triplet." Robert (Uncle Bob) Martin writes:
“I hope the next big thing is the big thing that we’ve needed for the last thirty years and haven’t had the guts to actually say. I hope the next big thing is professionalism. Perhaps a better word would be Craftsmanship.”
I notice that, within the past year, Aspect-Oriented Programming (AOP) has now also "trifurcated", completing the waterfall triplet: the last few years have seen books and articles on Aspect-Oriented "Design", and within the last 6-8 months we have a book on "Aspect-Oriented Analysis and Design" and another on Aspect-Oriented Use-Cases.

Aspects and analysis/requirements (use-cases) interest me because I wonder if the trend we're seeing isn't so much aspects, but the same trend indicated by Charles Simonyi's emphasis on Intentional Programming (he now calls it "Intentional Software"), by the emphasis on Domain-Specific Languages (DSLs) in Microsoft's "Software Factories", and to some extent even eXtreme Programming's representation of tests as precise requirements specifications, and the way ObjectMentor's own FitNesse framework is almost a DSL-like front-end for that.

I think programming languages and software specifications have evolved beyond the domain of the coder and designer into the domain of the analyst and the customer.
  1. Assembly language was just some ever-so-slight mnemonic sugar (it wasn't even syntactic sugar) on "raw" machine code. We used some short, cryptic but mnemonic names, but it still made the programmer think in terms of the way the computer's "processing" was organized. We had to think in terms of opcodes and registers and addresses in storage. We had to try and think like the machine.

  2. Then we got structured languages. You know, the kind where all that pseudo-code we used to write with words like "if then else", "while" and "for" could now be written directly in the language. We made the programming language try and represent the way the sequential logic seemed to go down in our heads. But it was still coding.

  3. Then with abstractions and objects we made programming languages cross the threshold from mere coding & programming to design, where we could now express not merely logical processing directly in the language, but design abstractions and encapsulation and various associations and interfaces.

  4. Then we built on that with patterns and libraries, and have now adorned it not just with inheritance and templates, but with Aspects and Java's "Annotations".
But one of our biggest technical problems in software development is still accurate communication of the customers' and users' needs and wants to the folks who have to translate that into working software. All those "shall" and "should" statements and "formal methods" like VDM and Z weren't as much help as we hoped.

Enter test-driven development, where we let the customers talk to us in their native language, but we try to write the technical requirements not as vague prose, but rather as executable code in the form of tests, so we can get immediate feedback and use short cycles. Fit and FitNesse attempt some of the goals of intentional programming by making a little DSL and GUI to let us write something that looks closer to prose but still generates source-code to make our executable tests.
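To illustrate the table-driven idea (this is not actual Fit/FitNesse code, just a minimal Python sketch of the same approach, with an invented discount rule standing in for a real business requirement):

```python
# The rule under test: an invented requirement, "10% off orders of 100
# or more", expressed as code.
def discount(order_total):
    return order_total * 0.10 if order_total >= 100 else 0.0

# The customer-authored "table": each row is (order total, expected
# discount), readable as near-prose but directly executable.
ACCEPTANCE_TABLE = [
    (50.0, 0.0),
    (100.0, 10.0),
    (250.0, 25.0),
]

def run_table(table):
    """Run every row and return (passed, failed) counts,
    like a Fit-style summary."""
    passed = failed = 0
    for total, expected in table:
        if abs(discount(total) - expected) < 1e-9:
            passed += 1
        else:
            failed += 1
    return passed, failed

print(run_table(ACCEPTANCE_TABLE))   # (3, 0)
```

The customer edits rows; the fixture code stays put. The table is both the requirement and its acceptance test.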

What this also does, along with use-cases, is bring the world of encapsulation and abstraction into the requirements/analysis domain. We can more formally and precisely attempt to package or encapsulate requirements into logical entities and more formally manage the dependencies between them.

Use-cases were a start at this, though it was rare to see the various use-case relationships (extends, specializes, generalizes, uses, etc.) utilized all that much. Rick Lutowski’s little-known FREEDOM development methodology also gets very formal about applying encapsulation to requirements – it might become a little more well known now that his book, Software Requirements: Encapsulation, Quality and Reuse, is out as of May 2005.

With DSLs and Intentional Software, combined with the ability to encapsulate requirements, we can then start talking about managing their dependencies much the same way we do for code/design today, which means we’ll be able to talk about things like refactoring of requirements, and “Design Patterns” of requirements design/engineering (and ultimately “requirements reuse”).

And if we ever get even close to that, much of what we call “traceability” today will become a thing of the past. The dependencies in the requirements will be precisely specified and apparent from how we express them in encapsulations and their formal relationships (just like C++/C#/Java code today specifies classes, interfaces, packages, and their inheritance, composition, and uses relationships).
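A speculative sketch of what that might look like: if requirements are encapsulated entities whose dependencies are declared formally, then a "trace" is just a walk over those declared relationships rather than a hand-maintained matrix. The requirement IDs and texts below are invented for illustration:

```python
# Requirements as encapsulated entities with declared dependencies.
class Requirement:
    def __init__(self, rid, text, depends_on=()):
        self.rid = rid
        self.text = text
        self.depends_on = list(depends_on)

def trace(req, index):
    """Transitively collect every requirement this one depends on --
    traceability derived from the declared relationships, for free."""
    seen = []
    stack = list(req.depends_on)
    while stack:
        rid = stack.pop()
        if rid not in seen:
            seen.append(rid)
            stack.extend(index[rid].depends_on)
    return seen

reqs = {
    "R1": Requirement("R1", "Users can log in"),
    "R2": Requirement("R2", "Sessions expire", depends_on=["R1"]),
    "R3": Requirement("R3", "Audit log records expiry", depends_on=["R2"]),
}
print(trace(reqs["R3"], reqs))   # ['R2', 'R1']
```

The point is not the toy graph walk, but that the trace is computed from the requirements' own structure instead of being documented alongside it.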

So, if it is true that:
  • Computers and software are useful primarily to the extent that they allow us to visualize & virtually manipulate concepts we could previously only imagine within our minds.
  • The evolution of programming languages has gone from trying to make it easier for the programmer to understand the machine’s representational language, to making it easier for the language to represent the thoughts of programmers and designers.
  • Software is becoming ever more pervasive and ubiquitous, along with an increasing demand for regulatory traceability.
Then perhaps Aspects and Sarbanes-Oxley, combined with DSLs (and/or Intentional Software) and test-driven development, will get us from Object-Oriented, past Aspect-Oriented and Service-Oriented, to arrive at a Customer-Oriented Requirements Architecture of encapsulated requirements and the corresponding automated "acceptance" tests.

Then the next big thing for software development may well be Customer-Oriented Requirements Architecture (CORA) and the evolution of expressive environments (I won’t call them languages because I don’t think they’ll be strictly textual) that allow the business analyst and customer to express their needs and requirements in terms closer to their own thoughts and vocabulary, and to directly transform that into encapsulated entities from which tests can be automatically generated and executed.

It would basically be creating a simulation environment with rapid feedback for exploring thoughts about what the software might do and analyzing the consequences. And just maybe it would enable the kind of Craftsmanship that Bob Martin's been doing for programming and design, only with the actual requirements!

But will we ever get there? Or will we be too busy chasing the next “Next Big Thing”? Or maybe I just need more sleep! What do you think? Is it too far fetched?

Sunday, June 05, 2005

The Art of Project Management

I just received a new O'Reilly & Associates book in the mail: The Art of Project Management, by Scott Berkun.

I didn't ask for it; O'Reilly just sent it to me. I perused it briefly. Judging from the table of contents, it looks promising and seems to focus on the reality of managing software projects. This is not a book about using MS-Project or PERT/Gantt charts or estimation. It is about pragmatic project-management realities rather than project-management science or theory.

Looking through some of the text, I saw a few things I had some strong negative feelings about, and some other things that strongly resonated with me. So I'm not sure yet if I'll end up loving it or hating it, but I'll definitely have a lot to ponder and learn from it.

Scott also has a new blog that looks pretty good, and has some excellent essays on leadership and teamwork.

Sunday, May 29, 2005

The Trouble with Traceability

I've taken many of my thoughts from my previous blog-entries on traceability and expounded & expanded upon them in my May 2005 "Agile SCM" column of CMCrossroads Journal. The article is entitled The Trouble with Tracing: Traceability Dissected. It describes:
  • Ten Commandments of Traceability
  • Nine Complaints against Traceability
  • Eight Reasons for Traceability
  • The Seven Functions of SCM
  • The Six Facets of Traceability
  • The Five Orders of Traceability
  • The Four Rings of Stakeholder Visibility
  • Three Driving Forces for Traceability
  • Two Overarching Objectives: Transparency & Identification
  • One Ultimate Mission: Fostering a Community of Trust

On a separate note ... I recently realized that the date on which I first author my blog-entries has often been very different from the published date of the entry. I was saving them, but not publishing them.

Normally it may take me a day or two to "clean up" an entry (add the URLs, fix the formatting, and fight with the WYSIWYG blog-composer) and publish it within a couple of days of creating it. But we had a grave illness (and then a death) in my family that consumed most of May for us, and I didn't realize until recently that my blog-entries for the last half of April and most of May hadn't been published yet.

So for that I must apologize. And I'll try to do better about publishing more regularly (ideally at least weekly).

Saturday, May 21, 2005

Situational Ownership: Code Stewardship Revisited

I had some interesting feedback on my previous blog-entry about Code Stewardship. Most apparent was that I utterly failed to successfully convey what it was. Despite repeated statements that stewardship was not about code access, it seems everyone who read it thought the code steward, as described, was basically a "gatekeeper" from whom others must acquire some "write-token" for permission to edit a class/module.

I am at a loss for what to say other than that is not at all how it works. The code-steward serves as a mentor and trail-guide to the knowledge in the code. Consulting the code-steward is not about getting permission, it is about receiving knowledge and guidance:
  • The purpose of the protocol for consulting the code-steward is to ensure two-way communication and learning (and foster collaboration). That communication is a dialogue rather than a mere "token" transaction. It's not a one-way transfer of "control", but a two-way transfer of knowledge!
Perhaps I would have been better off saying more about how Peter Block defines stewardship in his book of the same name (Stewardship: Choosing Service over Self-Interest, see an interesting review here and another one over here):
  • Stewardship is being accountable to the larger team or organization by "operating in service, rather than in control, of those around us."
  • "We choose service over self-interest most powerfully when we build the capacity of the next generation to govern themselves"
  • Stewardship offers a model of partnership that distributes the power of choice and resources to each individual.
  • Stewardship is personal - everyone is responsible for outcomes; mutual trust is the basic building block, and the willingness to risk and be vulnerable is a given.
  • Total honesty is critical. End secrecy. Give knowledge away because it is a form of power
When practiced properly, collective code ownership is in fact an ideal form of stewardship (but not the only form). Stewardship may ultimately end-up as collective-ownership if given a sufficient period of time with the same group of people.

However, early on I would expect stewards to have a more direct role. And I believe the form of code-ownership that Feature-Driven Development (FDD) practices may seem fairly strict at first, but is really intended to be the initial stages of code-stewardship in the first two quadrants of the situational leadership model.

I believe the form in which stewardship should manifest itself is situational, depending on the skill and motivation of the team and its members. In Ken Blanchard's model of situational leadership, there are four quadrants of leadership-style, each of which should be used for the corresponding combination of hi-lo motivation and hi-lo skill for a given task:
  • Directing (hi directive + lo supportive, for "enthusiastic beginners")
  • Coaching (hi directive + hi supportive, for "disillusioned learners")
  • Supporting (lo directive + hi supportive, for "reluctant contributors")
  • Delegating (lo directive + lo supportive, for "peak performers")
If we apply the concepts and principles of "stewardship" using the appropriate situational leadership-style, the outwardly visible result may appear to transition from individual code ownership, to code guardians/gate-keepers, then code coaches/counselors, and finally to truly collective ownership.

So I would say it is the presence of stewardship that is the key to succeeding with either individual or collective code ownership. If stewardship is present, then both can succeed; if it is absent, it's likely that neither will. And the collective and individual "styles" are the extreme ends of the spectrum, with "code counselors" as the style in between those two extremes.

Saturday, May 14, 2005

Dreamy SCM Patterns Superheroes!

My SCM Patterns co-author and I received an honorable mention in the letters-to-the-editor section of the June 2005 issue of Software Development magazine. The April issue had an article about the software development "dream team," giving nicknames and characteristics of these fictitious "software superheroes."

In this month's letters section is a letter from Curtis Yanko entitled "Team Member Missing." Curtis writes:
I just read 'The Dream Team' (Apr 2005), and while I found it informative and entertaining, I couldn't help but feel a little empty. I expected Software Development, of all magazines, to recognize the importance of the configuration manager. This superhero would mix Martin Fowler's appreciation of adapting the right amount of agility for any given team with Stephen Berczuk and Brad Appleton's understanding of SCM patterns. Automated builds and continuous integration are what will really allow this Dream Team to make super-human progress!
Many thanks Curtis!

Friday, May 06, 2005

Single Codebase - How many Codelines?

On the YahooGroup for the 2nd edition of Kent Beck's book Extreme Programming Explained, Kent described a practice he calls Single Code Base:

There is only one code stream. You can develop in a temporary branch, but never let it live longer than a few hours. Multiple code streams are an enormous source of waste in software development. I fix a defect in the currently deployed software. Then I have to retrofit the fix to all the other deployed versions and the active development branch. Then you find that my fix broke something you were working on and you interrupt me to fix my fix. And on and on.

There are legitimate reasons for having multiple versions of the source-code active at one time. Sometimes, though, all that is at work is simple expedience, a micro-optimization taken without a view to the macro-consequences. If you have multiple code bases, put a plan in place for reducing them gradually. You can improve the build system to create several products from a single code base. You can move the variation into configuration files. Whatever you have to do, improve your process until you no longer need them. [... example removed ...]

Don't make more versions of your source code. Rather than add more codebases, fix the underlying design problem that is preventing you from running from a single code base. If you have a legitimate reason for having multiple versions, look at those reasons as assumptions to be challenged rather than absolutes. It might take a while to unravel deep assumptions, but that unraveling may open the door to the next round of improvement.
Kent is equating creation of a new codeline with establishing a new code base within the same repository. He does so with good reason: creating a new codeline for supporting a legacy release and/or multiple customer variants is indeed creating a new project instance, complete with its own separately evolving copy of the code.

I posted a lengthy response to Kent's initial description of a Single Code Base. I feel I understand Kent's position intimately well. At the same time I think that having to support one or more legacy releases is a business constraint that is far more unavoidable than Kent's post seems to suggest. I summarized my opinion as:
  1. Transient branches are fine (even ones that last more than a few hours) and do not cause the waste/retrofitting described. But you do need to follow some rules regarding integration structure and frequency
  2. Variant branches are "evil", and should be solved with good architecture/factoring or else configuration that happens at later-binding-time
  3. Multiple release branches are often a necessity in order to support multiple releases. And supporting multiple releases is highly undesirable, but often unavoidably mandated by the business/customer
Under the heading of "Transient Branching", I include patterns like Private Branch, Task Branch, and "organizational coping" branches like Docking Line, and the Release-Line + Active-Development-Line pair. Another example (though not short-lived) is Third Party Codeline. And of course if any branching is done, then proper use of a Mainline is essential (I think Mainline does for branching what refactoring does for source-code).

So while I vigorously agree with the desire to avoid adding new codelines to support multiple releases, and that it's certainly a good idea to question whether doing so is truly necessary (and to fight "like heck" against using branches as a long-term solution to handling multiple variants), I think challenging the multiple-maintenance constraint too vehemently isn't a great idea once you understand the business need that is driving it.

We might still disagree with doing it, but at that point I think we need to "bite the bullet" and do it while perhaps exploring softer communication alternatives to persuade the business in the future. Part of that can be getting the business to:
  • Fully acknowledge and appreciate that each additional supported release/variant is a bona fide new project with all the associated implications of added cost, effort, management, and administration. (Often a new variant-line or release-line adds 40%-80% additional effort to support and coordinate.)
  • Agree that if Multi-Tasking an individual is something that decreases productivity and flow by increasing interruptions and context-switching, then Multi-Project-ing the same team/codebase is an even grander black-hole that sucks away resources and productivity for many of the same reasons
  • Agree that when we do decide to support an additional release/variant, the new project should have some sort of charter and/or service-level-agreement (SLA) that clearly defines the scope and duration of the agreed upon effort and its associated costs.

For some additional reading, see DualMaintenance and UseOneCodeLine on the original Wiki-web, and BranchingAndMerging, ContinuousIntegration and AgileSCMArticles on the CMWiki Web

Thursday, April 28, 2005

SCM Plan or SCM Architecture?

In an earlier blog-entry I described a 4+2 Model View of SCM Solution Architecture.

Much has been written about what a CM Plan should be. If an SCM Solution is really a work of "architecture", then wouldn't an SCM Plan actually correspond to an (initial) architectural description or "blueprint" for an SCM solution? It would contain not only descriptive text, but also models/diagrams sketching the overall strategy for each of the various 4+2 Model Views, showing the key entities and their interrelationships and dependencies.

If Martin Fowler is correct about Software Architecture as an "emergent property" (see "Is Design Dead?"), then an "Agile" SCM solution should also have emergent properties. Granted there still needs to be some amount of up-front planning and design, but some of that should also be concerned with "emergence" and how to let the architecture be adaptable, extensible, and resilient in the face of change.

And if Josh Kerievsky is correct about Refactoring to Patterns (Josh's book won the 2005 Jolt Productivity award in its category), then it should also follow that many of the SCM Patterns are simply the recognition of an SCM anti-pattern that violates one or more SCM Principles, combined with an SCM refactoring that resolves the imbalance. Other SCM Patterns would be about how to successfully grow and extend a simple SCM solution whose requirements become more complex in the face of increasing scale and diversity in any or all of the 4+2 model views.

This raises some interesting (to me at least) questions:
  • Would you agree that an SCM Plan is really the initial overview or blueprint of an overall "architectural description" for your SCM solution?
  • In what ways do you think such an architecture can and should be "emergent?"
  • In what ways shouldn't it be emergent? (What and How much really needs to be planned and designed "up-front" versus emerging later "on demand"?)

Wednesday, April 20, 2005

The Principles of SCM

I think many of the principles of good system/software design apply to the design of an "SCM Solution Architecture". In particular, I think many principles of sound Object-Oriented Design (OOD) are applicable since a lot of OOD deals with managing and minimizing dependencies through the use of techniques and mechanisms like: encapsulation, abstraction, information hiding, modularity, composition, delegation, inheritance, and polymorphism.

One set of fairly well known OOD Principles was written by Robert Martin in 1996-1997 as part of a series of articles in "The C++ Report". (See the ObjectMentor webpage for their OOD Principles training course -- halfway thru the page it has an "articles" section with links to copies of the original articles that appeared in the C++ Report.)

These Principles of OOD also appeared in his 1995 book "Designing Object-Oriented C++ Applications Using the Booch Method". Robert then continued evolving and developing them, working with patterns, and eventually with extreme programming practices. What was going to be a 2nd edition of that book instead became a 10-year "project" that combined these principles with patterns and agile development "practices" into a new book, released in November 2002, called "Agile Software Development: Principles, Patterns, and Practices"

These "Principles of OOD" are ...
Principles of Class Design:
  • (OCP) The Open-Closed Principle
  • (LSP) The Liskov Substitution Principle
  • (DIP) The Dependency Inversion Principle
  • (ISP) The Interface Segregation Principle
Principles of Package Cohesion:
  • (REP) The Reuse/Release Equivalency Principle
  • (CCP) The Common Closure Principle
  • (CRP) The Common Reuse Principle
Principles of Package Coupling:
  • (ADP) The Acyclic Dependencies Principle
  • (SDP) The Stable Dependencies Principle
  • (SAP) The Stable Abstractions Principle
Maybe it would be the case that, when "translated" into SCM concepts and terms ...
  • Class design principles would translate into baseline and/or configuration derivation and composition principles?
  • Package cohesion principles would translate into change-task and/or change-set cohesion principles?
  • Package coupling principles would translate into codeline and/or variant management principles?
And these would give me guidance on things like ...
  • How small or fine-grained should my change-tasks minimally be?
  • When & how often should I commit and/or promote new baselines?
  • How should baselines relate to other baselines and codelines?
  • How should codelines relate to other baselines and codelines?
  • How should I manage dependencies between changes/branches/codelines?
These guidelines would (ideally) manifest themselves as various "SCM patterns" for a given problem and context. Each "problem" would correspond to a situation that is a potential violation of one or more principles (anti-pattern), and the pattern solution would figure out which principles are being bent or broken and how to resolve that without bending or breaking one of the other principles too much as a result.
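As one small illustration of how an OOD principle might translate into working guidance, here is a rough sketch of checking the Acyclic Dependencies Principle (ADP) against a dependency graph of components or codelines. The `find_cycle` function and the example graphs are my own invention for illustration, not anything prescribed by Robert Martin:

```python
# Hypothetical sketch: detect a violation of the Acyclic Dependencies
# Principle (ADP) in a dependency graph via depth-first search.

def find_cycle(deps):
    """Return one dependency cycle as a list of nodes, or None if acyclic."""
    visiting, done = set(), set()

    def visit(node, path):
        if node in visiting:              # back-edge found: report the cycle
            return path[path.index(node):] + [node]
        if node in done:
            return None
        visiting.add(node)
        for dep in deps.get(node, []):
            cycle = visit(dep, path + [node])
            if cycle:
                return cycle
        visiting.discard(node)
        done.add(node)
        return None

    for start in deps:
        cycle = visit(start, [])
        if cycle:
            return cycle
    return None

acyclic = {"ui": ["domain"], "domain": ["util"], "util": []}
cyclic  = {"ui": ["domain"], "domain": ["ui"]}
print(find_cycle(acyclic))  # → None
print(find_cycle(cyclic))   # → ['ui', 'domain', 'ui']
```

The same check could plausibly be run over change-sets or codeline merge-dependencies as well as over packages.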

What do you think are some of the principles of Software CM?

Wednesday, April 13, 2005

Traceable + Transparent = Trustable?

After some more reflection ... I think I've changed my mind regarding something I wrote about Traceability and TRUST-ability:
Traceability is a means of providing transparency while facilitating impact-analysis, conformance, compliance, accountability, reproducibility and learning. I stopped short of including transparency as one of the goals because I've seen it fall far short of that too many times. I think that if done well, transparency is an effect of traceability. At the same time, I've seen enough examples of people managing to achieve those other goals without being terribly transparent that, as much as I wanted to include transparency as an overarching goal of traceability, I just couldn't do it!

I think Joe Farah's response in the "Why Traceability" discussion thread on CMCrossroads was right on target. Transparency is supposed to be a by-product of traceability, if it's done "right." It's not a "goal" the same way that facilitating impact-analysis, conformance, compliance, accountability, reproducibility and learning are goals. Achieving transparency in each of those areas is what facilitates those goals:
  • Architectural (structural) transparency facilitates impact analysis
  • Functional (behavioral) transparency facilitates product conformance
  • Process (procedural) transparency facilitates process compliance
  • Project (managerial) transparency facilitates project accountability
  • Build/Baseline (physical) transparency facilitates reproducibility
  • Decision-making (logical) transparency facilitates organizational learning and root-cause analysis

If my attempt at traceability (including the kind mandated by Sarbanes-Oxley) didn't achieve transparency, then I'll go so far as to say it wasn't done "right" or it wasn't done effectively. Traceability may not imply trustability, but traceability done "right" should achieve transparency, and it is transparency that engenders trust by visibly giving an open, honest, accurate and forthcoming accounting of decision-making and work-efforts.

So how can I do that without creating so much additional manual maintenance and administration as to become insurmountable and/or unwieldy? How do others feel they have achieved this?

Tuesday, April 05, 2005

A 4+2 Model View of SCM Solution Architecture

My interests in CM, architecture, and agility all overlap in my day-to-day work. I think the convergence is in developing what I call an "SCM Solution Architecture". Some might regard it as the SCM "component" of an overall Enterprise Architecture that includes SCM. I believe many of the principles, patterns, and practices of system architecture and software architecture apply to such an SCM solution architecture.

If we take the 4+1 views approach of Rational's Unified Process (RUP), which defines the critical architectural stakeholder "views" as: logical, implementation (development), process (processing & parallelism), deployment, and use-cases/scenarios, and if we enhance those with one more view, that of the organization itself, then we arrive at a Zachman-like set of RUP-compatible views for an SCM solution architecture that I call a "4+2" Model View of SCM Solution Architecture.

The "4+2" Model View of SCM Solution Architecture contains 6 different views, which I characterize as follows:
  • 1. Project {Logical Change/Request Mgmt} -- e.g., change-requests, change-tasks, other CM "domain" abstractions and their inter-relationships
  • 2. Environment {Solution Deployment Environment} -- e.g., repositories, servers/networks, workspaces, application integration
  • 3. Product {Physical/Implementation/Build} -- e.g., repository structure and organization, build-management scheme and structure/organization
  • 4. Evolution {Change-Flow and Parallelism} -- e.g., tasks, codelines, branching and merging/propagation, labeling/tagging
  • +1. Process {Contextual Scenarios/Use-cases} -- e.g., workflow, work processes, procedures and practices
  • +2. Organization {Social/People Structures} -- e.g., organizational structure for CCBs, work-groups, sites, and their interactions, and corresponding organizational metrics/reports for accounting and tracing back to the value-chain. (Mario Moreira's "SCM Implementation" book has a great chapter or two on the importance of this and some best-practices for it)

The fact that many of the views closely align with RUP suggests that UML might be a very suitable diagramming notation for modeling such an architecture. And I think that much of the current best-practices of enterprise architecture, agility, object-oriented design principles, and service-oriented architectures apply to the creation of an agile CM environment that represents such a solution architecture.

Sunday, March 27, 2005

Individual vs Collective Code Ownership: Stewardship to the Rescue

I heard another argument today claiming that collective code ownership often results in "no ownership" of the code, and that individual code ownership is better for managing attempts at concurrent changes.

Then I heard someone try to counter by saying individual ownership inhibits refactoring and goes against the team ethic of XP and other Agile methods.

The problem I have with both of these is that they are each extreme positions. I know from experience there is a successful middle ground, one that is sometimes referred to as code stewardship.

Individual Code Ownership, in its purest form, means exclusive access control: no one but the "owner" is allowed to edit the particular module or class. When code ownership is interpreted so rigidly as to exclude allowing others to make updates, I have definitely seen it lead to the following problems on a regularly recurring basis:
  • It causes lots of wait-time and productivity-blocks
  • It gets used as a parallel-development block to avoid merging concurrent changes
  • It holds changes/code "too close to the vest" and leads to defensiveness and/or local optimization at the expense of the "whole" team/system
  • It increases the project team's "truck factor" risk by limiting the number of people with intimate knowledge of that module/class within the overall system
At the same time, I have seen cases where "Collective Code Ownership" degrades into "no ownership" when there is no collective sense of team ownership or accountability.

So what is the balance? I think it is Code Stewardship, where the code-steward is both guardian and mentor for the body of knowledge embodied by the module/class. The Code Steward's job is not to safeguard against concurrent-access, but to instead safeguard the integrity+consistency of that code (both conceptually and structurally) and to widely disseminate knowledge and expertise about it to others.

  • When a portion of a module or class needs to change, the task-owner needs the agreement of the code-steward.
  • The task-owner explains what they are trying to accomplish, the code-steward helps them arrive at the best way to accomplish it, and provides any insights about other activities that might be related to it.
  • The task-owner makes their changes, seeking advice/counsel from the code-steward if needed (possibly thru pairing or dialogue or other means)
  • Before committing changes to the codebase, the code-steward is involved in review or desk-checking (and possible coordination of concurrent activities)
This can be accomplished by pairing with the code-steward, or simply by seeking them out as an FDD programmer would a Chief programmer/architect. The code-steward is like the "editor-in-chief" of the module or class. They do not make all the changes, but their knowledge and expertise is still applied throughout. The benefits are:

  • Concurrent changes can still be made and wait-times avoided while still permitting notifications and coordination.
  • Knowledge is still disseminated (rather than hoarded) and spread around the team
  • Collective ownership and its practices, such as refactoring, are still enabled
  • Pair programming can still be done, where pairing assignments can be based in part on who the "steward" is for a piece of code. (At some point stewards can even hand off the baton to another)
I guess the bottom-line for me is that collaborative ownership and authorship is still essential, and code ownership isn't supposed to be about controlling concurrent access (it is very suboptimal as a concurrency-strategy, even though some merge-a-phobic shops will swear by it). If we take "ownership" to either extreme, the result is impractical and imbalanced.

Sunday, March 20, 2005

The Five Orders of Traceability

Traceability is supposed to help us track and link related pieces of knowledge as we progress thru the software development lifecycle. The software development lifecycle is devoted to creating knowledge that transforms fuzzy functional concepts into tangible working results. It is the process of transforming theoretical concepts into executable reality.

Phil Armour, in his magnificent book The Laws of Software Process, maintains that software is not a "product" in the usual production-oriented sense of the word, but that software is really a medium for capturing executable knowledge. He then uses this to derive that software is therefore not a product-producing activity but rather a knowledge creating and knowledge acquiring activity.

Armour goes on to describe "The Five Orders of Ignorance" and how, if software development is a process of knowledge-acquisition and knowledge-creation, then it is also ultimately a process of "ignorance reduction" whereby we progressively reduce our ignorance of what the system is, what it needs to do, how it needs to do it, and how we need to do it and manage it.

Perhaps it should follow then, as a corollary to all of this, that the mission of traceability is to connect all the dots between these various pieces and "orders" of knowledge to help those of us auditing or reviewing them to learn the lessons of how and why that knowledge came to be created in its current form. To that end, I hereby propose The Five Orders of Traceability:
  • 0th Order Traceability – Existence: Tracking Knowledge Content
  • 1st Order Traceability – Structure: Tracking Knowledge Composition
  • 2nd Order Traceability – History: Tracking Knowledge Context
  • 3rd Order Traceability – Transformation: Tracking Knowledge Creation (causal connection in the knowledge creation/derivation process)
  • 4th Order Traceability – Meta-Traceability: Tracking Knowledge of the Five Orders of Traceability.
It's not entirely clear to me whether I have the 2nd and 3rd items mixed up in the above (perhaps structure should come after context). When deriving the ordering and identification of the above, I basically took my cue from Armour's 5 orders of ignorance, and made the assumption that it is the intent of a particular "order" of traceability to eliminate or address the corresponding "order" of ignorance! Hence 3rd order traceability should help resolve 3rd order ignorance, 2nd order traceability should help resolve 2nd order ignorance, and so on.

With that in mind, I'll elaborate further upon each of the five orders of traceability I've "coined" above ...

0th Order Traceability is merely the existence of knowledge content. There are no additional linkages or associations to navigate through that content. There is no explicit traceability - the content is simply there.

1st Order Traceability is an attempt to structurally organize the knowledge content and provide links that navigate the decomposition from one level of organization/detail to another. This would be like an outline structure and cross-referencing/indexing capabilities. A number of tools give us a means of organizing a particular portion of system knowledge:
  • Basic requirements document management tools provide a way of organizing and viewing the requirements, and even design documentation for a project
  • Modeling tools provide a way of organizing and viewing problem-domain and solution-domain abstractions
  • Many interactive development environments give a logical (e.g., object-based) view of the code and/or a physical (e.g., file-based) view of the file+folder structure of the codebase.
  • Many of these tools even provide a way to link from a section of a document to a model entity (e.g., a UML object or package) and/or a code-construct (e.g., a class, method, or module)

2nd Order Traceability goes beyond mere content and structure to provide contextual awareness. Not only are there links, but there is also contextual information (typically in the form of metadata) giving a clue as to the events that transpired to create it: who authored the content, when they did it, where they did it (physically or virtually).

This type of historical/log information assists in auditing and recovery, and is typically most-suited for automatic capture/recording by the application used to create/modify and store the information (perhaps using a mechanism like dynamic event-based traceability). The information might also include application integration data (for example, recording the identifier of a requirement or a change-request and associating it with the corresponding portion of the design, code, or tests when it is created or updated).
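A minimal sketch of what such automatic context capture might look like follows. The names here (`TraceRecord`, `record_change`, the artifact path, and the "CR-1234" identifier) are entirely hypothetical, invented to illustrate who/when/where metadata plus application-integration links:

```python
# Hypothetical sketch of 2nd-order (contextual) traceability: each change
# to a knowledge artifact is recorded with who/when/where metadata, plus
# optional links to related items such as a change-request id.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TraceRecord:
    artifact: str                 # what was changed, e.g. "src/billing.py"
    author: str                   # who made the change
    timestamp: datetime           # when it happened
    workspace: str                # where (physically or virtually)
    linked_items: list = field(default_factory=list)  # e.g. ["CR-1234"]

log = []

def record_change(artifact, author, workspace, linked_items=()):
    """Automatically capture context at the moment the change is stored."""
    rec = TraceRecord(artifact, author, datetime.now(timezone.utc),
                      workspace, list(linked_items))
    log.append(rec)
    return rec

rec = record_change("src/billing.py", "alice", "ws-alice-01", ["CR-1234"])
print(rec.linked_items)  # → ['CR-1234']
```

The key design point is that none of this metadata is typed in by hand; it is recorded as a side-effect of storing the change, which is what makes this order of traceability cheap enough to sustain.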

3rd Order Traceability is the "nirvana" of traceability for many. Not only do we have structure and context, but we have additional associations and attributes (e.g., metalinks between [meta]data) that capture the causal connection between related pieces of knowledge in the value chain. Some call this "rich traceability" and there are some high-end requirements management tools capable of doing this.

Still, tracing all the way through to design models and code remains very effort-intensive unless all knowledge artifacts are captured in the same repository (requirements, models, code, tests, project-activities, change-requests) where these advanced "rich tracing" capabilities exist. [MKS claims to have taken a step toward achieving this with its new RM tool that tracks requirements in the same repository as the source-code version control system.]

With 3rd order traceability, we are effectively capturing important decisions, criteria, constraints, and rationale at the various points in the knowledge creation lifecycle where one form of knowledge (e.g., prose, model, code) is being either transformed into another form of knowledge, or is being elaborated to another level of decomposition within the same form of knowledge.

4th Order Traceability is meta-traceability, or tracking of knowledge about the five orders of traceability within or across systems. (Sorry - I couldn't come up with anything better that is analogous to Armour's 5th order of ignorance - a.k.a. meta-ignorance. If you have a different idea of what 4th order traceability should be, feel free to comment.)

What about Ontology or Epistemology? I don't honestly know. I would imagine there must be some way of tying together the above with terms and concepts from "knowledge management," such as transforming tacit knowledge to explicit knowledge, and maybe even relating XML schemas and ontologies back to all of this. I leave that as an undertaking for someone much more versed than I in those domains.

Tuesday, March 15, 2005

Traceability and TRUST-ability

Traceability is one of those words that evokes a strong "gag" reflex among many hardcore agilists. They are all in favor of tracing tests to features, which is extremely straightforward when one is doing test-driven development (TDD). When it comes to tracing requirements thru to design and code, images of manually maintained traceability matrices that are hopelessly effort-intensive and never quite up-to-date seem to spring to mind.

So what are the main goals that traceability supposedly serves? Based on sources like CMMI and SWEBOK, and several others, I think the goals of traceability are to assist or enable the following:
  1. change impact-analysis: assess the impact and risk of a proposed change to facilitate communication, coordination and estimation [show the "know-how" to "know-what" to change, "know-where" to change it, and "know-who" will change it]

  2. product conformance: assure that necessary and sufficient requirements were implemented, and ensure the implementation of each requirement was verified/validated [prove we "did the right things" and "did the things right"]

  3. process compliance: assure that the necessary procedural activities (e.g., reviews and tests) were executed for each feature/requirement/code-change and ensure they were executed satisfactorily [prove we "walk the walk" and not just "talk the talk"]

  4. project accountability: assure that each change of each artifact was authorized and ensure that they correspond to requested functionality/business-value [safeguard against "gold plating"]

  5. baseline reproducibility: assure that the elements necessary to reproduce each baseline have been captured and ensure that the baselines can be reproduced [so "all the king's horses and all the king's men" can put your fallen "humpty-dumpty" build back together again]

  6. organizational learning: assure that the elements necessary to rediscover the knowledge of the system have been captured and ensure that the rationale behind critical decisions can be reproduced -- e.g., for root-cause analysis, or to transfer system knowledge to a deployment/support team. ["know-why" you did what you did when you did it]

Many would argue that these all boil down to questions of trust, and communication:
  • Do I trust the analysts/architects to do what I said and say what they did?

  • Do I trust the product or its producers to correctly realize the correct requirements?

  • Do I trust the process or the engineers following it?

  • Do I trust the project or the people managing it?

  • Do I trust the environment in which it is built and tested?

  • Do I trust the organization to be able to remember what they learned and learn from their mistakes?

Whether or not traceability efforts successfully achieve any of these goals is another matter entirely. Often, traceability efforts achieve at most one of these (conformance), and sometimes not even that. Many question whether the amount of effort required actually adds more value than what it subtracts in terms of effort (particularly when traceability efforts themselves may create additional artifacts and activities that require tracing).

Traceability is often desired because it's presumed to somehow alleviate fears and give us "TRUSTability." I suspect that's really an illusion. Traceability is useful to the extent that it facilitates more effective communication & interaction between individuals, teams, and stakeholders. And traceability can be useful to the extent that it helps us successfully convey, codify and validate system knowledge.

So to the extent that traceability helps us more quickly identify the right people to interact with and the right information to initiate that interaction, it can be extremely useful. It is the people and the quality of their interactions that provide the desired "trustability." Beyond that, unless unintrusively automated, it can quickly turn into "busy work" with very high "friction" on development of new functionality realized as "working software".

To that end, I have a few slides in a presentation I gave on Agile Configuration Management Environments (see slides 24-25) that talk about "Lean Traceability", and a big part of that is using encapsulation, modularity, and dependency management (a.k.a refactoring) to track at the coarse-grained level rather than a fine-grained one. This alone can reduce the traceability burden by 1-2 orders of magnitude. A more detailed discussion is near the end of the paper The Agile Difference for SCM.

The LoRD Principle is also mentioned (LoRD := Locality of Reference Documentation). You can read more about LoRD on the Wiki-Web, in Volume 3 of the Agile Times (pp. 37-40), and in Scott Ambler's essay Single Source Information. For automating traceability, there is some interesting work from the DePaul University Applied Requirements Engineering Lab and the SABRE Project (Software Architecture-Based Requirements Engineering) on the subject of dynamic event-based traceability.

Thursday, March 10, 2005

Building Trust with Transparency

I was catching up on some of my trade journals last weekend and I came across a coincidental confluence of several seemingly divergent streams of thought, all revolving around this recurring theme of building trust ...

First, the January 2005 issue of Software Development arrived, with the featured theme "RFID Everywhere: a primer for the new-age of self-tracking products" plastered on the cover. Then the February 2005 CACM had an article entitled "Trust in E-Commerce". Lastly, the February 2005 issue of Software Development had two fantastic articles: one by Kirk Knoernschild entitled "Benefits of the Build", and an interview profiling David Anderson about his new position, "Managing at Microsoft."

The CACM article on e-commerce trust was about building trust between the vendor and the consumer. But it got me thinking about building trust between the agile development "vendor" (e.g., an agile team in a large organization) and the quite probably non-agile "consumer" with whom they need to develop a trustworthy working relationship.

Then Kirk's article on the benefits of the build talked about one such benefit being the Regular Frequency Indicating Development status/health (which of course made me think of RFID). A frequent build usually happens at regular intervals that many refer to as the rhythm, pulse, heartbeat, or cadence of the team's development progress and health. When the results of the build are visibly reported in a public location/webpage, it's like a "blinking beacon" that the team and its stakeholders can monitor.

Such build status reports are the part of CM more formally known as configuration status accounting. Basically, status accounting lets us account for the status of any change-request, development-task, build, iteration, or release, at any time (sounds even more like RFID, doesn't it?).

Then I read the interview with David Anderson. I know David. I had occasion to meet him in July of last year when he gave a presentation about FDD and its relationship/history from Peter Coad and "Color Modeling" Pattern to the Chicago Agile Developers group (ChAD), and then a few days later he presented a Business-Case for Agile Management to the "Agile track" at the 2004 Motorola Engineering Symposium. [I was the "track chair" that arranged to have David speak both to ChAD and to Motorola, and I also had the chance to chat with him at length. I'm unendingly impressed by David and am in awe of his mindfulness, vast knowledge, keen insight, and systems-thinking abilities when it comes to the union of agility, lean, theory of constraints, software development and management theory.]

Anyways, near the end of his interview with SDMagazine, David says the following little gem about how Transparency Enhances Trust when he was asked how a manager can introduce and encourage transparency in a culture that is opaque and compartmentalized:

Economists talk about measuring the level of trust in a society. It's been proven that societies where there is greater trust experience faster economic growth—"My word is my bond." The same is true in software. To have trust, you must not be afraid that someone is hiding something from you. Transparency enhances trust.

However, to have transparency, you must drive out fear. Many people believe that data hiding is required because figures can be misinterpreted. Hence, they hide information from people whom they believe will draw the wrong conclusions ... I prefer to educate people to draw the right conclusions ... Information sharing (or transparency) is an enabler to team power. Team software development is about collaboration ....

And there you have it: transparency engenders trust! When we do regular builds and other regular development activities and publish build-status reports and burndown-charts and cumulative flow diagrams in a visible fashion, we are giving agile stakeholders a form of RFID for each task, feature, build, iteration and release.

This type of frequent and regular feedback gives transparency of an agile development team's activities to groups like CM, V&V/QA, Systems Engineering, and Program Management. That transparency helps establish trust, which in turn enhances cooperation, and ultimately enables collaboration.

Wednesday, March 02, 2005

Building Trust with Trustworthy Builds

More on "trusting the organization" ... Roger Session’s “Enterprise Rings” bear a striking resemblance to the different kinds of builds and promotion levels I describe in “Agile Build Promotion: Navigating the Ocean of Promotion Notions” where each kind of build corresponds to a different level of visibility within the organization:
  • Private Build: Individual / Task
  • Integration Build: Team / Project
  • QA/CM Release Build: Organizational / Program
  • Customer Release Build: Everybody else (business customer, portfolio investor, etc.)
As I wrote in my previous blog-entry on building organizational trust, each one of these levels of scope corresponds to a boundary between stakeholders that must be bridged with communication and trust.

One way for development to build trust with the CM organization is to engage CM up front and elicit "stories" about what "trust" means to them for handoffs from development to CM. What are their biggest complaints/concerns about "trusting" development? Do they fear development will:
  • Break the build if they do their own integration/merging and "commits"?
  • Hand off a build that is not reproducible and reliable?
  • Use/create unstable developmental builds that are inconsistent, incorrect, incomplete, or incoherent?
  • Neglect to create named stable baselevels at appropriate times?
  • Monopolize the build servers/resources and software licenses that CM needs to use?
If CM doesn't trust development about these things (and others), find out why, and partner together to create "shared stories" that will evolve into the acceptance criteria for development to hand off builds to CM. For example:
  • If developers use the SCM Patterns of Private Build, Task-Level Commit, and passing all the automated tests, will that eliminate the fear of development breaking the build? If not, what else? Do they need a way to be sure that these are actually followed?
  • If there is an automated build that development systematically uses to reliably ensure that a sandbox environment and build-time options/flags are "sufficiently comparable" to what CM uses, will that eliminate fears of inconsistency and unreliability of the build and its reproducibility?
  • If there are business requirements to support multiple releases or "customer special" variations, many agilists/extremists may not want to hear this, but they must learn to respect it as both a business requirement and a CM concern. Then they must learn the requirements to be met so that a solution can be proposed (which might be different from the solution CM had been assuming should be used)
What other things must CM do with the build that they are afraid they won't be able to do if development does their own merging/building? How can they be incorporated into the automated build+commit protocol that development will use?
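As a rough sketch of what such a build+commit protocol might look like, here is a minimal shell script gating a task-level commit behind a passing private build and test run. Everything here is a hypothetical placeholder (the function names `run_build`, `run_tests`, `task_level_commit`, and the "TASK-123" id are illustrative, not from any particular project):

```shell
#!/bin/sh
# Hypothetical "private build gate": the task-level commit is only
# reached if the private build and the automated tests both succeed.
set -e   # any failing step aborts the script before the commit

run_build() {
    # Placeholder for the project's real build step (make, ant, etc.).
    echo "private build: ok"
}

run_tests() {
    # Placeholder for the automated test suite; a non-zero exit
    # status here stops the script, so no broken code gets committed.
    echo "tests: ok"
}

task_level_commit() {
    # Placeholder for the real SCM commit command; because of set -e,
    # this line runs only when build and tests have already passed.
    echo "commit: $1"
}

run_build
run_tests
task_level_commit "TASK-123: example change"
```

The point of the sketch is the ordering, not the commands: by making the commit unreachable except through the build and test steps, the protocol itself (rather than developer discipline alone) gives CM evidence that the SCM patterns are actually being followed.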

Once these stories are gathered, fears have been expressed, and needs have been elaborated, then development (with CM's assistance) should develop a systematic mechanism for automating and reliably reproducing such "trustworthy builds" and codelines.
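One minimal mechanism along these lines (the file names and flag values below are illustrative assumptions, not from the post) is to record the build configuration CM considers authoritative, capture the sandbox's configuration at build time, and fail fast if the two have drifted apart:

```shell
#!/bin/sh
# Hypothetical reproducibility check: compare the sandbox's recorded
# build flags against CM's reference copy before calling a build
# "sufficiently comparable" to what CM would produce.
set -e

# CM's recorded reference configuration (illustrative values).
printf 'CC=gcc\nCFLAGS=-O2 -Wall\n' > cm-build-flags.txt

# The developer sandbox's configuration, captured at build time.
printf 'CC=gcc\nCFLAGS=-O2 -Wall\n' > sandbox-build-flags.txt

if diff -u cm-build-flags.txt sandbox-build-flags.txt > /dev/null; then
    echo "sandbox build config matches CM reference"
else
    echo "MISMATCH: sandbox differs from CM reference config" >&2
    exit 1
fi
```

A check this simple won't cover everything CM cares about (tool versions, environment variables, third-party components), but it turns "trust me, my sandbox matches" into something both sides can verify automatically.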

What might be some comparable sorts of concerns and solutions for building trust with V&V, or Systems Engineering?