Sunday, July 30, 2006

The New Rules: Agile beats Big

The July 24 issue of Fortune Magazine has an article entitled "The New Rules" as the cover story, with the cover saying "Sorry Jack! Welch's Rules for Winning Don't Work Anymore (But We've Got 7 New Ones That Do)."

I think the new rules it discusses are very much about "Agile is better than Bigger!" and "Bigger isn't necessarily better!" The list of new rules is:
Old Rule: Big Dog Owns the Street.
New Rule: Being Agile is Best; Being Big can Bite You!

Old Rule: Be #1 or #2 in Your Market.
New Rule: Find a Niche, Create Something New.

Old Rule: Shareholders Rule.
New Rule: The Customer is King.

Old Rule: Be Lean and Mean.
New Rule: Look Out, Not In.

Old Rule: Rank your Players; Go with the A's.
New Rule: Hire Passionate People.

Old Rule: Hire a Charismatic CEO.
New Rule: Hire a Courageous CEO.

Old Rule: Admire my Might.
New Rule: Admire my Soul.
All in all I thought it was pretty fair-minded. There was even a sidebar to the article that gave Welch a chance to respond to the criticisms. You'll probably need to read the article for further insight into what exactly is meant by each of the "new rules" above. There was plenty of commentary across the industry on the article! (Just Google on "The New Rules" +Fortune +"Sorry Jack" and look through the results)

Monday, July 24, 2006

Agile SCM Principles - From OOD to TBD+CBV+POB

I finally finished a set of articles I'd been working on, off and on, for almost 10 years on the subject of "translating" principles of OOD into principles of SCM. See the following:
The principles of OOD translated into principles of Task-Based Development (TBD), Container-based Versioning (CBV), and Project-Oriented Branching (POB).

Here are the principles that I translated. Most of them are from Robert Martin's book Agile Software Development: Principles, Patterns, and Practices, but a couple of them are from The Pragmatic Programmers:


Here is what I ended up translating them into. Note that some of the principles translated into more than one version-control principle because they applied to more than one of changes/workspaces, baselines, and codelines. I'm not real thrilled about the names & acronyms for several of them and am open to alternative names & acronyms:

General Principles of Container-Based Versioning
  • The Content Encapsulation Principle (CEP): All version-control knowledge should have a single, authoritative, unambiguous representation within the system that is its "container." In all other contexts, the container should be referenced instead of duplicating or referencing its content.
  • The Container-Based Dependency Principle (CBDP): Depend upon named containers, not upon their specific contents or context. More specifically, the contents of changes and workspaces should depend upon named configurations/codelines.
  • The Identification Insulation Principle (IDIP): A unique name should not identify any parts of its context, nor of its related containers (parent, child, or sibling), that are subject to evolutionary change.
  • The Acyclic Dependencies Principle (ADP): The dependency graph of changes, configurations, and codelines should have no cycles (a minimal cycle-check sketch follows this list).

Principles of Task-Based Development
  • The Single-Threaded Workspace Principle (STWP): A private workspace should be used for one, and only one, development change at a time.
  • The Change Identification Principle (CHIP): A change should clearly correspond to one, and only one, development task.
  • The Change Auditability Principle (CHAP): A change should be made auditably visible within its resulting configuration.
  • The Change/Task Transaction Principle (CHTP): The granule of work is the transaction of change.

Principles of Baseline Management
  • The Baseline Integrity Principle (BLIP): A baseline's historical integrity must be preserved: it must always accurately correspond to what its content was at the time it was baselined.
  • The Promotion Leveling Principle (PLP): Define fine-grained promotion-levels that are consumer/role-specific.
  • The Integration/Promotion Principle (IPP): The scope of promotion is the unit of integration & baselining.

Principles of Codeline Management
  • The Serial Commit Principle (SCP): A codeline, or workspace, should receive changes (commits/updates) to a component from only one source at a time.
  • The Codeline Flow Principle (CLFP): A codeline's flow of value must be maintained: it should be open for evolution, but closed against disruption of the progress/collaboration of its users.
  • The Codeline Integrity Principle (CLIP): Newly committed versions of a codeline should consistently be no less correct or complete than the previous version of the codeline.
  • The Collaboration/Flow Integration Principle (CFLIP): The throughput of collaboration is the cumulative flow of integrated changes.
  • The Incremental Integration Principle (IIP): Define frequent integration milestones that are client-valued.

Principles of Branching & Merging
  • The Codeline Nesting Principle (CLNP): Child codelines should merge and converge back to (and be shorter-lived than) their base/parent codeline.
  • The Progressive-Synchronization Principle (PSP): Synchronizing changes should flow in the direction of historical progress (from past to present, or from present to future): more conservative codelines should not sync-up with more progressive codelines; more progressive codelines should sync-up with more conservative codelines.
  • The Codeline Branching Principle (CLBP): Create child branches for value-streams that cannot "go with the flow" of the parent.
  • The Stable Promotion Principle (SPP): Changes and configurations should be promoted in the direction of increasing stability.
  • The Stable History Principle (SHIP): A codeline should be as stable as it is "historical": the less evolved it is (and hence the more mature/conservative), the more stable it must be.
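To make the ADP a bit more concrete, here is a minimal sketch (in Python, and purely my own illustration, not from the articles) of a cycle check over a map of codeline/configuration dependencies. The codeline names and graph structure are hypothetical.

```python
# Illustrative sketch only (not from the articles): checking the Acyclic
# Dependencies Principle (ADP) over a map of codeline/configuration
# dependencies. The graph structure and codeline names are hypothetical.

def find_cycle(dependencies):
    """Return a list of nodes forming a cycle, or None if the graph is acyclic.

    `dependencies` maps each container (change, configuration, or codeline)
    to the named containers it depends upon.
    """
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited, in progress, finished
    color = {}
    parent = {}

    def visit(node):
        color[node] = GRAY
        for dep in dependencies.get(node, ()):
            if color.get(dep, WHITE) == GRAY:     # back-edge => cycle found
                cycle, cur = [dep], node
                while cur != dep:
                    cycle.append(cur)
                    cur = parent[cur]
                return cycle
            if color.get(dep, WHITE) == WHITE:
                parent[dep] = node
                found = visit(dep)
                if found:
                    return found
        color[node] = BLACK
        return None

    for node in dependencies:
        if color.get(node, WHITE) == WHITE:
            found = visit(node)
            if found:
                return found
    return None


if __name__ == "__main__":
    # Hypothetical example: a mainline that depends back on one of its own
    # child branches violates the ADP.
    deps = {
        "feature-branch": ["release-1.0"],
        "release-1.0": ["mainline"],
        "mainline": ["feature-branch"],   # <-- this edge closes the cycle
    }
    print(find_cycle(deps))               # prints the codelines on the cycle
```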

You can read the 2nd article to see which version-control principles were derived from which OOD principles. Like I mentioned before, I'm not real thrilled about the names & acronyms for several of them and am open to alternative names & acronyms. So please share your feedback on that (or on any of the principles, and how they were "derived").

Saturday, July 22, 2006

Agile SCM Principles - Design Smells

I finally finished a set of articles I'd been working on, off and on, for almost 10 years on the subject of "translating" principles of OOD into principles of SCM. See the following:
In an August 2005 blog-entry on SCM design smells, I tried to translate the design smells that Robert Martin wrote up in his book Agile Software Development: Principles, Patterns, and Practices. In the first of the two articles above, I think I was much more successful at translating them into the version-control domain. Here is what I came up with ...

Symptoms of Poor Version Control
Rigidity/Inertia:
The software is difficult to integrate and deploy/upgrade because every update impacts, or is impacted by, dependencies upon other parts of the development, integration, or deployment environment.

Fragility/Inconsistency:
Builds are easily "broken" because integrating new changes impacts other fixes/features/enhancements that are seemingly unrelated to the change that was just made, or changes keep disappearing/reappearing or are difficult to identify/reproduce.

Immobility/Inactivity:
New fixes/features/enhancements take too long to develop because configurations and their updates take a long time to integrate/propagate or build & test.

Viscosity/Friction:
The "friction" of the software process against the development flow of client-valued changes is too high because the process has an inappropriate degree of ceremony or control.

Needless Complexity/Tedium:
Procedures and tasks are overly onerous and/or error-prone because of too many procedural roles/steps/artifacts, too fine-grained "micro" tracking/status-accounting, overly strict enforcement, or rigid and inflexible workflow.

Needless Repetition/Redundancy:
The version-control process exhibits excessive branching/merging, workspaces, or baselining in order to copy the contents of files, changes, and versions to maintain multiple views of the codebase.

Opacity/Incomprehensibility:
It is difficult to understand and disentangle the branching/merging and baselining schemes into a simple and transparent flow of changes for multiple tasks and features developed by multiple collaborating teams working on multiple projects from multiple locations for multiple customers.

In my next entry I'll start describing the actual principles and their translations.

Thursday, July 20, 2006

Codeline Flow, Availability and Throughput

There has been an interesting discussion on codeline build+commit contention on the XP yahoogroup initiated by Jay Flowers' post about a proposed build contention equation ...

The basic problem is that there have been some commit contention issues where someone is ready to commit their changes, but someone else is already in the process of committing changes and is still building/testing the result to guarantee that they didn't break the codeline. So the issue isn't that they are trying to merge changes to the codeline at the same time; the issue is that there is overlap in the time-window it takes to merge+build+test (the overall integration process for "accepting" a change to the codeline).

Jay is being very agile to the extent that he wants to promote and sustain "flow" on the codeline (see my previous blog-entry on the 5 Cs of Agile SCM Codelines). He is looking at the [average] number of change-packages committed in a day, and taking into account build+test time, as well as some preparation and buffer time. Here the "buffer time" is to help reduce contention. It makes me think of the "buffer" in the Drum-Buffer-Rope strategy of critical-chain project management (CCPM) and theory-of-constraints (TOC).
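For what it's worth, here is a rough back-of-the-envelope sketch of the contention idea (my own simplification, not Jay's actual equation; you'd have to read the thread for that). If commits arrive more or less at random, the chance of finding the codeline "busy" is roughly its utilization (commit rate times integration time), and an M/M/1-style queue gives a crude estimate of the wait. The numbers below are hypothetical.

```python
# Rough, back-of-the-envelope sketch of commit contention on a shared codeline
# (my own simplification, NOT Jay's actual equation). Assumes commits arrive
# roughly at random (Poisson), so the chance of finding the codeline "busy"
# is about equal to its utilization: commit-rate x integration-time.

def contention_estimate(commits_per_day, integration_minutes, workday_hours=8.0):
    """Return (utilization, avg_wait_minutes) for a single shared codeline.

    commits_per_day     - average number of change-tasks committed per day
    integration_minutes - average merge + build + test time per commit
    workday_hours       - length of the working day the commits spread over
    """
    commit_rate = commits_per_day / (workday_hours * 60.0)   # commits/minute
    utilization = commit_rate * integration_minutes          # fraction of time busy
    if utilization >= 1.0:
        raise ValueError("Saturated: commits arrive faster than they can be integrated")
    # M/M/1-style approximation of the average wait before you can start your
    # own merge+build+test -- treat it as a ballpark figure, nothing more.
    avg_wait = (utilization / (1.0 - utilization)) * integration_minutes
    return utilization, avg_wait


if __name__ == "__main__":
    # Hypothetical numbers: 12 commits/day with a 20-minute integration window.
    busy, wait = contention_estimate(12, 20)
    print(f"utilization ~ {busy:.0%}, average wait ~ {wait:.0f} minutes")
```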

Several interesting concepts were mentioned that seem to be closely related (and useful):
If we regard a codeline as a production system, its availability to the team is a critical resource. If the codeline is unavailable, it represents a "network outage" and critical block/bottleneck of the flow of value through the system. This relates to the above as follows:
  • Throughput of the codeline is the [average] number of change "transactions" per unit of time. In this case we'll use hours or days. So the number of change-tasks committed per day or per hour is the throughput (note that the "value" associated with each change is not part of the equation, just the rate at which changes flow thru the system).

  • Process Batch-size is all the changes made for a single change-task to "commit", and ...
  • Transfer Batch-size would be the number of change-tasks we allow to be queued-up (submitted) prior to merging+building+testing the result. In this case, Jay is targeting one change-task per commit (which is basically attempting single-piece flow).

  • Processing-time is the average duration of a development-task from the time it begins until it is ready-to-commit. And ...
  • Transfer-time is the time it takes to transfer (merge) and then verify (build+test) the result.

  • Takt time in this case would regard the developers as the "customers" and would be (if I understand it correctly) the [average] number of changes the team can complete during a given day/hour if they didn't have to wait around for someone else's changes to be committed.

  • System outage would occur if the codeline/build is broken. It could also be unavailable for other reasons, like if the network, hardware, or version-control tool it depends on was "down", but for now let's just assume that outages are due to failure of the codeline to build and/or pass its tests (we can call these "breakages" rather than "outages" :-)

  • MTTR (Mean-time to repair) is the average time to fix codeline "breakage," and ...
  • MTBF (Mean-time between failures) is the average time between "breakages" of the codeline.
Note that if full builds (rather than incremental builds) are used for verifying commits, then build-time is independent of the number of changes. Also note that it might be useful to capture the [average] number of people blocked by a "breakage," as well as the number of people recruited (and total effort expended) to fix it. That will help us determine the severity (cost) of the breakage, and whether we're better off having the whole team try to fix it, or just one person (ostensibly the person who broke it), or somewhere in between (maybe just the set of folks who are blocked).
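As a very rough illustration of what such measures might look like, here is a hedged sketch that computes MTTR, MTBF, availability, and throughput from a breakage log. The log format is made up for the example; real data would come from your build/CI tool's history.

```python
# Hedged sketch of codeline "serviceability" metrics from a hypothetical
# breakage log. The log format is made up for illustration; real data would
# come from your build/CI tool's history.
from datetime import datetime, timedelta

def codeline_metrics(breakages, commits, period):
    """breakages - list of (broken_at, fixed_at) datetime pairs
    commits   - total change-tasks committed during the period
    period    - timedelta covering the observation window
    """
    downtime = sum((fixed - broken for broken, fixed in breakages), timedelta())
    mttr = downtime / len(breakages) if breakages else timedelta()
    uptime = period - downtime
    mtbf = uptime / len(breakages) if breakages else period
    return {
        "MTTR": mttr,                         # mean time to repair a breakage
        "MTBF": mtbf,                         # mean time between breakages
        "availability": uptime / period,      # fraction of time codeline usable
        "throughput/day": commits / (period.days or 1),
    }


if __name__ == "__main__":
    week = timedelta(days=5)                  # one (hypothetical) working week
    log = [(datetime(2006, 7, 17, 10, 0), datetime(2006, 7, 17, 11, 30)),
           (datetime(2006, 7, 19, 15, 0), datetime(2006, 7, 19, 15, 45))]
    print(codeline_metrics(log, commits=55, period=week))
```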

Anyway, it's an interesting service-model of codeline availability and reliability for optimizing the throughput of codeline changes and maximizing collaborative "flow."

Has anyone ever captured these kinds of measures and calculations before? How did you decide the desired commit-frequency and how did you minimize build+test times? Did you resort to using incremental builds or testing?

I think that giving developers a repeatable way of doing a private development build in their workspace, even if it's only incremental building+testing, gives them a safe way to fail early+fast prior to committing their changes, while sustaining flow.
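As a sketch of what I mean (with placeholder build and test commands, since every project's will differ), a repeatable private build can be as simple as a small script that runs the incremental build and a fast test suite, and refuses to declare the workspace commit-ready if either fails:

```python
# Minimal sketch of a repeatable "private build" a developer can run in their
# own workspace before committing. The build/test commands are placeholders --
# substitute whatever your project actually uses.
import subprocess
import sys

PRIVATE_BUILD_STEPS = [
    ["make", "incremental"],      # hypothetical incremental-build target
    ["make", "smoke-test"],       # hypothetical fast test-suite target
]

def private_build():
    for step in PRIVATE_BUILD_STEPS:
        print("running:", " ".join(step))
        if subprocess.run(step).returncode != 0:
            print("FAILED -- fix this before committing to the codeline.")
            return 1
    print("Private build clean; safe(r) to commit.")
    return 0

if __name__ == "__main__":
    sys.exit(private_build())
```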

I don't particularly care for the practice "build-breaker fixes build-breakage." At the very least I think everybody who is blocked should probably try to help (unless the number of people blocked is more than recommended size of a single team), and I'm sure the person who broke the build probably feels bad enough for causing the blockage (maybe even more so if multiple people help fix it). I think the build-breaker should certainly be a primary contributor to fixing the build and may be most familiar with the offending code, but they may need some help too, as they might not be as familiar with why/how the breakage happened in the first place since it slipped past them (unless of course it was "the stupid stuff" - which I suppose happens quite a bit :-)

So is anyone out there measuring the serviceability, availability, and reliability of their codelines? Are any of you using these other concepts to help balance the load on the codeline and maximize its throughput? I think that some of the more recent build automation tools (BuildForge, Maven, MS Build + TeamSystem, ParaBuild, etc.) on the market these days could help capture this kind of data fairly unobtrusively (except for maybe MTTR, and the number of people blocked and people+effort needed to effect the repair).

Saturday, July 15, 2006

The 5C's of Agile SCM

Lean has its 5S technique. And while I'm certain there's a way to translate those into SCM terms (which I may try to do someday, if someone hasn't already), I'm thinking about five important C-words for Agile SCM:
  • Correctness -- the property that the current configuration of the codeline executes correctly and passes its tests.

  • Consistency -- the property that the current configuration of the codeline builds correctly.

  • Completeness -- the property that the current configuration contains all that it should, builds all that it should, and tests all that it should.

  • Cadence -- the property that the codeline has a regular rhythm/heartbeat/pulse that represents a healthy flow of progress (and creates a new resulting "current configuration" every time a new change is committed, integrated, and "promoted").

  • Collaboration -- the property that the balance of the above achieves a productive and sustainable degree of collaboration that serves as the source of value-generation.
I think that the above represents all the properties that need to be present to a significant degree in order for the codeline to achieve smooth flow and accumulate increasing business value at a steady rate.
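Of the five, Cadence is probably the easiest to actually measure. Here is a small, hypothetical sketch that computes a codeline's "pulse" from commit timestamps; the data (and whatever thresholds you'd judge it against) are made up.

```python
# Sketch: measuring a codeline's Cadence -- the rhythm of its commits -- from
# a list of commit timestamps. The data here is hypothetical; a real list
# would come from your version-control tool's log.
from datetime import datetime
from statistics import mean, pstdev

def cadence(commit_times):
    """Return (average_gap_hours, gap_stddev_hours) between consecutive commits.

    A small, steady gap suggests a healthy pulse; a large or wildly varying
    one suggests the codeline's flow is stalling or lurching.
    """
    times = sorted(commit_times)
    gaps = [(b - a).total_seconds() / 3600.0 for a, b in zip(times, times[1:])]
    return mean(gaps), pstdev(gaps)

if __name__ == "__main__":
    commits = [datetime(2006, 7, 14, hour) for hour in (9, 11, 12, 15, 17)]
    avg, spread = cadence(commits)
    print(f"average gap {avg:.1f}h, spread {spread:.1f}h")
```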

Am I missing anything? What about Concordance (via audits or with stakeholders)? Or Customer? Content? Context? (dare I use the word "Control"?)

Monday, July 10, 2006

Agile CMMI and Dancing Elephants

[updated June 1, 2007]

On the surface, CMMI is definitely not very inviting to Agile. CMMI can be done in an agile fashion, however. If CMMI is something you need, then for the secrets of how to do it "Agile-style," along with details of success stories and lessons learned, take a look at the following links:

Also see "Integrating Agile Methods", and "Teaching the Elephant to Dance: Agility Meets Systems of Systems Engineering and Acquisition" (and others) from the CSE 2005 Annual Research Review.

Friday, July 07, 2006

Trustworthy Transparency over Tiresome Traceability

If there were an Agile CM Manifesto, then this statement should be part of it!
Trustworthy Transparency over Tiresome Traceability

Note that my position on traceability is more "lean" than "agile," I suspect. I base this on the XP- and Scrum-centric views that were expressed in the March 2004 YahooGroup discussion thread Why Traceability? Can it be Agile? I think "tests over traceability" is probably a valid summary of the XP/Scrum perspective from that thread.

I think David Anderson and I would probably say something more along the lines of "transparency over traceability," where we acknowledge the important goals that traceability is trying to fulfill (I'm not sure the XP community embraces all of the "8 reasons" and "6 facets" I identified in my paper on traceability dissected). David in particular has written in the past about "trustworthy transparency" and "naked projects" (projects that are so transparent and visible in their status/accounting that they seem "naked").

I also differ strongly with many of the vocal opinions expressed in the XP community when it comes to the use of tools for tracking requests/changes: I'm strongly in favor of using a "good" tracking tool. I think index cards are a great and valuable "tool" for eliciting dialogue and interaction with the "customer" (and I use them for this purpose, along with post-it notes). But I believe index cards simply do not "cut it" as a serious means of storing, tracking, sorting, searching, and slicing & dicing development/change requests.

I do believe some extent of traceability is necessary, and that it's not necessarily "agile," but that it can be, and should be, "lean" and streamlined, and should serve the purpose of transparency, visibility, and status-accounting rather than being a goal in itself. And I think there are several strategies and tactics that can be employed to achieve "lean" traceability in service to "trustworthy transparency and friction-free metrics."

I think that a "lean" approach of traceability would focus on the following:
  1. Flow: If one uses "single piece flow" and does changes at the granularity that TDD mandates, then software-level requirements, design, coding, and testing are all part of the same task, and tracking them to a single record-id in the change-tracking system and version-control tool would actually go a long way toward traceability (it takes much more work and many more intermediate artifacts when those activities are all separated over time (different lifecycle phases), space (different artifacts), and people (different roles/organizations)). When traceability efforts noticeably interfere with "flow" is when agilists will start screaming.

  2. Minimizing intermediate artifacts and other perceived forms of "waste" (over-specifying requirements, or too many requirements "up front"), because fewer artifacts mean fewer things to trace.

  3. Collocating both people & artifacts (the former for communication, the latter for "locality of reference") for those artifacts that are deemed necessary.

  4. Coarse-Granularity and Modularity/Factoring of what is traced: tracing at the highest practical level of granularity (e.g., is it practical to trace to the individual requirement or to the use-case? To the line of code, to the method/subroutine, or to the class/module?) - this would be about "simple design" and "(re)factoring" as it applies to the structure of the traced entities and their relationships.

  5. Transparent, frictionless automation of the terribly taxing and tiresome tedium of traceability: focus on taking the tedium out of manual traceability and have it streamlined and automated as much as possible, ideally happening seamlessly behind the scenes (like with Jane Huang's event-based traceability (EBT), or thru the use of a common environment "event" catcher within Eclipse or the MS Team System server), probably using a task-based, test-driven (TDD), or feature-driven (FDD) approach (a low-tech sketch of this idea follows the list).
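As a low-tech illustration of point 5 (and emphatically not Jane Huang's EBT, just a hypothetical sketch using a made-up "TASK-nnn" commit-message convention), trace links can be harvested automatically from commit messages:

```python
# Low-tech sketch of "frictionless" trace-link capture: pull task IDs out of
# commit messages to map each change back to its record in the tracking
# system. The "TASK-nnn" convention and the commit history are hypothetical.
import re
from collections import defaultdict

TASK_ID = re.compile(r"\bTASK-(\d+)\b")

def trace_links(commits):
    """commits: iterable of (revision, message) pairs.

    Returns ({task_id: [revisions]}, [revisions with no task id]).
    """
    links, orphans = defaultdict(list), []
    for revision, message in commits:
        ids = TASK_ID.findall(message)
        if not ids:
            orphans.append(revision)          # flag untraceable commits
        for task in ids:
            links[task].append(revision)
    return dict(links), orphans

if __name__ == "__main__":
    history = [
        ("r101", "TASK-42: add login retry limit"),
        ("r102", "fix typo"),                                  # no task id
        ("r103", "TASK-42 refactor retry loop; closes TASK-57"),
    ]
    print(trace_links(history))
    # -> ({'42': ['r101', 'r103'], '57': ['r103']}, ['r102'])
```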
Many of these concepts and more are embodied in Sam Guckenheimer's recent book on Software Engineering with Microsoft Visual Studio Team System. I found this book to be surprisingly good (outstanding even), and not at all what I was expecting given the apparent tool/vendor-specific nature suggested by the title. The value-up paradigm and most of the other concepts and values in the book are very well aligned with agility while still meeting the needs of more rigorous ceremony in their software and systems engineering efforts.

I'll close with a description of a recent presentation by David Anderson on Changing the Software Engineering Culture with Trustworthy Transparency:
"Modern software tooling innovation allows the tracking of work performed by engineers and transparent reporting of that work in various formats suitable for everything from day-to-day management and team organization to monthly and quarterly senior executive reporting. Modern work item tracking is coupled to version control systems and aware of analysis, design, coding and testing transitions. This makes it not only transparent but trustworthy. Not only can a tool tell you the health of a project based on the state of completion of every work item, but this information is reliable and trustworthy because it is tightly coupled to the system of software engineering and the artifacts produced by it.

The age of trustworthy transparency in software engineering is upon us. Trustworthy transparency changes the culture in an organization and enables change that unleashes significant gains in productivity and initial quality. However, transparency and managing based on objective study of reality strains existing software engineering culture as all the old rules, obfuscation, economies of truth, wishful thinking and subjective decision making must be cast aside. What can you expect, how will you cope and how can you harness the power of trustworthy transparency in your organization?"
As someone with a strong Unix and Open-Source heritage, I've long regarded Microsoft as "the evil empire" and loathed their operating system and browser and ALM tools. But in the last 3 years or so they've acquired a number of people in the Agile and ALM community that I highly respect (Brian White, Sam Guckenheimer, David Anderson, Ward Cunningham, James Newkirk) and the products these folks have worked on look incredibly impressive to me (even tho not all of them are still with Microsoft), plus I'm quite impressed with the whole of their Software Factories vision and approach ...

I actually may have to start liking them (or at least part of them :-). Don't get me wrong! I'm still a big fan of Unix (and Mac OS X), Open-Source, and more recently Eclipse, ALF, and Corona; but the competing stuff from the folks in Redmond is looking more and more impressive to me. Working on those kinds of things with those people would be an incredible experience, I think (now if only I could do that without having to relocate from Chicago or spend 25% or more of my time traveling ;-).

Wednesday, July 05, 2006

Leadership/EQ Rites of Passage and the Mythical Manager Month

A bit of a follow-up on my previous blog-entry about Matthew Edwards and his recently published book, Creating Globally Competitive Software: The Fundamentals for Regular People.

I wrote:
I have a lot of respect for Matt, he and I went thru a lot of "stuff" together over a very short+intense period (more on that in a moment) and managed to come through it while spreading a little bit of light. During that time I also pointed Matt in the direction of Agile development as a possible "way out of the madness", and he did his part to help make that a reality.
Here's the story on that ... I worked with Matt back in 1999-2002 on what was then a hideously dysfunctional "death march" project that we were trying to pull out of its own self-created and self-perpetuated hole. The product was an internal one, and Matt, a former testing guru, was one of my key customer reps. The project suffered from just about everything under the sun:
  • Bad management (failure to set+manage expectations & appropriate interfaces)
  • Dysfunctional customer & internal organization (warring tribes, turf wars, political silos, and a severe lack of trusting/trustworthy mgmt leadership),
  • Management that felt senior architects/designers aren't supposed to get their hands dirty in "coding"
  • A tech-lead with great technical & project knowledge/skill/experience and strong passion for quality design but with an equally great reluctance to lead, overly trusting and possessing piss-poor leadership & communication skills at that time (me)
  • Managers that had great communication skills, but no clue about successful software development, and no interest in learning it
  • A highly talented team of young, promising developers, but with a total lack of software development experience/maturity (which wouldn't necessarily be a bad thing if not combined with all of the above)
And so much more ... in fact that project managed to take two of the best-known worst practices ("the mythical man-month", and "too many chiefs/generals, not enough indians/soldiers") and combine them into an even worse one that I dubbed "The Mythical Manager-Month":
The Mythical Manager Month -- adding more management to a late & failing project just makes everything worse and everyone more miserable.
I have to say, that project really taught me a lot about leadership and communication, particularly ...
  • how leadership differs from management, and from cheerleading
  • the importance of planning your communication and having a communication plan
  • the huge impact of really good managers versus really bad ones,
  • the difference between credibility and trust
  • the difference between power/influence and authority
  • how incredibly selfish, two-faced, and despicably unethical some folks can be
  • how to recognize malevolent manipulators who appear to "befriend" you to gain your trust, but will betray and backstab to get what they want
  • and how to recognize (and handle) a demagogue masquerading as a "heroic manager."
The first two years of that project were both a painfully magnificent failure and a painfully magnificent teacher. It was definitely a leadership "rite of passage" for me, and leading the successful turnaround of the project (in which agility played a large part) was a deeply educational and visceral personal experience that has largely shaped my career & objectives since.

The books by Patrick Lencioni on team dysfunctions and how to overcome them, as well as on organizational silos, politics & turf-wars, would have done me a world of good back then if they'd been available (and if I'd had enough prior appreciation of those problems to have read up on them and other works related to discovering and raising my Emotional Intelligence).

That project marked my transition from "unconscious incompetence" about leadership & communication to "conscious incompetence" and really motivated me to navigate the path to "conscious competence." I yearn for the day when it becomes unconscious competence.

I'm not quite there yet. It's been a long leadership journey (much longer in experience and learning than in actual years) since that project, and I still have a long ways to go. But these days my bookshelf at home is replete with just as many books about leadership, EQ, influence, and communication as my technical bookshelf at work is with books on software development, and I think about a lot more than just the technical strategies/techniques/practices and lessons learned in my day-to-day work.

Monday, July 03, 2006

Creating Globally Competitive Software

A friend of mine, Matthew Edwards, recently published a book on Creating Globally Competitive Software: The Fundamentals for Regular People. I can't wait to get my copy and start reading through it.

I have a lot of respect for Matt, he and I went thru a lot of "stuff" together over a very short+intense period (more on that in a later blog-entry) and managed to come through it while spreading a little bit of light. During that time I also pointed Matt in the direction of Agile development as a possible "way out of the madness," and he did his part to help make that a reality.


Since then Matt has had a few other "gigs" that have advanced his experience and insights into software development (in a very Gerry Weinberg-esque fashion). He later co-founded Ajilus, which works and consults in global software development with a strong socio-technical perspective, having embraced the ideas of Agility, Scrum, Theory of Constraints, and systems thinking about the organizational/social roots of most seemingly technical problems.

So I'm really looking forward to reading what Matt has to say, as someone who has seen all of that from many perspectives, and has seen the light regarding agility, collaboration, organization, globalization and how to convey those lessons to "regular people." As part of his bio, Matt writes:
"I consult, teach, speak, write and deliver in the software solution delivery space with a focus on helping teams simplify the software delivery lifecycle - and deliver. Time, cost, team solidarity and structures, organizational behavior, ability to deliver, pulling projects out of the hole ... everything is interdependent and is usually social, not technical."
-- Matthew Edwards,
http://www.ajilus.com/
Like I said, I'm definitely looking forward to reading through this one and seeing how it can help folks like me "connect" with "regular people."