Monday, July 10, 2006

Agile CMMI and Dancing Elephants

[updated June 1, 2007]

On the surface, CMMI is definitely not very inviting to Agile. CMMI can be done in an agile fashion, however. If CMMI is something you need, then for secrets of how to do it "Agile-style," along with details of success stories and lessons learned, take a look at the following links:

Also see "Integrating Agile Methods", and "Teaching the Elephant to Dance: Agility Meets Systems of Systems Engineering and Acquisition" (and others) from the CSE 2005 Annual Research Review.

Friday, July 07, 2006

Trustworthy Transparency over Tiresome Traceability

If there were an Agile CM Manifesto, then this statement should be part of it:
Trustworthy Transparency over Tiresome Traceability

Note that my position on traceability is more "lean" than "agile," I suspect. I base this on the XP- and Scrum-centric views expressed in the March 2004 YahooGroup discussion thread Why Traceability? Can it be Agile? I think "tests over traceability" is probably a valid summary of the XP/Scrum perspective from that thread.

I think David Anderson and I would probably say something more along the lines of "transparency over traceability," where we acknowledge the important goals that traceability is trying to fulfill (I'm not sure the XP community embraces all of the "8 reasons" and "6 facets" I identified in my paper on traceability dissected). David in particular has written in the past about "trustworthy transparency" and "naked projects" (projects that are so transparent and visible in their status/accounting that they seem "naked").

I also differ strongly with many of the vocal opinions expressed in the XP community when it comes to the use of tools for tracking requests/changes: I'm strongly in favor of using a "good" tracking tool. I think index cards are a great and valuable "tool" for eliciting dialogue and interaction with the "customer" (and I use them for this purpose, along with post-it notes). But I believe index cards simply do not "cut it" as a serious means of storing, tracking, sorting, searching, and slicing & dicing development/change requests.

I do believe a certain extent of traceability is necessary, and that while it's not necessarily "agile," it can be, and should be, "lean" and streamlined, serving the purpose of transparency, visibility and status-accounting rather than being a goal in itself. And I think there are several strategies and tactics that can be employed to achieve "lean" traceability in service of "trustworthy transparency and friction-free metrics."

I think a "lean" approach to traceability would focus on the following:
  1. Flow: If one uses "single piece flow" and makes changes at the granularity that TDD mandates, then software-level requirements, design, coding, and testing are all part of the same task, and tracking them to a single record-id in the change-tracking system and version-control tool actually goes a long way toward traceability (it's much more work, with many more intermediate artifacts, when those activities are separated over time (different lifecycle phases), space (different artifacts) and people (different roles/organizations)). When traceability efforts noticeably interfere with "flow" is when agilists will start screaming.

  2. Minimizing intermediate artifacts and other perceived forms of "waste" (overspecifying requirements or too much requirements "up front") because fewer artifacts means fewer things to trace.

  3. Collocating both people & artifacts (the former for communication, the latter for "locality of reference") for those artifacts that are deemed necessary.

  4. Coarse-Granularity and Modularity/Factoring of what is traced: tracing at the highest practical level of granularity (e.g., is it practical to trace to the individual requirement or to the use-case? To the line of code, or to the method/subroutine, or to the class/module?) - this would be about "simple design" and "(re)factoring" as it applies to the structure of the traced entities and their relationships.

  5. Transparent, frictionless automation of the terribly taxing and tiresome tedium of traceability: focus on taking the tedium out of manual traceability and have it streamlined and automated as much as possible, ideally happening seamlessly behind the scenes (like with Jane Huang's event-based traceability (EBT), or thru the use of a common environment "event" catcher within Eclipse or MS Team System server), probably using a task-based, test-driven (TDD), or feature-driven (FDD) approach (a minimal sketch of this idea follows this list).
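To make the "frictionless automation" idea a bit more concrete, here is a minimal sketch of task-based traceability derived from version-control history. The commit-message convention, the "TASK-nnnn" ID format, and the use of git here are my own illustrative assumptions, not a reference to any particular tool's API:

```python
import re
import subprocess
from collections import defaultdict

# Hypothetical convention: every commit message references a tracking-system
# record like "TASK-1234" (the ID format is an assumption for illustration).
TASK_ID = re.compile(r"\bTASK-\d+\b")

def build_trace_map(repo_path="."):
    """Derive a task -> (commits, files) trace map from normal commit history,
    so traceability falls out of day-to-day work instead of manual bookkeeping."""
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:@@%H %s"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    trace = defaultdict(lambda: {"commits": [], "files": set()})
    current_tasks = []
    for line in log.splitlines():
        if line.startswith("@@"):                      # start of a new commit
            commit_hash, _, subject = line[2:].partition(" ")
            current_tasks = TASK_ID.findall(subject)
            for task in current_tasks:
                trace[task]["commits"].append(commit_hash)
        elif line.strip():                             # a file touched by that commit
            for task in current_tasks:
                trace[task]["files"].add(line.strip())
    return trace

if __name__ == "__main__":
    for task, info in sorted(build_trace_map().items()):
        print(f"{task}: {len(info['commits'])} commits, {len(info['files'])} files")
```

The point of the sketch is that nobody fills in a trace matrix by hand: the trace report is generated on demand from artifacts the team already produces.
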
Many of these concepts and more are embodied in Sam Guckenheimer's recent book on Software Engineering with Microsoft Visual Studio Team System. I found this book to be surprisingly good (outstanding even), and not at all what I was expecting given the apparent tool/vendor-specific nature suggested by the title. The value-up paradigm and most of the other concepts and values in the book are very well aligned with agility while still meeting the needs of more rigorous ceremony in their software and systems engineering efforts.

I'll close with a description of a recent presentation by David Anderson on Changing the Software Engineering Culture with Trustworthy Transparency:
"Modern software tooling innovation allows the tracking of work performed by engineers and transparent reporting of that work in various formats suitable for everything from day-to-day management and team organization to monthly and quarterly senior executive reporting. Modern work item tracking is coupled to version control systems and aware of analysis, design, coding and testing transitions. This makes it not only transparent but trustworthy. Not only can a tool tell you the health of a project based on the state of completion of every work item, but this information is reliable and trustworthy because it is tightly coupled to the system of software engineering and the artifacts produced by it.

The age of trustworthy transparency in software engineering is upon us. Trustworthy transparency changes the culture in an organization and enables change that unleashes significant gains in productivity and initial quality. However, transparency and managing based on objective study of reality strains existing software engineering culture as all the old rules, obfuscation, economies of truth, wishful thinking and subjective decision making must be cast aside. What can you expect, how will you cope and how can you harness the power of trustworthy transparency in your organization?
"
As someone with a strong Unix and Open-Source heritage, I've long regarded Microsoft as "the evil empire" and loathed their operating system and browser and ALM tools. But in the last 3 years or so they've acquired a number of people in the Agile and ALM community that I highly respect (Brian White, Sam Guckenheimer, David Anderson, Ward Cunningham, James Newkirk) and the products these folks have worked on look incredibly impressive to me (even tho not all of them are still with Microsoft), plus I'm quite impressed with the whole of their Software Factories vision and approach ...

I actually may have to start liking them (or at least part of them :-). Don't get me wrong! I'm still a big fan of Unix (and Mac OS/X), Open-Source, and more recently Eclipse, ALF and Corona; but the competing stuff from the folks in Redmond is looking more and more impressive to me. Working on those kinds of things with those people would be an incredible experience, I think (now if only I could do that without having to relocate from Chicago or spend 25% or more of my time traveling ;-).

Wednesday, July 05, 2006

Leadership/EQ Rites of Passage and the Mythical Manager Month

A bit of a follow-up on my previous blog-entry about Matthew Edwards and his recently published book on Creating Globally Competitive Software: The Fundamentals for Regular People.

I wrote:
I have a lot of respect for Matt, he and I went thru a lot of "stuff" together over a very short+intense period (more on that in a moment) and managed to come through it while spreading a little bit of light. During that time I also pointed Matt in the direction of Agile development as a possible "way out of the madness", and he did his part to help make that a reality.
Here's the story on that ... I worked with Matt back in 1999-2002 on what was then a hideously dysfunctional "death march" project that we were trying to pull out of its own self-created and self-perpetuated hole. The product was an internal one, and Matt, a former testing guru, was one of my key customer reps. The project suffered from just about everything under the sun:
  • Bad management (failure to set+manage expectations & appropriate interfaces)
  • Dysfunctional customer & internal organization (warring tribes, turf wars, political silos, and a severe lack of trusting/trustworthy mgmt leadership),
  • Management that felt senior architects/designers aren't supposed to get their hands dirty in "coding"
  • A tech-lead with great technical & project knowledge/skill/experience and strong passion for quality design but with an equally great reluctance to lead, overly trusting and possessing piss-poor leadership & communication skills at that time (me)
  • Managers that had great communication skills, but no clue about successful software development, and no interest in learning it
  • A highly talented team of young, promising developers, but with a total lack of software development experience/maturity (which wouldn't necessarily be a bad thing if not combined with all of the above)
And so much more ... in fact that project managed to take two of the best-known worst practices ("the mythical man-month", and "too many chiefs/generals, not enough indians/soldiers") and combine them into an even worse one that I dubbed "The Mythical Manager-Month":
The Mythical Manager Month -- adding more management to a late & failing project just makes everything worse and everyone more miserable.
I have to say, that project really taught me a lot about leadership and communication, particularly ...
  • how leadership differs from management, and from cheerleading
  • the importance of planning your communication and having a communication plan
  • the huge impact of really good managers versus really bad ones,
  • the difference between credibility and trust
  • the difference between power/influence and authority
  • how incredibly selfish, two-faced, and despicably unethical some folks can be
  • how to recognize malevolent manipulators who appear to "befriend" you to gain your trust, but will betray and backstab to get what they want
  • and how to recognize (and handle) a demagogue masquerading as a "heroic manager."
The first two years of that project were both a painfully magnificent failure and a painfully magnificent teacher. It was definitely a leadership "rite of passage" for me, and leading the successful turnaround of the project (in which agility played a large part) was a deeply educational and visceral personal experience that has largely shaped my career & objectives since.

The books by Patrick Lencioni on team dysfunctions and how to overcome them, as well as organizational silos, politics & turf-wars, would have done me a world of good back then if they'd been available (and if I'd had enough prior appreciation of those problems to have read up on them and other works related to discovering and raising my Emotional Intelligence).

That project marked my transition from "unconscious incompetence" about leadership & communication to "conscious incompetence" and really motivated me to navigate the path to "conscious competence." I yearn for the day when it becomes unconscious competence.

I'm not quite there yet. It's been a long leadership journey (much longer in experience and learning than in actual years) since that project, and I still have a long ways to go. But these days my bookshelf at home is replete with just as many books about leadership, EQ, influence, and communication as my technical bookshelf at work is with books on software development, and I think about a lot more than just the technical strategies/techniques/practices and lessons learned in my day-to-day work.

Monday, July 03, 2006

Creating Globally Competitive Software

A friend of mine, Matthew Edwards, recently published a book on Creating Globally Competitive Software: The Fundamentals for Regular People. I can't wait to get my copy and start reading through it.

I have a lot of respect for Matt, he and I went thru a lot of "stuff" together over a very short+intense period (more on that in a later blog-entry) and managed to come through it while spreading a little bit of light. During that time I also pointed Matt in the direction of Agile development as a possible "way out of the madness," and he did his part to help make that a reality.


Since then Matt has had a few other "gigs" that have advanced his experience and insights into software development (in a very Gerry Weinberg-esque fashion). He later co-founded Ajilus, which works and consults in global software development with a strong socio-technical perspective, having embraced the ideas of Agility, Scrum, Theory of Constraints, and systems thinking about the organizational/social roots of most seemingly technical problems.

So I'm really looking forward to reading what Matt has to say, as someone who has seen all of that from many perspectives, and has seen the light regarding agility, collaboration, organization, globalization and how to convey those lessons to "regular people." As part of his bio, Matt writes:
"I consult, teach, speak, write and deliver in the software solution delivery space with a focus on helping teams simplify the software delivery lifecycle - and deliver. Time, cost, team solidarity and structures, organizational behavior, ability to deliver, pulling projects out of the hole ... everything is interdependent and is usually social, not technical."
-- Matthew Edwards,
http://www.ajilus.com/
Like I said, I'm definitely looking forward to reading through this one and seeing how it can help folks like me "connect" with "regular people."

Sunday, June 25, 2006

Nested Synchronization and Harmonic Cadences

I was reading David Anderson's weblog and his recent entry on good versus bad variation (which references an earlier blog-entry of mine on the same subject). Apparently this was a recurring theme at the recent Lean Summit in Chicago, and the consensus there was:
  • Organizing for routine work: Drive out variation (and automate profusely)
  • Organizing for innovative work: Encourage variation (and collaborate profusely)

One of the links was to Don Reinertsen's website (he is the author of Managing the Design Factory), and at the top of the page was the "tip of the month" for June 2006 on the subject of Synchronization:
The practical economics of different processes may demand different batch sizes and different cadences. Whenever we operate coupled processes using different cadences it is best to synchronize these cadences as harmonic multiples of the slowest cadence. You can see this if you consider how you would synchronize the arrival of frequent commuter flights with less frequent long haul flights at an airline hub.
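To make the "harmonic multiples" idea concrete in build/release terms, here's a small sketch; the daily/weekly/28-day cadences are made-up numbers purely for illustration:

```python
from math import gcd

def is_harmonic(cadences_in_days):
    """True if every faster cadence divides evenly into the slowest one,
    i.e. all cadences are harmonic multiples of the slowest cadence."""
    slowest = max(cadences_in_days)
    return all(slowest % c == 0 for c in cadences_in_days)

def sync_points(cadences_in_days, horizon_days):
    """Days (within the horizon) on which all of the coupled cadences coincide."""
    lcm = 1
    for c in cadences_in_days:
        lcm = lcm * c // gcd(lcm, c)
    return list(range(lcm, horizon_days + 1, lcm))

# Hypothetical example: private builds daily, integration builds weekly,
# release builds every four weeks (28 days).
cadences = [1, 7, 28]
print(is_harmonic(cadences))      # True: 1 and 7 both divide 28 evenly
print(sync_points(cadences, 90))  # [28, 56, 84] -- every cadence lines up
```

If the weekly cadence were changed to, say, 10 days, the cadences would no longer be harmonic and the synchronization points would drift much further apart, which is exactly the commuter-flight/long-haul problem Reinertsen describes.
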

Also, Mary Poppendieck was mentioning "Nested Synchronization" in the Lean Development YahooGroup while she was working on her latest book Implementing Lean Software Development: From Concept to Cash where she advised to use continuous integration and nested synchronization instead of infrequent, big-bang integration.

I think both of these apply directly to "Lean" SCM!
  • Harmonic cadences address nested synchronization of integration/build frequencies, both in the case of
    1. different types of builds (private build, integration build, release build), and ...
    2. different levels of builds (component builds, product-builds)
    3. and also in the case of connected supplier/consumer "queues" where builds or components are received from an (internal or external) supplier and incorporated into our own product/components builds.

  • Harmonic cadences would also address release-cycle planning for a product-line of products that are built from multiple (internal & external) component releases.

  • Nested synchronization would seem to apply to branching structures where development branches feed into integration/release branches and their relation to mainline branches, and the direction and frequency with which changes get merged or propagated across codelines.

Of course, when you can manage without the "nesting", that is ideal for continuous integration. Continuous integration together with test-driven development seems to approximate what Lean calls one piece flow. An article from Strategos discusses when one-piece flow is and isn't applicable.

In the context of SCM, particularly continuous integration and TDD, one piece flow would correspond to developing the smallest possible testable behavior, then integrating it once it is working, and then doing the next elementary "piece", and so on. This is typically bounded by:
  1. the time it takes to [correctly] code the test and the behavior
  2. the time it takes to sync-up (merge) your code with the codeline prior to building+testing it, and ...
  3. the time it takes to verify (build + test) the result
Working in such extremely fine-grained increments might not always work well if the one-piece-flow cycle-time was dominated by the time to sync-merge or to build+test, and/or if it always had a substantially disruptive/destabilizing effect on the codeline.

In those cases, if the time/cost "hit" is more or less the same regardless of the size/duration of the change, then the penalty per "batch" is roughly the same for a batch of one piece as it is for a larger batch, and it makes sense to develop in larger increments before integrating and committing your code to the codeline.
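Here's a toy calculation of that trade-off; the 20-minute coding time and 30-minute integration overhead are made-up numbers purely for illustration:

```python
def cost_per_piece(pieces_per_batch, code_minutes_per_piece, overhead_minutes_per_batch):
    """Average elapsed minutes per piece when the integration overhead
    (sync-merge + build + test) is paid once per batch."""
    total = pieces_per_batch * code_minutes_per_piece + overhead_minutes_per_batch
    return total / pieces_per_batch

# Hypothetical numbers: 20 minutes to code/test a piece, 30 minutes of
# sync-merge + build + test overhead per integration.
for batch in (1, 2, 5, 10):
    print(batch, round(cost_per_piece(batch, 20, 30), 1))
# batch of 1 -> 50.0 min/piece; batch of 5 -> 26.0; batch of 10 -> 23.0
# The fixed "hit" per integration shrinks per piece as the batch grows,
# which is what tempts you toward larger increments (at the cost of
# slower feedback and bigger, riskier merges).
```
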

Monday, June 19, 2006

Agile Metrics in the Agile Journal

The June issue of the Agile Journal is devoted to the subject of Agile Metrics. Check it out!

There is also a review of David Anderson's book Agile Management for Software Engineering. Little did I know that while I was working on the review, David would be honoring me with praise at his own weblog.

I swear I knew nothing of it when I wrote my review, and that David had no knowledge that I was writing the review of his book (much less what I would say in it). We simply share a very deep admiration and respect for each other's work and ideas.

Wednesday, June 14, 2006

Agile Ideation and Lean Innovation?

More on "agile futures" from some of my earlier posts on globalization 3.0 and extreme competition and how the only way to stay competitive will be to innovate faster and more frequently than the competition ...

So does that mean that the "most valuable features" to implement first will be the ones that are considered "innovative"? Before we can execute on doing agile development for innovative features, we have to have some kind of initial "innovation clearinghouse" in the organization where we have a buffer of potential innovation idea-candidates. Those "gestational" ideas will need to undergo some kind of evaluation to decide which ones to toss, which ones to develop a business case for, which ones to do some early prototyping of, which ones to "incubate," etc.

Eventually, I see two queues, one feeding the other. The "Candidate Innovations" queue will need to "feed" the agile/lean software development queue. Things on the candidate innovations queue will have to go thru some equivalent of iterating, test-first, pairing/brainstorming, refactoring, and continuous innovation integration so that the queue can take "raw" and "half-baked" ideas in the front and churn out fully-baked, concrete, actionable ideas to then feed the development queue.

So the one queue will exist to create "actionable knowledge" (ideation), and its output will then go into the queue that cranks out "executable knowledge" in the form of working software. Given this two-queued system, how does it work when the "software queue" has both a request (product) backlog and a sprint (iteration) backlog? Lots of things on the product-backlog might be viewed as waste. And yet if they have made it thru the "ideation" backlog to produce an actionable concept and business-case, then they will indeed have value (but it will be perishable value).

What would Lean+TOC say about how to unconstrain, eliminate waste from, and maximize the innovation flow that feeds the agile development flow? (I'm assuming the innovation flow would be a bigger bottleneck than the software development flow.)

Friday, June 09, 2006

Extreme Economic Gloom and Doom

According to a number of different sources, the US economy is going to have its bottom fall out somewhere around 2010 due to a variety of reasons that are converging all around that same time:
  • Massive trade deficit, soaring personal and government debt, a housing bubble, runaway military expenditures, and skyrocketing healthcare costs with employers' insurance plans covering less and less these days (the usual)

  • Globalization 3.0 and the commoditization of knowledge-work & knowledge-workers

  • Peak oil supply will have been breached (some say it has already, others say it will happen anywhere between 2004 and 2010), resulting in soaring oil prices (far more than they are today) and the race for efficient mass production & distribution of low-cost alternative energy sources

  • Retirement of the "baby-boom" generation (starting in 2007 and peaking between 2010-2020) and its impact upon social security reserves (because of ERISA) and US supply of knowledge-workers

  • Global warming and depletion of the environment will reach the point of no return sometime between 2010 and 2020 (if you believe Al Gore in the recent documentary "An Inconvenient Truth")

  • Likelihood of a global pandemic flu (possibly bird-flu, but possibly any other kind of flu) happening within the next 5-10 years, and its global impact on medical and industrial/business supply chains (how far away will we be from harnessing nano-biotechnology when it hits?)

I gleaned all of this just from browsing a bunch of books on amazon.com, like the following:

There are LOTS more saying the same things. On the other hand, a few authors hold out hope that we will finally focus on some of the right things (like the environment and alternative energy sources, and turning to nature itself for innovation):

These things are all converging together (coming to a "head") within the next 10 years. I wonder what the state of Agility will be like then ...
  • Will little/no inventory be desirable amidst the threat of global supply chain disruptions due to pandemic health crisis?
  • Or will agile business partnerships and the resulting "agile business ecosystems" somehow be "autonomic" by that time?
  • As for oil and transportation, might not the threat of pandemic flu end up fueling "virtual" travel and telecommuting?
  • Or will that just give people more time to use their cars for non-work reasons?
  • Who will want to go to the mall or the grocery store if they're worried about contracting life-threatening illnesses?
  • What about emerging markets that are going to "boom" but haven't yet? (Many say nanotechnology and biotechnology will do this eventually - but when?)

I won't be eligible for retirement for ~30 years, and within ~15 years I want to be able to finance a college education for my two children (less than 2 years apart in age). All this sort of makes me want to say "Beam me up Scottie!", or "Why oh why didn't I take the blue pill!"

Monday, June 05, 2006

Vexed by Variation: Eliminate or Encapsulate (with TOC+Lean+GoF)

I had some positive feedback on my previous entry about Six Sigma and Good vs. Bad Variation.

The Six Sigma methodology is largely about eliminating or reducing [destructive] process variation. In the case of software design (and what should be the case in software process design, but all too often is not), we recognize that change and uncertainty are inevitable, and we use techniques to minimize and localize the impacts of changes. Three of the most common techniques come straight out of the classic Design Patterns book from the "Gang of Four":
  • Identify what changes and Encapsulate the thing that varies
  • Program to an interface, not to an implementation
  • Prefer composition over inheritance
These principles could just as easily apply to process design and the design of process-families (a Product-Family or Product-Line for a family of processes). I attempted this in an earlier blog-entry entitled CM to an interface, not to an implementation.
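As a minimal sketch of what "encapsulate the thing that varies" and "program to an interface" might look like when applied to a process step rather than to code: the VerificationPolicy interface, the concrete policies, and the Change record below are hypothetical illustrations of the idea, not part of any published method or tool:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Change:                       # hypothetical stand-in for a tracked change
    tests_passed: bool
    reviewers: int

class VerificationPolicy(ABC):
    """The stable process 'interface': what must hold before a change is accepted.
    The part that varies (how verification happens) is encapsulated behind it."""
    @abstractmethod
    def verify(self, change: Change) -> bool: ...

class AutomatedRegressionPolicy(VerificationPolicy):
    def verify(self, change: Change) -> bool:
        return change.tests_passed          # routine work: drive out variation

class PeerReviewPolicy(VerificationPolicy):
    def verify(self, change: Change) -> bool:
        return change.reviewers >= 2        # innovative work: collaborate

def accept_change(change: Change, policy: VerificationPolicy) -> bool:
    """Integration step coded against the policy interface, not an implementation;
    swapping policies does not ripple through the rest of the process."""
    return policy.verify(change)

print(accept_change(Change(tests_passed=True, reviewers=0), AutomatedRegressionPolicy()))
print(accept_change(Change(tests_passed=False, reviewers=3), PeerReviewPolicy()))
```

The variation between project or product-line "flavors" of the process lives entirely inside the policy implementations, while the overall process composes against the interface.
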

So how do we find this variation, and how do we know what to do with it? Both Lean and TOC give us some tools to do this (as does Six Sigma). Six Sigma's process-maps are similar to (and quite possibly borrowed from) Lean's value-stream maps. These are one way to form the "current reality tree" of TOC's Thinking Process and then look for things like waste, non-conformance, or conflict (e.g., a "process" impedance mismatch).

When we find something, what do we do? ...
  • If it is waste, we attempt to eliminate it or reduce it using Lean.

  • If it is variation, we should ask whether it is destructive variation (causing poor quality). Is the variation the cause of the problem? Or is it the inherent uncertainty and/or our inability to adapt in the face of change?

  • If the variation is destructive, seek to eliminate it. Sometimes automation helps to eliminate variation as long as there is enough certainty in the procedural details and outputs of what to automate.

  • If the variation is not destructive, or if the problem is uncertainty and/or our ability to be resilient and adapt to change, then try to isolate and localize the variation by encapsulating it. Use the GoF principles and patterns to separate policy (process interface) from mechanism (procedural implementation).

  • If the problem is conflict, then look to see if it is destructive or constructive interference, or impedance mismatch.

  • Impedance mismatch needs to be corrected, and destructive interference (friction) needs to be eliminated or minimized.

  • Constructive interference, and possibly other forms of conflict may need to be retained, in which case we would again look to encapsulate the conflict by trying to separate policy from mechanism, or interface from implementation, and discovering the higher-level rules/forces that they still have in common.

In all the above cases, TOC's Five Focusing Steps can help with the identification of the appropriate patterns/practices to apply.

Comments? Did that make sense? Did it seem like a reasonable application of using OOD patterns and principles in conjunction with TOC, Lean and Six Sigma?

Tuesday, May 30, 2006

Simple ain't Easy: Myths and Misunderstandings about Simplicity

Obviously not all of us have the same idea of what Simple or Simplicity actually mean, specifically in the context of system design (including software, and processes). Here are some common misunderstandings that I frequently encounter about the meaning of "simple design":
"Simple" is not the same thing as "easy to do/understand."

Sometimes something that is "simple" is easy to do or easy to understand. Whether or not it is easy is often more closely related to how familiar or intuitive it is. Eventually, it may be quite simple to do or understand. But initial attempts to do or understand it may be anything but easy!

The simpler solution may require us to learn something new, and think about something in a way that hasn't occurred to us before. Closed minds will often close doors on new ideas (simple or otherwise) because they simply don't want to entertain changing their current views and beliefs.


"Simple design" is not the same thing as "simple to develop/deploy."

If it's simple from the get-go, then it may perhaps be simple to develop/deploy. If the solution is already in place, then making it simpler may involve changing a lot of minds/behaviors as well as a lot of the system, and both of those may be anything but easy to do (especially changing minds/behaviors).


"Simple" is not the same thing as "good enough!"

Put another way, Simplicity != Sufficiency. "Good enough" has more to do with whether something is sufficiently workable "for now" while still fostering subsequent improvement or refinement for later. That doesn't mean the deployed result is simple/simpler; it just means we may be better served by getting to that point in an incremental and evolutionary (emergent) fashion.

In order for that to be true however, it means that the partial/incremental solution must be easy to subsequently change in order to refine and improve!!!

If I install something that is incomplete with the intent of making it more complete later, and it is very hard/painful to change, then I may end up with the current state of affairs for many IT solutions and business-processes: short-sighted, insufficient solutions that the organization defends, and chooses to suffer and impose on others, because it doesn't want to bear the impact of the change that could bring relief from the suffering it has become accustomed to living with.


"Simple" is not the same thing as "simplistic!"

A simplistic solution often does not work! One definition of "simplistic" might be the false appearance of simplicity. It's not enough to seem/appear simple. It also has to work (successfully) in order to actually be simple!

Many times someone will discard or exclude a suggestion because it introduces something additional or new into the current design; they don't want to add anything more/new in the name of simplicity, but they may be being simplistic instead. If the new/added thing is a rightful part of the problem that needs to be solved, then its introduction corrects an act of omission in the solution design that neglected something essential in the problem-domain.

Sometimes we exclude things that we don't want to see (in the name of simplicity) which are nonetheless a real-world element of the problem we need to solve. Dismissing them when ignoring them has failed to solve the problem is not simplicity; it is ignorance. It is okay to want something that is "stupidly simple," but not at the expense of being simply stupid!

If the result doesn't do what it's supposed to do when it's supposed to do it, it may seem simple, but, as Gerry Weinberg points out, it's likely that something crucial to the problem statement was omitted or misunderstood either in the design, or in the problem statement itself.


What is "simple" for one request may not be "simple" for the whole!

When faced with a single, seemingly simple request to enhance a system, the requestor may want the specific solution to be simple for their particular situation and perspective (this is sometimes called "point-based thinking"). But what usually needs to be simple is the resulting overall system. Making it simple from one view or situation may just compromise other parts (and stakeholders) of the system. That's not eliminating complexity; it's not even "sweeping it under the rug"; it's just sweeping it onto someone else's doorstep.

Note how a lot of these myths/misunderstandings are more about resistance to changing our thinking/behavior than about being simple.

The Agile Manifesto defines simplicity as "maximizing the amount of work not done." But I think that's a more accurate characterization of Lean than of simplicity.

Recently, I looked through a number of sources of information about the meaning of "simplicity" and its principles, and I came across a number of interesting resources:
After mulling-over all of those, I think it's fair to say that while "Simplicity" may be, well, "simple", truly understanding "simplicity" is in fact quite hard!
  • Simplicity involves being able to see the whole from a systems thinking perspective while at the same time being able to focus in on what is relevant and essential and how it impacts the rest of the system.
  • Sustainable simplicity often has to evolve or emerge on its own from a set of simple guiding rules.
  • The opposite of simplicity is complexity (as opposed to "hard" or "difficult" or "time-consuming" or "labor-intensive")
  • In mathematics, simplicity is often "elegance" and is more than just the intersection of "what is necessary" and "what is sufficient"
  • In architecture, "simplicity" is often synonymous with "beauty"
  • Hiding complexity isn't the same as removing complexity.
  • Many of the tools we use to manage complexity in systems design may in fact add more/new objects to hide complexity or separate concerns
  • Minimizing dependencies throughout a system is more critical to simplicity than minimizing the number/types of objects
  • Occam's Razor does not in fact say that the "simpler explanation is better" ... it says that the explanation that makes the fewest assumptions and poses the fewest hypotheticals (i.e., minimizes the number of "given" and "unproven" conditions) is the most preferable because it is easier to comprehensively test in order to prove/disprove.

I think that true simplicity is about minimizing and managing overall complexity. Complexity in software design and development comes from the sheer size and complexity of the problem we are being asked to solve, and the richness and vastness of the interconnected maze of interactions within the system and between the system and its environment.
  • The overall complexity of the system is dominated far more by the interactions between the parts that make up the whole than it is by the parts alone.
  • For any non-trivial system, simplicity often has less to do with the number and kind of different things involved and more to do with the number and kind of interdependencies between them.
So achieving simplicity is less about managing "parts" of "things" or individual point-solutions, and is more about managing rules and relationships between the parts and things and solution-sets.

When dealing with large or complex systems (like most software, and software processes) the number of things (scale) and different types of things (diversity) that need to be managed is overwhelming. If we can come up with a modicum of modest, simple rules & principles to govern our design decisions in ways that help us minimize and manage interdependencies, eliminate constraints, and remove waste, then we are on the path to solving the real problem and meeting stakeholder needs in a way that is both simple and sustainable.

I'll close with my collection of favorite quotes on simplicity and simple design from the sources I culled above.


Everything should be made as simple as possible, but not simpler.
-- Albert Einstein
Three Rules of Work: Out of clutter find simplicity; From discord find harmony; In the middle of difficulty lies opportunity.
-- Albert Einstein
For every problem there is a solution which is simple, clean and wrong.
-- Henry Louis Mencken
Think simple as my old master used to say - meaning reduce the whole of its parts into the simplest terms, getting back to first principles.
-- Frank Lloyd Wright
Beauty of style and harmony and grace and good rhythm depend on simplicity.
-- Plato
The ability to simplify means to eliminate the unnecessary so that the necessary may speak.
-- Hans Hofmann
Making the simple complicated is commonplace; making the complicated simple, awesomely simple, that's creativity.
-- Charles Mingus
Everything is both simpler than we can imagine, and more complicated than we can conceive.
-- Goethe
The whole is simpler than the sum of its parts.
-- Willard Gibbs
The pure and simple truth is rarely pure, and never simple.
-- Oscar Wilde
Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius--and a lot of courage--to move in the opposite direction.
-- E. F. Schumacher
Besides the noble art of getting things done, there is the noble art of leaving things undone. The wisdom of life consists in the elimination of nonessentials.
-- Lin Yu Tang
Very often, people confuse simple with simplistic. The nuance is lost on most.
-- Clement Mok
You can't force simplicity; but you can invite it in by finding as much richness as possible in the few things at hand. Simplicity doesn't mean meagerness but rather a certain kind of richness, the fullness that appears when we stop stuffing the world with things.
-- Thomas Moore
The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.
-- Bertrand Russell
The aspects of things that are most important to us are hidden because of their simplicity and familiarity.
-- Ludwig Wittgenstein
Manifest plainness, Embrace simplicity, Reduce selfishness, Have few desires.
-- Lao-Tzu, Tao Te Ching
Simple things should be simple and complex things should be possible.
-- Alan Kay
The key to performance is elegance, not battalions of special cases. The terrible temptation to tweak should be resisted unless the payoff is really noticeable.
-- Jon Bentley and Doug McIlroy
... the purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.
-- Edsger W. Dijkstra
Simplicity and elegance are unpopular because they require hard work and discipline to achieve and education to be appreciated.
-- Edsger W. Dijkstra
Beauty is more important in computing than anywhere else in technology because software is so complicated. Beauty is the ultimate defense against complexity.
-- David Gelernter
Fools ignore complexity; pragmatists suffer it; experts avoid it; geniuses remove it.
-- Alan Perlis
Technical skill is mastery of complexity, while creativity is mastery of simplicity.
-- E. Christopher Zeeman
Architect: Someone who knows the difference between that which could be done and that which should be done.
-- Larry McVoy
One of the great enemies of design is when systems or objects become more complex than a person - or even a team of people - can keep in their heads. This is why software is generally beneath contempt.
-- Bran Ferren
A complex system that works is invariably found to have evolved from a simple system that worked.
-- John Gall
The most powerful designs are always the result of a continuous process of simplification and refinement.
-- Kevin Mullet
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
-- C.A.R. Hoare
Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away.
-- Antoine de Saint-Exupéry
Simple, clear purpose and principles give rise to complex intelligent behavior. Complex rules and regulations give rise to simple stupid behavior.
-- Dee Hock

Wednesday, May 24, 2006

Six Sigma and Good vs Bad Variation

I briefly touched on Six Sigma (the methodology, not the number/measure) in my previous blog-entry on Cost, Cruft and Constraints, and how Six Sigma methods are about reducing what I called destructive variation:
Six Sigma is about eliminating process variation, but not just any kind of process variation. It's about eliminating "destructive" variation. Destructive variation is value-subtracting variation (rather than value-adding).
Some go too far with Six Sigma and interpret it to mean that ALL process variation is "bad." I don't subscribe to that belief. I think there is intentional variation that adds value, as well as "essential" variation that is either unavoidable, or else emerges naturally (and beneficially) out of the creative & learning processes.

Many have tried for repeatability & reproducibility where it isn't always appropriate. The knowledge-creating aspects of software development aren't always appropriate places for procedural repeatability and removing variation. The feedback/validating aspects often are. I think software CM and testing are appropriate places for this: anything that is appropriate and desirable to automate (like building and testing) is an appropriate place for removing variation and applying Six Sigma techniques.
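Here's a minimal sketch of what I mean by scripting the repeatable parts so the procedure itself carries no person-to-person variation; the make targets are placeholders for illustration, not a real project's build:

```python
import subprocess
import sys

# Hypothetical build-and-test pipeline: every step is scripted, so the same
# inputs always go through exactly the same procedure -- no manual variation
# in how a build gets produced and verified.
STEPS = [
    ["make", "clean"],   # placeholder commands; substitute your own build/test steps
    ["make", "all"],
    ["make", "test"],
]

def run_pipeline():
    for step in STEPS:
        result = subprocess.run(step)
        if result.returncode != 0:
            print(f"FAILED at: {' '.join(step)}")
            return result.returncode
    print("build + test completed via a repeatable, scripted procedure")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```
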

So just like not all conflict is bad, not all variation is bad! And Six Sigma efforts should (I think) limit their focus to destructive variation for the portions of their processes where it makes sense to have both repeatable procedures and reproducible results.

I think I'm actually starting to come up with a combination of Agile, Lean, TOC and Six Sigma that is both cohesive and coherent. I credit David Anderson with starting me down that path. (Look out! CMMI might be next :-)

Tuesday, May 23, 2006

Business Agility Defined

The last few blog entries on agile definitions, and Agile + Lean + TOC, have inspired me to proffer up this alliterative definition of Agility in a business context.

Business Agility is ...
Rapid Response to Change with Optimal Efficiency in Motion, Economy of Effort, Energy in Execution, and Efficacy of Impact!

Is that too verbose? Is it enough to say simply:
Rapid response with optimal efficiency, economy, energy and efficacy!

Note that I don't say anything above about a keen sense of timing & awareness of change in one's environment. Should it? What about entrusting and empowering others?

Monday, May 22, 2006

Cost, Cruft and Constraints

My earlier blog-entry on "Feedback, Flow, and Friction" met with mixed reviews.

Many commented that agile development is ultimately about lowering the cost of change. Others noted feedback is important, but unless you respond to it and take appropriate action, it's not the be-all and end-all it may seem. A few felt that "friction" wasn't quite close enough to "constraints."

It seems when it comes right down to it, Agile, Lean and TOC are all about trying to eliminate and/or remove the following:
  • Cost-of-change: Agile is about reducing the cost of change by making change easy and catching the need to change much earlier thru continuous feedback and close, frequent stakeholder interaction.

  • Cruft: It (cruft) is waste! And Lean is all about eliminating waste in order to maximize flow.

  • Constraints: TOC is all about eliminating constraints, and using the five focusing steps to do it.
While we're at it - Six Sigma is also about eliminating something: variation (but not just any kind of variation). It's about eliminating "destructive" variation. Destructive variation is value-subtracting variation (rather than value-adding).

The phrase "Cost, Cruft and Constraints" doesnt sound as attractive as "Feedback, Flow and Friction." A large part of that may be due to its nonconstructive focus on what to eliminate rather than on what to create.

For each thing I'm allegedly eliminating, I'm gaining something else:
  • Reducing the cost-of-change makes it easier to accommodate change and be adaptive/responsive

  • Eliminating waste helps maximize flow of the production pipeline.

  • Eliminating constraints helps maximize throughput

  • Eliminating destructive variation helps maximize quality in terms of correctness, reliability, availability, operability, maintainability, etc.

Monday, May 15, 2006

Pragmatic Multi-Variant Management

I had a rather lengthy post on this subject on the Pragmatic Programming Yahoo group ...

My advice as a CM guy is you do NOT want to use branches or branching to solve this problem. If you have to manage/support multiple "configurations" of functionality for multiple markets/customers, branching is just about the worst way to do it (and this is coming from someone who is an advocate of branching, when done the right way for the right reasons).

The issue is likely to be one of "Binding Time". In your case, you would prefer the binding-time to be either at build-time (such as conditional compilation or linking), or release-engineering time (packaging up the "right" sets of files, including configuration files, for distributing to a customer), install/upgrade-time, or run-time.

One of the preferred ways to do this is with Business-Rules, particularly when it comes to decisions about "variable" policies and/or functionality. With GUIs, depending on the kind of variation, you might resort to either conditional compilation or else simply creating separate source-files (instead of branching the same file).

There were some substantial discussions on this topic on CMCrossroads forums that yielded many practical insights and further resources. I recommend them.

fantamango77 wrote:
I can say I tried out different strategies in earlier projects already. But not with such a big project. And none of the ways I took made me happy. It always ended up in a kind of hell: Configuration hell, branching hell, inheritance hell.
Yes - and some of those "circles" of h*ll are far worse than others when you look at the overall costs and effort. The bottom-line is that if you have to support multiple "variants" rather than a single evolving "line", you are adding complexity, and there is no way you are going to be able to sweep it under the rug or make it go away.

So it all comes down to which techniques and strategies in which "dimensions" of the project/product will prove most effective at minimizing and managing dependencies, effort and impact by most effectively allowing you to leverage proven principles such as encapsulation, cohesion, modularity, abstraction, etc. in the most efficient ways.

Think about what aspects or dimensions of your project/product are required to be "variable" in order for you to support multiple variants. Do your "variants" need to differ ...
  • in functional/behavioral dimensions
  • along organizational boundaries
  • along multiple platforms/environments
  • along temporal (evolution) dimensions
  • along project dimensions
Figure out which one or two of these are the primary "dimensions" of variability you need to support. Then find the set of mechanisms and techniques that apply to it.

For example, if you need to allow variability primarily along functional and environmental "dimensions", then which of those "dimensions" does version branching operate within? Version branching operates primarily within the space of evolution/time (concurrent, parallel, and distributed development).

So temporally-based techniques are not going to be best suited to handling variation in the behavioral or environmental dimensions, as the isolation/insulation they provide does not most effectively minimize, localize or encapsulate the kinds of dependencies in the non-time-based aspects of the system.

Differences in policy and/or mechanism are typically best handled using a business-rules approach to deliver a single codebase with multiple possible configurations of rules and rule-settings.

Differences in platforms are best handled by well known design and architecture patterns like Wrapper-Facade, and numerous patterns from the Gang-of-Four design patterns book.

Differences in behavior may be best handled by functional/feature selection and deselection "design patterns" like Configurator, to enable or disable features and/or services at post-development binding-times.
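Here's a minimal sketch of that kind of post-development feature selection; the feature names, the features.json file, and the defaults are hypothetical illustrations rather than any specific "Configurator" implementation:

```python
import json

# Hypothetical deploy-time configuration: each market/customer variant ships
# the same codebase plus its own features.json selecting which features run.
DEFAULT_FEATURES = {"reporting": True, "premium_billing": False, "beta_search": False}

def load_features(path="features.json"):
    """Bind the variant at install/run time instead of at branch time."""
    features = dict(DEFAULT_FEATURES)
    try:
        with open(path) as f:
            features.update(json.load(f))
    except FileNotFoundError:
        pass                         # fall back to defaults for the base variant
    return features

def enabled(features, name):
    return features.get(name, False)

if __name__ == "__main__":
    features = load_features()
    if enabled(features, "premium_billing"):
        print("starting premium billing service")
    else:
        print("premium billing disabled for this variant")
```

The point is that the variation lives in configuration data bound late, so a single evolving codeline can serve every variant without per-customer branches to merge and re-merge.
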

Inheritance may be useful in some cases, if the type of configuration needed really does fit a single hierarchical model of increasing specialization. In other cases, an aspect-oriented approach might be better.

Also think about the following in terms of what needs to "vary" and what needs to stay the same:
  • Interface -vs- Implementation -vs- Integration
  • Container -vs- Content -vs- Context
This kind of commonality and variability analysis helps isolate the fundamental dimensions or aspects of variation that need to apply to your project. If something needs to be able to vary while something else doesn't, then "encapsulate the thing that varies" using techniques that separate interface from implementation (or implementation from integration, etc.) in ways that "keep it [structure] shy, DRY, and tell the other guy."

You might end up using a combination of strategies depending on the different aspects of variation you require and the "dimension" in which each one operates.

Monday, May 08, 2006

Nutshell definitions of Agile development

Over on the Extreme Programming mailing list, someone asked, "Could any one tell me what's exactly Agile Methodology(method) is?" A couple of really good responses followed that I rather liked (italics are mine in the excerpts below).

Phlip wrote:
Agile development means using feedback to prevent waste and optimize development.

Software development follows an inner cycle of writing code for new features, and an outer cycle of delivering versions to users.
  • The inner cycle risks waste when we debug too much, or write too many lines. Larger teams also cause waste when one person must wait for another to upgrade their modules.
  • The outer cycle risks waste simply because when customers wait a long time for new features, the odds increase that they will request changes to existing code.
Feedback" means you set up a cycle so your project can tell you about its current status, early and often.
  • The most important feedback for the inner cycle is automated tests. They prevent wasting time debugging, and they help you remove lines of code that are not needed. And they make your code safe for others to change, so they don't need to wait for you.
  • The most important feedback for the outer cycle is frequent releases to end users. If you give them the high value features first, then the odds they will request rework are low. (High value features tend to have obvious specifications.) Then, the odds they will request new features are high, so you repeat. Each new release will reinforce and fine-tune the high value features.
Put together, these processes allow teams to move rapidly to new territory, and to not trip or make mistakes getting there. The dictionary's word for such fleet and sure-footed behavior is "agile".

John Roth wrote:
Agile is a style of project management which lets the project respond quickly to changing requirements. It also usually incorporates frequent releases, or at least the possibility of frequent releases without undue extra project management overhead.

Ease of handling changing requirements naturally leads to leaving the definition of detail to the last responsible moment, rather than having to have all details completely specified up front.

Most named agile methodologies don't specify much outside of project management. Scrum, for example, specifies a lot of project management, a small amount of team practice (put the developers in a room and do daily standups) and gives essentially no guidance about how to actually construct the software.

XP is the only one I know of that specifies all three (project management, team practice and software construction practice) in detail.

C. Keith Ray wrote:
Agile methods react well to changing requirements, permit delivery of working, tested, software at frequent intervals, and generally require less "overhead" than older document-driven formal methods. To enable this reduction of overhead, Agile methods rely on "social" tools and practices (such as shared-workspace, "information radiators", and retrospectives) as well as technical tools and practices like automated builds and automated tests.

Someone referred to James Bach's definition of an agile methodology:
agile methodology: a system of methods designed to minimize the cost of change, especially in a context where important facts emerge late in a project, or where we are obliged to adapt to important uncontrolled factors.

A non-agile methodology, by comparison, is one that seeks to achieve efficiency by anticipating, controlling, or eliminating variables so as to eliminate the need for changes and associated costs of changing.

Steven Gordon wrote:
What makes this question so vexing to so many is that being agile is contextual rather than definitive.

Being lean and principled is not necessarily sufficient to be agile - one must garner feedback from the context (from customers and management as well as ourselves) and adapt appropriately to that feedback to remain agile. Doing the exact same things could be agile in one context and not particularly agile in another.


Tuesday, May 02, 2006

Feedback, Flow and Friction

I think those three words may just possibly distill the essence of Agile, Lean, and Theory of Constraints: Feedback, Flow and Friction!
  • Feedback is, to a large extent, what Agile is all about. It is about getting continuous feedback as quickly and frequently as possible (at all levels of scale) to promote collaboration, synergy, and synthesis of the knowledge we gain thru learning and discovery so we can validate it early and often.

  • Flow is what "Lean" is all about. It is about ensuring smooth continuous flow of the value stream, from creation thru delivery, and eliminating redundancy and waste wherever possible.

  • Friction is what "TOC" is all about: identifying and removing the constraints that impose friction on the flow of value and the feedback cycle of discovery and learning.


What do you think? How far off the mark is this? How badly am I oversimplifying?

Friday, April 28, 2006

Impacts of Extreme Globalization and Extreme Competition

I promised to say something about how I think all this stuff about globalization, innovation and extreme competition will impact CM and Agility. But first, a refresher on the books that I blogged about throughout most of March and April:
My blog entries on the subject thus far have been as follows:
So here are my jumbled thoughts on what I think it all portends for Agility and CM ...

  • Extreme Globalization will continue to drive the need for Extremely Distributed Development teams and team members.

  • Extreme Collaboration will increase the trend for needing human-interaction management over workflow enforcement. Tools will need to be less prescriptive and predictive and more enabling and empowering (particularly of "rich" virtual communication where face-to-face is not possible).

  • The Internet as the new personal computing platform will meld deployment CM with development CM and with ITIL. It will also make managing dynamic dependencies of components and services an absolute nightmare, especially for Service-Oriented Architecture. But it will be a necessary nightmare to face.

  • All of the above, plus regulatory compliance and accounting (such as Sarbanes-Oxley) will combine to make issues of security and fine-grained access control of electronically stored assets and business-intelligence (and processes) an even bigger concern than it already is, which tools will need to do a better job of addressing.

  • The collaboration and innovation process will itself need to be "Agile", and agile practices will need to extend from "delivering software" to "delivering innovation." Applying principles of Agile, Lean, Theory of Constraints (TOC), and even SixSigma to the "innovation supply chain" will become increasingly important. We'll need to struggle to understand what refactoring, test-driven (trust-driven), continuous integration, pairing, and being adaptive/responsive to change mean in this new context.

  • All of the above will also make automating extreme traceability an even bigger concern. But a more agile-style of development that is task-based and event-driven, will help to automate extreme traceability as pragmatically as possible. Extreme "big brother" environments that know and log everything we are doing, in the appropriate context (with the appropriate privacy/security levels) will be an increasing trend in properties of the IDEs and ALM tools we use.

  • All of the above will also make for "extreme configuration identification." We'll have so many things to try to track, it will be almost intractable to try and identify and track them (and their dependencies) in the usual ways. Search engines will help us out here with "tagging and searching" rather than "capturing, filing and sorting", with multiple views and scopes and transparency/trust levels.

  • Extreme time-based competition and focus on the customer-experience will bring "Lean" and TOC (particularly "Lean") methods even more prominently into the limelight. Creating and managing baselines will be more challenging as significant CM events (such as baselines) blur closer and closer together (what I term the 'collaboration-dilation' effect) and transform discrete events into continuous flow. The logging and tagging+searching capabilities previously mentioned will need to help us with this.

  • Extreme Visualization, for conceptualizing these things in our heads and sharing those concepts with others, will be another important focus of the next generation of "extreme" ALM tools and services that assist us.

What else? I know I'm missing something else that I thought of earlier but am drawing a blank now. There's gotta be more than this!

Thursday, April 20, 2006

CMCrossroads articles on FDD and Situational Code Ownership

The following two articles of mine appear in the April 2006 issue of CM Crossroads Journal (the month's theme is "Agile Development Practices"):
If parts of them look familiar it's because these articles evolved out of multiple entries from this blog over the past year :-)

The more I look at FDD, the more I really like it as an agile method suitable for large projects and companies, especially those striving for CMM/CMMI. I also really like the work David Anderson has done tying FDD together with Lean, TOC, and SixSigma.

I'd be real interested in any work on melding FDD together with many of the project management practices of Scrum, and some of the programming practices of XP (e.g., continuous integration, TDD, collective ownership, and refactoring).

Sunday, April 16, 2006

More Extreme Competition: Business Drivers, Realities and Strategies

Another follow-up to my previous blog-entry about Peter Fingar's book Extreme Competition ...

In the book, Fingar talks about 5 "unstoppable" drivers/transformers, 16 "new realities of business", and 13 "strategy patterns" to consider. Rajesh Jain, who wrote the foreword to the book, has a very well-written series of blog-entries collected together in Tech Talk: Extreme Competition.

I've already blogged quite a bit about this book, so I won't go into detail about each of the items listed below other than to simply list them. For more details, I recommend looking at Rajesh Jain's Tech Talk: Extreme Competition and also the numerous materials available from the homepage for the book Extreme Competition.

The Five Unstoppable Drivers Transforming Competition
  1. Knowledge as Business Capital
  2. The Internet
  3. Jumbo Transportation
  4. Three Billion New Capitalists
  5. The New IT
The Sixteen New Realities of Business
  1. Extreme Customers
  2. Extreme Innovation
  3. Extreme Individuals
  4. Extreme Customization
  5. Extreme Business Processes
  6. Extreme Teams
  7. Extreme Supply Chains
  8. Extreme Experiences & Self-Service
  9. Extreme Industry Blur
  10. Extreme Education & Learning
  11. Extreme Government
  12. Extreme Health Care
  13. Extreme Time
  14. Extreme Change
  15. Extreme Specialization
  16. Extreme Branding
Thirteen Strategies for Extreme Competition
  1. The Time to Act is Now!
  2. Be Slavishly Devoted to Your Customers
  3. Think Globally, Act Globally
  4. Be a Superspecialist
  5. Connect with the Superspecialists
  6. Be a Brand Master, Fight Brand Bullies
  7. Embrace Time-Based Competition
  8. Grok Process!
  9. Embrace The New IT
  10. Offer Process-Powered Self-Service
  11. Offer Product-Services and Experiences
  12. Systematize Innovation
  13. Be a Good Citizen

Wednesday, April 12, 2006

Book Review: Collaboration Explained

My review of Jean Tabaka's Collaboration Explained: Facilitation Skills for Software Project Leaders appears in this month's issue (April 2006) of The Agile Journal.

I think the book has a lot of useful information, methods and techniques about creating collaborative software teams that are pretty hard to find in other books because most other books on the subject aren't targeted specifically at software development and software project/technical leadership. See the Featured Book section for the review.

Sunday, April 09, 2006

Extreme Traceability

I gave the following response in a March 24 posting to the extreme programming list on the subject of XP-style traceability:

dsrkkk wrote:
Whatever process we use, we need to track requirements, user stories to design and test cases. The traceability provides obviously many benefits. Why don't we automate traceability? How can we humanly track these complex relationships? If we do manually, we may miss something.
I responded . . .

The less you work in a waterfallish way, and the more you work in an agile fashion with very short feedback loops, the less necessary it becomes to make additional efforts to track/trace requirements.

If I first do lots of requirements, then lots of design, then lots of code, etc., I have to "put the pieces together." The task I performed to write requirements was different from the one that did the design which was different from the one that did code, tests, etc.

If I work in very short cycles (particularly using test-driven development), then an individual development task looks something like:
  • write a single test
  • write the code to pass the test and no more
  • refactor the code
  • commit my changes (Note I might commit my changes before refactoring too).
If, when I commit my changes to the repository, I associate my "commit" with the name or id of the story the test was for, then I have a single, fine-grained change-task that went thru the complete cycle of specification, implementation/design, and verification, all in a matter of minutes, for a single "commit" and user-story.

Working in such fine-grained full-lifecycled tasks automatically associates all the elements that were created & updated as part of the task, which in turn is associated with a user-story. Ta da!!!! Traceability comes along for the ride almost automatically.
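To make that concrete, here is a minimal sketch (in Python) of how a story-id convention in commit messages yields a traceability report essentially for free. The story ids, commit records, file names, and the trace_by_story helper are all made up for illustration; in practice you'd feed it the output of your version-control tool's log command rather than a hard-coded list.

```python
import re
from collections import defaultdict

# A commit-message convention: prefix each commit with the story id it serves,
# e.g. "STORY-42: add failing test + code for overdue-fine calculation".
STORY_ID = re.compile(r"\b(STORY-\d+)\b")

# Hypothetical commit records, as a version-control log might report them.
commits = [
    {"rev": "r101", "msg": "STORY-42: test + code for overdue-fine calculation",
     "files": ["src/fines.py", "test/test_fines.py"]},
    {"rev": "r102", "msg": "STORY-42: refactor fine calculation into FinePolicy",
     "files": ["src/fines.py"]},
    {"rev": "r103", "msg": "STORY-57: test + code for renewal limits",
     "files": ["src/renewals.py", "test/test_renewals.py"]},
]

def trace_by_story(commits):
    """Map each story id to the commits and files that realized it."""
    trace = defaultdict(lambda: {"revs": [], "files": set()})
    for c in commits:
        match = STORY_ID.search(c["msg"])
        if not match:
            continue  # untraced commit; a pre-commit hook could reject these
        story = match.group(1)
        trace[story]["revs"].append(c["rev"])
        trace[story]["files"].update(c["files"])
    return trace

if __name__ == "__main__":
    for story, info in sorted(trace_by_story(commits).items()):
        print(story, "->", info["revs"], sorted(info["files"]))
```

The point of the sketch: because each fine-grained TDD task produces one commit that already names its story, the traceability "report" is just a query over the commit log, not a separate artifact anyone has to author or keep in sync.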

From Extreme Locality, in the Feb 2004 "Agile Times", pp.37-40
In Agile methods, we often hear a lot about the importance of simplicity: simple design; do the simplest thing that could possibly work; simple tools; minimal documentation ... Much of the documentation and many of the artifacts created in larger software development methods exist for the sake of capturing historical knowledge: the rhyme and reason behind why something is there, or is designed a certain way.

The desire for such information is often used to justify the need for formal traceability and additional documentation for the sake of maintainability and comprehension. Some very powerful and sophisticated tools exist to do this sort of thing. And yet, there are basic fundamental principles and simple tactics to apply that can eliminate much of this burden.

WHENCE FORMAL TRACEABILITY?
The mandate for formal traceability originated from the days of Department of Defense (DoD) development with very large systems that included both hardware and software, and encompassed many geographically dispersed teams collaborating together on different pieces of the whole system. The systems were often mission critical in that a typical “bug” might very likely result in catastrophic loss of some kind (loss of life, limb, livelihood, national security, or obscenely large sums of money/funding).

At a high level, the purpose of formal traceability was three-fold:
  1. Aid project management by improving change Impact Analysis (to help estimate effort/cost, and assess risk)
  2. Help ensure Product Conformance to requirements specs (i.e. ensure the design covers every requirement, the implementation realizes every design element and every requirement)
  3. Help ensure Process Compliance (only the authorized individuals worked on the things [requirements, tasks, etc.] they were supposed to do)
On a typical Agile project, there is a single team of fewer than two dozen people. And that team is likely to be working with less than 10 million lines of code (probably less than 1 million). In such situations, many of the aforementioned needs for formal traceability can be satisfactorily ensured without the additional rigor and overhead of full-fledged formal requirements tracing.

Rigorous traceability isn’t always necessary for the typical Agile project, except for the conformance auditing, which some Agile methods accomplish via test-driven design (TDD). A “coach” would be responsible for process conformance via good practices and good “teaming,” but likely would not need to have any kind of formal audit (unless obligated to do so by contract or by market demand or industry standards).

Agile methodologies turn the knob up to 10 on product conformance by being close to the customer, by working on micro-sized changes/increments to ensure that minimal artifacts are produced (and hence with minimal reconciliation) and that communication feedback loops are small and tight. Fewer artifacts, better communication, pebble-sized change-tasks with frequent iterations tame the tiger of complexity!
For more than you ever wanted to know about the why & wherefore behind traceability, see The Trouble with Tracing: Traceability Dissected.

Tuesday, April 04, 2006

Trends in SCM Predictions for 2006

The January 2006 CMCrossroads Journal came out a few months back. Each January issue of CMCrossroads Journal usually tries to make some predictions about SCM in the coming year (or years).

My predictions on The Future of Agile SCM (from the January 2005 CMCrossroads Journal) actually tried to project thru 2008 (and even 2010 and beyond for one particular prediction). This year we scaled it back to just looking at 2006 (and possibly early 2007).

There were several recurring themes that I noticed among the half-dozen or so articles. No big surprises of course, but the degree of concurrence among the different contributors was very noteworthy (at least I think so):
Globally Distributed development is the new "normal"
Globalization is kicking into high gear in the "flattened world" (see Thomas Friedman's The World is Flat and Barry Lynn's End of the Line : The Rise and Coming Fall of the Global Corporation). The trend of globally distributed development is increasing at a rapid rate and will be a more important focus (and more frequent "buzzword", along with "collaboration").


ALM is the "new" SCM!
Application Lifecycle Management (or ALM) is becoming the new/trendy term for the full spectrum of Software CM. Vendors have spent the last 6 years or so buying up individual tools to provide "full-lifecycle" suites to enhance an IDE: modeling tools (that do their own versioning), requirements tools, version control, test management, change-request tracking, and others are all being offered as singly-packaged suites of integrated tools. And the vendors are using the term "ALM" for the result.


TeamSystem and Eclipse ALF & Corona are the new "Vision"
Microsoft's new Visual Studio Team System (and Team Server) will make a huge splash! Looks like the splash has already started, with TeamSystem winning a "Jolt" award (tho not in the SCM category). The fact that it's Microsoft and integrated with VisualStudio is enough, by itself, to warrant other vendors "taking notice." With a single integrated tool-set running off the same server and integrated with the IDE, you can do all sorts of amazing things with automated logging and traceability (especially if it's web-services enabled, or the .NET equivalent).

In order to compete, other vendors are aligning with the Eclipse ALF project, which just recently had its "proof of concept demo." ALF is the Application Lifecycle Framework, an under-development set of web-service and interoperability standards for vendor-independent integration of tools in the ALM space (version control, change/defect tracking, requirements mgmt, test management, project tracking, build/release mgmt, and even IDEs and modeling tools). And at EclipseCon 2006 the Eclipse Corona project was announced as a recent "spinoff" of ALF. According to a March 20 InfoWorld article:

    "ALF addresses the issue of integration and communication between developer tools across the lifecycle; Corona enables Eclipse-based tools to integrate with ALF, according to Eclipse. Also known as the Tools Services Framework, Corona provides frameworks for collaboration among Eclipse clients."

Other related Eclipse projects are BIRT (business-intelligence & reporting tools), the Data Tools Platform, the Test & Performance Tools Platform and the SOA Tools Platform.


Integration of CM with ITIL is the new "Enterprise CM"
More and more SCM services departments are having to deal with many of the same issues as IT (and are even being lumped together with IT) as they not only support the functions of CM, but also provide deployment, training and support for the corresponding ALM tools, repositories and servers. The integrated CMDB is also useful for both traceability and accounting/accountability (think Sarbanes-Oxley).

Also, more and more folks are needing to extend CM into the deployment sites at their customers' operations to help them handle field issues and upgrades, even monitoring, service-tracking/licensing, and possibly some CRM functions.

So it makes sense that CM professionals who need to deploy, develop and support "the whole shmeer" are having to learn and understand CM, ALM and ITIL. Word has it that integration of CMMI with ITIL is coming soon.


SOA, BPM and TCO are emerging priorities on the horizon
There wasn't quite as much mention of these, but it seems clear that they will be growing more important this year. Service-Oriented Architecture (SOA) will be important to the extent that TeamSystem and ALF are important, and some vendors are already using SOA-based integration for the tools within their own suites.

As the integrations between ALM tools and with the IDE become more seamless, Business Process Management (BPM) and Workflow will become a bigger concern as folks try to more readily define and automate their processes, particularly for deployment to multiple distributed sites.

And Total Cost of Ownership (TCO) is becoming an increasingly prevalent factor in the selection of CM/ALM tools. The spiffy features and functionality are nice, but just as much weight (if not more) is being given to business issues of global licensing, upgrade/support, and administration/maintenance.

So where does that leave us today? And how does this relate to the subject of last month's blog-entries about globalization and the resulting business imperatives of continuously connecting and collaborating to create, innovate and dominate?

Saturday, April 01, 2006

The Design of Things to Come: Pragmatic Innovation

This book, by Craig M. Vogel, Jonathan Cagan, and Peter Boatwright, sort of naturally follows from previous blog-entries "connecting the dots" from globalization 3.0, to continuous collaboration + innovation, to the sets of right-brained skills Daniel Pink says will be essential to prepare us to move from the information age to the conceptual age.

The Design of Things to Come: How Ordinary People Create Extraordinary Products is about "people-focused" innovation and how to use the skills Dan Pink describes to create a new breed of innovation, one that forges an emotional connection with the customer to provide a thrilling customer experience in whatever forms that may take for a business.

Throughout the book, the term Pragmatic Innovation is used often enough that the authors clearly hope it will become a new "catch phrase" (time will tell if it succeeds). Much of what the book describes, including its focus on people and pragmatism, seems very well aligned with the values described in the Agile Manifesto.

Put this book together with the previously mentioned works from Peter Fingar (Extreme Competition), John Hagel and John Seely Brown (The Only Sustainable Edge) and Dan Pink (A Whole New Mind), and you may very well have a blueprint for both strategy and tactics on how to connect, collaborate and innovate and in what areas (and in what ways) to attempt it [such as focusing on "the customer experience" and Human Interaction Management while companies like Google transform the internet into our ubiquitous personal computing platform].

Add a dash of Fanning the Creative Spirit and Innovation at the Speed of Laughter, and maybe we'll get a solid picture of what the ideal future workplace may look like: agile, lean, connective, distributed, collaborative, pragmatic, people-centered, innovative, and dominating the competition.