Thursday, July 03, 2008

Summer of Books

I'm going on some long-needed (and hard-earned) vacation. I won't be blogging again for about one month (so this will likely be my only entry for July).

I've got a lot of REALLY GREAT and interesting books to try and catch up on. I hope to blog about them when I return. Here is what's on my summer reading list:

Theory U: Leading from the Future as it Emerges, by C. Otto Scharmer
I saw the executive summary and other chapters at www.TheoryU.com (also see www.dialogonleadership.org). This looks to be THE definitive book on leadership for the Agile Organization. It doesn't even use the word "Agile" anywhere, but the values and principles from the book are so well aligned with Agile values and principles that it is positively uncanny.

Advanced Rails Recipes (by Mike Clark), and Deploying Rails Applications (by Ezra Zygmuntowicz, Bruce Tate, and Clinton Begin)
The latest Ruby & Rails books from the Pragmatic Bookshelf.

Implementing SOA: Total Architecture in Practice, by Paul C. Brown
I was extremely impressed with Paul Brown's earlier book on Succeeding with SOA: Realizing Business Value through Total Architecture, and am looking forward to this follow-up work.

Eating the IT Elephant: Moving from Greenfield Development to Brownfield, by Richard Hopkins and Kevin Jenkins
I received a review copy of this. I don't exactly know what to make of it just yet, but it does seem intriguing.

Emergent Design: The Evolutionary Nature of Professional Software Development, by Scott L. Bain
This book looks to be an excellent "one stop shop" for learning the theory and practice of Test-Driven Development, Refactoring, Simple Design, and Design Patterns, all as part of a single integrated and coherent method.

Agile Adoption Patterns: A Roadmap to Organizational Success, by Amr Elssamadisy
Amr has been writing for the Agile Journal and InfoQ.com and I have been looking forward to seeing this book in hardcopy.

The Software Project Manager's Bridge to Agility, by Michele Sliger and Stacia Broderick
I think this is going to be THE book for all traditional PMPs trying to make the transition to agile project management.

Changing Software Development: Learning to Become Agile, by Allan Kelly
This book seems to have an interesting take on the connection between Agile, Lean, and Learning Organizations.
I have a few other books too, but they are much shorter :-)
Have a great July everybody!!!

Thursday, June 26, 2008

Assigning Code Ownership-Policy Ownership

Jurgen Appelo has an interesting article on StickyMinds entitled "Code Ownership Re-Visited"

Jurgen prefers the term "artifact assignment" rather than "code ownership" and explains that there are four methods of artifact assignment:
  1. Local Artifact Assignment (LAA) delegates the assignment policy to subsystems and sub-subsystems (etc.)
  2. Authoritarian Artifact Assignment (AAA) assigns change/access-control of ALL related artifacts to a single individual "benevolent dictator" who approves/authorizes all changes and who may also assign individual change-tasks to developers
  3. Collective Artifact Assignment (CAA) assigns the whole team (rather than any one person) as collectively accountable for all its artifacts
  4. Individual Artifact Assignment (IAA) assigns each artifact to a single individual who is responsible for changes to it
He also provides a nice set of criteria to help decide which policy to use.

This seems very different from the 4 kinds of ownership I described in "Situational Code Ownership: Dynamically Balancing Individual -vs- Collective Ownership" where I define what amounts to Non-Ownership (which can sometimes be the result of Dictatorship), Individual Ownership, Stewardship, and Collective Ownership and show how each maps to a corresponding leadership-style of the Situational Leadership Model.

So what gives? What explains this difference?
  • I think Jurgen would probably consider Stewardship as a weak form of Individual ownership (many others would too, though I staunchly disagree for reasons elaborated in the aforementioned article).
  • Authoritarian Assignment would be akin to the form of Non-Ownership that results from Dictatorship (or "director-ship" to be more precise), where assignments are made per modification/task by a director (or "benevolent dictator").
  • I would argue that the first two methods Jurgen describes above aren't really artifact-assignment policies, but instead are assignment-ownership policies: They're not so much about making a decision among ownership-policies as they are about making a policy for ownership decisions. Rather than deciding "who should own which artifacts", they decide "who should own the decision" to make such artifact assignments.

In other words, the first two policies Jurgen defines are about decision-rights to assign modification-rights to owners, and not about the modification-rights (or ownership assignments) themselves. As such, it raises an important point taken for granted in my article and in so many other discussions on this topic. Most of the prior discussion probably has assumed that the decision about which ownership-policy to adopt was made either "by the team" or by the team's "leadership" (that might be a manager, a technical-lead, a project-lead, or any combination thereof).

Another common assumption is that such ownership is defined along "architectural" boundaries such as individual artifacts/files, classes/modules, or packages, components and subsystems. Other possibilities are:
  • Functional boundaries (e.g., by feature/feature-set, story/theme/epic, or use-cases)
  • Project boundaries (e.g., work-breakdown-structures, tasks, activities)
  • Role/discipline specialization boundaries (e.g., requirements, tests, models, database, UI, programming-language, user/help documentation, etc.), and even ...
  • Configuration/variation boundaries (e.g., version, variant, branch, platform). In fact some of these stretch across multiple dimensions of a project/product and might even be used in combination.
With Agile development, the emphasis is to break down communication boundaries and any corresponding separation related to role, or phase, or physical boundaries, and to instead prefer customer-determined boundaries of "scope" and "deliverable value" (e.g., stories, features or MMFs, use-cases, etc.).

So you will see a definite (but time-constrained) assigning of things like tasks and stories to owners (though the "owners" sign up rather than "being assigned"). Those kinds of boundaries encourage closer and more frequent communication rather than separate & isolated (and less frequent) communication.

In the end, with Agile methods, it's all about maximizing learning and knowledge sharing & transfer rather than compartmentalizing knowledge into pigeon-holed roles and responsibilities. One opts for "separation of concerns" without "separation of those concerned" (work is isolated and separated, but not people).

Thursday, June 19, 2008

Four Rules for Simple Codelines

Some of you may be aware of Kent Beck's Four Rules of Simple Code that state simple code:
  1. Correctly runs (and passes) all the tests
  2. Contains no duplication (OnceAndOnlyOnce and The DRY Principle)
  3. Clearly expresses all the ideas/intentions we needed to express (reveals all intent and intends all it reveals)
  4. Minimizes the number of classes and methods (has no superfluous parts)
(I've seen some boil this down into some of the same rules for writing clear prose: correct, consistent, clear, and concise.)
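
As a toy illustration of rules 2 and 3, here is a hypothetical Python fragment (mine, not from Beck's book):

    # Before: duplicated logic and opaque names (violates rules 2 and 3)
    def calc1(w, h):
        return w * h * 0.5

    def calc2(b, h):
        return b * h * 0.5

    # After: OnceAndOnlyOnce, with intention-revealing names
    def triangle_area(base, height):
        return base * height * 0.5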

Lately I've been noticing some parallels to the above and rules for what I would call "simple codelines" and I think there may be a similar way of expressing them...

Simple codelines:
  1. Correctly build, run (and pass) all the tests
  2. Contain no duplicate work/work-products
  3. Transparently contain all the changes we needed to make (and none of the ones we didn't)
  4. Minimize the number and length of sub-branches and unsynchronized work/changes

To elaborate further...

Correctly build, run (and pass) all the tests

This is of course the most obvious and basic of necessities for any codeline. If the codeline (or the "build") is broken, then integration is basically blocked, and starting new work/changes for the codeline is hindered.

Contains no duplicate work/products

The same work and work-products should be done OnceAndOnlyOnce! Sometimes effort is spent more than once to introduce the same change/functionality. Sometimes this is because of miscoordination, or simply a failure to realize that what two different developers were working on required each of them to do some of the same things (and perhaps should have been accomplished in smaller chunks).

Other times, rather than modify or refactor a common file, some will simply copy-and-paste the contents of one or more files (or directories/folders) because they don't want to have to worry about reconciling what would otherwise be merges of concurrent changes to the common files.

This is akin to neglecting to refactor at the "physical" level (of files and folders) as opposed to the "logical" level of classes and methods. It adds more complexity and (over time) inconsistency to the set of artifacts and versions that make up the codeline, and also eventually adds to the time it takes to merge, build, and test any integrated changes.

If content is being added to the codeline, we want that content to have to be added only once, without any duplicate or redundant human effort.

Transparently contains all the changes we needed to make (and none of the ones we didn't)

Attaining traceability and ensuring process compliance/enforcement is sometimes the cause of much undesirable additional effort. Here, I mean to focus on the ends rather than the means, and I say transparency rather than traceability for that very reason.

If people are working in a task-based and test-driven manner, it should be simple to report what changes have been made since a previous commit and that only intended tasks were worked-on and integrated.

If a codeline is truly simple, then it should be very simple and easy to reveal all the changes that went into it without adding a lot of overhead and constraints to development. It should be easy to tell which changes/tasks have been integrated and what functionality and tests they correspond to. One very simple and basic means of tying checkins (or "commits") to backlog-tasks and their tests can be found here; others are mentioned in this article.
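
Here's a minimal sketch of one such convention (in Python, with a hypothetical "TASK-<id>" message prefix; not necessarily the scheme from the links above):

    import re
    import sys

    # Hypothetical convention: every commit message names a backlog task,
    # e.g. "TASK-123: extract price-lookup into its own method"
    TASK_PATTERN = re.compile(r"^TASK-\d+: .+")

    def check_commit_message(message):
        """Reject checkins that can't be traced to a backlog task."""
        if TASK_PATTERN.match(message):
            return True
        sys.stderr.write("commit message must start with 'TASK-<id>: '\n")
        return False

    if __name__ == "__main__":
        # e.g. wired in as a commit-message hook; argv[1] is the message file
        with open(sys.argv[1]) as f:
            sys.exit(0 if check_commit_message(f.read()) else 1)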

Minimizes the number and length of sub-branches and unsynchronized work/changes

Branching can be a boon when used properly and sparingly. It can also add a heck of a lot of complexity and redundancy when maintaining two or more evolving variants of the project. The additional effort to track and merge and build many of the same fixes and enhancements in multiple configurations can be staggering.

Sometimes such branches are useful or even necessary (and can help with what Lean calls nested synchronization and harmonic cadences). But they should be as few and as short-lived as possible, preferably living no longer than the time it takes to complete a fine-grained task or to integrate several fine-grained tasks.

Even when there are no sub-codelines of a branch, there can still be un-integrated (unsynchronized) work-in-progress in the form of long-lived or large-grained tasks with changes that have not yet been checked-in or synced-up with the codeline. Keeping tasks short-lived and fine-grained (e.g., on the order of minutes & hours instead of hours & days) helps ensure the codeline is continuously integrated and synchronized with all the work that is taking place.

Another (possibly less obvious) form of unsynchronized work is when there is a discrepancy between the latest version of code checked-in to the codeline, and the latest version of code that constitutes the "last good build." Developers' lives are "simpler" when the latest version of the codeline (the "tip") is the version they need to use to base new work off of, and to update their existing workspace (a.k.a. "sandbox").

When the latest "good" version of the codeline is not the same as (i.e., is less recent than) the latest version, it can be less obvious to developers which version to use, and it becomes less likely that they use/select it correctly. Some use "floating tags" or "floating labels" for this purpose, where they "move" the LAST_GOOD_BUILD tag from its previous set of versions to the current set of versions for a newly passed/promoted build. Sometimes the developers always use this "tag" and never use the "tip" (except when they have to merge their changes to the codeline, of course).
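
For example, a build script might re-point the floating tag only after the build passes. Here's a minimal sketch (in Python, assuming a Git-style command line where "tag -f" moves an existing tag; other VCS tools have analogous label commands, and "make test" stands in for your real pipeline):

    import subprocess

    def run_build_and_tests():
        # placeholder for the real build/test pipeline
        return subprocess.run(["make", "test"]).returncode == 0

    def promote_last_good_build():
        # "tag -f" re-points the existing floating tag to the tip,
        # so the last good version IS (momentarily) the latest version
        subprocess.run(["git", "tag", "-f", "LAST_GOOD_BUILD", "HEAD"],
                       check=True)

    if __name__ == "__main__":
        if run_build_and_tests():
            promote_last_good_build()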

Even with floating tags, however, it is still simpler and more desirable when the last good version IS the latest version. Even if the latest version is known to be "broken", the lag between the "latest" and "last good" version of a codeline can be a source of waste and complexity in the effort required to build, verify, and promote a version to be "good" (and can introduce more complexity when having to merge to "latest" if your work has only been synchronized with "last good").

Plus, this lag-time often leads many a development shop to separate merging (and integration & test) responsibilities between development and so-called integrators/build-meisters, where the best that developers can do is sync-up their work with the "last good build" and then "submit" that work to a manually initiated build, rather than being directly responsible for ensuring the task is "done done" by being fully integrated and passing all its tests.

Such separation often leads to territorial disputes between roles and build/merge responsibilities. This in turn often leads to adversarial (rather than cooperative and collaborative) relationships and isolated, compartmentalized (rather than shared) knowledge for the execution and success of those responsibilities.

So there we have it! Four rules of simple codelines.

Simple Codelines should:
  1. Correctly build, run (and pass) all the tests
  2. Contain no duplicate work/work-products
  3. Transparently contain all the changes we needed to make (and none of the ones we didn't)
  4. Minimize the number and length of sub-branches and unsynchronized work/changes

Sometimes there are legitimate reasons why some of the rules need to be bent, and there are important SCM patterns to know about in order to do it successfully. But any time you do that, it makes your codeline less simple. So you want those scenarios to be few and far between, and to keep striving for the goal of simplicity. (Other SCM patterns, such as Mainline, can help you refactor your codelines/branches to be more simple.)

Thursday, June 12, 2008

Traceability Matrix in an Agile Project

InfoQ.com summarized an email-list discussion thread on the subject of using a Traceability Matrix in an Agile Project.

I contributed quite a lot to the thread, and InfoQ apparently included many of the key things I said along with the related URLs to articles I've written. (Thanks guys!)

Sunday, June 08, 2008

Iterative and Incremental redefined redux

The agile community has written much about this in the past year or so:
Apologies in advance for being a "stick in the mud" on this one - I'm not particularly happy with the definitions so far. I searched around some more on the WWW and came across one I like a lot that I think better meets our needs.

It is from the paper What is Iterative Development? (part 1), by Ian Spence and Kurt Bittner:
Iterative and Incremental Development:
A style of development that involves the iterative application of a set of activities to evaluate a set of assertions, resolve a set of risks, accomplish a set of development objectives, and incrementally produce and refine an effective solution:
  • It is iterative in that it involves the successive refinement of the understanding of the problem, the solution's definition, and the solution's implementation by the repetitive application of the core development activities.

  • It is incremental in that each pass through the iterative cycle grows the understanding of the problem and the capability offered by the solution.

  • Several or more applications of the iterative cycle are sequentially arranged to compose a project.
Sadly, development can be iterative without being incremental. For example, the activities can be applied over and over again in an iterative fashion without growing the understanding of the problem or the extent of the solution, in effect leaving the project where it was before the iteration started.

It can also be incremental without being truly iterative. For example, the development of a large solution can be broken up into a number of increments without the repetitive application of the core development activities.

To be truly effective the development must be both iterative and incremental. The need for iterative and incremental development arises out of the need to predictably deliver results in an uncertain world. Since we cannot wish the uncertainty away, we need a technique to master it. Iterative and incremental development provides us with a technique that enables us to master this uncertainty, or at least to systematically bring it sufficiently under control to achieve our desired results.

I like that this definition separates iterative from incremental and then defines them together. I would summarize it as follows (but I like the above better, even if it is longer):
Iterative development is the cyclical process of repeating a set of development activities to progressively elaborate and refine a complete solution. The “unit” of iterative development is an “iteration”, which represents one complete cycle through the set of activities.

Incremental development is the process of developing and integrating the parts of a system in multiple stages, where each stage implements a working, executable subset of the final system. The “unit” of incremental development is an “increment”, which represents the executable subset of the system resulting from a particular stage.

Iterative and Incremental development is therefore the application of an iterative development lifecycle to successively develop and refine working, executable subsets (increments) of a solution that evolves incrementally (from iteration to iteration) into the final product.
  • Each iteration successively elaborates and refines the understanding of the problem, and of the solution's definition & implementation by learning and adapting to feedback from the previous iterations of the core development lifecycle (analysis, design, implementation & test).
  • Each increment successively elaborates and refines the capability offered by the solution in the form of tangible working results that can be demonstrated to stakeholders for evaluation.
An Agile Iteration is a planned, time-boxed interval (typically measured in weeks) whose output is a working result that can be demonstrated to stakeholders:

  • Agile Iterations focus the whole team on collaborating and communicating effectively for the purpose of rapidly delivering incremental value to stakeholders in a predictable fashion.
  • After each iteration, the resulting feedback and data can be examined, and project scope & priorities can be re-evaluated to adapt the project's overall performance and optimize its return-on-investment
So in addition to the non-agile-specific definitions above, we see that Agile iterations are adaptive, in that they use the previous results and feedback to learn, adjust and recalibrate for the next iteration. And Agile increments are tangible, in that they can be executed and made accessible to stakeholders for demonstration and evaluation.
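
To picture the combined definition, here's a toy sketch (illustrative Python only; real iterations obviously involve people and feedback, not loops):

    def run_project(backlog, iterations):
        """Iterative: repeat the core development activities each cycle.
        Incremental: each cycle grows a working subset of the solution."""
        solution = []                      # the evolving increments
        for _ in range(iterations):
            stories = backlog[:2]          # re-evaluate scope & priorities
            for story in stories:
                # analysis, design, implementation & test happen here
                solution.append("working " + story)
                backlog.remove(story)
            # tangible result demonstrated for adaptive feedback
            print("demo to stakeholders:", solution)
        return solution

    run_project(["login", "search", "checkout", "reports"], iterations=2)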

That's my story and I'm sticking to it!

Monday, June 02, 2008

The Laws of Codeline (Thermo)Dynamics

Some of the discussion with my co-authors on our May 2008 CM Journal article on Agile Release Management spurred some additional thoughts by me that I hope to refine and work into a subsequent article later this year.

Release Management is about so much more than just the code/codeline (and it being "shippable"), it's not even funny. Some other articles to reference and mention some key points from are:
Kevin Lee has written some GREAT stuff on Release Management that relates to Agile. The best is from the first and last chapters of his book "The Java™ Developer's Guide to Accelerating and Automating the Build Process", but bits and pieces of it can also be found at:

ANY discussion about Release Management also needs to acknowledge that there is no single "the codeline", not just because I may have different codelines (Development-Line plus Release-Line) working toward the same product-release, but ESPECIALLY because no matter how Agile you are, the reality is that you will typically need to support MULTIPLE releases at the same time (at the very least the latest released version and the current under-development version, but often even Agile projects need to support more than one release in the field).

So, when dealing with multiple release-lines, and any "active development lines" for each of those, and the overall mainline, we really should say something about how to manage this "big picture" of all these codelines across multiple releases and iterations:
  • What is the relationship between development line, release-line and release-prep codeline?
  • How do the above three "lines" relate to the "mainline"?
  • What is the relationship between the different release-lines for the different supported releases?
  • What is the overall relationship between the mainline and the release-lines (and if the mainline is also a release-line, which release is it?)
These questions, and the ability to give some big-picture "advice" on relating it all together (the stuff of pattern languages), are precisely where Laura Wingerd's writing on "channeling the flow of change" and her chapter on "How Software Evolves" fit in! They tell us:
  • The overall Mainline model
  • The different types of codelines ("line" patterns), and what kinds of builds take place on each of them
  • The relationships of those to the mainline
  • When+Why to branch (and from which "types" of codelines)
  • When+Why to merge across codelines (as a general rule)

These are where Laura's rules for "the flow of change" apply. And her concept of "change flow" is very much applicable to the Lean/Agile concept of "flow of value". The Tofu scale and "change flow" rules/protocol have to do with order+flow of codeline policies across the entire branching structure when it comes to making decisions about stability -vs- speed. One codeline's policy might make a certain tradeoff, but it is the presence of multiple codelines and how they work together, and how their policies define the overall flow of change across codelines, that forms the "putting it all together" advice that is key to release management across multiple releases+codelines.
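
To make the "flow of change" idea a bit more concrete, here's a toy sketch (in Python; this is my paraphrase of Wingerd's "merge down, copy up" protocol, and the codeline names and ranks are purely illustrative):

    # Toy stability ranks: firmer codelines get higher numbers.
    # Names and numbers are illustrative, not from Wingerd's book.
    STABILITY = {"release-1.0": 3, "mainline": 2, "dev-line": 1}

    def flow_allowed(source, target, source_is_stable=False):
        """Merge down (firmer -> softer) is routine; copy up (softer -> firmer)
        only once the softer line has stabilized."""
        if STABILITY[source] > STABILITY[target]:
            return True              # e.g. a release fix merges down to mainline
        return source_is_stable     # e.g. a dev-line copies up only when stable

    assert flow_allowed("release-1.0", "mainline")
    assert not flow_allowed("dev-line", "mainline")
    assert flow_allowed("dev-line", "mainline", source_is_stable=True)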

In some ways you could make an overall analogy between the Laws of Thermodynamics and the realities of codeline management. Software and codelines tend, over time, to grow more complex and, if unchecked, "Entropy" (instability) quickly becomes the most dominating force to contend with in their maintenance. See:

The "entropy" (instability) doesnt just happen within a codeline. It can actually get far more hideous when it happens across codelines via indiscriminate branching from, or merging to, other codelines. This is what happens when you don't respect the principles and rules of "change flow" (from Wingerd) which ultimately stem from the rules of smooth and steady (value-stream) flow from Lean.

The Laws of Thermodynamics are about energy, entropy, and enthalpy. In the case of release management and codelines ...
  • energy relates to effort & productivity
  • entropy relates to stability/quality versus complexity
  • enthalpy relates to "order" (i.e., in the sense of structure and architecture as Christopher Alexander uses the term "order"). It is the "inverse" of entropy.

We could call them the "Laws of Codeline Dynamics" :-)

Energy misspent degrades flow, creates waste, and hurts productivity/velocity. In traditional development, we often see "fixed scope" with resources and schedule having to vary in order to meet the "scope" constraint. In Agile development we deliberately "flip" that triangle upside down (see the picture in the article here under the title "The Biggest Change: Scope versus Schedule - Schedule Wins"). So we are fixing "resources" and "schedule" and allowing scope to vary.

This might be one way of viewing the law of conservation of energy. If we fix resources and time (and insist on a "sustainable pace" or "40-hour work week") then we're basically putting in the same amount of effort over that time-box, but the key difference is how much of that effort results in "giving off energy" in the form of waste ("heat" or "friction") versus how much of that energy directly adds value. Both "Value" and "Enthalpy" degrade or depreciate over time, and adding more energy (effort) doesn't necessarily mean value is increased.

To make sure that energy goes toward adding value (and minimizing waste), we need to focus on the flow of value, and hence the flow of changes/efforts to create value (the latter is one reasonable definition of a "codeline" or a "workstream"). To ensure a smooth, steady, and regular/frequent flow, there are certain rules we need to impose to regulate stability within and across codelines and to better manage all those releases.

Zeroth Law of Thermodynamics (from Wikipedia)
- If two thermodynamic systems are each in thermal equilibrium with a third, then they are in thermal equilibrium with each other.
Translation to codelines ... this law of "thermal equilibrium" is a law of "codeline equilibrium" of sorts. (Does this mean that if two codelines are "in equilibrium" with a third codeline, then they are "in sync" with each other? Here "in sync" doesn't mean they have the same frequency; it means there is some synchronization pattern regarding their relative stability and velocity. In Lean terms, this would refer to "nested synchronization" and "harmonic cadence".) This might imply the "mainline" rule/pattern or one of Wingerd's rules of change-flow.

First Law of Thermodynamics
- In any process, the total energy of the universe remains the same.
This is the statement of conservation of energy for a thermodynamic system. It refers to the two ways that a closed system transfers energy to and from its surroundings - by the process of heating (or cooling) and the process of mechanical work.

This relates to effort & changes expended resulting in the creation of value and/or the creation of waste. We have activities that add value (which we hope is development), activities that preserve value (which is what much of SCM attempts to do, given that it doesn't directly create the changes, but tries to ensure that changes happen and are built/integrated with minimal loss of energy/productivity/quality), and then we have activities (or portions of activities) that create waste (and increase entropy rather than preserving or increasing enthalpy/order).

Second Law of Thermodynamics
- In any isolated system that is not in equilibrium, entropy will increase over time
So this is the law of increasing instability/complexity/disorder. The "key" to preventing this from happening is achieving and then maintaining/preserving "equilibrium". How do we achieve such equilibrium? We do it with the "release enabler" patterns for codeline management, which help ensure "nested synchronization" and "harmonic cadence" in addition to achieving a balance or equilibrium between stability and velocity (to smooth out flow).

Third Law of Thermodynamics
- As temperature approaches absolute zero, the entropy of a system approaches a constant minimum.
In our case, "Temperature" could be regarded as a measure of "energy" or "activity". As the energy/activity of a codeline approaches zero (such as a release in the field that you've been supporting and would LOVE to be able to retire sometime real soon), its instability approaches a constant minimum.

This is perhaps another, more polite way of saying something we already said in our article on "The Unchangeable Rules of Software Change", namely that "absolute stability" means dead (as in, "no activity"). It should serve as a reminder that our goal is not the prevention of change in order to achieve some ideal "absolute stability", for such an absolute would mean the project is not just "done" but "dead".

On the other hand, it also speaks to us as a guideline for when it is safe to retire old codelines, and when to change their policy in accordance with their "energy level".
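
As a back-of-the-envelope illustration (hypothetical Python with made-up thresholds), a codeline's "energy level" could drive its policy like so:

    def codeline_policy(checkins_per_week):
        """Map a codeline's activity ("energy") to a policy.
        The thresholds are made up for illustration."""
        if checkins_per_week == 0:
            return "retire: entropy at its constant minimum"
        if checkins_per_week < 5:
            return "maintenance-only: accept fixes, no new features"
        return "active: full change-flow rules apply"

    for rate in (0, 3, 40):
        print(rate, "checkins/week ->", codeline_policy(rate))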

Monday, May 26, 2008

An Agile Approach to Release Management

My Agile SCM co-authors Rob Cowham and Steve Berczuk and I have written an article for the May CM Journal on An Agile Approach to Release Management.

We're relatively pleased with the article, and we all collaborated quite well.

Monday, May 19, 2008

BOOK: Software Teamwork - Taking Ownership for Success

My review of Jim Brosseau's Software Teamwork: Taking Ownership for Success is available in the May issue of the Agile Journal. It is nothing less than outstanding!

I found Software Teamwork to be an immensely helpful, intensely practical, profusely insightful field guide to improving team outcomes and changing team behaviors by focusing on interpersonal action and personal leadership. This book belongs on any software team-leader's bookshelf, along with Jean Tabaka's Collaboration Explained and Murray Cantor's Software Leadership.

Other articles in this issue on the theme of "Challenges with Distributed Agile" are:

Tuesday, May 13, 2008

Distributed Version-Control Guide on InfoQ.com

Nice little guide on InfoQ.com about Distributed Version Control - that's twice in two months that the "agile" section of InfoQ.com has had a decent article on the subject!

Tuesday, May 06, 2008

From PMBoK to Agility

I recently learned that Michele Sliger, author of the wonderful 4-part series of articles on Relating PMBoK to Agile Practices, is co-authoring a book with Stacia Broderick entitled The Software Project Manager's Bridge to Agility. You can even download an excerpt from her website.

I'm looking forward to this book a great deal, judging by the excellent articles and presentations of hers that I've read.

Wednesday, April 30, 2008

BOOK: Implementing ITIL Configuration Management

I started reading through the book Implementing ITIL Configuration Management, by Larry Klosterboer. I'm really not what I'd consider an expert on ITIL nor IT Service Management, but I've had more than my fair share of exposure to it and am certainly no "slouch" in that area either.

This book looks to be an overview of ITIL and how it applies to configuration management. From there one can extrapolate how much of it relates to CM not just for IT assets and infrastructure but for the software development environment and for software development itself.

The book includes coverage of the following (from the back cover):

  • Assessing your current configuration management maturity and setting goals for improvement
  • Gathering and managing requirements to align ITIL with organizational needs
  • Describing the schema of your configuration management database (CMDB)
  • Identifying, capturing, and organizing configuration data
  • Choosing the best tools for your requirements
  • Integrating data and processes to create a unified logical CMDB and configuration management service
  • Implementing pilot projects to demonstrate the value of configuration management and to test your planning
  • Moving from a pilot to wide-scale enterprise deployment
  • Defining roles for deployment and ongoing staffing
  • Leveraging configuration management information: Reporting and beyond
  • Measuring and improving CMDB data accuracy
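
To make the "schema" idea a bit more concrete, here's a minimal sketch (illustrative Python; the book defines its own CMDB schema, and this is not it) of configuration items and the relationships a CMDB tracks between them:

    from dataclasses import dataclass, field

    @dataclass
    class ConfigurationItem:
        """A CI: anything under configuration control (server, app, document)."""
        ci_id: str
        ci_type: str                  # e.g. "server", "application"
        attributes: dict = field(default_factory=dict)
        relationships: list = field(default_factory=list)   # (relation, ci_id)

    app = ConfigurationItem("CI-042", "application", {"version": "2.1"})
    host = ConfigurationItem("CI-007", "server", {"os": "linux"})
    # the relationships are what make it a CMDB rather than a mere inventory
    app.relationships.append(("runs-on", host.ci_id))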

To take the next step, and for a REALLY thorough treatment of how IT service management and CM comes full circle to embrace all of enterprise architecture and software development, I highly recommend Charles Betz' book Architecture and Patterns for IT Service Management, Resource Planning, and Governance: Making Shoes for the Cobbler's Children. As I mentioned in a blog-entry early last year, this book "really ought to be required reading for anyone that fancies themselves a 'CM professional' (especially Software CM) or an 'Enterprise Architect.'"

Tuesday, April 22, 2008

Three Pivotal Practices to Eliminate Waste

I received my program for the Better Software Conference & Expo this coming June 9-12 in Las Vegas (alas, I will be unable to attend). The description for the keynote that will be given by Jean Tabaka caught my eye. Jean Tabaka is an Agile Coach from Rally Software Development and the author of Collaboration Explained: Facilitation Skills for Software Project Leaders.

Her keynote is entitled "Attacking Waste in Software: Three Practices We Must Embrace Now" and the abstract is as follows:

One of the seven principles of Lean Thinking is “eliminate waste.” Eliminating waste means minimizing the cost of the resources we use to deliver software to our stakeholders. Jean Tabaka proposes three pivotal practices that we must embrace to aggressively attack waste in software delivery -- Software as a Service (SaaS), Community, and Fast Feature Throughput:
  1. Software as a Service (SaaS) eliminates waste by deploying software-based services without the cost inherent in traditional software delivery—materials, shipping, time delay, and more.

  2. Community involves stakeholders working together to create products rather than competing among themselves for limited resources. Community eliminates waste by democratizing software development to obviate the need for multiple systems with the same functionality.

  3. Fast Feature Throughput refers to development methods that embrace change and quickly deliver value to customers. It eliminates waste by responding to market pull with short, incremental delivery cycles.
When IT and all software organizations embrace these practices, they will eliminate waste within their organizations, reduce the waste that consumes our entire industry, and ultimately support the broad 21st century global mandate to manage our scarce resources.

I can't help but think how these same "pivotal practices" apply equally well to Agile CM, resulting (presumably?) in Software CM as a Service (SCMaaS), Community, and Rapid Change-Flow (where the latter refers to both quickness and responsiveness of change assessment and approval, as well as to development velocity as the changes flow through codelines and become integrated, built, promoted and released).

Tuesday, April 15, 2008

Rise of the Development Environment Architect

Peter Eeles and I must be subconsciously on the same page, because at the same time I was blogging about Software Architecture Views and Perspectives and Software Architecture Quality Attributes and their direct applicability to SCM/ALM solution architecture (and software process in general), Peter was working on an article for IBM developerWorks entitled The Rise of the Development Environment Architect:

[Development] environments present challenges; and, interestingly, these challenges are similar to those of the systems they support. For example, development environments have to deliver against the required functionality and properties (such as performance and usability), often have to coexist with legacy systems (such as, in the case of a development environment, existing methods and tools), and have to acknowledge other constraints (such as the distributed nature of development teams, and existing skills and infrastructure).

All in all, creating a well-oiled development environment that accelerates, rather than hinders, project performance is a science unto itself. This is why IBM® Rational® has spent many years specifically developing a services capability that understands the challenges faced by organizations that 1) want to improve developer productivity, and 2) regard their development organization as a strategic differentiator, rather than simply a cost center.

Our experience has led the Rational team to define a role within the software development lifecycle called the "development environment architect." In October 2007, one hundred of Rational's most experienced development environment architects from across the globe gathered together in the first conference dedicated to this role to share their experiences. This article is a result of that conference and the discussions that took place.

As you read the concepts presented here, you may well question whether the development environment architect should be a role itself, or whether the individual or team who normally functions in the software or systems architect role should simply add consideration of the development environment to their list of architectural concerns. I believe that both propositions are valid. Furthermore, whenever the role of the "architect" is discussed, it is always qualified with the domain under consideration; thus we speak of a "building architect," "software architect," "systems architect," "enterprise architect," etc. The development environment is simply one of these domains, and one that is not traditionally a concern for the "software architect" role. I therefore believe that the "development environment architect" role is one that hasn't been emphasized before -- hence this article.

This article has several audiences and objectives. It is relevant to organizations undertaking an improvement to their development environment and who need to understand the value of a development environment architect to help guide their initiative. It is also relevant to those who are responsible for the technical content of the development environment -- i.e., development environment architects -- because this article introduces this responsibility as a role not previously defined. Finally, this article may supplement material contained within a development environment, in helping communicate its content, the role of its architect, and the benefits of having such a role in place.


Read the full article here. I may blog later about the similarities and differences between the sort of architecture that Eeles describes versus my 4+2 Views Model of SCM/ALM Solution Architecture.

Update - July 2009: Peter gave a keynote presentation on this topic at RSDC2008 (PDF slides)

Wednesday, April 09, 2008

BOOK: Outside-in Software Development

My review of Outside-In Software Development is in this month's edition of The Agile Journal.

Kessler and Sweitzer's Outside-in Software Development should resonate deeply with all those who genuinely value the principle of customer collaboration in the Agile Manifesto, and with anyone who has played the role of Product Manager for a software project. This 2008 Jolt award Finalist is not a book about eliciting or prioritizing requirements (or "user stories") for an Agile project. This book goes beyond mere user-stories and their ranking or velocity to focus on uncovering the underlying needs and goals of your stakeholders and understanding what truly adds value for the customer and the business.

... I think Outside-in Software Development is a profoundly important book for anyone in the Agile or Lean "camps" because it addresses and embraces the often neglected pieces of the customer-relationship puzzle that emerge from the stakeholders' perspective, often after the software is released. It shows us how many of those same Lean and Agile values of collaboration, responsiveness, waste-elimination, and respect for people can be successfully applied to the users' experience with our software, and to the stakeholders' experience with ourselves in the service of realizing the very business value we strive to deliver.

Read the full review.

Wednesday, April 02, 2008

BOOK: Programming Groovy and Groovy Recipes

I just received an advance copy of Programming Groovy from the Pragmatic Programmer's Bookshelf. This complements Groovy Recipes, which came out last month.

From the Programming Groovy book webpage:
Groovy brings you the best of both worlds: a flexible, highly productive, agile, dynamic language that runs on the rich framework of the Java Platform. Groovy preserves the Java semantics and extends the JDK to give you true dynamic language capabilities -- programming in Groovy feels like you’re using an augmented Java. Programming Groovy will help you learn and take advantage of the latest version of this rich dynamic language, so you can be a more productive Java Platform developer.
From the Groovy Recipes book webpage:
If you’re a busy Java professional who needs quick solutions to everyday problems, then Groovy Recipes is for you. The Groovy language and Grails web framework give you seamless integration with your legacy Java code while adding the flexibility and dynamism of a scripting language and giving you modern, agile, time-saving techniques. Groovy allows you to write code the way you always thought you should—you’ll never look at Java the same way again.
For those who like Ruby and Rails and the ability to access other Java frameworks and APIs, but who also really want Java-like syntax (and hence more than just JRuby), these are the books to read. Groovy even has its own answer to Rails, called "Grails".

See also:

Monday, March 31, 2008

Software Process-Line Architecture and Common Processes

Extending the analogy of software architecture views and quality attributes for software process architecture, I'd like to spend some time discussing how software product lines relate to software process architecture and "common processes" across an enterprise.

Many organizations strive for standard common processes, often as part of a CMM/CMMI-based process improvement. All too often I have seen the mantra of "common process" misused and abused to make the practitioners serve the process instead of the other way around.
Processes don't create great software. People and Teams do!
And while the process needs to meet the needs of the business and the needs of the customer, it has to first and foremost serve the needs of the practitioners so that they in turn may better serve the needs of the business to deliver operational business value.

Many in management seem to have the mis-impression that "common process" means "no tailoring" and that everyone does everything the same way across products and projects throughout the organization. Process variation across products and projects is regarded as something to be eschewed and stamped out, beating the offenders into compliance with top-down dictates and mandates and sanctions. If everyone does everything the same way, then the people are more or less "plug-and-play replaceable" and can quickly and easily be reallocated to another project or product with zero learning-curve and associated start-up costs.

This is a dangerous myth that causes irreparable harm to process improvement and common/standard process efforts. Anything that focuses on individuals and interactions as subservient to common processes and standard tools is doomed to fail, and those organizations often end-up with the processes they deserve (along with many disgruntled, frustrated workers).

The purpose of such common processes and tools is not to be a rigid, restrictive straightjacket for replaceable people. The intended purpose is to recognize that such people are irreplaceable, and to provide a flexible knowledge framework to guide and enable them as they help each other collaborate to learn, grow, and lead in the discovery, practical application, and effective execution of practices and improvements that are the best fit for a particular product, project, community, and business-environment.

The intended purpose of common software processes is quite simply that of process and knowledge reuse! And as such, it shares many of the same fundamental problems and solutions as software reuse. Indeed, it could even be argued that software process reuse is but a special case of software reuse. And current prevailing industry wisdom on the subject suggests that software product-lines show the greatest promise of leveraging software reuse for greatest business value.

In software reuse, we seem to recognize that "one size does not fit all." We acknowledge that even though different products, components, and platforms may share common features, each one may have different project parameters and environments with different quality attributes and engineering-tradeoffs that need to be "preferred" and optimized for that particular application: dynamic versus static, performance versus memory, storage versus latency, throughput versus bandwidth, single versus multi processing, optimistic versus pessimistic concurrency, security versus availability, and on and on.

Software process reuse is no different. Different products and projects have their own uniquely differentiating value proposition (if not there would be no business-need to attempt them in the first place). And those differentiating aspects warrant many kinds of process variation across projects, products, technologies and teams.

Those coming from a SixSigma background may point out how SixSigma strives to reduce process variation. But it's all too easy to forget that the context for that is repeatably reproducing the same output from the same inputs for the same desired set of quality attributes and tradeoffs (not to mention the "kinds" of variation SixSigma is appropriate for trying to eliminate).

So I would advocate the translation and application of software product-line practices to software processes (software "Process-Lines" or "Process Families" if you will) and the treatment of such common processes as first class architectures that need to accommodate the views and perspectives of ALL their critical stakeholders, and which should identify their essential quality attributes and tradeoffs, approaches to managing commonality and variability, and apply appropriate patterns and tactics (such as modifiability tactics for software processes and projects) to meet those objectives.

In light of the above, Lean & Agile software development define an architectural style for such process families and their architecture. Lean and Agile principles identify some of the critical process-quality attributes for such efforts. And the corresponding enterprise and its product offerings and their market segments may identify additional quality attributes that need to be met (such as security, regulatory auditability/compliance, large-scale and/or distributed projects and teams, etc.).

Monday, March 24, 2008

Commonality and Variability Management

Continuing the previous discussion on software product-lines ...

Central to the notion of product-lines and product-families is tracking and managing three different kinds of software assets:
  • common/core assets that are shared by all the products in the product-line
  • shared assets that are common to some products but not others, and ...
  • product-specific assets (or custom-components) that are specific to a single product in the product-line.
Architecture for such product-lines is all about managing commonality and variability, and easing their evolution to yield a diverse family of products that achieves economies of scale by reusing common assets. Change/Configuration Management for SPLs is a very challenging problem. And variability management techniques often come down to a matter of binding-times. There are also more advanced strategies (some involving mathematical models).

A few resources on the subject of Commonality and Variability are as follows:
In July 2006 I presented at the Dr Dobbs' Architecture & Design World conference about SCM Patterns for Agile Architectures, which included a section on managing variations. I summarized that portion of the presentation as follows:
    Use Late-Binding instead of Branching:
    • Build/Package Options
    • Feature Configuration/Selection
    • Business Rules

    Think about which of the following needs to "vary" and what needs to stay the same:
    • Interface vs. Implementation vs. Integration
    • Container vs. Content vs. Context

    Commonality & Variability analysis helps identify the core dimensions of variation for your project

    Use a combination of strategies based on the different types of needed variation and the "dimension" in which each one operates
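
As a tiny illustration of "late-binding instead of branching" (a hypothetical Python sketch; the presentation itself was tool- and language-neutral), one codebase can bind a feature variation at run-time from a configuration file rather than holding it on a separate branch:

    import json

    def load_features(path):
        """Read a per-customer/per-variant feature file, e.g.:
        {"spell_check": true, "premium_reports": false}"""
        with open(path) as f:
            return json.load(f)

    def report(features):
        # the variant behavior is selected at run-time, not on a branch
        if features.get("premium_reports"):
            print("premium reporting enabled")
        else:
            print("standard reporting")

    # e.g. report(load_features("customer-a/features.json"))
    report({"spell_check": True, "premium_reports": False})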

Monday, March 17, 2008

Software Product-Line Architecture and Product-Families

Extending the analogy of software architecture views and quality attributes for software process architecture, I'd like to spend some time discussing software product lines. According to the SEI website on software product-lines, a Software Product-Line is defined as follows:

A software product line (SPL) is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way.

At SoftwareProductsLines.com, Charles Krueger defines them as follows:

Software product lines refers to engineering techniques for creating a portfolio of similar software systems from a shared set of software assets using a common means of production.

The key objectives of software product lines are: to capitalize on commonality and manage variation in order to reduce the time, effort, cost and complexity of creating and maintaining a product line of similar software systems.
  • Capitalize on commonality through consolidation and sharing within the software asset inputs, thereby avoiding duplication and divergence.
  • Manage variation by clearly defining the variation points and decision model, thereby making the location, rationale, and dependencies for variation explicit.

Closely related to software product lines is the notion of software product families and Product Family Engineering. In many cases the terms product-line and product-family are used interchangeably. Sometimes a product-family is slightly more general in that a product-family may comprise one or more product-lines. The SEI has established a Framework for Software Product-Line Practices that encompasses topics such as architecture, organization, patterns, business-case, and even a section on configuration management for software product-lines.

Sunday, March 09, 2008

Software Process Architecture Views and Quality Attributes

After my previous postings on Software Architecture Views and Perspectives, Software Architecture Quality Attributes and Software Modifiability Tactics, the question remains as to what all this has to do with Agile processes or with CM.

Well, about a year ago I wrote that Software CM is NOT a Process! ...

Software CM creates the medium through which software development changes & activities must flow. Therefore, Software CM is the intentional architecture of software development change-flow.

The elements of this Software CM architecture include practices, tools & technology, teams & organizations, valued deliverables & intermediate work-products, changes to and assemblies of these deliverables & work-products, and the set of needed status/tracking reports & measures.

So now I want to take the perspective of Software CM as an architecture, and I want to consider questions like:

  • What are the views and perspectives of an SCM solution architecture?

  • How do software architecture quality attributes relate to software CM (or even process) quality attributes?

  • What are the quality attributes that, if attained, will make the resulting software CM and/or process environment Agile?

I think I answered the first of these questions in my Dimensions and Views of SCM Architecture. I take the perspective of a 4+2 views model comprising Product, Project, Evolution, Environment, Process (+1), and Enterprise (+2). These views straddle the conceptual and physical aspects of both the content and context of the different kinds of "containers" that are to be managed and interrelated. And the dimensions of SCM complexity that prove most challenging are those of scale and diversity, and of differences between artifact change/creation time and decision binding-time.

The next question involves translating what Availability, Modifiability, Performance, Security, Testability, and Usability mean for a "process" architecture. I'll make an initial stab at that (feedback is encouraged):
  • Process Availability might correspond to the availability of the underlying tools and technology that support the process. But it might also need to include both the physical and cognitive "availability" of the process itself. It probably also needs to include the availability of key information (i.e., metrics and reports) to create information radiators, and big visible charts & reports.

  • Process Modifiability is the ease with which the process itself can be adapted, extended/contracted, perhaps even "refactored", and ultimately improved. Rigid processes can't be changed very rapidly in response to a change in business need or direction.

  • Process Performance is probably what most closely translates to flow or throughput of software development. (Although for tools and the supporting computing environment, it clearly has the usual meaning there.)

  • Process Security is ... hmmn, that's a tough one! Would it mean "safety" as in keeping the practitioner safe/secure? Would it mean process quality? Or might it mean the security of the process itself in terms of making sure that only authorized/authenticated persons have access to the knowledge of the system (its requirements, designs, etc.) which may be proprietary/confidential and a significant competitive advantage, and that only those authorized individuals are allowed to execute the roles & workflows that create & modify that system knowledge? Perhaps "security" in this context is all about trust and trustworthiness: How well does the process ensure the trust and integrity of the system and of itself? How well does it foster trust among its practitioners and consumers?

  • Process Testability might correspond to the ease with which the process and its results can be tracked/reported (transparency) and audited (auditability). Perhaps it is also related to the ease with which parts of the process can be automated.

  • Process Usability probably has to do with the amount of "friction" the process imposes on the flow/throughput of development. Is it too big? too complex? a poor fit? easy to understand and execute? easy to tell if you did it correctly?


What are the "quality" attributes of an "agile" process? Do they include ALL of the above? What about: adaptive? lean? result-driven (i.e., "working software")? self-organizing? iterative? collaborative?

How about some of the traditional "quality" attributes of a CM system: traceability (vs. transparency?), reproducibility? repeatability?

Sunday, March 02, 2008

Software Modifiability Tactics

Getting back to the subject of my previous blog-entries on Software Architecture Views and Perspectives and Software Architecture Quality Attributes, I wanted to talk more specifically about the quality attribute of Modifiability.

The Modifiability of a software system relates to how little cost/effort it takes to develop and deploy changes to the software. This relates to well-known concepts and principles of coupling, cohesion, maintainability, etc., and is the basis for many of the elements of object-oriented, component-based, and aspect-oriented design ("reuse" is a close cousin of all these as well).

Software Modifiability Tactics are presented in section 5.3 of Software Architecture in Practice. A taxonomy is given which relates architectural tactics to architectural patterns ("styles") and the design patterns which are largely concerned with achieving the attribute (in this case "modifiability") for various types of products and contexts. The article Understanding Architectural Patterns in Terms of Tactics and Models even has a nice matrix that maps architectural patterns or styles to the various kinds of modifiability tactics.

The taxonomy for software modifiability tactics is broken down as follows:
  • Localize Changes (increase cohesion)
    • Maintain Semantic Coherence
    • Anticipate Expected [types of] Changes
    • Generalize the Module
    • Limit Possible Options
    • Abstract Common Services

  • Prevent Ripple Effects (reduce coupling)
    • Hide Information
    • Maintain Existing Interfaces
    • Restrict Communication Paths
    • Use an Intermediary

  • Defer Binding-time (defer decision-making)
    • Run-time Registration
    • Configuration Files
    • Polymorphism/Delegation
    • Component Replacement
    • Adhere to Defined Protocols
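
To make a few of these tactics concrete, here is a minimal sketch in Python. The Notifier interface, its implementations, and the config format are all my own invented examples (nothing from the SEI materials); the sketch combines "Hide Information" and "Use an Intermediary" (callers depend only on an abstract interface) with "Defer Binding-time" via "Configuration Files" and "Polymorphism/Delegation":

    # A minimal sketch, not a definitive implementation. Assumes a
    # config file such as:
    #
    #   [notifier]
    #   class = __main__.SmsNotifier
    #
    import configparser
    import importlib
    from abc import ABC, abstractmethod

    class Notifier(ABC):
        """The intermediary: callers depend only on this interface
        ("Hide Information" / "Use an Intermediary")."""
        @abstractmethod
        def send(self, message: str) -> None: ...

    class EmailNotifier(Notifier):
        def send(self, message: str) -> None:
            print(f"emailing: {message}")

    class SmsNotifier(Notifier):
        def send(self, message: str) -> None:
            print(f"texting: {message}")

    def load_notifier(config_path: str) -> Notifier:
        """Defer binding to run-time ("Configuration Files" plus
        "Polymorphism/Delegation"): the concrete class is named in a
        config file, so replacing it is a config edit, not a code edit."""
        cfg = configparser.ConfigParser()
        cfg.read(config_path)
        module_name, class_name = cfg["notifier"]["class"].rsplit(".", 1)
        cls = getattr(importlib.import_module(module_name), class_name)
        return cls()

With a layout like this, swapping EmailNotifier for SmsNotifier (or some yet-to-be-written replacement) is a one-line configuration change rather than a change that ripples through every caller.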

In September 2007, an SEI Technical Report on Software Modifiability Tactics was published that provides a comprehensive discussion of these modifiability tactics and the architecture/design patterns that can be used to implement them (and some of the tradeoffs involved).

Links and Resources on Software Modifiability Tactics:

In a subsequent blog-entry I will muse about how Modifiability relates to Agility, both in software architecture/design and in software process architecture/design.

Monday, February 25, 2008

Distributed Version Control Systems

A colleague of mine had a question for me about Distributed Version Control Systems (or DVCS). There are a growing number of such systems these days: Mercurial, Bazaar, git, svk, BitKeeper, GNU Arch, darcs, Monotone, Codeville, ArX, just to name a few. I referred them to a good essay by David Wheeler that discusses the fundamental differences between distributed and centralized VCS (among other things).

I also Googled on the topic and came across some interesting links:
Anyone else have any links they recommend on the topic? (please, no spam/marketing)

Monday, February 18, 2008

BOOK: Lean Project Management

My review of Lean Project Management is in the February 2008 issue of the Agile Journal.
    Lean Project Management: Eight Principles for Success is actually a second edition of the eBook Eight Secrets to Supercharge your Project with CCPM. It is available in both hardcopy and eBook formats. Lawrence Leach (www.advanced-projects.com) is perhaps best known as the author of one of the most comprehensive texts on the subject of Critical Chain Project Management (CCPM). In this book, subtitled "Combining CCPM and Lean tools to accelerate project results," the author essentially integrates Lean Thinking into CCPM, along with elements from the Theory of Constraints (TOC) and PMBoK/PMI. Leach calls the result Lean Project Management, or LPM.

    ... All in all, I found Lean Project Management to be a fairly quick read, providing a good overview of some TOC and CCPM fundamentals, how they align with Lean thinking, and how Lean thinking can be applied to some of the more traditional PMBoK methods. Someone looking for a more comprehensive reference on TOC thinking processes and CCPM would probably be better off reading Goldratt's books, the work of William H. Dettmer, and the 2nd edition of Leach's Critical Chain Project Management. But for those wanting the bird's-eye overview with a brief "zoom in" on some of the details, along with how Lean thinking helps tie it all together with some of the more traditional project-management methods, Lawrence Leach's Lean Project Management is a nice overview text describing some of the most powerful aspects of TOC and CCPM through "Lean eyes for the PM guy!"

Read the full review

Thursday, February 14, 2008

Software Architecture Quality Attributes

Following on from my previous blog-entry about Software Architecture Views and Perspectives, the book "Software Architecture in Practice" also describes a method called Attribute-Driven Design, or ADD. This is not yet-another-design-method like BDD or TDD: ADD is concerned with software architectural design (so it operates at the "architecture-level" rather than what we might normally think of as the "design-level").

ADD is concerned with explicitly identifying the desired quality attributes of an architecture. Many of us know that simply implementing the (functional) requirements correctly is just the beginning of any good design, and possibly not even the most important attribute of the design. In addition to other attributes like security or availability, there are also attributes like modifiability of an architecture. And it is often these attributes that, if attained, are the true indication of whether or not we've done a good job.


Some of the more commonly desired quality attributes are:
  • Availability
  • Modifiability
  • Performance
  • Security
  • Testability
  • Usability
Many of us have also often had difficulty trying to explain to management the importance of things like "refactoring" and what that modifiability gives us in return. ADD makes such quality attributes an explicit goal of the architecture design process. One of the first things it asks us to do is the necessary homework (research and interviews) to analyze and identify the desired quality attributes for our architecture and its stakeholders. It also defines use-case-like entities called Quality Attribute Scenarios, a way of expressing such non-functional requirements as explicit use-cases (which can then have an associated business-value); a small sketch of one appears below.
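
For illustration, here is a rough sketch of the six parts of a Quality Attribute Scenario (source, stimulus, environment, artifact, response, response measure) as plain Python data. The six-part structure follows "Software Architecture in Practice"; the modifiability example values are my own invention:

    # A sketch only: the six-part scenario structure is from the book,
    # but the field values below are an invented modifiability example.
    from dataclasses import dataclass

    @dataclass
    class QualityAttributeScenario:
        source: str            # who or what generates the stimulus
        stimulus: str          # the condition the system must respond to
        environment: str       # the circumstances under which it arrives
        artifact: str          # the part of the system that is stimulated
        response: str          # the activity undertaken as a result
        response_measure: str  # how the response is evaluated/tested

    modifiability_scenario = QualityAttributeScenario(
        source="a developer",
        stimulus="wishes to change the tax-calculation rules",
        environment="at design time",
        artifact="the pricing module's code",
        response="change is made and unit-tested with no side effects",
        response_measure="completed in under three person-days",
    )

Expressed this way, the scenario reads like a testable "story" for a non-functional requirement, which is exactly what lets it carry a business value or priority.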

ADD also explicitly mentions the use of patterns and tactics as part of its methodology for achieving quality attributes and making design tradeoffs (quality attributes are often the "forces" for a pattern). For each of the common quality attributes above, it identifies a taxonomy of common tactics and patterns used to make good tradeoff decisions for particular aspects of the design.

Although it looks a bit heavyweight, ADD seems promising in its use of patterns and tactics and in its recursive/iterative nature. But what I like most about it is the fact that it makes explicit the kinds of quality attributes and their non-functional use-cases or "stories," which can then have business value/priority associated with them, thereby justifying the existence of activities that help realize those attributes of the system.

For some more information on ADD, see the following:
Now I want to ask the question of applying this same idea to software processes and software process architecture: it seems to me that in Agile development we're really not anti-process, but there are some really important things (quality attributes of a process?) that we feel are often put out to pasture by a lot of the very formal, CMMI-based processes we witness. Perhaps they're ignoring these important process-design quality attributes (as embodied in the Agile Manifesto)?

Sunday, February 10, 2008

Software Architecture Views and Perspectives

I'm fairly interested in the literature on Software Architecture Views and Perspectives. Folks here may remember my work on Dimensions and Views of SCM Architecture as one of the reasons why ...

The text of the entire 2nd edition of the "Software Architecture in Practice" textbook is available online as one source of information on the subject (among others). I found another good link (& book reference) at http://www.viewpoints-and-perspectives.info/

It's the website for the book "Software Systems Architecture: Working With Stakeholders Using Viewpoints and Perspectives" by Nick Rozanski and Eoin Woods. They classify/use "Viewpoints" and "Perspectives" as follows:

Viewpoints:
  • Functional
  • Information
  • Concurrency
  • Development
  • Deployment
  • Operational
Perspectives:
  • Security
  • Performance and Scalability
  • Availability and Resilience
  • Evolution
  • Accessibility
  • Development Resource
  • Internationalization
  • Location
  • Regulation
  • Usability
Found a few other links too:

Monday, February 04, 2008

KanBan is NOT Iteration-Free!

Regarding my previous posting about Software KanBan, much as I really do like it and have nothing but the utmost respect for the likes of Corey Ladas and David Anderson, there is one major quibble I have with some of the stuff being said ...

I think all the stuff saying it is Iteration-less and Iteration-free is a bunch of hooey! I don't agree at all and I think it is extremely misleading to say that Software KanBan doesn't use or need iterative development.

Don't get me wrong - I think I understand where they are coming from. There is often a great deal of recurring and heated discussion on numerous Agile forums about "Ideal" iteration length. I understand how folks can be sick and tired of that (to be honest, I myself never really paid too much attention to those particular discussion threads about ideal iteration-size).

The idea that is new or revolutionary for some is that release/feature content is decoupled from development! One or more features/requests are worked on in parallel, and every two weeks (or however long) some combination of the newly developed functionality that is ready is selected to be released (rather than release content being decided before development is underway).

But it seems to me that this is just Agile-style development iterations applied to multi-project management. It is releases that are decoupled from iterations (and hence "iteration-free"). But as I see it, the iterations are still present: they are where the development happens, on the various feature "projects" being developed in an incremental and iterative manner, each at its own rhythm (some might be every two weeks, others might be less frequent, but they all find their pace).
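
A toy sketch in Python may help make the distinction concrete (every name and number here is invented purely for illustration): each feature iterates at its own cadence, while a release "train" departs every two weeks carrying whatever happens to be ready.

    # Invented example: three features, each iterating at its own cadence.
    # done_on maps each feature to the day its last iteration completes.
    features = {  # feature -> (iteration length in days, iterations needed)
        "feature-A": (7, 3),
        "feature-B": (14, 2),
        "feature-C": (10, 4),
    }
    done_on = {name: length * count
               for name, (length, count) in features.items()}

    # Releases are decoupled from development: a release goes out every
    # 14 days with whatever is ready, but iterations never stop happening.
    released = set()
    for day in range(14, 57, 14):
        ready = {f for f, d in done_on.items() if d <= day} - released
        released |= ready
        print(f"day {day}: releasing {sorted(ready) or 'nothing new'}")

The releases tick along on their own two-week clock, yet each feature is still being built iteratively; decoupling the two clocks doesn't make the iterations disappear.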

I don't believe for one second that each of those features/requests is specifying and elaborating 100% of its requirements before any coding starts. It looks to me like, for all but the smallest of requests, they may flesh out a certain amount of requirements up-front (be it lightweight use-cases or something a bit more formal), but only to a high or medium level of detail. From that point on, through detailed requirements, implementation, feature-level testing, and integration-testing, they are proceeding in a VERY MUCH iterative fashion.

It may not be a strictly fixed length, in that the rhythm may fluctuate and readjust from time to time, but it definitely does have a regular rhythm! The length of any given "iteration" is fixed to the cadence of the feature-team (even if it is a team of 1 or 2). Not all iterations may be the same length, but any given iteration works to a fixed due-date rather than letting the cycle stretch out until the "scope" is complete.

So don't let anyone tell you that Agile development doesn't require working in an iterative manner. It most definitely does (and at multiple levels of scale). Just don't assume that iterations must always be a property of a "release" as opposed to some other related chunk of work (possibly multiple chunks proceeding in parallel).

It is not the releasing that needs to be iterative, it is the development. And if releases are decoupled from development (which I think is a GREAT idea), then the development of any non-trivially-sized feature or request will still need to proceed in an iterative manner, according to some regular cadence that gets established.