Tuesday, February 28, 2006

Unchangeable Rules of Software Change - Redux

I put together a couple of my earlier blog-entries on the topic of software change and iterative development and developed them into an article in the February 2006 issue of CMCrossroads Journal. The article is entitled The Unchangeable Rules of Software Change (just like the earlier blog-entry) and updates some of what I had blogged about earlier.

In addition to the first three commonly recurring pitfalls encountered when first confronting the reality of these "unchangeable rules", I identified three more pitfalls that typically occur when first attempting iterative development. The article also includes a few more iterative development resources than my previous follow-up blog-entry. Lastly, I expanded the rule-set by one, adding the "quicksilver" rule to the "quicksand" rule, as noted below:

The Unchangeable Rules of Software Change
Rule #0: Change is Inevitable!
The Requirements/Plans ARE going to change!

Rule #1: Resistance is Futile!
There isn't a darn thing you can do to prevent Rule #0.

Rule #2: Change is like Quicksand -- Fighting it only makes it worse!
The more you try to deny and defy Rule #1 by attempting to prevent Rule #0, the worse things will get.

Rule #3: Change is like Quicksilver -- Tightening your grip makes it slip from your grasp!
The more you try to deny and defy Rule #2 by attempting to precisely predict or rigidly control change, the more erratic and onerous the result will be.

Rule #4: Embrace change to control change!
The more flexible and adaptive you are at accommodating change, the more control you will have over your outcomes.

You can read the whole article here!

Thursday, February 23, 2006

More SCM Blogs

In my last blog-entry of 2005, I posted a list of Software CM and Version-Control Blogs and asked for any others you would recommend. I know of a few more now:
Austin Hastings now has a blog! Check it out at Doing Better!

Austin is incredibly knowledgeable about CM and architecture. Not only is he much more concise than I am [I'm (in)famous for being verbose], he's also more insightful: he sees the 'whole' system and gets to the crux of the matter far more quickly. I'm expecting great and inspiring things from this blog, and judging from his entries on Defining Baseline and Table Data Gateway, I won't be disappointed!

Kevin Lee has a forthcoming blog!

Okay - so it's not quite a blog yet. But it supposedly will be very soon. Kevin has a forthcoming book on Continuous Integration using ClearCase, ANT, and CruiseControl that looks to be pretty good. And he has some nice articles and downloads from his "buildmeister" website.

Rob Caron writes that Robert Horvick has started a blog about Team System's version control features and API.

Sunday, February 19, 2006

Agile IT Organization Refactoring?

On the agilenterprise YahooGroup, someone asked for advice about how to structure the whole enterprise/organization, including core competencies for development, support, testing/QA/V&V, business/marketing analysts, systems engineering/architecture, deployment, PMO, CM, IT, etc...

I asked if he was looking for things like the following:
Mishkin Berteig wrote that he thinks that "this is one of those places where Lean and Agile really start to blur into one-another via queuing theory." I mentioned that I think Theory of Constraints (TOC) also blurs-in with Agile and Lean via queuing theory as well, as evidenced by the work of David J. Anderson.

Mishkin also wrote:
"The answer is that in a lean portfolio management situation this is a mandated constraint for projects. Projects simply are not approved unless they are able to fit inside that timebox. If you have a larger project, you must break it into two.... and you must not make it fit by making a larger team.... which leads to the other side: all teams should be roughly the same size, team composition should change very slowly, and people should be dedicated to a single team at a time."
I replied that, rather than the above, the practices of "pairing" and "refactoring" might actually scale up by refactoring people across projects and teams every 3-6 months. I'm thinking of the case of an IT department that supports several so-called "products": in any 3-6 month period, it of course gets requests against those products, as well as requests for new projects.

Now, not every request and/or project has exactly the same priority. So having each project or product prioritize its backlog and then work on whatever "fits" into the next iteration sort of assumes that each project has the same priority (if all the teams are more-or-less the same size and experience mix).

Instead of each project/product separately prioritizing its own backlog, the department might do something like:
  • Form a combined backlog list across the entire department
  • Have representatives [governance] from each customer organization in the enterprise meet, and prioritize the department-wide backlog list
  • And whatever shows up as the topmost requests that can be handled in the next financial quarter with the available staffing is what gets worked.
If that means that some projects or products get more of their requests worked on in that time-frame, then so be it. And people might be "refactored" across teams and projects within the department, adding more staff to "feed" the projects that have the "lion's share" of the most highly prioritized requests from the backlog.
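The allocation scheme above can be sketched in code. This is a minimal, deliberately simplified illustration (all class, field, and product names here are my own hypothetical inventions, not anything from the actual discussion): one combined backlog, ranked by the cross-customer governance group, with staff-weeks flowing to whichever products own the top-ranked requests for the quarter.

```java
import java.util.*;

class Request {
    final String product;
    final int businessValue; // rank assigned by the governance representatives
    final int staffWeeks;    // rough effort estimate
    Request(String product, int businessValue, int staffWeeks) {
        this.product = product;
        this.businessValue = businessValue;
        this.staffWeeks = staffWeeks;
    }
}

class DepartmentBacklog {
    // Returns staff-weeks per product: take requests in business-value order
    // until the quarter's capacity is spent -- so the products holding the
    // most highly valued requests "pull" the most staff.
    static Map<String, Integer> allocate(List<Request> backlog, int capacity) {
        List<Request> ranked = new ArrayList<>(backlog);
        ranked.sort((a, b) -> b.businessValue - a.businessValue);
        Map<String, Integer> staffing = new LinkedHashMap<>();
        for (Request r : ranked) {
            if (r.staffWeeks > capacity) continue; // doesn't fit this quarter
            capacity -= r.staffWeeks;
            staffing.merge(r.product, r.staffWeeks, Integer::sum);
        }
        return staffing;
    }
}
```

The point of the sketch is just that priority lives in one department-wide list, and staffing falls out of it, rather than each product getting a fixed team regardless of where the quarter's most valued work actually is.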

Wouldn't that essentially create a "pull" system for "allocating resources" to projects?

If pairing were used, it would help the "refactored" folks come up to speed more quickly on the new product or project. And after a while, I'd think most folks in the department would have a reasonably high level of knowledge and awareness (and appreciation) of all the "important" projects going on in the department, and understand the overall "business" big-picture a little better (at least for that department).

That would still seem agile to me. It looks similar to some matrixed approaches, but I think it's a bit different because it is more fine-grained and incremental. I'm thinking it would help "scale" a single agile project and team into an overall "agile" department servicing an entire portfolio of projects, making sure that the projects most valued for the given quarter get the appropriate amount of resources relative to how the "Customer" prioritized the backlog across the entire portfolio.

Wouldn't it? Or would team-dynamics and other people-issues make it too darn hard to incrementally rotate/refactor people in that manner?

Isn't this also an example of using the Five Focusing Steps of TOC? (Would this be an example of a team/project constraint elevated to the level of the department and using dynamic allocation of the entire department's staff to subordinate to the constraint and place the most staff on the projects with the most valued requests?)

Friday, February 10, 2006

Agile vs MDE: XP, AMDD, FDD and Color Modeling

The February 2006 issue of IEEE Computer is devoted to Model-Driven Engineering (MDE). MDE is actually a bit broader than MDA/MDD, because MDE (allegedly) covers more of the lifecycle, and corresponding process and analysis. Doug Schmidt's Guest Editor's Introduction to MDE is a pretty good overview of the current theory and practice and the obstacles to overcome.

A co-worker of mine is very interested in combining Agile methods with Model-Driven Engineering. He feels that the benefits of agility and of model-driven code-generation show tremendous promise as a breakthrough combination in productivity and quality, and he is stymied that there aren't a lot more folks out there trying to do it.

He attended UML World in June 2005 and had some discussions with Scott Ambler (AgileModeling), Steve Mellor (co-creator of the Shlaer-Mellor OOAD method, and co-author of "Agile MDA", Executable UML and MDA Distilled), and Jon Kern, Agile MDA Evangelist (who helped Peter Coad launch TogetherSoft). He found most of what they had to say supported the possible synergy between Agility and MDA, but was very surprised to see AMDD folks and XP/Scrum folks throwing away their models once they had the code for them.

Upon hearing the above, I noted that Peter Coad is quite possibly the missing link between MDE and Agility:
The potential mismatch between MDA and AMDD/XP-like Agile methods is that:
  • Full/pure MDA strives for 100% generation of all code and executables directly from the models.

  • Ambler's AMDD, and "domain modeling" espoused by the likes of Robert Martin, Martin Fowler, and others in the XP community strives for "minimal, meaningful models", where they model only as needed, as a means of gaining more understanding, and then embed the knowledge gained into the code and/or tests.
I believe FDD has the potential to bridge the gap. It strives for a comprehensive domain model, but from that point on the code is written by hand (using coding practices that are traditionally non-Agile in nature, including strict code-ownership and formal code reviews/inspections). FDD doesn't say anything about using MDA/MDD techniques to auto-generate code, but the method is extremely amenable to doing exactly that.

Furthermore, doing so would remove a lot of the manual parts and practices of FDD that many consider to be the least "Agile". And much of the FDD "Color Modeling" patterns and techniques are very much the equivalent of refactoring and design-patterns that are currently used for code. See the end of this message for some more resources on Color Modeling.

In my own humble opinion, I think the "sweet spot" is somewhere in between 100% code generation and "hand-crafted" code. I realize that 100% is the ideal, but I'm thinking about the 80/20 rule here, and whether trying to eliminate that last 20% is perhaps not always practical.

I think the biggest barrier to doing that today is tools:
  • The modeling tools are good at handling the structure, but not so much the behavior.

  • High-level programming languages like Java and C# and their IDEs are more convenient for specifying behavior (which is still largely textual in UML 2 and Action-Syntax Languages).

  • It is extremely difficult to maintain the non-interface code for a "class" or "package" unless it is either 100% manually coded or else 100% auto-generated. If it is 50-50, or even 80-20, then the "nirvana" of seamless and reversible round-trip design to code and back just isn't quite there yet.
What would get us there and help close that gap? I think what's needed is a "melding" of the IDE with the modeling tool. It would have to allow specifying code in a language such as Java or C#, rather than only ASL "code" (most of which looks pretty darn close to Java and C# anyway :), and it would need a means of indicating whether a chunk of code was auto-generated or hand-crafted, while keeping both "navigable" and editable via the model.
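One partial workaround that exists today for the mixed generated/hand-written problem is the "Generation Gap" pattern. A minimal sketch (the class names here are hypothetical, purely for illustration): the tool regenerates the base class from the model on every build, while the hand-written behavior lives in a subclass the generator never touches, so the generated 80% and hand-crafted 20% can coexist in separate files.

```java
// --- would be auto-generated from the model; never edited by hand ---
abstract class OrderBase {
    private String status = "NEW";
    public String getStatus() { return status; }
    protected void setStatus(String s) { status = s; }
    // Behavior the model only declares; the hand-written layer fills it in.
    public abstract boolean approve();
}

// --- hand-crafted; survives regeneration of OrderBase ---
class Order extends OrderBase {
    @Override
    public boolean approve() {
        if (!"NEW".equals(getStatus())) return false;
        setStatus("APPROVED");
        return true;
    }
}
```

It works, but it only separates the two kinds of code; it doesn't give you the seamless round-trip navigation between model and code described above.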

The Eclipse framework shows a lot of promise in helping us get to that point, and has a lot of the groundwork and building blocks already in place, but still has a lot more work to be done.

I hear some of you saying, "Okay Brad, I see what this has to do with Agility. But what does this have to do with CM?" Well, in my January 2005 Agile SCM column, among other "crystal-ball gazing" predictions, I talked a little about "Model-Driven CM" and how it would resurrect the once popular SCM research-area of Software/System Configuration Modeling:
  • MDE/MDA would readily lend itself to allowing the developer to focus on the logical structure of the code, letting the physical structure (files and directories) be dictated by the code-generation tool with some configuration scripting+profiles.

  • This in turn would allow models and modeling to be easily used to analyze and design the physical structure of the code, including build & configuration dependencies.
Of course, we have a ways to go until we get there, but I do believe the trend is on the rise and it's only a matter of time.

Some other resources related to Agility and MDE:

Wednesday, February 08, 2006

Book Review: Practical Development Environments

Matthew Doar's Practical Development Environments (PDE) looks to be a pretty AMAZING book. It really does cover the entire lifecycle of development environment tools for version control, build management, test tools, change/defect tracking, and more. My previous favorite work on this topic was the Pragmatic Programmer's Pragmatic Project Automation (PPA), but no more.

The PPA book is still a GREAT book! And it focuses a lot more on programming and automating tasks and good ways to go about doing it. It goes into some of the details of particular tools and setting them up, especially JUnit.

But the PDE book is far more comprehensive in the range of development environment practices and tools that it covers, including not just the automation aspects, but evaluating them, setup and administration, integrating them together (and issues and challenges encountered), and many more aspects of testing, building, project tracking, version controlling, and just generally helping the development team get work done with maximal support and minimal hindrance from the tools they use.

If you want to be a toolsmith, and learn more about scripting and automating tasks and some of the common tools that already exist, then I'd still recommend Mike Clark's Pragmatic Project Automation.

If your focus is less on how/when/why to automate and more on evaluating, setting up, and maintaining a practical development environment for your team, then Matthew Doar's Practical Development Environments is definitely my top pick nowadays!

Sunday, February 05, 2006

Book Review: Perl Best Practices

As far as I'm concerned, Damian Conway's Perl Best Practices book should be required reading for anyone doing serious Perl programming, and should be mandatory for any team that does serious Perl development. These best practices and conventions are exactly the sort of thing that programming teams need to come to grips with and establish shared norms around, to make their codebase clear and "clean."

Next time I come across a team of Perl scripters that needs to develop a set of team standards and strategies for how to do these kinds of things, I'm simply going to tell them to get this book: read it together, discuss it, learn it, understand it, and then do it!

Friday, February 03, 2006

O'Reilly Book Reviews

I received a whole slew of books from O'Reilly to review, so I'll be writing about them in subsequent reviews either on this blog or in separate articles. Watch this space! I've already been making my way through Perl Best Practices, and it looks quite good. The other one I'll be reading soon is Practical Development Environments, which looks like it might give the Pragmatic Programmer's Pragmatic Project Automation more than a run for its money.