Agile Release Train Community of Practice

As those of you familiar with Scaling Software Agility, Agile Software Requirements, or this blog know, the Agile Release Train is the mechanism I like to recommend whenever an enterprise needs to harness a significant number of agile teams (5-10-15) to a common mission. And in the larger enterprise, that’s pretty often. As I described in Chapter 15 of ASR, in more abstract terms, the ART is used to:

  1. Drive strategic alignment
  2. Institutionalize product development flow

The Agile Release Train is implied in the Big Picture: Scaled Agile Delivery Model, as seen below:

A number of large software enterprises are using this or a similar mechanism (an uber sprint?) to build larger-scale, fully agile programs. In so doing, a number of new best-practices-at-real-agile-scale are starting to emerge.

My colleague Jorgen Hesselberg is organizing a small community of practice to explore both challenges and patterns of success. I know some of the likely participants, and I can assure you there will be some lively discussions about some of the largest agile enterprise initiatives. Topics for this COP are likely to include:

  • organizing trains around value streams
  • preparing for the release planning event
  • running the event
  • release planning at scale (50-100 largely co-located devs and testers) and super scale (100-200 distributed, multi-site, concurrent planning)
  • roles and responsibilities
  • release train governance (keeping the train on the tracks, PMO involvement, etc)
  • metrics
  • coordinating epics across multiple trains

If you’d like to participate, contact Jorgen at jorgen.hesselberg@navteq.com.

If this comes together, I’ll certainly be blogging about the process and results of this strategically interesting, agile-at-scale COP as soon as they become available.

Tooling for User Story Verification, Part 2

Note: This post is a continuation of the series on Agile in High Assurance and Regulated Environments.

In the last post in this series, Tooling to Automate User Story Verification, I pointed to a post by my collaborator Craig Langenfeld, Tools to Automate User Story Verification – Part 1: Rally and Quality Center, in which he describes how HP Quality Center can be used to perform Verification Testing on User Stories that are planned, tracked, and managed within Rally. That post describes one of the three potential user-story-level verification paths that I laid out in an earlier post, Software Verification and Validation in High Assurance Agile Development: SRS and User Story Verification, illustrated below.

High Assurance User Story Traceability Paths

This user-story to story-acceptance-test verification/traceability path is probably the most critical verification step, as it demonstrates that the story has been tested and accepted into the baseline via the story acceptance test. For many in high assurance, that’s as far as they need to go with verification at the story level. However, if the risk analysis determines that certain pieces of code are extremely critical for safety and efficacy, then one or both of the other two paths:

  • user-story to code-that-implements-the-story, and
  • user-story to unit-tests-that-test-the-code-that-implements-the-story,

may also be necessary.
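To make the third path concrete, here’s a minimal sketch in Python of one way a team might tag unit tests with the stories they verify and then generate a simple traceability report. The `verifies` decorator and the “US1234”-style story ID are hypothetical conventions for illustration; they are not features of Rally or Quality Center:

```python
# A minimal sketch of the user-story -> unit-test traceability path.
import unittest


def verifies(story_id):
    """Tag a test with the user story it verifies (hypothetical convention)."""
    def decorator(test_func):
        test_func.story_id = story_id
        return test_func
    return decorator


class InfusionRateTests(unittest.TestCase):

    @verifies("US1234")
    def test_rate_limit_enforced(self):
        self.assertTrue(True)  # placeholder for the real assertion


def traceability_report(test_case_class):
    """Map each user story to the unit tests that verify it."""
    report = {}
    for name in dir(test_case_class):
        story = getattr(getattr(test_case_class, name), "story_id", None)
        if story:
            report.setdefault(story, []).append(name)
    return report


if __name__ == "__main__":
    # {'US1234': ['test_rate_limit_enforced']}
    print(traceability_report(InfusionRateTests))
```

A report like this can be generated at the end of each iteration and archived with the other verification records for the baseline.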

In this new post, Tools to Automate User Story Verification – Part 2: Incorporating SCM, Craig shows how to link the user story to a changeset in the SCM system, in this case Subversion. And since it’s likely that the source file will be revised a number of times over the course of an iteration, Craig illustrates how, with the addition of a small custom program provided by Barry Mullen, “an Iteration Diff Report App can be used to pull all those revisions together and, in the case of viewvc users, create a link to the diff for all changes. This is useful for conducting code and/or security audits of the code changed for that iteration.”
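For readers who want a feel for the pattern, here’s a minimal sketch in Python of an iteration diff report along these lines. This is not Barry’s actual app; the repository URLs, the story-ID-in-commit-message convention, and the viewvc query format are all assumptions for illustration:

```python
# A minimal sketch (not Barry Mullen's actual app) of an iteration diff report:
# scan the svn log for commits whose messages mention a story ID, then build a
# single viewvc-style diff link spanning a file's first and last change.
import re
import subprocess
from collections import defaultdict

REPO = "http://svn.example.com/repo"         # hypothetical repository URL
VIEWVC = "http://viewvc.example.com/viewvc"  # hypothetical viewvc root
STORY_ID = re.compile(r"\bUS\d+\b")          # assumed commit-message convention


def revisions_by_story(start_rev, end_rev):
    """Map each story ID to the revisions whose log messages mention it."""
    log = subprocess.check_output(
        ["svn", "log", "-r", f"{start_rev}:{end_rev}", REPO], text=True)
    revisions, current_rev = defaultdict(list), None
    for line in log.splitlines():
        header = re.match(r"^r(\d+) \|", line)  # e.g. "r123 | jane | ..."
        if header:
            current_rev = int(header.group(1))
        elif current_rev is not None:
            for story in STORY_ID.findall(line):
                revisions[story].append(current_rev)
    return revisions


def diff_link(path, revs):
    """One diff URL covering all of a file's changes in the iteration.

    The changed paths per revision would come from `svn log -v`; the
    ?view=diff query format is an assumption about viewvc's URL scheme.
    """
    return f"{VIEWVC}/{path}?view=diff&r1={min(revs) - 1}&r2={max(revs)}"
```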

That completes the user-story-verification-via-tooling portion of our high assurance story.

However, the second part of the post illustrates another important step: an in-sprint validation step. Since the inclusion of the new story into the baseline changes the behavior of the system, it is necessary to assure that the whole system still works as intended. To help make this a largely automated function, Craig shows how an integration from the agile project management tool (in this case Rally) to the automated build system (in this case Hudson) monitors the build health during the course of the iteration, thereby assuring that all the build verification tests (new and old) still pass when the new story is added to the baseline.
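As a rough sketch of what such build-health monitoring involves, the Python fragment below polls Hudson’s standard JSON remote API for the status of the last build. The job URL and polling cadence are assumptions, and the actual integration Craig describes runs inside Rally rather than as a standalone script like this:

```python
# A minimal sketch of monitoring build health via Hudson's JSON remote API.
# The job URL is hypothetical; the real Rally-Hudson integration Craig
# describes runs inside Rally rather than as a standalone script like this.
import json
import time
import urllib.request

HUDSON_JOB = "http://hudson.example.com/job/mainline"  # hypothetical job


def last_build_status(job_url):
    """Return (result, building) for the job's most recent build."""
    with urllib.request.urlopen(f"{job_url}/lastBuild/api/json") as resp:
        build = json.load(resp)
    return build.get("result"), build.get("building", False)


def monitor(job_url, interval_seconds=300):
    """Flag any completed build whose verification tests no longer pass."""
    while True:
        result, building = last_build_status(job_url)
        if not building and result != "SUCCESS":
            print(f"Build health broken ({result}): fix the baseline "
                  "before accepting more stories")
        time.sleep(interval_seconds)


if __name__ == "__main__":
    monitor(HUDSON_JOB)
```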

In the next post in this series, we’ll look at tooling to support the higher-level traceability path, from the feature in the Product Requirements Document to the User Stories in the Software Requirements Specification (or equivalent repository).

Feature Teams Vs. Component Teams (continued)

My friend and agile “ninja” Chad Holdorf (http://www.scaledagiledelivery.com/about/) just put up a nice post with a video and graphic explanation of the value of organizing around Features, rather than Components. For context, Chad is using the Agile Enterprise Backlog Model (and the Big Picture: Scaled Agile Delivery Model) as his organizational model for scaling agile, so many of the terms and graphics he uses come from there. In addition, Chad has worked with Kenny Rubin on this topic, and Kenny provided some nice slides to support the verbal explanation as well.

Chad’s new post is here: http://www.scaledagiledelivery.com/2011/04/28/component-vs-feature-team/

In this post, Chad does a great job of describing the complexities of delivering value when teams are organized around the components of a larger system, as opposed to the features or value streams that deliver the end result to the customer.

In my view, Chad correctly describes that it’s not an “either/or” but a mix, and he recommends that an agile enterprise run a ratio of about 70-30 or 60-40 of feature teams to component teams. I generally agree with this rough heuristic. Certain components are highly complex in nature, some perhaps use different technologies than the rest of the system (typically lower-level languages and protocols), and some can provide substantial economies of scale with reuse across the organization. Those components are best handled by small teams dedicated to building the most robust and scalable component they can. However, every interface to a component creates a dependency, so MOST of the agile enterprise should be organized to rapidly deliver feature value to the customers, and Chad describes why these efficiencies are so much greater with a feature team approach.

As I described at length in Agile Software Requirements (Chapter 4: Agile Requirements for the Program), this is a nontrivial topic, and I suggested that the discriminator should be along two axes: 1) complexity and uniqueness of the component technology, and 2) degree of reuse. This is illustrated by the following figure.

Feature Team vs. Component Team Discriminator

For the area above the curve, component teams can be the most efficient, but MOST teams should fall below the curve, and that’s the area that Chad highlights for fastest value delivery.
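To make the heuristic concrete, here’s a minimal sketch in Python that encodes the two-axis discriminator as a simple rule of thumb. The 0-10 scoring scales and the product threshold standing in for the curve are my illustrative assumptions, not values from the book:

```python
# A minimal sketch encoding the two-axis discriminator as a rule of thumb.
# The 0-10 scales and the product threshold standing in for the curve are
# illustrative assumptions, not values from Agile Software Requirements.

def recommend_team_type(technology_complexity, degree_of_reuse):
    """Each score is on an assumed 0-10 scale."""
    # Only components that score high on both axes together (the area
    # above the curve) justify a dedicated component team; everything
    # else defaults to a feature team for fastest value delivery.
    if technology_complexity * degree_of_reuse > 50:
        return "component team"
    return "feature team"


print(recommend_team_type(9, 8))  # component team (e.g., a shared codec)
print(recommend_team_type(3, 2))  # feature team (the common case)
```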

Thanks Chad (and Kenny).