Concurrent US-India Release Planning Agenda

Many of our larger software enterprises (perhaps most) do substantial development within the continental United States and also have a significant offshore development presence, most likely in India (or China, or both, and more). So it is a very common pattern that some large group of developers here, and a large group in India, must collaborate on building larger-scale systems of systems in the model of the Agile Release Train and Scaled Agile Delivery Model.

While face-to-face planning is by far preferred, it is simply not practical in these cases, as the enterprise must absorb the budget and overhead of flying, say, 100 developers halfway around the world, putting them up in relatively expensive hotels, renting cars, and so on. It's a great notion, but it simply doesn't scale. So there is a need for concurrent, real-time, face-to-face-like release planning that does not incur those costs. Everyone plans together, but from their respective locations. However, even with adequate video, audio, and text-based real-time communications, the time zone difference (typically 9.5-10.5 hours) really exacerbates the planning challenge. After all, which team wants to plan at midnight? Since neither does, some enterprises have developed a joint, concurrent planning model that mitigates the time zone problem, albeit by sharing some of the pain of awkward working hours.

An example is below:

Note 1: This agenda assumes a 10.5-hour time difference between the US and India locations.
Note 2: All agenda items with a white background are joint, shared, real-time sessions held over remote communications. All agenda items with a grey background are non-overlapping sessions (the regions work independently).

Day 1 US

Day 2 US

Day 3 US

Day 1 India

Day 2 India

Day 3 India

Special note: I've never been happy with this six-page agenda format. While it works well for the participants in the individual locations, it's hard for a planner, or anyone else who needs the gestalt of the whole operation, to piece the whole thing together. If any reader can come up with a condensed, legible, moderately aesthetically pleasing, three-page version, pretty-printable (for review and poster-sized printouts in the sessions), where you can see everything at a glance, I'll publish it here and thank the contributor profusely!

(Hint: probably need to do it in excel in a four column spread, with 24 rows, one per hour of day, and blackout shading for respective locations during sleep time.)
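If someone wants a head start, here is a minimal sketch of that idea in Python rather than Excel: it writes a 24-row, four-column hour grid to CSV, with blackout markers for each location's sleeping hours. The 10.5-hour offset matches Note 1 above, but the blackout windows, column layout, and file name are my own illustrative assumptions.

```python
import csv

# Assumed blackout (sleep) windows, in local time; purely illustrative.
US_SLEEP = set(range(0, 7))          # 00:00-06:59 US local
INDIA_SLEEP = set(range(0, 7))       # 00:00-06:59 India local
OFFSET_HOURS = 10.5                  # assumed US-to-India time difference (Note 1)

def india_local(us_hour: int) -> float:
    """Convert a US local hour to the corresponding India local hour."""
    return (us_hour + OFFSET_HOURS) % 24

with open("concurrent_planning_grid.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["US hour", "US status", "India hour", "India status"])
    for us_hour in range(24):
        in_hour = india_local(us_hour)
        us_status = "BLACKOUT" if us_hour in US_SLEEP else ""
        in_status = "BLACKOUT" if int(in_hour) in INDIA_SLEEP else ""
        writer.writerow([f"{us_hour:02d}:00", us_status,
                         f"{int(in_hour):02d}:{'30' if in_hour % 1 else '00'}", in_status])
```

Shade the BLACKOUT cells in the spreadsheet and the overlapping working hours, the candidate slots for the joint sessions, become visible at a glance.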

Agile Release Train Community of Practice

As those of you familiar with Scaling Software Agility, Agile Software Requirements, or this blog know, the Agile Release Train is the mechanism I like to recommend whenever an enterprise needs to harness a significant number of agile teams (5-10-15) to a common mission. And in the larger enterprise, that's pretty often. As I described in Chapter 15 of ASR, in more abstract terms, the ART is used to:

  1. Drive strategic alignment
  2. Institutionalize product development flow

The Agile Release Train is implied in the Big Picture: Scaled Agile Delivery Model, as seen below:

A number of large software enterprises are using this or a similar mechanism (an uber sprint?) to build larger-scale, fully agile programs. In so doing, a number of new best practices at real agile scale are starting to emerge.

My colleague Jorgen Hesselberg is organizing a small community of practice to explore both the challenges and the patterns of success. I know some of the likely participants, and I can assure you there will be some lively discussions about some of the largest agile enterprise initiatives. Topics for this COP are likely to include:

  • organizing trains around value streams
  • preparing for the release planning event
  • running the event
  • release planning at scale (50-100 largely co-located devs and testers) and super scale (100-200 distributed, multi-site, concurrent planning)
  • roles and responsibilities
  • release train governance (keeping the train on the tracks, PMO involvement, etc)
  • metrics
  • coordinating epics across multiple trains

If you’d like to participate, contact Jorgen at jorgen.hesselberg@navteq.com.

If this comes together, I’ll certainly be blogging about the process and results of this strategically interesting, agile-at-scale COP as soon as they become available.

Resource Flexibility in The Agile Enterprise

I received this interesting email from a colleague (who allowed me to share it) a few days back.

“I currently lead a project on how to increase our resource fluidity so that we can effectively assign sufficient manpower to where it matters the most, e.g. working on the highest priority items on the backlog. We acknowledge the need for deeply specialized teams in certain areas and that drastic competence shifts are unrealistic, so the change project aims at finding out how many scrum teams do we need to make more fluid? What competences should these teams have, if they are to be deployed on a wider range of tasks? We also need to address change resistance such as designers or managers being protective of their work domain.

I wonder if you have any advice on how to increase resource fluidity and thereafter managing it.”

Best regards,

— Dr. Mikkel Mørup, Nokia Mobile Phones, Experience Portfolio Management

The email also reminded me of a visual on the same topic that I saw recently, which went something like this:

Matching project backlog to team competencies

Even if we have driven the excess WIP out of the system, even if we can match capacity to backlog, even if we have shorter queues, even if we can build working software in a short timebox, we still have to rapidly adjust resources to match the current backlog; that's a big part of what makes us agile, after all. But of course, it never really matches. So we constantly struggle to make the best of the situation, and yet who wants to be the epic owner (or project manager) for epics 7 and 8 above, or a team member on Team 4 or 5? A number of awkward conversations and suboptimal economic solutions are likely to develop.

To address this problem generally, we need two things:

1)   Feature teams, which have the ability to work across the domains of interest (See feature teams/component teams category)

2)   Individuals with “T Skills”, i.e. people who are deep in one area and broad in many. (See Reinertsen: Principles of Product Development Flow, W12).

As agile as we hope to be, however, this is a journey often measured in years, not weeks or months, and it is sometimes met with resistance from within the enterprise, as Mikkel notes above. Resistance can come from:

–       individuals who are highly specialized experts, and who may even see their knowledge of a domain and specialty as a form of job security

–       managers who control resources associated with a domain and who can be reluctant to share their resources (and implied base of power)

–       individuals or managers who may have lost their technical thirst for "lifelong learning" and are comfortable with their existing skills and knowledge

–       logistics and infrastructure (CI and build environments, branching structures, etc.) that make it difficult to share code bases across teams

I’m posting this as I would like to hear some other opinions on the topic. As a kickstart, however, my initial response to Mikkel went something as follows:

1)   Make the objective clear. It is product development economics that drive us to this particular change vector, and in the end economics wins (or loses) the game for every enterprise. Make the business case based on economics of agility and flow.

2)   Make the value system clear. We value feature teams and T skills the most highly (yes, we value component teams too; but even there T skills are an asset). Embed the value system in the performance review/appraisal system to eliminate ambiguity about our expectations for individual career growth and advancement.

3)   Adopt XP-like practices and values (simplicity, pair programming, collective ownership, single code line, etc.). Hire people with experience in these areas.

4)   Attack the infrastructure unremittingly. The technical blocks must be eliminated, or the rest of the change program will be far less effective.

For you other enterprise agilists out there, do you have thoughts and experiences that you can share?

Prioritizing Features

In the last post, I described a system for estimating features that will appear in my upcoming book. This post, on prioritizing features, is that post's evil twin. Together, they should provide a reasonable strategy for estimating the cost and schedule of a feature, and a systematic way to pick which features will deliver the highest value first, based on the cost of delay. For context, you may want to browse the Big Picture category and the Agile Requirements category.

Introduction

One of the largest challenges that all software teams face is prioritizing requirements for implementation and delivery to the customers. This is certainly a challenge for every agile team at iteration boundaries, and it rises to even greater importance when prioritizing features at the program level. Here, small decisions can have big impacts on implementation cost and timeliness of value delivery.

There are a number of reasons why prioritization is such a hard problem:

  1. Customers are seemingly reluctant to prioritize features. Perhaps this is because they simply "want them all", which is understandable; or perhaps they are uncertain what the relative priorities are; or perhaps they cannot reach internal agreement.
  2. Product managers are often even more reluctant. Perhaps this is because, if they could only get them all, they wouldn't have to prioritize anything and, more importantly, they would be assured of receiving all the ultimate value.[1]
  3. Quantifying value is extremely difficult. Some features are simple “must haves” to remain competitive or keep market share. How does one quantify the impact of keeping market share, one feature at a time?

To assist us in our efforts, we often attempt to assign a Return on Investment (ROI) to each feature by predicting the likely increase in revenue if the feature is available. Of course, investing in determining ROI is most likely a false science, as no one's crystal ball is an adequate predictor of future revenue, especially when you attempt to allocate revenue on a per-feature basis. This is compounded by the fact that the analyst who does the work is likely to develop a vested, personal interest in seeing that the feature is developed. Plus, any product manager or business analyst worth their salt can probably make a case for a great ROI for their feature – otherwise they wouldn't have worked on it to begin with.

In agile, however, the challenge of prioritization is unavoidable – we admit up front that we can't implement (nor even discover) all potential requirements. After all, we have typically fixed quality, resources, and delivery schedule; therefore, the only variable we have is scope. Effective prioritization becomes a mandatory practice and art, one that must be mastered by every agile team and program.

Of course, prioritizing requirements is not a new problem. A number of authors have described reasonable mechanisms for prioritization. Our favorites include:

  • Agile Estimating and Planning [Cohn 2006]
  • Software Requirements [Wiegers 1999]
  • Software by Numbers [Denne and Cleland-Huang 2004]

For those for whom this topic is potentially a determinant of program success or failure, we refer you also to these bodies of work. While we will take a different approach in this post (and the upcoming book), there is certainly no one right way to prioritize, and teams will benefit from differing perspectives on this unique problem.

Value/Effort as an ROI proxy – A First Approximation

As we alluded to above, we have traditionally prioritized work by trying to understand relative Return on Investment: the ratio of the potential return (value) to the effort (cost to implement) for a feature. At least the model was simple:
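Written out as a simple formula (the notation is mine; the post describes the ratio only in words here):

```latex
\text{ROI proxy} \;=\; \frac{\text{relative value of the feature}}{\text{relative effort (cost) to implement it}}
```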

If we could simply establish value and cost – if not in absolute terms, then at least relative to other features – then we have a way to prioritize based on economics. After all, who wouldn’t want to deliver a higher ROI feature before a lower ROI feature? That seemed to make totally intuitive and (apparently) economical common sense.

What’s Wrong with our Value/Effort ROI Proxy?

However, based on a more complete economic framework, it turns out that relative ROI (as we've too simply defined it above) is not an adequate proxy for prioritizing value delivery. One of the more thoughtful and rigorous economic approaches to prioritizing value delivery (sequencing work, in flow terms) is Reinertsen's Principles of Product Development Flow [Reinertsen 2009], which we introduced in an earlier post. These principles describe methods of sequencing work based on the mathematics and underlying economics of lean product development.

We’ll use those principles to describe an enhanced method for prioritizing features. As we do so, we’ll discover a deeply seated flaw in our first assumption – the assumption that a high relative ROI project should naturally have precedence over a lower ROI project.

Instead, what we need to understand is the way in which the economics of our program may be dramatically affected by sequence. For example, the potential profit for a particular high-ROI feature could be less sensitive to a schedule delay than that of a lower-ROI feature. In this case, the lower-ROI feature should be implemented first, followed by the higher-ROI feature. This may not make intuitive sense, but we'll see that it does make economic sense.

Prioritizing Features Based on the Cost of Delay

Since prioritizing value delivery is the key economic driver for a program, we'll need a more sophisticated model to produce better returns. To build the model, we'll make a small detour through some of the fundamentals of queuing theory. After all, as we described in another post, prioritizing value delivery in software development is just a special case of queuing theory, so applying those principles should create a solid foundation for critical decision-making.

Introducing Cost of Delay

As Reinertsen points out, "if you only quantify one thing, quantify the cost of delay."[2] So we'll need to be able to estimate the CoD as part of our more economically grounded approach (more on that shortly). Fortunately, however, we don't have to quantify only one thing. Since we have already outlined an estimating strategy, we'll actually be able to quantify two things: the feature effort estimate, as described in the last post, and the Cost of Delay. Together, these should give us what we need.

In achieving product development flow, Reinertsen describes three methods for prioritizing work based on the economics of Cost of Delay.

Shortest Job First

When the cost of delay for two features is equal, doing the shortest (in our case, smallest) job first produces the best economic returns, as illustrated in the figure below.

Shortest Job First


Indeed, the impact is dramatic: doing the smallest job first has economic returns that are many times better than doing the larger job first. So we arrive at our first conclusion:

If two features have the same Cost of Delay, do the smallest feature first.
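A quick worked example may help (the numbers are mine, chosen only to illustrate the effect). Suppose two features each carry a cost of delay of $10k per week; one takes 1 week to implement, the other 10 weeks. Whichever goes second pays delay cost while it waits:

```latex
\begin{aligned}
\text{Short job first:} &\quad \text{avoidable delay cost} \;=\; 1 \text{ week} \times \$10\text{k/week} \;=\; \$10\text{k} \\
\text{Long job first:}  &\quad \text{avoidable delay cost} \;=\; 10 \text{ weeks} \times \$10\text{k/week} \;=\; \$100\text{k}
\end{aligned}
```

The total work is identical either way, but the wrong sequence incurs ten times the avoidable delay cost.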

High Delay Cost First

If the sizes of the jobs are about the same, then the second approach, High Delay Cost First, prioritizes the jobs with the highest cost of delay. Again, the economics are compelling, as the figure below illustrates.

High Delay Cost First

Of course, this makes intuitive sense in this case as well (not that intuition has always led us to the correct conclusion). For if CoD is a proxy for value, and one feature has more value than another for the same effort, we do it first; we knew that already from our ROI value/effort proxy. So we have our second conclusion:

If two features have the same effort, do the job with the highest cost of delay first.

Weighted Shortest Job First

Now that we have seen the impact, we understand that these two conclusions are quite sensible when the sizes or CoDs of two features are comparable. Of course, we are not manufacturing widgets here. The cost of delay and implementation effort for different software features are highly variable, and often they are only weakly correlated, if at all (some valuable jobs are easy to do, some are hard); that's just the way it is with software. When both the CoD and the effort for features are highly variable, the best economics are achieved when we implement them in order of Weighted Shortest Job First.

In this case, we calculate the relative priority weighting by dividing the CoD by the size estimate. If the CoD and job sizes vary greatly, the differential economics can be dramatic, as illustrated in the figure below.

Weighted Shortest Job First

This, then, is our preferred approach for software development:

If two features have different size and CoD, do the weighted shortest feature first.
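In other words (my notation; the calculation is exactly the division described above):

```latex
\text{WSJF weight} \;=\; \frac{\text{cost of delay}}{\text{job size (effort)}}
\qquad \text{– implement the highest-weight feature first.}
```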

Estimating the Cost of Delay

This seems like a promising decision model for prioritizing features. It is based on solid economics and is quite rational once you understand it. However, we have skipped over one small item: how does one go about calculating the cost of delay for a feature? If we aren't careful, we could fall into the analysis trap we mentioned above – overinvesting in calculating size estimates for features, plus overinvesting in calculating CoD, could lead to too much overhead as well as potential bias on the part of those doing the work. We need something simpler.

We suggest that CoD – so critical to our decision-making criteria – is, in turn, an aggregation of three attributes of a feature, each of which can be estimated fairly readily when compared to other features. They are user value, time value, and information discovery value.

  • User value is simply the potential value of the feature in the eyes of the user. Product managers often have a good sense of the relative value of a feature ("they prefer this over that"), even when it is impossible to determine the absolute value. And since we are prioritizing like things, relative user value is all we need.
  • Time value is another relative estimate, one based on how the user value decays over time. Many features provide higher value when they are delivered early and differentiated in the market, and lower value as they become commoditized. In some cases time value is modest at best ("implement the new UI standard with the new corporate branding"). In other cases, time value is extreme ("implement the new testing protocol prior to the school-year buying season"), and of course there are in-between cases as well ("support 64-bit servers").
  • Information discovery value adds a final dimension, one that acknowledges that what we are really doing is software research and development. Our world is laden with both risk and opportunity. Some features are more or less valuable to us based on how they help us unlock these mysteries, mitigate risk, and exploit new opportunities. For example, "move user authentication to a new web service" could be a risky effort for a shrink-wrapped software provider that has done nothing like it in the past, but imagine the opportunities that such a new feature could engender.

With these three value factors, user value, time value, and information discovery value, we have the final pieces to our prioritization puzzle.
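Expressed as a formula, a simple additive model is consistent with the totals in the example table in the next section, though the post does not prescribe the exact arithmetic:

```latex
\text{CoD} \;\approx\; \text{user value} \;+\; \text{time value} \;+\; \text{information discovery value}
```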

Feature Prioritization Evaluation Matrix

Now we can integrate all this information into an evaluation spreadsheet that we can use to establish the relative priorities of features, based on both effort and CoD. An example is shown in the table below.

| Feature   | User | Time | Info. | Total CoD | Effort | WSJF |
|-----------|------|------|-------|-----------|--------|------|
| Feature A | 4    | 8    | 8     | 20        | 4      | 5.0  |
| Feature B | 8    | 4    | 3     | 15        | 6      | 2.5  |
| Feature C | 6    | 6    | 6     | 18        | 5      | 3.6  |

Legend:
User, Time, and Info. are the components of Cost of Delay, on a relative scale where 10 is highest and 1 is lowest.
WSJF weight is calculated as the total Cost of Delay divided by effort.

Table 1:  Spreadsheet for calculating weighted shortest job first

In our example, it is interesting to note that Feature B – the job with the highest user value (8) and the highest user-value-to-effort ratio (about 1.3) – is actually the job with the lowest WSJF weight, and therefore should be implemented last, not first. The job with the lowest user value (Feature A) actually produces the highest economic return, so long as it is implemented first. So much for our intuition!
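The spreadsheet arithmetic is trivial, but for readers who prefer code, here is a minimal sketch of the same calculation in Python. The feature names and numbers are simply those from Table 1; the additive CoD model is an assumption, as noted above.

```python
# Minimal WSJF prioritization sketch: CoD components and effort are
# relative estimates (1 = lowest, 10 = highest), per Table 1 above.
features = {
    "Feature A": {"user": 4, "time": 8, "info": 8, "effort": 4},
    "Feature B": {"user": 8, "time": 4, "info": 3, "effort": 6},
    "Feature C": {"user": 6, "time": 6, "info": 6, "effort": 5},
}

def cod(f):
    """Assumed additive Cost of Delay model: user + time + info values."""
    return f["user"] + f["time"] + f["info"]

def wsjf(f):
    """Weighted Shortest Job First: total cost of delay divided by effort."""
    return cod(f) / f["effort"]

# Highest WSJF weight goes first.
for name, f in sorted(features.items(), key=lambda kv: wsjf(kv[1]), reverse=True):
    print(f"{name}: CoD={cod(f)}, effort={f['effort']}, WSJF={wsjf(f):.1f}")
# Expected order: Feature A (5.0), Feature C (3.6), Feature B (2.5)
```

Re-running the same calculation at every planning boundary, with refreshed estimates, is all the tooling this model really needs.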

All Prioritizations are Local and Temporal

Reinertsen points out another subtle implication of WSJF scheduling. Priority should be based on delay cost, which is a global property of the feature, and on effort, which is a local property of the team implementing the feature. In other words, a job with a lower relative value may require very little of a specific team's resources, and therefore should be implemented ahead of a nominally higher-priority feature that requires more of that team's resources. This means that all priorities are inherently local.[3]

This occasionally flies in the face of the way we often do things, whereby management sets a global priority for a project, which is then applied across all teams. In that case, a lower-priority task for a high-priority project may take precedence over a high-priority task for a lower-priority project, and the availability of scarce resources doesn't even enter into the equation. We see now that this approach simply doesn't make economic sense.

In addition, we note that our model is highly sensitive to the time element – priorities change rapidly as deadlines approach. For example, "implement the new testing protocol in time for the school-year buying season" could have a time value of 1-2 in the winter prior to the next school year, but could easily be a 10 by May of that year.

One conclusion of the above is that priorities have to be determined locally, and at the last responsible moment. That is the time when we can best assess the cost of delay and the resources available to work on the feature.

Fortunately, in our Big Picture model, we prioritize features on Release (or PSI) planning boundaries. Therefore the cadence we established for our release train will serve us well here, so long as we take the time to reprioritize at each such boundary.

When we do so, we can be confident that our priorities are current – taking into account then-current availability of resources and then-current cost of delay, as well as being based on solid economic fundamentals.

Achieving Differential Value – the Kano Model of Customer Satisfaction

Noriaki Kano, an expert in the field of quality management and customer satisfaction, along with his colleagues, developed a model of customer satisfaction that challenged many traditional beliefs. Specifically, the Kano model challenges the assumption that customer satisfaction is achieved by balancing investment across the various attributes of a product or service. Rather, customer satisfaction can be optimized by focusing on differential features – those "exciters" and "delighters" that increase customer satisfaction and loyalty beyond what a proportional investment would otherwise merit. The Kano model is shown in the figure below.

Kano Model of Customer Satisfaction

The model illustrates three types of features:

  • Basic (must-have) features – features that must be present to have a viable solution.
  • Linear features – features for which the investment is directly proportional to the result. The more you invest, the higher the satisfaction.
  • Exciters and delighters – features that differentiate the solution from the competition. They provide the highest opportunity for customer satisfaction and loyalty.

The primary insight from the Kano model is the position and shape of the lower and upper curves.

The shape of the basic curve is telling. Until a feature is simply "present", investment produces proportional results, but customer satisfaction remains low until that threshold is achieved. Thereafter, however, additional investment produces a less-than-proportional reward. The center point of this curve gives rise to what is often described as the Minimum Marketable Feature (MMF). In other words, for a solution to be considered viable, it must contain the requisite set of MMFs. However, enhancing or "gold plating" any MMF will not produce a proportional economic return.

The position and shape of the exciters and delighters curve tells the opposite story. Because these features are unique, compelling, and differentiated, even a small investment (the area on the left) produces high customer interest and potential satisfaction, and additional investment produces proportionally higher satisfaction still. This is where we get the greatest leverage for our investment.

Prioritizing Features for Differential Value

Given that we have already described a full-fledged Weighted Shortest Job First prioritization model, the question arises as to what additional benefit we can derive from Kano's thinking. There are three strategic takeaways.

First, when competing on features in an existing marketplace, teams should place relatively high user value (and therefore a relatively high cost of delay) on features required to reach minimal parity. This leads us to rule #1:

Differential Value Rule #1 – Invest in MMFs, but never overinvest in a feature that is already commoditized.

Thereafter, the strategic focus should move to placing higher user value on differentiating features – those that will excite and delight the users, those for which competitive solutions have no answer, and those for which an incremental investment produces a nonlinear return. This leads us to rule #2:

Differential Value Rule #2 – Drive innovation by having the courage to invest in exciters.

Finally, and most subtly, when we are forced to engage in feature wars with competitors that may already be ahead of us, it may not make sense to put all our investment into MMFs. After all, our competitors will keep investing too; what makes us think we can catch up? Instead, it may be better to slight some narrow category of MMFs and focus some amount of investment on exciters – even if we have not reached full basic-feature parity.

Experience has shown that customers can be relatively patient with developers when they can a) reasonably anticipate the appearance of adequate MMFs, and b) see the promise of the differential value of the exciters that the team is bringing forward from the backlog. This leads us to our third and final rule of feature prioritization:

Differential Value Rule #3 – If resources do not allow you to compete with the current rules of the game, play a different game.

Summary

In this post we've provided a method for prioritizing features based on a weighted cost of delay. It's not so different from prior models, but it delivers superior economic returns, and it includes attributes of estimated size (cost), potential return, and adjustments for risk (information discovery). We've also described the Kano model, which leads to an appropriate bias for investment in exciters and delighters. It may just be common sense; or perhaps it's common sense buttressed by the real, yet subtle, economics of product development flow.

Comments welcome.


[1] There was once a Dilbert cartoon on this topic. The developer says to the customer, “please tell me what your priorities are so I’ll know what I don’t have to work on.”

[2] Principle of Product Development Flow E3: If you only quantify one thing, quantify the Cost of Delay.

[3] Principle of Product Development Flow F18: Prioritizations are inherently local.

Reference: Reinertsen, Don. The Principles of Product Development Flow: Second Generation Lean Product Development. Celeritas Publishing, 2009.

The Agile Release Train, Strategic Alignment and Product Development Flow

In this blog, in Scaling Software Agility, and in my forthcoming book on agile requirements, I've been writing fairly extensively about the implementation of the Agile Release Train (also see the whitepaper derived from the book) as a means of achieving both strategic alignment and product development flow in the larger software enterprise. It is hard to overstate the importance of the Agile Release Train in helping the enterprise achieve product development flow. I spend a lot of my time helping enterprises achieve the benefits of this model.

Recently, I've been reading (and rereading) Don Reinertsen's new book The Principles of Product Development Flow: Second Generation Lean Product Development. This may be the best book on product development I have read in a decade. I strongly recommend it to anyone who finds my blog or the topic of scaling software agility of interest.

In addition to the depth of content, and its simultaneously light yet rigorous (is that really possible?) economic and mathematical treatment of flow, I like the way the book is organized around eight major themes of product development flow. They are:

1. Take an economic view

2. Actively manage queues

3. Understand and exploit variability

4. Reduce batch sizes

5. Apply WIP constraints

6. Control flow under uncertainty – cadence and synchronization

7. Get feedback as fast as possible

8. Decentralize control

For a meaningful discussion of these themes, I refer you directly to Reinertsen’s book.

Mapping The Agile Release Train to Reinertsen’s Themes

Perhaps not surprisingly, the Agile Release Train maps pretty directly to Reinertsen’s eight primary themes. Understanding this mapping is the key to understanding the criticality and motivation for the ART itself. So to make it easy for you, I’ll map them in this post:

Product Development Flow Theme 1 – Take an economic view.
The ART’s incremental releases substantially improve the ROI of software development by accelerating the release of value to the customer. This helps capture early market share and drives gross margins by delivering features to the market at the time when the market values them most highly.

Product Development Flow Theme 2. Actively manage queues.
The short, frequent planning cycles of the Agile Release Train actively manage queue lengths:

Team backlogs (queues of waiting stories – see the mildly controversial post on this topic) are generally limited to about the amount of work that can be accomplished in a single release increment (or Potentially Shippable Increment). Planning much beyond that is generally not very productive for the teams, because teams know that strategic priorities could change at any release boundary.

Release backlogs (queues of waiting features) are typically limited to those features that can realistically be implemented in the next release or two. Beyond that, product managers understand that they may be over-investing in feature elaboration for features that will never see the light of day.

Portfolio backlogs (queues of waiting epics and future projects) are typically limited to those epics that could likely find their way to release planning in the next six months or so. Too early, or too in-depth investment in business cases for those projects that will not be implemented is a form of waste.

Product Development Flow Theme 3. Understand and exploit variability.
Since a high degree of variability is inherent in software development, frequent re-planning provides the opportunity to adjust and adapt to circumstances as fact patterns change. New, unanticipated opportunities can be exploited by quickly adapting plans; critical paths and bottlenecks become clear, and resources can be adjusted to optimize throughput through the bottlenecks and better avoid unanticipated delays.

Product Development Flow Theme 4. Reduce batch sizes.
Large batch sizes create unnecessary variability and cause severe delays in delivery and quality. The ART reduces batch sizes by releasing to the development teams only those features that are prioritized, elaborated sufficiently for development, and sized to fit within the next release cycle. This avoids overloading the development teams with a significant number of longer-term, parallel development projects, which may or may not ever see the light of day, and which in any case cause multiplexing, overhead, and thrashing. The transport (handoff) batch delay between teams is minimized as well, as face-to-face planning facilitates high-bandwidth communication and instant feedback.

Product Development Flow Theme 5. Apply WIP constraints.
In release planning, teams plan their work and take on only the amount of features that their velocity indicates they can achieve. This forces the input rate (agreed-to, negotiated release objectives) to match capacity (what the teams can do in the release). The current release timebox prevents uncontrolled expansion of work, so the current release does not become a "feature magnet" for new ideas. The global WIP pool, consisting of features and epics in the enterprise backlog, is constrained by the local WIP pools, which reflect each team's current backlog for the current release increment. When WIP is too high, lower-value projects either a) never make it into development, or b) are purged during, or just prior to, release planning.

Product Development Flow Theme 6. Control flow under uncertainty – cadence and synchronization.
Planning – The release train planning cadence makes planning predictable and lowers transaction costs (facilities, overhead, travel). Planning can be scheduled well in advance, allowing participation by all key stakeholders in most planning events and making face-to-face information transport reliable and predictable.

Periodic re-planning (resynchronization) allows us to limit variance and misalignment to a single planning interval.

Releasing – The cadence and synchronization of regular, system-wide integration provide high-fidelity system tests and an objective assessment of project status at regular intervals. Transaction costs are lowered as teams invest in the infrastructure necessary for continuous integration, automated testing, and deployment. Since planning is bottom-up (performed by the teams and based on the teams' actual, known velocity) and short term, delivery becomes predictable. Most of what has been planned should be reliably available as scheduled.

Product Development Flow Theme 7. Get feedback as fast as possible.
The fast feedback of the iteration and release cycle allows us to take fast corrective action. Even within the course of a release increment (or PSI), feedback is no more than two weeks (or one iteration length) away. Small incremental releases to customers allow us to track more quickly to their actual, rather than anticipated, needs. Incorrect paths can be abandoned more quickly (at worst, at the next planning cycle).

Product Development Flow Theme 8. Decentralize control.
Release plans are prepared by the teams who do the actual implementation, rather than by a planning office or project management function. Commitments to the plans are bottom-up, based on individuals' commitments to their teammates and on team-to-team commitments reached during the planning cycle. Once planning is complete, the teams are responsible for execution, albeit subject to appropriate lightweight governance and release management. Agile project management tooling automates routine reporting, so management does not have to slow down the teams to assess actual status.

Summary

That’s it for this post, though I expect I’ll visit the topic again, as it’s certainly going to be a major theme of the next book. In the meantime, you might want to read Reinertsen’s new book, as it’s available now.

Fun with Release Planning

Enterprise release planning is my favorite (business!) activity. Put all the right people in a room for a day or two and fuel them with caffeine and sugar. Present the strategy. Convert the strategy to a vision and a set of next-release objectives. Understand the impact of architectural refactors. Acknowledge deadlines that existed before anyone even entered the session. Factor all that against the teams' actual velocity for new development. Look at the technical debt pile. Discuss and debate. Heat… smoke… then, hopefully… light!

Wow, that can be fun. Fresh off her most recent release planning session, Jennifer Fawcett (agileproductowner.com) muses on her recent experiences in her post Agile Release Planning Musings. I don't know for sure how it went, but I'm told that her boss took her out for many drinks afterward. Could have been to celebrate. Could have been simple encouragement to come back to work the next day. Check it out; there might even be a few chuckles in it for you.

More on the Agile Release Train – Internal vs External Releases

In a number of posts, including Enterprise Agility – The Big Picture (5): The Release Revisited, I've commented on the desirability of separating internal releases (or Potentially Shippable Increments) from external, or General Availability (GA), releases. Although not directly represented in the Big Picture itself, this is the assumption behind the Agile Release Train graphic in the model. It creates a separation of concerns that allows development to work at the fastest possible pace, producing PSIs at an even and fast cadence, while the market and our marketing teams make the appropriate decisions as to what gets released to customers, and when.

In a recent article entitled To Release No More or to Release Always, posted for free download at the Cutter Consortium (note: you'll need to enter the promotion code RELEASEMYTH), Israel Gat of BMC comments on this model as follows:

Think of the in-pipe in this example as engineering and the out-pipe as the business. Engineering can post releases at its own pace. The business can selectively choose from the posted releases. In this paradigm, marketing is not obligated to promote a release upon its completion. Marketing might do so in three months; it might choose to promote the current release with another release due at a later time; it might choose to make a release available on a limited basis; or it might choose never to promote a release.

He then goes on to note an even more potentially aggressive release postulate:

An intriguing question poses itself if you accept this premise of asynchronous operation. If the business is free to determine how it will promote a release in the market, why should engineering be bound to producing releases in the traditional manner? Can everyone benefit from relaxing the constraints that usually surround a release and move toward a more flexible, fluid release concept?

Ryan Martens, Rally Software's CTO, commented on both this article and, indirectly, on the method behind the madness of the Agile Release Train in his recent post at Agile Commons. Ryan notes:

As a result, I coach most agile teams to start by making sure their “internal release” cadence is twice as fast at marketing, operations and the market is used to.  In this way you get a release where you can gain feedback and steer the “external release” to market better.

Couldn’t have said it better myself.

–Dean

Enterprise Agility-Big Picture (5): The Release Revisited

In the post Enterprise Agility – The Big Picture (5): The Release, we described that seminal value-delivery construct, the agile Release (see Big Picture below).


Agile Big Picture - Internal vs external release?

But as with all the other oversimplifications in the Big Picture (which at times seems to create almost as many questions as answers), comments and questions have been raised about what makes a Release a Release. In the book, I described the release on the Agile Release Train in terms of an "internal release milestone", as opposed to a GA (General Availability) release.

The reasons for this are many; for further perspective, I refer you to Chapter 18: Systems of Systems and the Agile Release Train, and Chapter 21: Impact on Customers and Operations. In Chapter 21, I described at length how attempting to synchronize the team's internal development cadence with external release milestones is an over-constrained problem. Dates and features never line up with market events; public relations has an information cycle of its own; customers do not always want to adopt your new software at the rapid pace at which it now evolves; and so on (see Chapter 21).

To address this problem, I suggested that the development teams build their own engineering cadence in a manner conducive to the most efficient development operations, and erect a GA firewall between the internal development cycle and the distribution model. This is illustrated in the picture below:

Agile Release Train GA Firewall

This creates a separation of concerns which allows entities on both sides of the firewall to do their best work without constraining each other.

In any case, with respect to the Big Picture, I indicated that the Release milestone is just a Release, and that has created questions from readers. From a purist perspective, the release illustrated in this picture is really an internal release, which may or may not be a GA release (though I've decided not to update the picture, for fear that would be more confusing to the casual observer). If the internal release is not a GA release, then the question becomes: why even have it? The answer is that the internal release model is integral and seminal to the agile enterprise model:

  1. From a portfolio planning perspective, the internal release frequency is controlled by the teams, at a timeframe that makes face-to-face, enterprise-level release planning feasible from a travel and cost-overhead perspective. Re-planning happens at these boundaries and, hopefully, only at these boundaries.
  2. From a technical perspective, if for whatever reason it is difficult to assure true system-level quality at iteration boundaries, then this milestone represents a true Potentially Shippable Increment at the system level: it meets all the reliability, performance, standards-compliance, and compatibility requirements of a GA release. It's fast, it's frequent, it is of high quality, and it is available!
  3. From a human resource perspective, teams perform best when they work together long enough to be in flow (the performing phase of the Forming – Storming – Norming – Performing model of group development), and that typically takes a release cycle or two, so the release boundaries are the best times to adjust teams to address the current bottlenecks in value delivery. This provides teams with the stability they need for high performance, while simultaneously giving the enterprise the resource flexibility it needs to achieve agility.

Hopefully, this clarifies some of the questions and comments on the Big Picture Release.

Successes in Release Planning: An Agile Enterprise Tipping Point

In the "Release Planning" category and series on this blog, I described an enterprise best practice designed to coordinate the efforts of some number of agile teams in order to deliver a larger-scale system in a somewhat predictable and systematic manner. I also described this process from a practitioner's perspective in the Agile Journal.

The first time I remember seeing such a session in largely this format was in an effort led by Jean Tabaka of Rally Software at BMC in about 2005. Since then, I've seen this best practice employed to good effect in a number of larger-scale agile transformations, and I have facilitated a number of such sessions myself. As can be seen in the series, release planning is an intense, face-to-face process in which team members meet to establish a common vision, set objectives for a near-term internal or external release, identify and coordinate their interdependencies, plan the iterations that constitute the release, and, finally, commit to the release objectives.

In one instance, we combined a couple of days of agile training with an initial release planning session in an "all in" kickstart for about 100 people. I recently noted a blog post on this topic from some of the participants, and it looks like the process – and, more importantly, agile in general – is really taking hold in that enterprise. The post is "The Tipping Point: Release Planning – #3", and I've taken the liberty of providing a few highlights of what success starts to look like, below:

  • “Vision – We were able to convey enough of a roadmap to declare a common long term goal. Every team had an equal focus on this goal and they understood their purpose. In past release planning sessions, some teams weren’t working on the highest priorities. This caused frustration and doubt.
  • Resources – We kicked off release planning with a known set of teams and resources. We didn’t spend valuable hours trying to figure out who would be on what team. We still had a few resource shifts that weren’t ideal but it was limited to one area.
  • Product Owners – We went into release planning with each iteration team having their most effective product owner …. we hired a few very promising product owners and we elevated a few past iteration team members to high potential product owners.
  • Dedicated Agile Build Team – We finally convinced folks that it was important to have a dedicated build team. Up until this release planning session, builds did not receive the valuable resources they needed.
  • Pre-Planning Preparation – We went into the release planning session with each team’s PM, PO and SM having a good understanding of what needed to happen. The teams had a backlog and they even spent time with the entire iteration team validating their thoughts before the planning session started! In addition, each iteration team brought along at least one of their architects to further improve our results.
  • No Major Surprises – We completed our release planning session without encountering any major surprises. No last minute major misses that could be used by the Agile Naysayers as reasons for why Agile can’t work. The old “Agile doesn’t plan long term” myth.”

This is a pretty insightful post, so I recommend reading it in its full context.

Meeting Deadlines: What’s wrong with this Release Plan #2? (AERP8)

In my last post, Meeting Deadlines: What’s wrong with this Release Plan #1, I noted that the ability to commit to and deliver on near term deadlines is a reasonable and necessary accomplishment of a professional agile team and enterprise. To achieve this, we depend a lot on the release planning function (see Release Planning category on this blog and this article in the Agile Journal). In the last post, we looked at a flawed release plan from one perspective. In this post, we’ll look at the same plan from another perspective.

In Waltzing with Bears, DeMarco and Lister posit that software teams historically aren't very good at estimating and meeting deadlines. In my personal experience, that has absolutely been the case: the projects I have tracked over the years are typically late by about 50-100% of the initial schedule. In other words, six months more typically becomes nine or twelve. (Here's a fun exercise: do the math to see what an old-time, twelve-month plan becomes. Then reflect back on your last big waterfall project!)

This is illustrated with the probability-of-meeting-the-deadline graph from their book, which looks roughly as follows:

Deadline Probability Curve

Don't worry too much about the exact shape of this curve – it isn't a science, and I can't draw worth a hoot anyway – but the point is that the probability of delivering before the deadline is basically zero. This is because we've most likely identified all the tasks that must be done on the project, and they can't realistically be accelerated, since all resources are already allocated.

And since the area under the curve (the total probability of meeting the plan) is 100% (assuming the project will eventually be delivered at some point), the probability that it will be delivered on any one day predicted in advance, including the deadline date, cannot be 100%. In other words, while on-time delivery could happen, it isn't very likely, and it would be a foolish thing to build external dependencies on. In addition, since the project can't be delivered early, as we discussed above, the logical conclusion is that it will probably be delivered late. Sadly, our own experience has aptly illustrated that this is the most likely outcome.

DeMarco and Lister hold that the proper thing to communicate is not the deadline itself, but the probability of meeting any particular date over time. However, that is a pretty sophisticated notion for most of today’s enterprises.

So are we forever doomed to bad outcomes relative to plan?
Do we need to consider a career change?

Not if we take the lessons learned and adjust our predictive model.

Now, back to my earlier post. We saw there that the first plan proposed by the team was flawed:

[Image: revised-release-plan.jpg]

Relative to the risk profile curve above, the flaws are pretty obvious, i.e.:

Original Release Plan

The probability of meeting the deadline is small, since it cannot possibly be delivered early and there is no allowance for the inevitable gotchas.

In response, the team came up with a revised plan as follows:

[Image: revised-release-plan1.jpg]

This plan was better received at the release planning session.

Why might this plan work when the other likely would not? Because it builds in an allowance for deadline risk mitigation, as the following figure shows:

Revised plan with higher probability overlay

This has the effect of moving the probability curve to the left. Since the third iteration is lightly loaded, it might be possible to deliver a little early, thereby starting the upward curve earlier. Moreover, with the lightly loaded third iteration having the capacity to take on unanticipated tasks and absorb any underestimated tasks from iterations one and two, the release is much more likely to be delivered on time.

The probability is still not 100% – it can't be 100% for any particular date – but the area under the curve indicates that there is now a higher probability of meeting the date than of being late!
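To make that argument concrete, here is a minimal Monte Carlo sketch of the effect. The task counts, estimate distributions, and buffer size are entirely my own illustrative assumptions, not data from the plans above; the point is only to show how a lightly loaded final iteration shifts the on-time probability.

```python
import random

random.seed(42)
ITERATION_CAPACITY = 10.0   # assumed points of capacity per iteration
ITERATIONS = 3

def on_time_probability(planned_load, trials=10_000):
    """Fraction of trials in which the planned work fits within the
    three-iteration timebox, given that each task may overrun its
    estimate (multiplicative lognormal noise, an assumed distribution)."""
    capacity = ITERATION_CAPACITY * ITERATIONS
    on_time = 0
    for _ in range(trials):
        actual = sum(est * random.lognormvariate(0.0, 0.35) for est in planned_load)
        if actual <= capacity:
            on_time += 1
    return on_time / trials

# Plan 1: all 30 points of capacity committed up front.
full_plan = [3.0] * 10
# Plan 2: only ~23 points committed; iteration 3 is left lightly loaded as a buffer.
buffered_plan = [3.0] * 7 + [1.0] * 2

print(f"Fully loaded plan, on-time probability:      {on_time_probability(full_plan):.0%}")
print(f"Lightly loaded iteration 3, on-time prob.:   {on_time_probability(buffered_plan):.0%}")
```

Under these assumed distributions, the buffered plan comes in on time the vast majority of the time, while the fully loaded plan misses more often than not – exactly the leftward shift of the probability curve that the revised plan aims for.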

Perhaps that’s as good as it gets. This is Research and Development after all. And it’s all software, even at that.

(Refer to Waltzing with Bears for a deeper and more meaningful treatment of the probability curve and its effect on risk management.)