Agile Requirements Information Model (4) – Subset For Agile Programs (a): Features and Feature Backlog

Note: This is the fourth in a series of posts describing, in more rigorous terms, the Lean, Scalable Requirements Information Model for the Agile Enterprise (AE RIM) that underlies the Big Picture Series. As I’ve noted, I’ve been collaborating in this series with Juha-Markus Aalto, Director of Operational Development for the Nokia S60 Software Unit, where a variant of the model is being developed and applied on a very large scale development project, one affecting thousands of practitioners.

In the last post, the Subset for Agile Teams, we described the model as it applies to Agile Project Teams. In this next post, we move up one level in the Big Picture (see graphic below) and describe how the model is applied at the Program Level.

Agile Enterprise Big Picture with Program Level Highlighted


But first, for convenience, we’ll take a quick look again at the base model.

Base Requirements Information Model


Note: as the model has been elaborated in earlier posts, it has been refined with additional model elements and relationships, but I’ll continue to use the summary (base) version of the model here for simplicity.

The Program Level

As can be seen in the Big Picture, we assume that a significant number of Agile Teams are collaborating in a Program building an enterprise-class System of substantial complexity. We described the teams’ use of a quintessentially agile model subset (Story, Task, Backlog Item and Acceptance Test) in the last post. At the Program Level, we’ll introduce the additional elements of the Feature (the Backlog Item at this level) and the Nonfunctional Requirement.

capture-program-elements-highlighted

Feature

In describing the features of a product or system, we leave the world of small team agile (where most everything is a Story) and take a more abstract, higher level view of the system of interest. In so doing we have the security of returning to a more traditional description of system behavior. There is certainly some comfort in knowing that while agile changes most everything, it needn’t change absolutely everything we have learned before, including what has been well described in software requirements texts (for example, Managing Software Requirements: A Use Case Approach, by Leffingwell and Widrig). In that text, Features were described as:

Services provided by the system that fulfill a user need.

They lived at a level above software requirements and bridged the gap from the problem domain (understanding user needs) to the solution domain (specific requirements intended to address the user needs) as the graphic below shows:

"Requirements Pyramid" from Managing Software Requirements, by Leffingwell and Widrig, Addison-Wesley 2003.

"Requirements Pyramid" from Managing Software Requirements, by Leffingwell and Widrig, Addison-Wesley 2003.

We also posited in that text that a system of arbitrary complexity can be described with a list of 25-50 such features. (Note, however, that in the Big Picture, Features are sized to fit within release boundaries, so that a release can be described in terms of a list of just a few features).

That simple rule of thumb allowed us to keep our high level descriptions exactly that, high level, and simplified our attempts to describe complex systems in a shorter form. Of course, in so doing we didn’t invent either the word “Feature” or the usage of the word in that text. Rather, we simply fell back on industry standard norms to describe products in terms of, for example, a Features and Benefits Matrix that was often used by product marketing to describe the capabilities and benefits provided by our new system. By applying this familiar construct in agile, we also bridge the language gap from the agile project team/product owner to the system/program/product manager level and give those who operate outside our agile teams a traditional label (Feature) to use to do their traditional work (i.e. describe the thing they’d like us to build).

Even more conveniently, we also provide a structure to organize agile teams around: the Feature Team, an agile team organized for the purpose of delivering a set of features to end users. See Larman and Vodde, Scaling Lean and Agile Development; Highsmith, Agile Project Management; and Leffingwell, Scaling Software Agility. (Note: in the latter text, though, I described teams more commonly as agile component teams, but noted that these teams could be organized around Features, Components, Subsystems, or whatever made sense in the system context).

Whether by serendipity or intent, the ability to describe a system in terms of its proposed features and the ability to organize agile teams around the features gives us a straightforward method to approach building large-scale systems in an agile manner.

The Feature Backlog

Returning to the Backlog and Feature model elements:

capture-backlog-and-feature-highlights

We can see two obvious implications.

  1. Features are realized by Stories. At release planning time, Features are decomposed into Stories, which are the team’s implementation currency. (In a large system, there may be tens or even hundreds of Stories necessary to realize a Feature.)
  2. Planned Features are stored in a Backlog, in this case the Feature (Release) Backlog.
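These two relationships can be sketched as a tiny data model. This is an illustrative sketch only; the class and field names (`Feature`, `Story`, `FeatureBacklog`) are my own shorthand, not part of the model’s formal notation:

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    text: str  # sized to fit within a single iteration

@dataclass
class Feature:
    text: str
    stories: list = field(default_factory=list)  # Stories that realize this Feature

@dataclass
class FeatureBacklog:
    features: list = field(default_factory=list)  # planned Features for a Release

# At release planning time, a Feature is decomposed into Stories
backlog = FeatureBacklog()
labels = Feature("Introduce “Labels” as a conversation-organizing metaphor")
labels.stories.append(Story("Create a new label"))
labels.stories.append(Story("Assign a label to a conversation"))
backlog.features.append(labels)
```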

In backlog form, Features are typically expressed in bullet form, or at most, in a sentence or two. For example, you might describe a few features of Google Mail something like:

  • Provide “Stars” for special conversations or messages, as a visual reminder that you need to follow-up on a message or conversation later.
  • Introduce “Labels” as a “folder-like” conversation-organizing metaphor.

Expressing Features in Story Canonical Form

As agilists, however, we are also comfortable with the suggested, canonical Story form described in the last post:

As a <role> I can <activity> so that <business value>

Applying this form to the Feature construct can also help focus the Feature writer on a better understanding of the user role and the user’s need. For example:

As a modestly skilled user, I can assign more than one label to a conversation so that I can find or see a conversation from multiple perspectives

Clearly there are advantages to this approach. However, there is also potential for confusion, since Features then look exactly like Stories, simply written at a higher level of abstraction. But of course, that’s exactly what they represent!
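As a quick illustration, the canonical form is just a fill-in-the-blanks template. A hypothetical helper (the function and parameter names are mine, not an established API) makes the three slots explicit:

```python
def canonical_form(role: str, activity: str, value: str) -> str:
    """Render a Feature or Story in the canonical form."""
    return f"As a {role} I can {activity} so that {value}"

feature_text = canonical_form(
    role="modestly skilled user",
    activity="assign more than one label to a conversation",
    value="I can find or see a conversation from multiple perspectives",
)
```

The same helper works for Stories, which is exactly the point: only the level of abstraction differs.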

Testing Features

In the last post, we also reintroduced the agile mantra all code is tested code and illustrated the point with the base model:

capture-stpry-with-acceptance-test

This illustrates that a Story cannot be considered Done until it has passed one or more acceptance tests.

At the Program level, the question naturally arises as to whether Features also deserve (or require) Acceptance Tests. The answer is, most typically, yes. While story level testing should assure that the methods and classes are reliable (unit testing) and that the stories suit their intended purpose (functional testing), the fact is that a Feature may span multiple project teams and many, many (tens to hundreds of) stories.

While ideally each project team would have the ability to test the Feature at the system level, that is often not practical (or perhaps even desirable; after all, how many teams do we want continuously testing the same feature?). Many individual project teams may not have the local resources (test bed, hardware configuration items, other applications) necessary to test a full system. Moreover, at scale, many teams are developing APIs, infrastructure components, drivers and the like based upon the architecture of the solution, and they may not even have an understanding of the full scope of the system Feature that drove their Story to exist. (If you don’t believe or agree with this, test the hypothesis by asking some newly hired teammates working on a new feature or component of a really big system to describe how or why a system-level feature works.)

In addition, there is a myriad of system-level “what if” considerations (think of the Unified Process’s alternate use-case scenarios) that must be tested to assure overall system reliability, and some of these can only be tested at the full system level.

For this reason, Features typically require one or more functional acceptance tests to assure that the Feature meets the user’s needs. In practice, many of these tests may be implemented at a higher level in the hierarchy, at the Big Picture’s System Team level (see below).

system-team1

To reflect the addition of Acceptance Tests to Features, it is necessary to update the information model with an association from Feature to Acceptance Test, as the graphic below shows.

Revised Information Model, Adding Acceptance Tests to Features


In this manner, we illustrate that every Feature requires one or more Acceptance Tests, and a Feature cannot be considered Done until it passes them.
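The Done rule can be stated directly in code. A minimal sketch, with illustrative names of my own choosing:

```python
from dataclasses import dataclass, field

@dataclass
class AcceptanceTest:
    name: str
    passed: bool = False

@dataclass
class Feature:
    text: str
    acceptance_tests: list = field(default_factory=list)

    def is_done(self) -> bool:
        # At least one Acceptance Test must exist, and every one must pass
        return bool(self.acceptance_tests) and all(t.passed for t in self.acceptance_tests)

stars = Feature("Provide “Stars” for special conversations")
stars.acceptance_tests.append(AcceptanceTest("star persists across sessions"))
# Not Done yet: the test exists but has not passed
stars.acceptance_tests[0].passed = True
# Now every Acceptance Test passes, so the Feature is Done
```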

Next: Nonfunctional Requirements in Agile

Well, that covers the Feature model element at the Program level. However, we have yet to explore another critical Program model element, the Nonfunctional Requirement. Since this post is already way too long, I’ll leave that for the next post in this series.


Enterprise Agility – The Big Picture (14a): On Agile Portfolio Management and the Legacy Mindset

Note: In the post Enterprise Agility: The Big Picture, we introduced an overview graphic intended to capture the essence of enterprise agility in a single slide. In a series of continuing posts (see the Big Picture Category on this blog) we have been working our way from the bottom (stories, iterations and the like) to the top, where the enterprise vision and portfolio management reside. In this post, we’ll start a miniseries to describe the last big icon to the left: Agile Portfolio Management.

big-picture-with-portfolio-highlighted

The Big Picture 14 - Agile Portfolio

A Particularly Relevant Case Study

Recently, while researching this topic for another purpose, I ran across an excellent case study from DTE Energy called Establishing an Agile Portfolio to Align IT Investments with Business Needs. The article was written by Joseph Thomas and Stephen Baker and presented at Agile 2008. Unfortunately, I did not attend this presentation while I was at Agile 2008, but I found the whitepaper published in the proceedings. It appears to be available at http://submissions.agile2008.org/files/CD-ThomasBaker-EstablishAgilePortfolio-Paper.pdf and I highly recommend it for those readers of this blog that have product or asset portfolios (and you know who you are).

Here was the introductory “grabber” for me:

“Those who implement agile software development and agile project management in a traditional corporate environment may encounter legacy corporate and IT processes that reflect legacy mindsets and cultures. These remnant processes, mindsets, and cultures represent opportunities to improve the systemic value that agile approaches are capable of enabling.”

This is a reminder that team agility does not automatically engender enterprise agility; in almost all cases, the team is just the beginning. The article is surely relevant to the enterprise perspective, because when it comes to scale, DTE Energy is right “up there”:

“DTE Energy, a Fortune 300 company, is a diversified energy company involved in the development and management of energy related businesses and services nationwide with $9 billion in annual revenue and 11,000 employees. DTE Energy’s Information Technology Services (ITS) organization, now consisting of over 900 people, provides leadership, oversight, delivery, and support on all aspects of information technology (IT) across the enterprise.”

Illustrating the maturity of thought reflected in the article, DTE Energy’s IT teams have been implementing and extending agile practices in their enterprise since 1998, moving through CMM levels II and III via agile practices. They have almost a decade of agile adoption upon which to build their ongoing learning. As reflected in the case study, DTE also illustrates Kaizen Thinking (continuous thirst for improvement) that is the hallmark of top enterprise agilists. Evidently it is just this thirst that drives the agile initiative ever upward until it hits the level of portfolio planning and decision making at DTE.

The Transformation Starts on the Ground

The title of the article, along with the maturity of DTE’s agile implementation efforts, also reflects the fact that building the agile enterprise is a “ground up” exercise. The primary work must start with the development teams themselves. They build all the code; if they aren’t agile, no one is. Thereafter, dealing with the many challenges at the corporate level (with the Project Management Office (PMO) often being the control room of the mother ship) is the next significant hurdle to be addressed. For it is there that projects and programs are initially formed, budgets and resources are determined, governance is established, and longer term (and not particularly agile) external commitments are typically made. If that level is not successfully transformed, many of the potential benefits of the agile enterprise (time to market, productivity and quality, ROI, revenue and profitability growth) may be substantially diminished.

However, this next set of enterprise challenges is most easily addressed after the teams have first demonstrated the substantive productivity and quality improvements of the methods. That way, they’ll be standing on their accomplishments and can serve as an object lesson in what agile could do, if only unfettered.

But for readers of this blog series, that later time is now, and it’s time we addressed the big “portfolio” icon on the top left of the Big Picture.

Legacy Mindsets Can Hinder Potential Enterprise Benefits

DTE’s whitepaper starts with a discussion of the various legacy mindsets that can inhibit achievement of the full benefits of the agile enterprise. These include: “widget engineering”, “control through data”, “order taker mentality” and more. This is an important underpinning for the enterprise transformation we are driving, because one can’t recognize solutions to a problem if one does not understand the problem. These mindsets must be understood and addressed before much agile progress can be made at the portfolio level.

We’ll discuss these legacy mindsets, and more from our own experiences, in the next post.

Enterprise Agility- The Big Picture (13): Portfolio Vision & Epic

Note: In the post Enterprise Agility: The Big Picture, we introduced an overview graphic intended to capture the essence of enterprise agility in a single slide. In a series of continuing posts, too numerous to highlight here (see the Big Picture Category on this blog for an orderly summary), we have been gradually working our way from the bottom (stories, iterations and the like) towards the top, where the enterprise vision originates. In this post, we’ll briefly discuss the Epics that drive so much of the behavior of the agile teams at Levels 1 and 2.

Epics, Features and Stories

In an earlier post, we took the liberty of putting labels on the different types of system behaviors (stories, features, epics) that drive our hierarchical, Big Picture model.

big-picture-epic

Agile Enterprise Big Picture 13- Epics

I also took care to note that there is no “UML for describing agile things,” so I’ve simply tried to apply usage models that appear to be fairly common, providing us with at least a tentative language to describe what we are trying to describe. In so doing, I picked the word “Feature” to represent things much bigger than a User Story, as this is mostly consistent with other agile uses of the term, including Feature-Driven Development and the organization of delivery models based on the agile feature teams construct. It is also consistent with the language we used to describe such system services in earlier software requirements texts, including my own: “a feature is a service provided by the system that fulfills a user need” (Managing Software Requirements: A Use-Case Approach, Addison-Wesley, 2003).

Sizing of Features and Stories

When it comes to sizing, we sized User Stories arbitrarily so as to fit in an iteration (leaving no dangling participles of work at the end of the iteration). This is standard agile practice and is a point of agreement in the various methods. For the Feature, we simply expanded the paradigm to this larger service class and again forced Features to be sized arbitrarily to fit inside an Internal or External Release boundary. This helps assure that teams focus on larger-scale value delivery while again, not leaving the user hanging with an incomplete, non-holistic chunk of functionality. In addition, this forces Product Owners and Product Managers into the same incremental thinking (what’s the simplest thing that can possibly work for the user to fulfill that need, and how soon can we deliver it….?) that agile teams use (with XP being the most extreme case of incremental-ism). Indeed this one-piece-of-user-value-at-a-time is the most basic construct of agility at the team level and we build on it continuously to create the agile enterprise.

Epics – Managing Relative Investment

As we reach the top of the pyramid, however, and discuss the highest class of user benefit, the Epic, there is no need to force arbitrary sizing, as these are portfolio vision items that take multiple releases to deliver. Indeed, some epics take years to deliver, even in agile enterprises. For example, Epics such as “video streaming” and “integrated on-line music stores” will likely consume hundreds of person-years for mobile phone manufacturers and will be delivered in stages over a number of years.

The question that arises is: given their large scope and size, even if Epics are prioritized, how do we know when we will deliver what? The answer has to come from the Portfolio Management function (represented by the Portfolio icon on the left). There, decisions are made on a relative investment basis, based on the business case for the Epic. In other words, enterprises decide what percentage of their total resources they want to invest in an Epic to achieve the ROI of its business case. Given that data, teams can decide at Release Planning time (where they divide the Epic into Features that deliver user value) what percentage of their resources they can assign to the Epic. With resources as a given, the highest priority Features for the Epic are delivered first, gated only by the resources available. Then, Release by Release, the Epic makes its way to market in Feature-size chunks, with reprioritization occurring at every Release Planning boundary.
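The relative investment decision reduces to simple arithmetic. A sketch with made-up numbers (the epic names, point totals and percentages are purely illustrative):

```python
def allocate_capacity(total_points: int, investment_mix: dict) -> dict:
    """Split a release's total capacity across Epics by the percentage mix
    decided by Portfolio Management."""
    assert abs(sum(investment_mix.values()) - 1.0) < 1e-9  # mix must total 100%
    return {epic: round(total_points * share) for epic, share in investment_mix.items()}

mix = {"video streaming": 0.40, "on-line music store": 0.35, "other features": 0.25}
allocation = allocate_capacity(total_points=400, investment_mix=mix)
# → {'video streaming': 160, 'on-line music store': 140, 'other features': 100}
```

At each Release Planning boundary, the mix can be revised and the capacity reallocated, which is exactly the reprioritization described above.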

Summary

In this Epic post (sorry!), we’ve introduced the Epic container as the vehicle to describe the major initiatives that we need to deliver to the market. In so doing, however, we’ve left at least one stone unturned: how Epics can be sized and estimated (assuming that they can be). We’ll discuss that in the next post.

In addition, we’ve opened another Pandora’s box at the top of the Big Picture: “what the heck is Agile Portfolio Management, and how does an enterprise achieve it?” So, fortunately or unfortunately, the Big Picture series will continue a little while into the future, with a few upcoming posts on Agile Portfolio Management.

Agile Enterprise Big Picture Revised, (again!)

For those following the Big Picture series on the blog, you’ve probably already noted that the Big Picture and its elaborations have been evolving over time. I suppose it would have been nice to “get it right the first time,” but in fact that wasn’t possible, for without publishing and feedback, there can be no “right.” So I publish and refactor, and publish and refactor, and publish and refactor. In the meantime, readers have had the benefit of whatever value I’ve been able to deliver along the way.

After all, this is the Agile Enterprise Big Picture we are trying to describe!

In any case, based on reviewers’ and previewers’ comments, I’ve come to the conclusion that I was missing one more element necessary to describe how the enterprise model works. This element is the “system team,” a team which runs iterations on the same cadence and release train as the agile development teams and carries system-level integration and evaluation responsibilities. I did describe it in the book, but I hadn’t included it here for fear of overloading the graphic. I’ve now included it:

capture-big-picture-tip-jan-2009

Big Picture Revised (Again!) Jan 2009

and I’ll describe it in the next post.

I’m also going to insert this revised graphic back on the first introductory post, in case any future readers start there. But I’m not going to bother to redo Posts 2-9, because that isn’t the “simplest thing that can possibly work”.

Enterprise Agility–The Big Picture (4): Backlog

Note: In the post Enterprise Agility: The Big Picture, we introduced an overview graphic intended to capture the essence of enterprise agility in a single slide. In prior posts, we’ve discussed Teams, Iterations and the Agile Product Owner. To see the full series, select the “Big Picture” blog post category. In this post, we’ll describe three types of Backlog [callout (4)].

big-picture-4-backlog

Big Picture 4 - Backlog

One of the difficult and interesting aspects of working with software process models is their lack of semantic precision. Based on my experiences working with the UML team in the late 90s and my responsibilities for the RUP thereafter, I’m familiar with the difficulty of establishing a common semantic structure for software things. It takes collaboration, consensus and process maturity, as well as some sort of governing authority, to establish agreement. Little such infrastructure exists today in the agile community. That’s a bit of a handicap in describing these methods, and perhaps a measure of their relative immaturity (not to confuse maturity with efficacy; after all, the waterfall model is really mature!). At the same time, that’s probably one of the reasons we like agile: it’s not too late to explore and develop new methods and new meaning.

In this context, I would describe the word “Backlog” as the simplest and yet most overloaded term in agile. It’s not intended to be complicated; it really just means “stuff that needs to be done.” In its simplest form, it is primarily a repository for un-done user stories, that agile invention of value-added objects that we use to describe things our system needs to do for the user. (See Chapter 17 of Scaling Software Agility or Mike Cohn’s User Stories Applied for a deeper treatment of user stories.)

However, in enterprise-class development, things get more complicated, and effective agile practice gets more complicated too. In one very large scale software development case, I saw a schema that had about six or seven different backlog types; while that seems complicated, it seemed to work well in that company’s increasingly agile practice.

In the Big Picture, I’ve taken a stab at some simple clarifying labels for three common types of backlog that scale to the enterprise. However, the reader should be aware that there is no “backlog label consensus” in the industry that I am aware of, and I’m simply using these labels as I and some others have applied them. In any case, the labels matter less than the principles and it’s the hierarchical principle that I hope to make clear. In the Big Picture, I’ve indicated a three-level requirements hierarchy to hold three types of backlog. They are:

The Iteration (or Story) Backlog – holding user stories and the like that are generally intended to be implemented by a component team in their code baseline in the context of an iteration

The Release (or Feature) Backlog – holding features that deliver system-level value that typically affect multiple component teams and typically span iterations

The Portfolio (or Epic) Backlog – a placeholder for capturing and discussing the larger scale initiatives that an enterprise has, or intends to have, underway. Epics often affect multiple component teams, multiple systems, and even multiple products. Epics typically span iterations, releases, and sometimes years!
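The three-level hierarchy can be sketched as nested containers. Again, the class names are just illustrative labels for the hierarchy, and the example content is invented:

```python
from dataclasses import dataclass, field

@dataclass
class Story:      # Iteration (Story) Backlog item: fits in one iteration
    text: str

@dataclass
class Feature:    # Release (Feature) Backlog item: spans iterations
    text: str
    stories: list = field(default_factory=list)

@dataclass
class Epic:       # Portfolio (Epic) Backlog item: spans releases, sometimes years
    text: str
    features: list = field(default_factory=list)

epic = Epic("video streaming")
playback = Feature("basic playback controls")
playback.stories.append(Story("play and pause a live stream"))
epic.features.append(playback)
```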

I’ll describe the first and hardest working of these, the Iteration (User Story) Backlog, in the next post.

Enterprise Agility–The Big Picture (2): Iterations

Note: In the post Enterprise Agility-The Big Picture, we introduced an overview graphic intended to capture the essence of enterprise agility and the Agile Release Train in a single slide. In this post, we’ll describe the iterations [callout (#2)] that are the fundamental building block of enterprise agility. For elaboration on earlier numbered callouts, select the “Big Picture” blog post category.

In this post, we’ll discuss the Iteration, the heartbeat of agility. While it isn’t appropriate here to elaborate in detail on how teams achieve the iteration, Chapter 11 of Scaling Software Agility, Mastering the Iteration, is posted right here on the blog for easy access to that discussion. But for now, we’ll just focus on the Big Picture, repeated with updated callouts below.

big-picture-2-iterations1

Big Picture 2-Iterations

Iterations and the Agile Release Train

The graphic illustrates an example of the Agile Release Train concept that I described in Chapter 18 of SSA. To summarize, the Agile Teams are organized around a standard iteration length, and share start and stop boundaries so that code maturity is comparable at each iteration-boundary system integration point. Of course, agilists always recommend continuous daily integration (Chapter 14 of SSA), and this graphic doesn’t mean to imply that isn’t happening. However, it does give credence to the fact that it may be quite difficult to achieve continuous integration (including full automated regression testing) across all teams, at least early in the agile adoption lifecycle, and at least it is forced at the iteration boundaries in this model.

Iteration Length

Iterations have a standard pattern: plan, implement stories, review (demo and retrospective). In the Big Picture, the iteration lengths for all teams are the same, as that is the simplest organizational and management model (Chapter 18 of SSA). As for length, most have converged on two weeks as a standard. The logic is as follows: realistically, there are only four natural, calendar-based choices: 1 week, 2 weeks, 4 weeks, and a calendar month. Due primarily to iteration overhead (planning, demo, retrospective, etc.) and the occasional holiday, one week is too short for many teams (though I am aware of large scale systems being built in one-week iterations, and that length is also quite typical for XP teams). Four weeks or one month is simply too long, as there are not enough chances to fail before some major release commitment. In other words, the lessons learned are too far apart to make the mid-course corrections necessary to land the release on time. That leaves just one choice: two weeks.

However, you’ll also note that the graphic doesn’t call out an iteration length. I am aware of different teams using this basic Agile Release Train model that apply iterations of one, two and four weeks, respectively, so if you don’t like the two week choice, after you and I are done arguing, you can still apply the model.

Number of Iterations per Release

You may also note that we’ve illustrated four development iterations (indicated by a full iteration backlog) and one hardening iteration (indicated by an empty backlog). This is somewhat arbitrary as well, and there is no fixed rule for how many times a team iterates prior to an internal or external release boundary. On the other hand, if you assume standard two week iterations and the multi-iteration pattern shown here, you’ll arrive at an internal release boundary about every ten weeks. In practice, many teams apply this model with five development iterations and one hardening iteration per release, creating an internal release cadence of a fully shippable increment about every 90 days, which is perhaps a bit more natural and corresponds to the likely maximum external release frequency for some enterprises.
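The cadence arithmetic above is simple enough to state as a one-liner (the iteration counts and lengths are the illustrative values from the text):

```python
def release_cadence_weeks(dev_iterations: int, hardening_iterations: int,
                          iteration_weeks: int) -> int:
    """Weeks between internal release boundaries for a given iteration pattern."""
    return (dev_iterations + hardening_iterations) * iteration_weeks

# Pattern shown in the Big Picture: 4 development + 1 hardening, 2 weeks each
ten_week_cadence = release_cadence_weeks(4, 1, 2)    # → 10 weeks
# Common variant: 5 development + 1 hardening, 2 weeks each
ninety_day_cadence = release_cadence_weeks(5, 1, 2)  # → 12 weeks, roughly 90 days
```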

In any case, the length and number of iterations per release are up to each enterprise and in point of fact, the differences are not all that critical, so the Big Picture model is largely immune to the team’s actual selection.

We’ll discuss the Role of the Product Owner and the iteration backlog in the next posts.

Enterprise Agility–The Big Picture (1): Agile Teams

Note: In the post Enterprise Agility-The Big Picture, we introduced an overview graphic intended to capture the essence of enterprise agility and the Agile Release Train in a single slide. We repeat it here with an annotation callout.

big-picture-1-agile-teams

Big Picture (1): Teams

In this post, we’ll describe the Agile Teams [callout (#1)] that are responsible for creating all that high quality software.

You can see from the graphic that we’ve indicated some number (<=10) of agile teams that collaborate on building the bigger system. Since each is an agile team, it has a maximum of ten or so members and includes all the roles necessary to Define/Build/Test (see Chapter 9 of Scaling Software Agility) the software for its component or feature. The roles are Scrum/Agile master, Product Owner and the Team itself. The Team includes fully dedicated developers, test and test automation experts, tech leads, and support from component-level architects, SCM/build/infrastructure support personnel, or indeed whomever it takes, such that the team is fully capable of defining, developing, testing and delivering working and tested software for its component into the system baseline.

These teams are most typically organized around a software component or a feature. Most enterprises will have a mix of both, some component teams focused on infrastructure and architectural components, and some feature teams focused on larger scale, current, value-delivery initiatives. Agile Teams are self-organizing and tend to reorganize themselves continuously based on the work in the backlog so the teams are more dynamic than static. (That’s one of the reasons the open space environment works more effectively).

We’ve indicated on this slide that there are typically a maximum of 5-10 or so such teams cooperating on building a larger system or subsystem (the system, application or product domain). This isn’t a hard and fast rule, but experience has shown that even for VERY large systems, the logical partitions defined by the system or product family architecture tend to cause “pods” of developers to be organized around the various domains. This implies that perhaps 50-100 people must intensely collaborate on building their “next bigger thing” in the hierarchy. And as we’ll discover later, this is also about the maximum size for face-to-face, collaborative Release Planning at the system or subsystem domain level. So while this construct and the numbers are somewhat arbitrary, it does track real world examples and is a fairly natural way to organize for large scale agile development.

Of course, even that’s an oversimplification for a really large system, as there are likely to be a number of such larger domains, each contributing to the portfolio (application suite, larger system, or whatever). From a purely mathematical scalability standpoint, ten such domains, each consisting of ten component teams, could be the agile organization containers for up to 1,000 practitioners. In practice, this is about the largest, highly cooperative development project I’ve personally seen. While there are certainly much larger numbers of practitioners in some really large software enterprises, it is not typically the case that all, or even most, must contribute cooperating code. Rather, it is far more likely that the teams are organized around somewhat stand-alone product lines or applications with relatively thin interfaces.

So this big picture model appears to serve us well in describing an organizational and process model that could work for teams of many hundreds of software practitioners, who must collaborate frequently and intensely to deliver a large scale system in an agile manner. You’ll also notice a Release Management Team a little higher in the graphic. This is covered in Chapter 18 of SSA, and it will also be discussed in a post later in this series.

In the next post (Big Picture #2, later this week), we’ll discuss the heartbeat of the agile enterprise, Iterations.

Enterprise Agility–The Big Picture

(Note: The Big Picture graphic below has been revised during this series of posts. I try to keep the latest version here so you may see some differences if you follow the series from front to back).

I spend a fair amount of time working with executives, trying to give them the “big picture” of what their software organization, requirements, process and delivery model would look like after they adopt enterprise agility and implement the Agile Release Train (Chapter 18 of Scaling Software Agility). My thinking is that if I can paint the “after” picture in their minds without spending all day at it, then it will be easier for them to understand the types of work they need to do to achieve it.

For some time, I’ve struggled to come up with a graphic that communicates the high-level model of enterprise agility as simply as the waterfall graphic communicated its stage-sequenced model. I’ve made a number of attempts at it, including a few in the book, but I don’t feel that I ever really captured the essence in a single graphic. (Though I’m a writer at times, I’m a far more visual learner).

Recently, I worked with Matthew Balchin and Richard Collins at Symbian Software Ltd. (and later, with Juha-Markus Aalto of Nokia, Nov-Jan 2009) to build a “single slide” big picture that we could use in a number of presentation contexts. Here is a modified result.

big-picture-00

The Big Picture (updated Mar 2009)

I tested an earlier version of this graphic recently with a group of software executives contemplating an agile transformation. We spent a solid hour just discussing the graphic. It seemed pretty effective in helping them understand the intended result, with far fewer words and slides.

Since it seemed to work but is not quite self-explanatory, I will annotate it extensively in some upcoming posts, in the hope that it might help others struggling with the same communication challenge.