Hiren Doshi is doing a good job of describing various considerations in the feature vs. component teams discussion, and the practical application of the three-tier Big Picture model for larger and distributed enterprises, at his blog at www.practiceagile.com. His ongoing, experience-based perspective is lucid and practical, so I’ve added his blog to the blogroll. For those interested in this thread, check out his posts on the feature vs. component teams debate.
Prompted by the discussion of feature and component teams, a reader recently sent in the following question:
“I have a user story that, on its face, appears to have been structured in such a way that it simply cannot be independent. That is, the story itself cuts across teams, and even across product lines. If the story were to be broken down any further, it would only be into its constituent tasks. Moreover, the cross-cutting nature of this story appears to necessitate creating a virtual team of sorts, as it will take individuals from many teams to coordinate. And we have identified many new stories of this nature.
The team involved is rather large (300+) and spans several countries. So I suppose what we may be looking at is one very large project team with many sub-teams and that many stories may impact many teams. My first inclination is to suggest that if a story such as the one above exists, then it should be decomposed into smaller stories that each team can work towards. However, I’m struggling with the lack of delivering anything tangible on a per story basis.”
Here was my response:
Your first inclination is generally the best one. At enterprise scale, many features span teams. This is indeed the common case, especially as the customer value stream moves up the stack, to the point where customers are using multiple products, web services, or whatever, in new and innovative ways. However, there should be no reason to create new teams, or programs, just to coordinate content of this type.
Don’t worry so much about the fact that each team-level story doesn’t appear to deliver user value by itself (although you should take every opportunity to see if you can define team-level stories that deliver value to their users too), as that is taking the user value story metaphor too far. Since you can be dang certain that the bigger feature is delivering value to your customer, it doesn’t matter that some team is having to build an adapter-API-widget-whatever, or has to refactor some code that they wouldn’t otherwise have to refactor. It’s just part of what they do to deliver larger end-user value.
Indeed, this is such a common occurrence in the enterprise that the Big Picture was designed in part to address it: Here’s how it’s handled.
- A Product Manager or program manager who understands the need for the bigger feature expresses it to the teams at the release planning boundary.
- During release planning, teams break the feature into smaller stories that they deliver to the baseline in the iterations. (Note: in the lean and scalable agile requirements model, I don’t call these breakdowns tasks, as tasks are reserved for the individual activities to coordinate completion of a story.)
- A scrum of scrums, a release management team, the system team, or the program or product manager, helps coordinate the delivery of the feature over time. The system team, or some like construct, performs end-to-end level testing of the new feature.
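The flow above can be sketched as a minimal data model. This is just an illustration of the idea, not any particular tool; the feature, story, and team names are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """A team-level breakdown of a larger feature (not a task)."""
    name: str
    team: str
    done: bool = False

@dataclass
class Feature:
    """A cross-cutting feature expressed at the release planning boundary."""
    name: str
    stories: list = field(default_factory=list)

    def is_delivered(self) -> bool:
        # The feature delivers end-user value only when every
        # contributing team has finished its stories.
        return all(s.done for s in self.stories)

# During release planning, the feature is split into stories that
# land on each affected team's own prioritized backlog.
feature = Feature("Share my video")
feature.stories = [
    Story("Expose upload API", team="Platform"),
    Story("Add share button", team="UI"),
    Story("End-to-end feature test", team="System"),
]

backlogs = {}
for story in feature.stories:
    backlogs.setdefault(story.team, []).append(story)

# Teams deliver their stories over the iterations; coordination
# (scrum of scrums, system team, etc.) tracks feature completion.
for story in feature.stories:
    story.done = True

print(feature.is_delivered())  # True
```

Note that no team was created or reorganized here: the feature simply fans out as content onto existing backlogs, and "done" rolls back up to the feature level.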
If you are organized this way, you should be able to handle any number of larger, spanning features. In the Big Picture, features and stories are just “content” for the teams’ prioritized backlogs. The process just runs and runs, without the need for reorganization.
Whether currently organized by feature, component, product or service, if you attempt to reorganize every time a cross-cutting feature comes along, everyone will spend too much time figuring out where their desk is or who the product owner is, and too little time cutting code.
Note: also see the following post on the same topic.
In my last post, I reintroduced this topic, describing the conundrum of organizing large numbers of agile teams to better address value delivery for new end-user features and services. I described two basic approaches, feature teams and component teams, and some arguments for each.
I’m still waiting for some feedback from a couple of others before I conclude with my personal recommendations, which I’ll post in the next day or so. In the meantime, Craig Larman courteously responded with his comments on my comments on his work (Larman and Vodde 2009). Since WordPress, as capable (and free!) as it is, tends to bury comments at a level where you really have to want to see them to find them (not sufficiently “in your face” from my perspective), I thought I’d elevate Craig’s comments to this post, as he provides some clarifications on his work and my interpretation, as well as some more expansive thinking on the topic. Craig’s comments from the prior post appear verbatim below.
Note: please also see additional comments from others on this and the prior post.
(someone pinged me that this topic was under discussion). thx for raising the topic.
bas and i did not write or intend to write that feature teams are the only rational way to organize teams. rather, the choice of any organizational design (including team structure) can be evaluated in terms of the global goal for optimization (e.g., maximizing fast value delivery, minimizing disruption, etc). from a toyota or lean perspective and the first agile principle, the global goal relates to early/fast delivery of value to real customers.
in the context of that measure, if one has design 1 with more delay and handoff and design 2 with less delay and handoff, design 2 is ‘better’ — with respect to that particular measure.
the ideal cross-functional cross-component feature team minimizes delay and handoff, whereas (not an opinion, just by pure systems logic..) component teams (which per definition are doing only part of the work for a customer feature) have more delay and handoff and create mini-waterfalls with single-specialist groups passing WIP items to each other (analysts, architects, programmers, testers, …).
but feature teams are only the ‘better’ model depending on the global goal — which is worth identifying in any discussion. and as you point out, we note that an org design such as feature teams may optimize flow of value, but it raises other problems (which are solvable). from a systems thinking perspective however, optimizing around addressing one of the other issues (e.g., minimizing org disruption) is a sub-optimization from the the perspective of value delivery… but not a suboptimization if the group agreed minimizing disruption was the system’s global goal.
if one visits an org that was organized around component teams and then they successful transition to feature teams, and then one asks if delivery of value is faster, the answer will be “yes.” it is also common when investigating such an org that they will say that planning and coordination are simpler, and that the multi-skilled learning that arises when working in feature teams has been useful to increase their degrees of freedom and to reduce the “truck number” of bottleneck specialists.
i look forward to your future comments, and those of others.
Highsmith, Jim. 2004. Agile Project Management. Addison-Wesley.
Leffingwell, Dean. 2007. Scaling Software Agility: Best Practices for Large Enterprises. Addison-Wesley.
Larman, Craig, and Bas Vodde. 2009. Scaling Lean and Agile Development. Addison-Wesley.
While continuing my work with a number of larger enterprises facing the cultural change, new practice adoption, and organizational challenges of a large-scale agile transformation, the topic has again come up as to how to organize large numbers of agile teams to effectively implement agile and optimize value stream delivery.
For the smaller software enterprise, it’s usually no issue at all. Teams will naturally organize around the few products or applications that reflect the mission. The silos that tend to separate development, product management and test in the larger enterprise (hopefully!) do not exist. The teams are probably already collocated, rather than being distributed across disparate geographies. Creating an agile team is mostly a matter of deciding what roles the individuals will play, and rolling out some standard training.
At scale, however, like most other things agile, the problem is dramatically different, and the challenge is to understand who works on what, and where. How do we organize larger numbers of teams in order to optimize value delivery of requirements? Do we organize around features, components, product lines, services, or what? While there is no easy answer, the question must be explored, because so many agile practices depend on that decision: how many backlogs there are and who manages them, how the vision and features are communicated to groups of teams, and how the teams coordinate their activities to produce a larger solution.
Organize Around Components?
In Scaling Software Agility, I described a typical organizational model whereby many of the agile teams are organized around the architecture of a larger-scale system. There, they leverage their technical skills and interest and focus on building robust components – making them as reliable and extensible as possible, leveraging common technologies and usage models, and facilitating reuse. I even called the teams define/build/test component teams, which is (perhaps) an unfortunate label. However, I also noted that:
“We use the word component as an organizing and labeling metaphor throughout this book. Other agile methods, such as FDD, stress that the team be oriented around features, still others suggest that a team may be oriented around services. We use component rather liberally, and we do not suggest that every likely software component represents a possible team, but in our experience, at scale, components are indeed the best organizing metaphor.”
I wasn’t particularly pedantic about it, but noted that a component-based organization is likely to already exist in the enterprise, and that wasn’t such a bad thing, given the critical role of architecture in these largest, enterprise-class systems.
In any case, in a component-based approach, development of a new feature is implemented by the affected component teams, as the figure below illustrates.
In this case, the agile requirements workflow puts backlog items for each new feature on each of the affected component teams. These teams minimize multiplexing across features by implementing them in series, rather than in parallel. Moreover, they are able to aggregate the needs of multiple features into the architecture for their component, and can focus on building the best possible, long-lived component for their layer.
Perhaps this is reflective of a bias towards an architecture-centric view of building these largest-of-all-known software systems. For there, if you don’t get the architecture reasonably right, you are unlikely to achieve the reliability, performance, and longer-term feature delivery velocity goals of the enterprise. If and when that happens, the enterprise may be facing a refactoring project that could be measured in hundreds of man-years. A frightening thought for certain.
In addition, there are a number of other reasons why component teams can be effective in the agile enterprise:
- Because of its history, past successes and large scope, the enterprise is likely already organized that way, with specialists who know large-scale databases, web services, embedded operating systems and the like, working together. Individuals – their skills, interests, residency, friendships, cultures and lifestyles – are not interchangeable. It’s best not to change anything you don’t absolutely have to.
- Moreover, these teams may already be collocated with each other, and given the strong collocation benefits in agile, organizing otherwise could potentially increase team distribution and thereby lower productivity.
- Technologies and programming languages typically differ across components as well, making it difficult, if not impossible, to apply pairing, collective ownership, continuous integration, test automation, and other practices critical to high-performing agile teams.
- And finally, at scale, a single user feature could easily affect hundreds of practitioners. For example, a phone feature like “share my new video to YouTube” could affect many dozens of agile teams, in which case organizing by feature becomes a nebulous and unrealistic concept.
Organize Around Features?
However, our current operating premise is that agile teams do a better job of focusing on value delivery, and this creates a contrary vector on this topic. Indeed, as traditionally taught, the almost universally accepted approach for organizing agile teams is to organize around features.
The advantages of a feature team approach are obvious: teams build expertise in the actual domain and usage model of the system, and can typically accelerate value delivery of any one feature. The team’s core competence becomes the feature (or set of features), as opposed to the technology stack. The team’s backlog is simplified: just one or two features at a time. That has to promote fast delivery of high-value-added features!
Other authors support the feature-focused approach as well. For example, in Agile Project Management, Highsmith states:
“Feature based delivery means that the engineering team build features of the final product.”
Of course, that doesn’t mean the teams themselves must be “organized by feature,” as all engineering teams in the end build features for the final product, though perhaps that is a logical inference. Others, including Larman and Vodde, have more directly (and adamantly) promoted the concept of feature teams as the only rational (well, perhaps; see Craig Larman’s comments) way to organize agile teams. They note:
“a feature team is a long-lived, cross-functional team that completes many end-to-end customer features, one by one… advantages include increased value throughput, increased learning, simplified planning, reduced waste…”
Larman and Vodde state specifically that you should “avoid component teams,” and indeed devote an entire chapter to good coverage of the topic. However, the authors also point out several challenges with the feature team approach, including the need for broader skills and product knowledge, concurrent access to code, shared responsibility for design, and difficulties in achieving reuse and infrastructure work. Not to mention the potential dislocation of some team members as the organization realigns around these boundaries.
So What is the Conclusion?
So what is an agile enterprise to do in the face of this conundrum? I’ve asked a few additional experts for their opinion, and I’ll be providing my opinion as well, in the next post.
Highsmith, Jim. 2004. Agile Project Management. Addison-Wesley.
Leffingwell, Dean. 2007. Scaling Software Agility: Best Practices for Large Enterprises. Addison-Wesley.
Larman, Craig, and Bas Vodde. 2009. Scaling Lean and Agile Development. Addison-Wesley.
Note: This post is part of a continuing series where I’ve been discussing the critical role that release planning has in enterprise agility.
These seminal release planning events are one of the key mechanisms the enterprise can apply to use its emerging agile practices to drive a coordinated and directed strategy into the marketplace – a foundation for the Agile Enterprise.
I’ve discussed release planning extensively in the book and have more recently emphasized Release Planning as an answer to a number of critical questions in the following posts:
In a nutshell, release planning is to the enterprise what the iteration is to the team: the basic cadence, a critical best practice, and the organizational metaphor that holds all the other practices together. For without iterating (working software every two weeks), the teams are not agile. Without effective release planning, effective agile teams may well drive the enterprise to distraction – lots of pressure for improvement and increased productivity – but the enterprise itself may not yet have the ability to more quickly drive the right solutions to the right customers in the marketplace.
However, as with all things agile, when one considers the need for even doing release planning, which implies a certain amount of cost and overhead, we are reminded of some fundamental agile (manifesto) principles:
- The most efficient and effective form of communication is face-to-face
- The best requirements, architectures and designs emerge from self-organizing teams
- At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly
And of course, the basic mantra,
“what is the simplest thing that can possibly work?”
These principles drive us to the conclusion that a periodic, one- or two-day, face-to-face meeting that gathers the right stakeholders together is the enterprise’s “Main Event,” an event intended to address the following objectives:
- Build and share a common vision
- Communicate market expectations, features and relative priorities for the next release(s)
- Reflect and apply lessons learned from prior releases
- Estimate and plan, in real time, the next release
- Gain a commitment from the team to deliver the next release
- Communicate any governing requirements such as standardization, internationalization, usability, etc.
- Communicate and coordinate any major architectural initiatives, refactors and the like
- Evolve, refine and communicate the product roadmap which the company uses to communicate to its customers and other stakeholders
In addition, for those enterprises new to agile, this is an important opportunity to communicate to all stakeholders (particularly the executives) what the current priorities are, how resources are allocated and reallocated based on priorities, and how the new agile processes are applied to deliver software to the market.
Frequency of the Release Planning Event
The frequency of the event is dependent upon the company’s agile iteration and release patterns. We’ve recommended iterations of a maximum length of two weeks and internal releases (internal synchronization points that are potentially shippable, but decoupled from the general availability/customer consumable release) of a maximum length of 60-90 days. (By decoupling the internal releases from the external ones, we provide a degree of freedom that allows internal release planning to be scheduled and periodic, and the engineering teams to have a rapid delivery cadence, independent of any more public commitments (SLAs, public announcement venues, etc) the enterprise must make.)
So for many companies, an internal release cadence consisting of three “development” iterations plus one “hardening iteration” could look as follows:
With this (fairly rapid) cadence, release planning events would generally be scheduled immediately before or after the internal release milestones. In practice, we’ve found that it doesn’t really matter exactly when, so long as a) the frequency generally matches that of the internal release and b) it happens before the last responsible moment, i.e., before the release begins. We’ve also observed that for larger organizations, the 60-day internal release frequency is a big bite, and often we see it at a more leisurely 90-120 days. This may seem long to an agilist, but it is pretty fast for an enterprise that might have been releasing as infrequently as annually.
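The cadence arithmetic is simple enough to sketch. The sketch below assumes two-week iterations and a "3 dev + 1 hardening" internal release (56 days, i.e., roughly the 60-day cadence discussed above), with the planning event held as each internal release begins; the start date is illustrative:

```python
from datetime import date, timedelta

ITERATION = timedelta(days=14)   # two-week iterations
DEV_ITERATIONS = 3               # "development" iterations
HARDENING_ITERATIONS = 1         # plus one "hardening" iteration

def internal_release_dates(start: date, releases: int):
    """Yield (planning_event, release_milestone) date pairs.

    Each internal release spans 3 dev + 1 hardening iteration
    (56 days), and the next planning event falls on the prior
    release milestone, so the cadence is strictly periodic.
    """
    cadence = (DEV_ITERATIONS + HARDENING_ITERATIONS) * ITERATION
    for i in range(releases):
        planning = start + i * cadence
        milestone = planning + cadence
        yield planning, milestone

# Example: three internal releases on a periodic calendar.
for planning, milestone in internal_release_dates(date(2008, 1, 7), 3):
    print(planning, "->", milestone)
```

Because the internal releases are decoupled from external (GA) releases, these dates can be fixed on the calendar well in advance, which is what makes the planning event schedulable and periodic.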
With this as context, I’ll be describing the preparation and agenda for the release planning event in posts throughout this week.
I learned yet another new thing this week while attending a board meeting at Ping Identity, providers of software and services for Internet Single Sign On. Ping’s Development VP, Bill Wood, has been practicing and advancing agile development since 2003, and he always impresses me with his innovations and advanced thinking with respect to agile practices. Agile teams pride themselves on constantly improving their practices, and even after four years, Ping is constantly innovating in order to improve productivity, quality and time to market.
Yesterday, he described Ping’s “Swarm” model of agile development (which seems to be working well, as exhibited by Ping’s steady stream of frequent, on-time releases). Ping has development teams in the United States, a substantial presence in Moscow, and now a couple of developers in Ireland as well. In Bill’s view, when he finds really talented and highly productive people (we know that some developers achieve 5-10X the productivity of others) with in-depth product knowledge, he is loath to lose them due to their personal needs for relocation; plus, he occasionally leverages the serendipity of hiring an already world-class expert who just doesn’t happen to live in any of the existing development centers. To this end, he has advanced his agile model, which he calls “Swarm Agile,” to use these people productively, even though they are remote and highly distributed. He characterizes it below.
Swarming Agile – Highly Distributed Team with a Working Agile Cadence
- Strongly based on Agile tenets (emphasizing people and working software)
- Applies remote talent with overlap of sync time
- Individual work situations will vary:
  - Those largely working from home
  - Those choosing to always work on-site
  - Those required to work on-site for short periods of time
  - Those always required to work on-site
- Basics of the model – a combination of working at home with:
  - Team-wide daily Scrums via telecom, plus local standups
  - Face-to-face (f2f) meetings at least once per iteration
  - A large, continuous on-site presence during hardening tails
The travel model is highlighted in the graphic below.
You might think that with the nasty time zone problem, the daily meeting would be impractical, and we all know how critical that meeting is in keeping the teams in sync. But remarkably, with a little flexibility on the part of some, he accomplishes this, as the graphic below illustrates.
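The trick is finding the window where the sites’ (slightly stretched) working hours intersect. The sketch below is purely illustrative: the sites match Ping’s locations, but the working windows and UTC offsets are my assumptions, not figures from the post:

```python
# Assumed working windows per site, expressed in UTC hours.
# Local hours and offsets are illustrative: Denver stretches to an
# early 7:00 start (UTC-7), Moscow to a 18:00 finish (UTC+3).
windows_utc = {
    "Denver":  (14, 24),   # 07:00-17:00 local, UTC-7
    "Moscow":  (5, 15),    # 08:00-18:00 local, UTC+3
    "Ireland": (9, 18),    # 09:00-18:00 local, UTC+0
}

def overlap(windows):
    """Return the shared (start, end) hour range in UTC, or None
    if the working windows never intersect."""
    start = max(lo for lo, hi in windows.values())
    end = min(hi for lo, hi in windows.values())
    return (start, end) if start < end else None

print(overlap(windows_utc))  # (14, 15)
```

With these assumed hours, there is a one-hour daily window at 14:00 UTC: 7:00 a.m. in Denver, 5:00 p.m. in Moscow, 2:00 p.m. in Ireland. That is exactly the "little flexibility on the part of some" that makes the team-wide daily Scrum workable.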
In addition, the need for constant communication is augmented by tools for that purpose as well.
All in all, an interesting agile model that again emphasizes “People over Process” and adjusts the process to leverage the talent the organization needs in order to excel in its marketplace.
Within the context of the larger enterprise, distributed development is the norm, not the exception. After all, if one has hundreds of developers and testers, the likelihood that they are all in collocated component teams, or that the enterprise could immediately relocate them to be so, is essentially zero. Moreover, scale alone drives distribution: we know that beyond about 100 feet from one’s desk, the opportunity for casual conversation decreases roughly exponentially, so even if the enterprise is located on a rare “single campus,” it is, in fact, highly distributed in agile terms.
Recently, when asked whether distribution alone can be an Achilles heel to enterprise agile adoption, I answered “absolutely not,” as I’ve seen it work extremely well even in the face of what some would consider extraordinarily distributed teams. For example, the BMC quality and productivity case study by QSM (see the post at https://scalingsoftwareagility.wordpress.com/2007/09/21/newly-release-quantitative-data-on-enterprise-agility-benefits-at-bmc-software/) contains the following quote:
“Especially noteworthy is the fact that BMC has found a ‘Secret Sauce’ that enables its SCRUM process to succeed in spite of teams being geographically dispersed. Other companies experience higher defects and longer schedules with split teams, but BMC does not. I’ve never seen this before. The low bug rates also result in very low defect rates post-production, which is great news for their customers who demand a high quality product.”
On the long flight home from the enterprise where I was asked that question, I found myself second guessing my own comment, as it flies in the face of so much that is considered critical, or even mandatory, for true team-level agility. And yet, as I reflect on my personal experiences, I feel comfortable in boldly stating that “of all the challenges enterprises face in adopting large-scale agile, the problem of distributed teams ranks fairly low on the list.”
And then I had to ask myself why. I suspect there are a number of reasons that distribution alone is not as critical to enterprise agility as I once assumed. These include:
- The define/build/test team (see Chapter 9 of SSA), even when distributed, is still a TEAM. Its members have all the skills they need to deliver working software. In addition, agile teams are both empowered and challenged to address their own challenges. If they are distributed, so be it; they must find their own ways to communicate so as to be as effective, or nearly as effective, as a collocated team. In other words, it isn’t necessarily management’s problem if it is impractical or undesirable to relocate team members or change technical assignments to facilitate collocation; rather, it is the TEAM’S problem, and they will solve it for themselves.
- Their mission is clear. No matter the distribution, the team’s mission is clear – to deliver working software every two weeks. The clarity of the mission focuses the team on what really matters, that is the state of the actual software, and they will necessarily do what they have to do to help assure the outcome.
- Individuals live where they live for many reasons. There must be some significant intangible motivation associated with the fact that the team members are in an environment of their own choosing, and yet they are still participating in a high-performance, though distributed, software TEAM.
- We are in the age of the virtual community, and we have all adapted to enhanced forms of communication: Skype, WebEx, IM, email, webcams, shuttle diplomacy and the like (see Chapter 19). While everyone understands the virtues of face-to-face communication (agile manifesto principle: “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation”), at the same time, no one derides distributed open source communities, which in some facets are paragons of agility, for the fact that their members are not collocated, and no one challenges the quality, productivity or delivery velocity of that model.
So yes, collocated is better, and there is nothing quite like the face-to-face communication of a collocated agile team; and yes, an enterprise should work to facilitate collocation wherever feasible; but no, it isn’t mandatory, and in all probability, it won’t even be the largest challenge the enterprise faces in its agile transformation.