Splitting Epics, Features and User Stories

It will come as no surprise to any agilist that one of the hardest parts of agile development is splitting valued, useful work into stories small enough to fit in a sprint. Team after team has commented on this challenge, and teams are constantly looking for ways to make it easier, or at least for a variety of techniques to draw on. Step back for perspective and you quickly recognize that splitting all user-value work, whether business or architecture epics, user features, or user stories, is actually the primary working metaphor for agile development in general, as building valuable, working, tested increments of code in a short timebox, whether a sprint or a release, requires it. If you can’t do that, then you can’t leave a timebox with potentially shippable code; instead, you have “hanging chads” littering the development or master code line, and you can’t make a sharp turn (be agile) without first finishing what you should have finished in the last sprint.

So we split and split and split again. The better we are at splitting, the more agile we become.

It’s an important topic, and I’ve provided hints and techniques for splitting user stories and architectural epics in Agile Software Requirements. That’s the best I could do given my experience and what I’ve gleaned from others.

However, there is more to be written on this topic, and my colleague Alex Yakyma, from Kyiv, Ukraine, has been elaborating on ways to split stories on his website at www.yakyma.com. Alex is an experienced agilist, mathematician and coder (the parts of my brain I used to use for coding have apparently atrophied over time, oh well…), so he has much in-depth perspective to share through categorization and examples. He has divided the problem into five categories (types of things to be split). The first post, describing incremental implementation of complex algorithms, can be found here: Splitting User Stories. Part 1: Incremental Development of Complex Algorithms.

Today he posted the second post, Evolving Rich UI.

Future posts are intended to cover:

Splitting User Stories. Part 3: Incremental Architecture.

Splitting User Stories. Part 4: Simple Steps in Implementing Complex Scenarios.

Splitting User Stories. Part 5: Evolving APIs, Protocols, Interfaces.

If you are an agile developer or architect in the trenches, you might want to keep your eye on this series.

Refactoring and Software Complexity Variability: A Whitepaper

My friend and colleague, Alex Yakyma, from Kyiv, Ukraine, wrote this interesting whitepaper, which describes how, based on the underlying mathematics, software complexity tends to be inherently higher than one might think. He also describes how refactoring, our key weapon in this battle, can be used to continuously manage complexity over time and thereby keep our maintenance burden within controlled bounds.

Here’s the abstract.

The inherent complexity of software design is one of the key bottlenecks affecting speed of development. The time required to implement a new feature, fix defects, or improve system qualities like performance or scalability dramatically depends on how complex the system design is. In this paper we will build a probabilistic model for design complexity and analyze its fundamental properties. In particular we will show the asymmetry of design complexity which implies its high variability. We will explain why this variability is important, can and has to be efficiently exploited by refactoring techniques to considerably reduce design complexity.

Here’s the whitepaper: Refactoring and Software Complexity Variability. Feedback would be very welcome. Also, you can ping Alex directly at alex@yakyma.com.

Resource Flexibility in The Agile Enterprise

I received this interesting email from a colleague (who allowed me to share it) a few days back.

“I currently lead a project on how to increase our resource fluidity so that we can effectively assign sufficient manpower to where it matters the most, e.g. working on the highest priority items on the backlog. We acknowledge the need for deeply specialized teams in certain areas and that drastic competence shifts are unrealistic, so the change project aims at finding out how many scrum teams do we need to make more fluid? What competences should these teams have, if they are to be deployed on a wider range of tasks? We also need to address change resistance such as designers or managers being protective of their work domain.

I wonder if you have any advice on how to increase resource fluidity and thereafter managing it.”

Best regards,

— Dr. Mikkel Mørup, Nokia Mobile Phones, Experience Portfolio Management

The email also reminded me of a visual on the same topic that I saw recently, which went something like the following:

Matching project backlog to team competencies

Even if we have driven the excess WIP out of the system, even if we can match capacity to backlog, even if we have shorter queues, even if we can build working software in a short timebox, we still have to rapidly adjust resources to match the current backlog; that’s a big part of what makes us agile, after all. But of course, it never really matches. So we constantly struggle to make the best of the situation, and yet who wants to be the epic owner (or project manager) for epics 7 and 8 above, or a team member on Team 4 or 5? A number of awkward conversations and suboptimal economic outcomes are likely to develop.

To address this problem generally, we need two things:

1)   Feature teams, which have the ability to work across the domains of interest (See feature teams/component teams category)

2)   Individuals with “T Skills”, i.e. people who are deep in one area and broad in many. (See Reinertsen: Principles of Product Development Flow, W12).

As agile as we hope to be however, this is a journey often measured in years, not weeks or months, and it is sometimes met with resistance from the enterprise, as Mikkel notes above. Resistance can come from:

–       Individuals who are highly specialized experts, and who may even see their knowledge of a domain and specialty as a form of job security

–       Managers who control resources associated with a domain and who can be reluctant to share their resources (and implied base of power)

–       Individuals or managers who may have lost their technical thirst for “lifelong learning” and are comfortable in their existing skills and knowledge

–       Logistics and infrastructure (CI and build environments, branching structures, etc.) that make it difficult to share code bases across teams

I’m posting this as I would like to hear some other opinions on the topic. As a kickstart, however, my initial response to Mikkel went something like this:

1)   Make the objective clear. It is product development economics that drive us to this particular change vector, and in the end economics wins (or loses) the game for every enterprise. Make the business case based on economics of agility and flow.

2)   Make the value system clear. We value feature teams and T skills the most highly (yes, we value component teams too; but even there T skills are an asset). Embed the value system in the performance review/appraisal system to eliminate ambiguity about our expectations for individual career growth and advancement.

3)   Adopt XP like practices and values (simplicity, pair programming, collective ownership, single code line, etc.). Hire people with experience in these areas.

4)   Attack the infrastructure unremittingly. The technical blocks must be eliminated, or the rest of the change program will be far less effective.

For you other enterprise agilists out there, do you have thoughts and experiences that you can share?

Scrum of Scrums Ground Rules

In the last post, I described an impressive Scrum of Scrums meeting I attended this week. It’s a pretty big Agile Release Train that has 14 cooperating teams. After just a few months of agile, this program really seems to be “on their game”, and this team clearly is developing some significant “ba”.

I also snagged a photo of their “ground rules for the Scrum of Scrum meeting”. But the photo isn’t clear, so I’ll just list the rules here (with their permission):

  • “Like my daily Scrum, this meeting is meant to be fast and short.
  • I will be present on time and I will come prepared.
  • I will first state my Scrum Team name.
  • I will talk about my team, not individuals on the team.
  • I understand that problems can and should be raised during this meeting, but I will not discuss solutions until after everyone has had a chance to go through their standard report.
  • After all teams have reported, the focus will shift to any issues, problems or challenges that were raised. These will then be discussed and resolved or added to the Scrum of Scrums parking lot for future discussion.”

Thanks guys! (you know who you are).

On “Ba” at a Scrum of Scrums

I was a “chicken” today at one of the more effective Scrum of Scrums for an Agile Release Train that I have witnessed. Big program. 14 teams. The uberScrumMaster (Release Team Manager) started the meeting at 9 AM, facilitated it, and ran a pretty tight ship.

Fourteen teams (2-3 remote) reported in a standard format (see pic) in about 20 minutes total.

A few “meet afters” (we need to talk about this more) were raised.

In the standard report, one of the ScrumMasters narrated the following:

–       “yesterday, we reported that we were blocked (something was put in our way) by another team.

–       It wasn’t true. It was we that had blocked ourselves and, yes, we blocked everybody else too!

–       We are really sorry, and we fixed it just as soon as we could.”

This conversation happened part in jest and fun, and part in dead seriousness. This team of teams has “ba” (the energy of an effective, self-organizing team; see below).

After the standard report, there were 3-4 “meet afters” discussed. The uberScrumMaster took notes, and his laptop was projected in the room so everybody could see the work and agreements in process.

The meeting adjourned at 9:25 AM, with all questions asked and answered.

Make no mistake, this team of teams has significant challenges ahead (don’t they all?), but it is just so satisfying when you see it all come together. And they will handle the challenges. I am so proud of these people. It is so rewarding for any of us to be a part of a high performing team.

Again I am reminded that, with just a little leadership (which these folks and their managers, who were not present at this meeting, clearly exhibit), you can trust the teams EVERY TIME.

Quick note on “Ba” below.

========

What is Ba?

Ba is the Zen of Scrum, a shared “context in motion” and the energy that drives a self-organizing team:

–       “Dynamic interaction of individuals and organization creates synthesis in the form of a self-organizing team

–       The fuel of ba is its self-organizing nature: a shared context in which individuals can interact

–       Team members create new points of view and resolve contradictions through dialogue

–       New knowledge as a stream of meaning emerges

–       This emergent knowledge codifies into working software

–       Ba must be energized with its own intentions, vision, interest, or mission to be directed effectively

–       Leaders provide autonomy, creative chaos, redundancy, requisite variety, love, care, trust, and commitment

–       Creative chaos can be created by implementing demanding performance goals. The team is challenged to question every norm of development

–       Time pressures will drive extreme use of simultaneous engineering

–       Equal access to information at all levels is critical

(Source: Hitotsubashi, On Knowledge Management)

Organizing at Scale: Feature Teams vs. Component Teams – Part 1

While continuing my work with a number of larger enterprises facing the cultural change, new-practice adoption, and organizational challenges of a large-scale agile transformation, the topic has again come up of how to organize large numbers of agile teams to effectively implement agile and optimize value stream delivery.

For the smaller software enterprise, it’s usually no issue at all. Teams will naturally organize around the few products or applications that reflect the mission. The silos that tend to separate development, product management and test in the larger enterprise (hopefully!) do not exist. The teams are probably already collocated, rather than being distributed across disparate geographies. Creating an agile team is mostly a matter of deciding what roles the individuals will play, and rolling out some standard training.

At scale, however, like most other things agile, the problem is dramatically different, and the challenge is to understand who works on what, and where. How do we organize larger numbers of teams in order to optimize value delivery of requirements? Do we organize around features, components, product lines, services, or what? While there is no easy answer, the question must be explored, because so many agile practices – how many backlogs there are and who manages them, how the vision and features are communicated to groups of teams, how the teams coordinate their activities to produce a larger solution – must be reflected in that decision.

Organize Around Components?

In Scaling Software Agility, I described a typical organizational model whereby many of the agile teams are organized around the architecture of a larger-scale system. There, they leverage their technical skills and interests and focus on building robust components – making them as reliable and extensible as possible, leveraging common technologies and usage models, and facilitating reuse. I even called the teams define/build/test component teams, which is (perhaps) an unfortunate label. However, I also noted that:

“We use the word component as an organizing and labeling metaphor throughout this book. Other agile methods, such as FDD, stress that the team be oriented around features, still others suggest that a team may be oriented around services. We use component rather liberally, and we do not suggest that every likely software component represents a possible team, but in our experience, at scale, components are indeed the best organizing metaphor.”

I wasn’t particularly pedantic about it, but noted that a component-based organization is likely to already exist in the enterprise, and that wasn’t such a bad thing, given the critical role of architecture in these largest, enterprise-class systems.

In any case, in a component-based approach, development of a new feature is implemented by the affected component teams, as the figure below illustrates.

(Figure: a new feature implemented by the affected component teams)
In this case, the agile requirements workflow puts backlog items for each new feature on each of the affected component teams. The teams minimize multiplexing across features by implementing them in series rather than in parallel. Moreover, they are able to aggregate the needs of multiple features into the architecture for their component and can focus on building the best possible, long-lived component for their layer.
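This workflow can be sketched as a simple data structure (all names here are illustrative, not taken from any real tool): each new feature fans out into one backlog item per affected component team, and each team then works its queue serially.

```python
from collections import defaultdict

def assign_feature(feature, affected_components, backlogs):
    """Fan a feature out into one backlog item per affected component team."""
    for component in affected_components:
        backlogs[component].append((feature, f"{component} portion of {feature}"))
    return backlogs

backlogs = defaultdict(list)
assign_feature("Feature 1", ["UI", "Services", "Data"], backlogs)
assign_feature("Feature 2", ["Services", "Data"], backlogs)

# The Services team sees both features queued in series, not in parallel:
print([feature for feature, _ in backlogs["Services"]])
```

The point of the sketch is simply that a component team’s backlog aggregates items from many features, which is exactly what lets it optimize its component across them.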

Perhaps this reflects a bias towards an architecture-centric view of building these largest-of-all-known software systems. For there, if you don’t get the architecture reasonably right, you are unlikely to achieve the reliability, performance, and longer-term feature velocity goals of the enterprise. If and when that happens, the enterprise may be facing a refactoring project measured in hundreds of man-years. A frightening thought, for certain.

In addition, there are a number of other reasons why component teams can be effective in the agile enterprise:

  1. Because of its history, past successes and large scope, the enterprise is likely already organized that way, with specialists who know large-scale databases, web services, embedded operating systems and the like working together. Individuals – their skills, interests, residency, friendships, cultures and lifestyles – are not interchangeable. It’s best not to change anything you don’t absolutely have to.
  2. Moreover, these teams may already be collocated with each other, and given the strong collocation benefits in agile, organizing otherwise could potentially increase team distribution and thereby lower productivity.
  3. Technologies and programming languages typically differ across components as well, making it difficult, if not impossible, for pairing, collective ownership, continuous integration, testing automation, and other factors critical to high performing agile teams.
  4. And finally, at scale, a single user feature could easily affect hundreds of practitioners. For example, a phone feature like “share my new video to YouTube” could affect many dozens of agile teams, in which case organizing by feature can be a nebulous and unrealistic concept.

Organize Around Features?

However, our current operating premise is that agile teams do a better job of focusing on value delivery, and this creates a contrary vector on this topic. Indeed, as traditionally taught, the almost universally accepted approach for organizing agile teams is to organize around features.

(Figure: teams organized around features rather than components)
The advantages of a feature team approach are obvious: teams build expertise in the actual domain and usage model of the system, and can typically accelerate value delivery of any one feature. The team’s core competence becomes the feature (or set of features), as opposed to the technology stack. The team’s backlog is simplified: just one or two features at a time. That has to promote fast delivery of high-value features!

Other authors support the Feature-focused approach as well. For example, in Agile Project Management, Highsmith [2004] states:

“Feature based delivery means that the engineering team build features of the final product.”

Of course, that doesn’t mean that the teams themselves must be “organized by feature,” as all engineering teams in the end build features for the final product, though perhaps that is a logical inference. Others, including Larman and Vodde [2009], have more directly (and adamantly) promoted the concept of feature teams as the only rational (well, perhaps; see Craig Larman’s comments below) way to organize agile teams. They note:

“a feature team is a long lived, cross-functional team that completes many end-to-end customer features, one by one.. advantages include increased value throughput, increased learning, simplified planning, reduced waste…”

Larman and Vodde state specifically that you should “avoid component teams” and indeed devote an entire chapter to good coverage of the topic. However, the authors also point out several challenges with the feature team approach, including the need for broader skills and product knowledge, concurrent access to code, shared responsibility for design, and difficulties in achieving reuse and infrastructure work. Not to mention the potential for dislocation of some team members as the organization aligns around these boundaries.

So What is the Conclusion?

So what is an agile enterprise to do in the face of this conundrum? I’ve asked a few additional experts for their opinion, and I’ll be providing my opinion as well, in the next post.
===================
References:
Highsmith, Jim. Agile Project Management. 2004. Addison-Wesley.
Leffingwell, Dean. Scaling Software Agility: Best Practices for Large Enterprises. 2007. Addison-Wesley.
Larman, Craig, and Bas Vodde. Scaling Lean and Agile Development. 2009. Addison-Wesley.

The Simple Math Behind TDD?

For those who have followed this blog or read the book, you might know that I called Chapter 13 of SSA “Concurrent Testing” and not TDD (Test-Driven Development). I talked about TDD in the book, but I didn’t specifically prescribe it. TDD says:

  1. Write the test
  2. Run it and watch it fail (meaning build it permanently into the test harness and execute it there)
  3. Write the minimum amount of code necessary to pass the test
  4. When it passes the test, go work on another story!
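The four steps above can be sketched in a few lines of Python (a hypothetical mini-story, zero-padding ticket IDs; the function and test names are mine, not from any real codebase):

```python
# Step 1: write the test first. Running it at this point fails ("red"),
# because pad_id does not exist yet.
def test_pad_id():
    assert pad_id(42) == "00042"        # default width of 5
    assert pad_id(7, width=3) == "007"  # caller-supplied width

# Step 3: write the minimum code necessary to pass the test.
def pad_id(n, width=5):
    """Left-pad a numeric ID with zeros to the requested width."""
    return str(n).zfill(width)

# Step 2 (and 4): the test stays in the harness permanently; once it
# passes ("green"), go work on another story.
test_pad_id()
print("green")
```

In a real project the test would live in the team’s permanent test harness (e.g. a pytest or unittest suite) rather than being called inline, so it keeps running on every build.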

At the time I wrote the book, TDD was still a little edgy and had not crossed the chasm to the mainstream, even the early agile mainstream. It is just hard for some developers to get their heads around this early investment in test case development, as it changes the way they think about coding (though for the better). And I suspect that is still likely the case. So I was generally happy with the Concurrent Testing chapter, because it drives teams to be able to “unit test-acceptance test-component/feature test-system test,” all in the course of an iteration. Good enough for me.

However, last week I was with a client along with Amir Kolsky of Net Objectives. Amir is a TDD expert and proponent and he advised the client to always “write the test first”. We had a minor debate about the differences in viewpoint. After all, if the team leaves the short, two-week iteration with everything tested, isn’t that about as good as it gets?

Well, actually not. Amir led us through this “simple TDD math exercise”.

(The three figures from the exercise, tdd1–tdd3, are not reproduced here.)

I had to admit that I had seen plenty of rework time (Rt) in recent iterations, so it was fairly easy for me to support this simple conclusion.
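The gist of the argument, in my own notation (Amir’s exact slides may have differed): let Ct be the coding time for a story, Tt the testing time, and Rt the rework time spent fixing what the tests find.

```latex
\begin{aligned}
T_{\text{test-after}} &= C_t + T_t + R_t \\
T_{\text{test-first}} &\approx C_t' + T_t \qquad (R_t \approx 0\text{, since defects are caught as the code is written}) \\
T_{\text{test-after}} - T_{\text{test-first}} &\approx R_t - (C_t' - C_t)
\end{aligned}
```

So whenever the rework time Rt exceeds the modest extra coding cost Ct′ − Ct of writing the test first, test-first is the faster path.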

After our meeting, I attended iteration demos from a number of teams and I saw quite a bit of Rt there too.

Ok, write the test first!

Jeff Sutherland’s Sprint Emergency Landing Procedure

Last week, I gave a talk on Scaling Agility at Agilis 2008 in Reykjavik, Iceland. Yes, Iceland has an active agile community within its population of some 300,000-400,000. A local consultancy, Sprettur, hosted this conference and invited guest speakers including Jeff Sutherland and myself. I was somewhat surprised to see the advanced level of agile adoption in this community, and I had the opportunity to meet a number of agile masters who were in the process of implementing and expanding agile and Scrum.

Jeff gave two separate talks on Scrum, which I enjoyed immensely. It was a learning experience for me to benefit from the depth and breadth of understanding that this co-inventor of Scrum possesses. In one talk, entitled “Your money for nothing and your change for free,” he mentioned, almost in passing, that as a former fighter pilot, one always had to know the emergency landing procedures that came into play while attempting to land a jet on an aircraft carrier. He used that analogy to remind teams that they needed instant recall of their Sprint Emergency Landing Procedure if they ever saw this shape to their burn-down chart:

Sprint in trouble


When this is the case, the team should immediately fall back on its four-step emergency procedure:

  1. Innovate/remove impediments – quickly analyze the root cause of the problem (blocked story, resources diverted to emergency, or whatever) and see what ideas the team has to correct the problem. As always, the collective minds of the team are the first and best resource.
  2. Get Help – can help be found outside the team? If so, apply these resources to accelerate the burn down.
  3. Reduce scope – cut lower priority features and re-plan based upon what the team can accomplish. While the team might not be able to deliver all the stories, it may not be too late to fulfill the objectives of the sprint and still end up in an acceptable state at the end.
  4. Abort – if the deck is still pitching and the glide path is still too high, then lastly, it may be necessary to abort the sprint and simply start over. Not every sprint can be a winner (unless the team is too risk averse) so the learnings and completed stories can launch you into a new, more realistic sprint. This is the last resort, but it could still be better than an unfortunate landing.
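The trigger condition itself is easy to sketch. Here is a hedged illustration in Python (the function name, threshold, and numbers are my own assumptions, not Jeff’s): compare the actual burn-down against the ideal glide path, and raise the alarm when it falls too far behind.

```python
def sprint_in_trouble(remaining, total_points, sprint_days, tolerance=0.2):
    """remaining[i] is the story points left at the end of day i+1."""
    for day, left in enumerate(remaining, start=1):
        ideal = total_points * (1 - day / sprint_days)  # ideal glide path
        if left > ideal + tolerance * total_points:     # far above the line?
            return True
    return False

# A sprint that has barely burned anything by day 5 needs the procedure:
print(sprint_in_trouble([38, 37, 36, 35, 34], total_points=40, sprint_days=10))  # True
# A sprint tracking the ideal line does not:
print(sprint_in_trouble([36, 32, 28, 24, 20], total_points=40, sprint_days=10))  # False
```

In practice a team would eyeball the chart rather than compute this, but the shape being detected is the same one Jeff described.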

The Agile Enterprise Acid Test – Updated

In a prior post, I referred to a post by Paul Beavers of BMC (Is it Possible to be Half-Agile?), which gave his perspective on the agile acid test: the quintessential test of whether or not an organization is truly achieving agility at enterprise scale. In my post, I also provided my viewpoint on the subject. Recently, I tested that viewpoint with a few others. As a result, I’ve modified my version a bit, as you see in the graphic below:

Agile Enterprise Acid Test


As amended, the three elements of my version of the Agile Enterprise Acid Test are now:

Variable scope. Fixed quality.

  • Can the teams vary scope? – Does the team have the authority to vary scope even as the release deadline draws closer?
  • Is quality acceptable and fixed? – You can’t go fast building on crappy code. Agile accomplishes little without the requisite code quality, which must be built into the process through TDD, continuous integration, test automation, coding standards and the like. If you see your teams iterating or sprinting with a primary objective of working down code-level defects, then you are not truly agile.

Incremental Value Delivery

  • Is software delivered incrementally? – If your teams are sprinting but there is no feedback until the final delivery (one form of “waterscrumming”), then you are not achieving agility, as there is no meaningful feedback to drive the solution to an acceptable outcome.
  • Is it valued and evaluated? – Demos are great, but you need real value delivery to assure fitness for intended use and early and continuous ROI. If the incremental code is not being actually used, you are not very likely to get the results you seek.
  • Is feedback continuous? – In addition to customer feedback, product owners, product managers and other stakeholders have a responsibility to continually assess the current result. This is achieved through story-level acceptance and iteration-level demos.

Empowerment and Accountability

  • Are the teams empowered? – Are the teams truly empowered and able to modify and improve their own software processes? Do they self organize and self-manage? Are resources routinely moved to the most critical bottlenecks?
  • Are they accountable for results? – Empowerment and accountability are two sides of the same agile coin. Are the teams delivering reliable, quality code in incremental fashion? Do they commit to iteration and release objectives, subject to responsible scope management?
  • Do they regularly reflect and adapt? – Do the teams adhere to Agile Manifesto Principle #12? – At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
  • Does management eliminate impediments? – Is management also engaged in continuous improvement? Are impediments routinely surfaced, addressed and eliminated?

That’s my current definition.

–Dean

Note: Readers may also want to note Jon Strickler’s view on this definition, perhaps from a somewhat “leaner” perspective, in the comments on this post below.

Does Prioritizing Backlog by ROI Work?

For those following the “Big Picture” series and the recent post Enterprise Agility-The Big Picture (4) Backlog, I just saw a number of posts from Luke Hohmann at Agile Commons describing some of the methods and challenges associated with prioritizing backlog.

I found the post Why Prioritizing Your Product Backlog for ROI Doesn’t Work particularly relevant, as it debunks one of the most common myths: that there is a meaningful way to prioritize backlog based on the expected ROI of a feature or story. While that seems like a logical, pragmatic and politically correct thing to do, Luke notes that in fact it is likely to be impractical and potentially even counterproductive (at least at the iteration/story level; note these are my backlog labels, not Luke’s), and he points out why that is the case. Here’s a sample excerpt to pique your interest:

“Agilists celebrate our ability to respond to change rather than following a rigorous plan. Which sounds great until you’re asking a product manager who just invested a lot of time and money in her market research to reshuffle the backlog based on new information. They’ll be torn. The market research says that they should stick to the plan. The new information says that it should change. The greater the investment in the market research, the more challenging it is to make the change.”