Agile Tooling for High Assurance Development Series: Intro

For those who have been following the High Assurance and Regulated Environments and Agile and FDA series on this blog, you’ll be interested to note that my co-conspirator, Craig Langenfeld of Rally Software, has committed to describing some best practices for tooling high assurance projects using Rally’s Agile Lifecycle Management platform and compatible tools such as HP Quality Center and Apache Subversion (SVN). In this series of upcoming posts on Rally’s agile blog, Craig will illustrate various tooling solutions for artifact management, version control, test management and traceability. Even more interestingly, our collaboration and feedback from the broader community have prompted Rally to provide some tooling extensions which will materially improve the automation and record keeping required in these more formal process systems.

Craig will be authoring posts on topics including:

1) Use Rally to track and report on the basics – Define | Verify | Build
2) Achieve tight traceability and reporting from Product Requirements to System Requirements to Verification & Validation
3) Don’t overlook the implementation of the Validation Sprint
4) Testing and test traceability in High Assurance Environments

Readers may note that I’ve never really discussed tooling on this blog. There are a number of reasons for this: 1) I’ve simply assumed it; 2) tools tend to limit definition and discussion of the abstract methods to that which the tools can actually support; 3) tools change faster than the underlying principles and practices do, so it’s hard to keep up; and 4) I am a stakeholder in Rally.

I’m also well aware of the idiom “a fool with a tool is still a fool,” and yet it is also true that those trying to scale agile development, much less high assurance development, without appropriate enterprise-class, agile-native tooling may well be the biggest fools of all.

To that end, I really look forward to this series, as it puts some “beef” into the abstractions we’ve described so far and will materially help those who head down the agile, high assurance path. You might want to subscribe directly to Rally’s blog and add your voice to the community input that is driving these new practices and tooling enhancements. I’ll also be posting parallel comments on this blog.

And, oh yeah, Craig’s introductory post is here.

TechTarget Interview on Agile Software Requirements

I was recently interviewed by Yvette Francine of the online magazine techtarget.com about Agile Software Requirements. She asked some interesting questions about the book and the method, and solicited some opinions on related topics. Part I of the interview can be found here.

Picking Agile vs. Waterfall “Projects”: A Ten-Point Quiz

For some reason, I woke up today realizing that I have been working with, and writing exclusively about, enterprise-scale software agility full time now for five years running. I’ve had the fantastic opportunity to work with some of the world’s largest enterprises, each adopting agile at scale. Some of these initiatives were extremely successful, some quite successful, and a few less successful than we had hoped. (We’ll return to that latter topic later in this post.) I also woke up with a “bee in my bonnet” (pick your own, more contemporary idiom here…) about a topic that has come up repeatedly in the last few weeks.

The topic is this:

As agile moves across the chasm from the innovators and early adopters to the early majority,

(flow-interrupting Note 1: if you haven’t already read the important, and now classic, text Crossing the Chasm by Geoffrey Moore, you might want to stop reading this blog post right now, download the book, and come back after you have read it. I’d guess that your worldview on technology adoption will be substantially altered thereafter. Mine certainly was and still is.)

enterprises that are adopting agile, especially larger scale IT enterprises, look at their “portfolio of upcoming projects” and then have to decide which projects they want to implement using their new-found agile instincts, vs. their traditional waterfall mandates.

(flow-interrupting Note 2: As readers of Agile Software Requirements are likely to know, I’ve become quite uncomfortable with the “project” word and practice in agile development; maybe I’ll fire that post off in another day or so.)

In some cases, I’m even peripherally involved in picking the best projects for agile implementation. Sometimes I’m asked for a discriminator as to which projects best lend themselves to agile development. As I told one enterprise the other day, though, perhaps they are talking to the wrong guy, because for the life of me I can’t find any excuse any longer not to apply agile development to any project anyone actually cares about! (See, for example, https://scalingsoftwareagility.wordpress.com/category/high-assurance-and-regulated-environments/).

So I woke up today thinking about a 10-point True/False test that one could use to make these “tough” decisions. Here it is:

Answer each question True or False:

1) The project requirements are fully determinable in advance of development.
2) And if (1) is True, they are likely to remain completely static during the course of the development effort.
3) If (1) and (2) are True, they will remain static even if it takes our teams longer (30-50-75%) than estimated to complete the project.
5) There is no significant technological risk whatever in the project.
6) The users/customers are unable or unwilling to evaluate early delivery of solution increments. Early deliveries have no value to them.
7) There is no value to our enterprise in helping teams figure out how to “build small increments of working, tested, high quality software in a time box.”
8) There is no value to our teams or our productivity in collocating product ownership (content authority), developers, and testers for this project.
9) There is no value in helping our teams learn better software engineering practices driven by agile, including such things as collective ownership/peer review/pair programming, emergent design, continuous integration, single code line, test-driven development, and test automation.
10) Our organization is addicted to the adrenalin rush and heroic efforts required by late project risk discovery.

You know where this is going… if you answered False to ANY of the above questions 1-9, or True to Question 10, then your project is ready for agile. (And if perchance you didn’t answer False to any of 1-9, ask yourself, “Is this project really worth doing?”)
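
If you prefer your decision rules written down precisely, here is a minimal sketch in Python of that scoring rule. The function name, the answers dictionary, and the sample answers below are purely illustrative (they are not part of the original quiz); the sketch simply encodes “any False among questions 1-9, or True for question 10” as a boolean check.

def ready_for_agile(answers):
    """Ready for agile if any of questions 1-9 was answered False, or question 10 True."""
    any_false_1_to_9 = any(answer is False for q, answer in answers.items() if q != 10)
    return any_false_1_to_9 or answers.get(10, False)

# Hypothetical example: a project with some technological risk (question 5 answered False).
sample_answers = {1: True, 2: True, 3: True, 5: False, 6: True,
                  7: True, 8: True, 9: True, 10: False}
print(ready_for_agile(sample_answers))  # prints True: ready for agile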

There, I have that off my chest.

OK, now back to the discussion of “less successful” projects. Yup, I suspect we all have them, and perhaps it would be a sign of industry maturity if we could more readily admit them (see my friend Philippe Kruchten’s report “The Elephant in the Room” from the 10th-year Agile celebration in Snowbird, UT on February 11th and 12th; also see Ryan Martens’s post on the event here).

But as I look at the (perhaps obviously biased) test above, I ask myself the question, “In these less-than-hoped-for successes, would the enterprise, the program, and the teams have been better off if they hadn’t adopted agile?” Based on my direct knowledge in these few cases, and an opinion that I know is shared by my cohorts and agile champions in those same programs, the answer is “Hell, no.” Moreover, they can always inspect and adapt from there!

Be agile. After all, don’t you and your enterprise deserve the benefits?