Agile Crosses the Chasm to….DoD?

I recently received an email from Dr. David F. Rico (PMP, CSM) regarding the recent AFEI DoD Agile Development Conference, held on Tuesday, December 14, 2010 in Alexandria, Virginia. “The purpose of the conference was to promote agile acquisition and IT development practices in the U.S. DoD. AFEI is a non-profit organization who helps the U.S. DoD develop contemporary acquisition, systems, and software practices, among other valuable services.” Dr. Rico (author of “Business Value of Agile Software Methods”) wrote a synopsis of the conference and sent me a copy (permanent link). Thanks, David!

It is clear from his notes that agile is making an increasing impact on the acquisition of software within the DoD. A few excerpts below:

It reinforced the U.S. DoD’s commitment to the use of Agile Methods. Furthermore, it was interesting to see that Agile Methods are in widespread use by the U.S. DoD, and that no individual organization, project, group, or person is practicing them in isolation.

Prior to AFEI’s DoD Agile Development Conference, both the commercial industry and DoD contractors believed the U.S. DoD was not committed to Agile Methods, which is an enormously incorrect assumption. It’s a popular urban legend or urban myth that the U.S. DoD uses traditional methods such as DoD 5000, CMMI, PMBoK, and other waterfall-based paradigms to develop IT-intensive systems (and that no one is using Agile Methods in the U.S. DoD).

The AFEI DoD Agile Development Conference shattered that myth completely. Furthermore, it served as a forum for reinforcing the use of Agile Methods in the U.S. DoD. Psychological reinforcement or affirmation of a desired behavior is a key ingredient to successful organization change (i.e., adoption of Agile Methods and the abandonment of traditional ones).

Of course, agile has its critics within DoD (don’t we all?) as this little vignette illustrates:

This was a panel of five industry experts on Agile Methods, hosted by Chris Gunderson of the Naval Postgraduate School. Chris, an outspoken critic of Agile Methods, challenged the panel of industry experts on a variety of flash points. These included organizational change and adoption issues, scalability to large U.S. DoD programs, and empirical evidence to indicate whether they were any better or worse than traditional, waterfall-based methods. The industry experts challenged the moderator to prove traditional methods were any more scalable or applicable to large programs, citing the 67% failure rate among DoD programs using traditional methods over the last 40 years.

I suspect that many of us considered the DoD to be the last bastion of rigorous and mandated waterfall development. If the DoD can move on (at least in some instances) to more effective, agile methods for development of high assurance software systems, I’d guess most everyone else developing such systems should be able to move on too!

Additional Perspective on Agile, V&V and FDA

Brian Shoemaker, Ph.D., who clearly has expertise on software quality and FDA (Brian has more than thirteen years’ experience with software quality and validation in the FDA-regulated industry), added an in-depth comment describing additional documentation guidance for doing software V&V. He also provides his opinion on the applicability of agile in this world (just do it?) as well as some implications for updates to the company’s QMS. Since WordPress comments aren’t particularly reader-friendly, and there is real value in his comments, I’ve reposted excerpts here:

“Thanks for putting together the comments you’ve published here… FDA has other guidances relevant to software besides the Design Control Guidance and General Principles of Software Validation, though those two are certainly the most important.

A more complete list would be:

  • Design Control Guidance For Medical Device Manufacturers (March 11, 1997)
  • General Principles of Software Validation (January 11, 2002)
  • Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (May 11, 2005)
  • Off-The-Shelf Software Use in Medical Devices (Sep. 9, 1999)
  • Cybersecurity for Networked Medical Devices Containing Off-the-Shelf (OTS) Software (Jan. 14, 2005)
  • Computerized Systems Used in Clinical Investigations (May 2007 – note this is not directly relevant to medical device software)
  • 21 CFR Part 11: Electronic Records and Signatures (Aug. 1997)
  • Draft Guidance: Radio-Frequency Wireless Technology in Medical Devices (PDF) (January 3, 2007)

The “Premarket Submissions” guidance (third in the list) is often misread as specifying what processes a firm needs to use based on the hazard classification of the device (termed “Level of Concern” in the guidance). This is not the intent at all – rather, the guidance states what documents should be SUBMITTED based on the Level of Concern; the documents should exist anyway, whether they’re submitted or not.

I’ve argued for several years, just as you do, that nothing in GPSV specifies that medical device developers are required to use a waterfall approach.
In the October 23 post, you interpret the various activities required for Design Control. I disagree that waterfall methodology is implied here, but I can see how that interpretation can slip in – and I’ve worked with plenty of clients who believed that a waterfall approach is required.
Notice that the terms in the Design Control section of Part 820 all come directly from ISO 9000 (or ISO 13485, its medical device half-brother). I’m convinced that one can match up every quality-related activity in a software process (i.e. quality planning, requirements development, design and coding, configuration management, defect tracking) with an appropriate subparagraph in part 820.
Also notice that “verify” and “validate” are ISO 9000 concepts.

In the projects I’ve worked on with client companies, the “Product Requirements Document” or “System Requirements Specification” originated with Marketing, for better or worse (the marketing folks are often harder to train than engineers when it comes to concisely and clearly expressing requirements).

The concept of a “define-build-verify” team is powerful enough that, in my opinion, it should be described in a planning document (Software Development Plan or Quality Plan, depending on how the company structures such things). Planning documents not only outline steps to be followed, but give specific responsibility for specific steps – describing the team’s structure and tasks is important not only for the participants but also for the reviewer who checks the documentation after the fact.”

End of Brian’s quote.

Software Verification and Validation in High Assurance Agile Development: Verification: SRS and User Stories

Series Background: In this series of posts, I’ve been using Medical Device Development (as Regulated by U.S. FDA via CFR 820.30 and international standard IEC62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development. Thanks to Craig Langenfeld of Rally for his ongoing collaboration on this series.


In prior posts, I’ve introduced a suggested agile lifecycle model for verification and validation of software in high assurance development. In the last post, I defined a set of ground rules that agile teams will have to follow. For convenience, the lifecycle model graphic is reproduced here:


High Assurance Agile Software Development Lifecycle Model

I’ve also provided definitions of the key terms (verification, validation, and traceability) from the medical device exemplar.

This post describes the continuous verification activities implied by the model in additional detail.

Iterations and Increments

In the agile model, development of software assets occurs in a series of system iterations (or sprints). Iterations are short, timeboxed envelopes in which teams define (requirements and design), build (code), and test new software. In support of our high assurance strategy, we’ll replace “test” with a more robust “verification” process here. So the activity becomes define|build|verify.

Periodically, the iterations are validated so as to become potentially shippable increments (PSIs). They may or may not be shipped, but they are at least valuable and evaluable, and fully validated against their intended use.

Define|Build|Verify Teams

In Scaling Software Agility, I described a common development team model, based primarily on Scrum, which organizes the resources to more effectively do all these activities in a short timebox. We called that the Define|Build|Test team. Here, our Define|Build|Verify team consists of:

  • A product owner who has the responsibility to define and manage the backlog (the things we still need to have the system do). In our exemplar, this person must have the requisite medical domain knowledge and authority to make critical decisions on the safety and efficacy of the product.
  • Developers who implement the code and unit tests that test the code
  • Testers who develop and execute tests intended to assure that the system always meets its requirements
  • (and optionally) Quality assurance personnel who assist with verification and necessary documentation activities.

We mention the team construct here because it is assumed in this model (and in all agile development). However, for a high assurance company moving from waterfall to agile development, we also recognize that this step alone may be a big challenge, as it breaks down the functional silos and reorganizes along value delivery lines. Without this step, real agility is unlikely to occur. Also, because this is how we do what we do, formalization of the team construct may be important enough to be called out in the company’s quality management system (QMS) practice guidelines.

Software Verification in the Course of the Iteration

With such a team in place, the team has all the skills necessary to build the system incrementally. However, in order to avoid a large manual “end game” (and the inevitable delayed feedback loops that come when you defer quality and verification to the end), we’ll need to be able to verify that the software conforms to its intended uses as it is developed. Fortunately, with properly implemented agile, quality is built in, one user story at a time. That’s one of the prime benefits of agile: higher innate quality and better fitness for use, so here we’ll just be leveraging what good agile teams already do. We’ll also have tooling to help us automate as much of this as possible.

Leveraging the Agile Requirements Backlog Model

Speaking of leveraging, in support of verification (and soon, validation) we’ll also be building on the agile requirements model I’ve described in the new Agile Software Requirements book and blog series. Relevant portions of that full model appear in the figure below:

High Assurance Requirements metamodel (portion of the Enterprise backlog model)

Capturing the User Stories as the Software Requirements Specification

For now, we are primarily concerned with the lower right portion (user stories, tasks, acceptance tests) of this model. (Note: we’ll return to Features, Feature Acceptance Tests, Nonfunctional requirements, etc. when we elevate the discussion to discuss Product Requirements and validation in later posts).

Of course, in accordance with the ground rules we established, in order to actually do verification of the software assets, we’ll have to always maintain a current copy of the software requirements, which will consist primarily of the collection of user stories we use to evolve the system over the course of each iteration. We can picture that simply as follows:

User Stories

A user story is a form of requirements expression used in agile development (see the user story whitepaper). Each user story is a brief statement of intent that describes something the system needs to do for the user. Typically, each user story is expressed in the “user voice” form as follows:

As a <role> I can <activity> so that < value>


  • Role represents who is performing the action.
  • Activity represents the action to be performed.
  • Value represents the value to the user (often the patient in our exemplar) that results from the activity.

For (a fairly high level) example:

“As an EPAT (Extracorporeal Pulse Activation Technology) technician, (<role>) I can adjust the energy delivered (<what I do with the system>) so as to deliver higher or lower energy pulses to the patient’s treatment area (<value the patient receives from my action>).”
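The user-voice form above can be sketched as a simple data structure. This is a minimal, hypothetical illustration (not any particular tool’s schema), using the EPAT example:

```python
# A minimal sketch of the "As a <role> I can <activity> so that <value>"
# user story form. The class and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UserStory:
    role: str      # who is performing the action
    activity: str  # the action to be performed
    value: str     # the value that results from the activity

    def user_voice(self) -> str:
        # Render the story in the standard user-voice template
        return f"As a {self.role} I can {self.activity} so that {self.value}"

story = UserStory(
    role="EPAT technician",
    activity="adjust the energy delivered",
    value="higher or lower energy pulses can be delivered "
          "to the patient's treatment area",
)
print(story.user_voice())
```

Keeping the three parts as separate fields (rather than one free-text sentence) also makes it easier for tooling to report on roles or values across the backlog.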

User Story Verification

To assure that each new user story works as intended, we’ll have to verify it. To do that, we’ll use three built-in (and semi-automated) traceability paths:

  • User story to code – path to the SCM record that illustrates when and where the code was changed to achieve the story
  • User story to code-level unit test (the “white box” test which assures the new code works to its internal specifications, also maintained via a path to the SCM change set)
  • User Story to Story acceptance tests (black box testing of the user story to make sure the system functions as intended)

To do this explicitly, we can use a standard tasking model for each new user story. Each task (maintained in our agile project management tooling, and directly associated with the story) can be used to establish a traceability link to the specific artifact we need as pictured below:

These artifacts (SRS, code and unit tests, story acceptance tests) must persist indefinitely (as long as our solution is used in the market), so we’ll need competent tooling to help manage this.

Taken together, these activities

1)   a defined and controlled statement of requirements (collection of user stories and other items)

2)   traceability to code that implements the user story

3)   traceability to a unit test that assures the code works as it is designed

4)   traceability to a story acceptance test which verifies that the system behaves as intended based on this new user story

should provide sufficient verification that the code fulfills the story as intended. After all, what more can we do at this level?
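The three traceability paths can also be checked mechanically. Here is a hedged sketch of such a completeness check; the record layout, story IDs, and link names are assumptions for illustration, not any particular tool’s schema:

```python
# Hypothetical traceability records: each user story should link to an
# SCM change set, a unit test, and a story acceptance test.
stories = {
    "US-101": {"changeset": "scm:4711",
               "unit_test": "test_energy.py::test_limits",
               "acceptance_test": "AT-101"},
    "US-102": {"changeset": "scm:4712",
               "unit_test": None,          # verification gap
               "acceptance_test": "AT-102"},
}

REQUIRED_LINKS = ("changeset", "unit_test", "acceptance_test")

def verification_gaps(stories):
    """Return {story_id: [missing link types]} for incomplete stories."""
    gaps = {}
    for story_id, links in stories.items():
        missing = [k for k in REQUIRED_LINKS if not links.get(k)]
        if missing:
            gaps[story_id] = missing
    return gaps

print(verification_gaps(stories))  # → {'US-102': ['unit_test']}
```

A check like this, run continuously, is what lets the team avoid discovering missing trace links during the end-of-increment validation.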

In this manner, the requirements evolve one iteration at a time, the system evolves with it, the SRS evolves too, and we can demonstrate throughout that the system always behaves as we intended it to.

User Story Informal Validation

In addition, at the end of the iteration, each user story is demonstrated to the product owner and other stakeholders. This provides a first, informal validation that the story does meet its intended requirements. And if by any chance it isn’t right, it can be addressed in the next iteration.

Looking Forward

This completes the first half of our agile, high assurance verification and validation story. Validation will be the subject of the next post in the series.

Software Verification and Validation in High Assurance Agile Development: Ground Rules

Series Background: In this series of posts, I’ve been using Medical Device Development (as Regulated by U.S. FDA via CFR 820.30 and international standard IEC62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development.


In an earlier post, I noted that there is a set of immutable rules (driven by regulations, interpretations of regulations, or established auditing practices) associated with software development, and thereby agile software development, for high assurance systems. Within our medical device exemplar, for example, we noted that the folks at SoftwareCPR called out the set of things that teams must address in order to be in compliance with US FDA CFR 820.30 and IEC 62304. Not surprisingly, it’s a pretty long list, as this graphic shows.

Many of these artifacts include the word “plan,” so it’s clear that plans for activities such as verification and validation, problem resolution, risk management, etc., must be developed, documented, and most importantly, followed. Since these plans form much of the body of the enterprise’s Quality Management System (QMS: the body of internal governing documents and procedures that define how quality is achieved and regulations are met), many of them will have to be modified as the company adopts agile methods.

We’ll return to that activity later, but initially, the primary objective of this series is to describe a suggested high assurance agile software engineering process that covers the definition, development, and verification of the code itself. Without that, we won’t have accomplished much; once we’ve done that, the necessary changes to a QMS will be more obvious.

In order to do THAT, we’ll need to address a subset of this list, and we’ll need some specific artifacts and activities (our “ground rules”) to follow. We’ll highlight those in the balance of this post.

Software Requirements Specification

As “the software validation process cannot be completed without an established software requirements specification” (General Principles of Software Validation, FDA CDRH), this is the seminal document that will control most of our work. It must be correct, accurate, complete, consistent, version controlled, and signed. This document must cover (paraphrasing from the above):

–       All software system inputs, outputs, and functions
–       All performance requirements, including throughput, reliability, accuracy, and response times (i.e., all nonfunctional requirements)
–       Definition of external and user interfaces
–       User interactions
–       Operating environments (platforms, operating systems, other applications)
–       All ranges, limits, defaults, and specific values the software will accept
–       All safety-related requirements, specifications, features, or functions implemented in software

However, as we’ve noted before, the fact that we need such a document doesn’t mandate that we do it all Big and Up-Front. Indeed, to be agile, we can and will develop it incrementally. But prior to any release to users, we’ll have to spiff it up, put a bow on it, and make sure that, in aggregate, it correctly reflects the current state of the software system behavior as of that time. That’s real work, but it doesn’t mandate that we do it only once, and up-front.

Product Requirements Document/System Requirements Specification

While the centerpiece of our world is the SRS, it doesn’t stand alone. Indeed, for most systems, including systems of systems and devices which contain both hardware and software, the governing document which resides above the SRS is usually a Product Requirements Document (we’ll use the generic acronym PRD) or System Requirements Specification, which contains the complete specifications for the device as a whole.

The development and maintenance of this document is just as important as the SRS, and we’ll return to it in future posts.  Since it’s a fairly traditional document, one can refer to a variety of texts for descriptions of this document, including some of my own earlier works, (Managing Software Requirements: First (1999) and Second Editions (2003)). In a fashion similar to the SRS, it will be developed incrementally, and the system will be validated against these governing requirements when necessary.

Verification (with traceability) Processes

Verification is used to illustrate how the output of each phase of the process (PRD to SRS, SRS to code, etc.) meets the requirements imposed by the prior phase (see the definitions post). As we develop the code, we’ll have to both 1) make sure that it works correctly, and 2) be able to prove that it does via documentation and an audit trail. We’ll use agile methods to do it in real time, and just in time, but we’ll still have to verify the code works as intended and be able to demonstrate traceability from:

  • Product Requirements Document to Software Requirements Specification
  • Software Requirements Specifications (we’ll use small, agile “user stories” as objects which express necessary system behaviors) to “story acceptance tests”
  • Software Requirements Specification to code (via SCM change item)
  • Software Requirements Specification to code unit tests (via SCM change item)

We’ll also need to be able to prove that we’ve done all this via change control mechanisms, tracking and traceability matrices, and regression test automation. Thankfully, we’ll be able to automate much of this with the new breed of agile tooling we can apply in support of agile requirements management, coding practices, unit testing, acceptance test automation, and SCM and build management, but we’ll have to do it, automated or not.
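To make the traceability matrix idea concrete, here is a minimal sketch of a PRD-to-SRS matrix and the kind of coverage query an audit would ask of it. The requirement and story IDs are hypothetical, and this is a simplification of what real requirements tooling maintains:

```python
# Hypothetical PRD-to-SRS traceability matrix: each PRD requirement maps
# to the user stories (SRS items) that cover it. IDs are illustrative.
prd_to_srs = {
    "PRD-1": ["US-101", "US-102"],  # covered by two stories
    "PRD-2": ["US-103"],
    "PRD-3": [],                    # a coverage gap an audit should surface
}

def untraced_requirements(matrix):
    """Return PRD requirements with no downstream SRS coverage."""
    return [req for req, stories in matrix.items() if not stories]

print(untraced_requirements(prd_to_srs))  # → ['PRD-3']
```

The same shape of query works in the other direction (SRS items with no parent PRD requirement), which catches scope creep rather than coverage gaps.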

Validation Processes

The artifacts we described above and the code that delivers the end user value will evolve incrementally, one iteration at a time. However, before any software is released for use by the end user – be it alpha, beta or production code – the results of all this work will have to be validated. “Validation shall be performed under defined operating conditions … testing under actual or simulated usage scenarios … to ensure that the devices conform to defined user needs and intended uses” (CFR 820.30). Validation is the final “end game” on quality, where we evaluate the system increment against its product (system) level functional and nonfunctional requirements to make sure it does what we intended.

From the standpoint of software, we’ll need to include at least the following activities:

  • Aggregate increments of product requirements into a version controlled and definitive statement
  • Aggregate increments of software requirements (user stories) into an SRS document (traditional document, repository, database, etc.)
  • Finalize traceability from PRD to SRS
  • Run all unit testing and story level acceptance testing regression tests to assure that they all still pass
  • Finalize and execute PRD (system) level feature acceptance tests
  • Run all system qualities tests for nonfunctional requirements (reliability, accuracy, security)
  • Run any exploratory, usability, and user acceptance tests
  • Finalize and update traceability matrices to reflect current state
  • Finalize/update risk analysis/hazard mitigation
  • Conduct a design review
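The validation activities above can be treated as an executable release gate. This is a hedged sketch only; the activity names paraphrase the list, and the pass/fail statuses are purely illustrative:

```python
# Illustrative release gate over the validation checklist. The booleans
# here are placeholder statuses, not results from any real project.
VALIDATION_ACTIVITIES = [
    ("Aggregate product requirements into versioned statement", True),
    ("Aggregate user stories into SRS", True),
    ("Finalize PRD-to-SRS traceability", True),
    ("Unit and story acceptance regression tests pass", True),
    ("PRD-level feature acceptance tests executed", True),
    ("System quality (nonfunctional) tests run", True),
    ("Exploratory/usability/user acceptance tests run", True),
    ("Traceability matrices current", True),
    ("Risk analysis / hazard mitigation updated", False),
    ("Design review conducted", False),
]

def release_ready(activities):
    """Return (ready?, list of incomplete activities)."""
    incomplete = [name for name, done in activities if not done]
    return (len(incomplete) == 0, incomplete)

ready, todo = release_ready(VALIDATION_ACTIVITIES)
print(ready)  # False until every activity is complete
print(todo)
```

Encoding the checklist this way keeps the increment from being declared “validated” while any required activity is still open.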

Design Review

Lastly, while it’s a little outside the scope of the agile development process per se, we’ll need to conduct a design review for each increment (“each manufacturer shall establish and maintain procedures to ensure that formal documented reviews of the design results are planned and conducted at appropriate stages of the device’s design development” (CFR 820.30)), to make sure that the work is complete, the system has the necessary quality, and that we have followed the practices, agile and otherwise, that we have defined in our quality system.

Moving Forward with the Agile High Assurance Software Engineering Practices

Finally, with all this context behind us, we can move forward to a full discussion of the model we posted earlier. For completeness of this post, here is the graphic of the iterative, incremental, agile model we have suggested:

High Assurance Agile Software Development Lifecycle Model

We’ll start an in depth discussion of the high assurance agile software engineering practices we are suggesting in the next post in this series.

Updated Agile Requirements Metamodel (Enterprise Backlog Model)

I was walking through some blog posts in the requirements category today looking for a later version of the full “agile requirements metamodel”. (Some readers commented that “enterprise backlog model” might be a better descriptor, and I tend to agree.) The model has evolved and been refined based on peer reviewer feedback (thanks, Gabor and others) on Agile Software Requirements. Evidently I didn’t track that evolution here on the blog, so here is the updated graphic of the final (at least for this book version) full model as it will appear in the appendix of the book.

Full agile requirements metamodel (Enterprise backlog model)

Note: This metamodel is also integral to the work I’m currently doing with Craig Langenfeld on applying agile development to high assurance systems, so I’m tagging this post to that category as well.

Software Verification and Validation in High Assurance Agile Development: Definitions

Series Background: In this series of posts, I’ve been using Medical Device Development (as Regulated by U.S. FDA via CFR 820.30 and international standard IEC62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development.


In the last post, I introduced a suggested process model that teams could use to visualize and reason about how to apply iterative, incremental, and agile methods in the development of such systems. The graphic is reproduced here:

High Assurance Agile Software Development Lifecycle Model

I described the iterative and incremental nature of the model, and noted that we’ll need a better understanding of verification, validation, and traceability to further understand it.

The terms verification and validation (also abbreviated V&V or SV&V) are often used interchangeably in the industry. The most common interpretation is that they translate to “assured testing” practices. In point of fact, the words have different meanings, and lumping them together as V&V can obscure the constructs entirely.

One of the reasons we’ve picked an exemplar for our discussion is to provide a more definitive basis for the method, grounded in at least one known, public example: in our case the development of medical devices containing software. So we’ll return to these roots for definitions of these terms, and build from there. In an earlier post, I described the chain of regulations covering medical devices marketed in the US, and summarized with the graphic below:

Chain of regulations governing medical device software in the US

While 21 CFR Part 820.30 contains the governing regulatory requirements, it’s surprisingly short and not very explanatory. To address this, FDA produced the document on the bottom right (General Principles of Software Validation; Final Guidance for Industry and Staff) to provide guidance to those who operate under the 820.30 mandate.

This document provides some meaningful definitions, and we’ll start with the most critical ones that are relevant to our model:


“Software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase.

Software verification looks for consistency, completeness, and correctness of the software and its supporting documentation, as it is being developed, and provides support for a subsequent conclusion that software is validated. Software testing is one of many verification activities intended to confirm that software development output meets its input requirements. Other verification activities include various static and dynamic analyses, code and document inspections, walkthroughs, and other techniques.”

Clearly this definition takes us beyond testing, and takes a more “white box” look at assuring that each development activity meets the requirements imposed by the prior activity. With our yet-to-be-described agile, high assurance practices, continuous (and where possible, automated) verification activities will play a central role. Fortunately, we’ll discover that many of these practices (hierarchical requirements, small units of functionality, unit testing, pair/peer review, acceptance testing, etc.) are part of standard, high quality agile hygiene, so we’ll be able to “rigorously apply” existing practices, rather than invent new ones.


“…FDA considers… software validation to be confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.”

“In practice, software validation activities may occur both during, as well as at the end of the software development life cycle to ensure that all requirements have been fulfilled. Since software is usually part of a larger hardware system, the validation of software typically includes evidence that all software requirements have been implemented correctly and completely and are traceable to system requirements. A conclusion that software is validated is highly dependent upon comprehensive software testing, inspections, analyses, and other verification tasks performed at each stage of the software development life cycle. Testing of device software functionality in a simulated use environment, and user site testing are typically included as components of an overall design validation program for a software automated device.”

In addition to these definitions, this document also provides an introduction and definition to a few other requirements that will be imposed on the process:

Software Requirements Specification

“A documented software requirements specification provides a baseline for both validation and verification. The software validation process cannot be completed without an established software requirements specification (Ref: 21 CFR 820.3(z) and (aa) and 820.30(f) and (g)).”

In agile, we don’t often create these in a formal way, instead using the backlog, the collection of user stories and acceptance criteria, test cases, and the code itself to document requirements. But in this context, it is 100% clear that we will need to rigorously develop and maintain a software requirements specification as part of our high assurance, but still largely agile, practices.
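As a sketch of what incrementally assembling an SRS from the backlog might look like, here is a minimal, hypothetical aggregation of user stories into a versioned document. The structure, field names, and IDs are illustrative assumptions, not a prescribed format:

```python
# Hypothetical: build a versioned SRS text from the story backlog.
# Real projects would add approval signatures, change history, etc.
def build_srs(backlog, version):
    """Assemble user stories into a simple, ordered SRS document."""
    lines = [f"Software Requirements Specification v{version}", ""]
    for story_id, text in sorted(backlog.items()):
        lines.append(f"{story_id}: {text}")
    return "\n".join(lines)

backlog = {
    "US-101": "As an EPAT technician I can adjust the energy delivered ...",
    "US-102": "As an EPAT technician I can view the pulse count ...",
}

doc = build_srs(backlog, "1.2")
print(doc.splitlines()[0])  # → Software Requirements Specification v1.2
```

Because the document is generated from the controlled backlog, regenerating it at each increment keeps the SRS current without a separate manual authoring step.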


This document goes on to describe traceability and traceability analysis as one of the primary mechanisms to assure that verification and validation are complete and consistent. However, it doesn’t define traceability; for that we refer to the FDA Glossary of Computer Systems Software Development Terminology, where we find the IEEE definitions:

traceability. (IEEE) (1) The degree to which a relationship can be established between two or more products of the development process, especially products having a predecessor-successor or master-subordinate relationship to one another; e.g., the degree to which the requirements and design of a given software component match. See: consistency. (2) The degree to which each element in a software development product establishes its reason for existing; e.g., the degree to which each element in a bubble chart references the requirement that it satisfies.

Next Steps

With these activity/artifact/process definitions and the lifecycle graphic behind us, we can go on to a more meaningful elaboration of the model. We’ll do that in the next few posts.

An Iterative and Incremental Process Model for Agile Development in Regulated Environments

In this series of posts, I’ve been using Medical Device Development (as Regulated by US FDA via CFR 820.30, and internationally as regulated by IEC 62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high safety, high economic cost of failure) environments in an agile manner.

In an earlier post, based on my reading of the 820.30 regs, I described the chain of regs and explanatory documents that might tend to imply a “waterfall” (sequential, one-pass, stage-gated) development model, which might look something like the following.

An implied, high assurance, waterfall lifecycle model

Whether based on CFR 820.30 or not, this would seem like a reasonable conclusion based on our historical approach to such development. With the addition of single-pass “verification” and “validation” activities, this closely follows the requirements>design>code>test models of the past. On the surface, at least, this model has some obvious advantages:

–  it is simple and logical (what could be more logical than code following requirements, and tests following that?)

–  you only have to do “it all” (especially verification and validation, which can be both labor intensive and error prone) once.

Of course, readers of this blog know it doesn’t work well that way at all, and that’s why we strive for agility in the high assurance markets, just like we do everywhere else. But for many, the apparently beguiling (but false) simplicity of the model is one reason that it has made its way into the various governing corporate quality standards, etc., if for no other reason than “that’s the way we’ve always done it”. (And admittedly, most of the rest of the software industry is doing it that way too).

It Doesn’t Have to be that Way

Momentum aside, however, we now note that while such a process could be inferred from the various governing documents and our own quality affairs personnel (and it certainly has been applied in countless FDA reviews and other quality audits), in point of fact the specific regs we are using in the exemplar DO NOT require such a process:

1)   From General Principles of Software Validation; Final Guidance for Industry and FDA Staff (US FDA CDRH 2002) we note the following:
“This guidance does not recommend any particular lifecycle models….”

2)   Even more specifically, as Pate and Russell note from IEC 62304 (a widely recognized international standard for medical device software, which is largely harmonized with FDA’s interpretation of CFR 820.30):
“these activities and tasks may overlap or interact and may be performed iteratively or recursively. It is not the intent to imply that a waterfall model should be used.”

3)   And finally, from the 2009 Abbott Labs case study describing the successful application of agile development in a Class III (the most stringent FDA regulatory category) Adopting Agile in an FDA Environment:
“we will describe the adoption of agile practices….. This experience has convinced us that an agile approach (multiple iterations during the course of development) is the approach best suited to development of FDA-regulated medical devices.”

There you have it. If we are using waterfall development in support of high assurance systems (at least in the case of the regs we have described here), it is evidently because “we have always done so” (as is indeed still mostly the case in much of the rest of the software industry) rather than because “the regulators made us do it.”

So if, as an industry, we do want to increase the productivity, quality, and yes, even safety and efficacy of the software we produce, it’s time to move on!

Moving to an iterative and incremental, and … agile model.

With a nod to the Abbott Labs whitepaper and the associated presentation, and with due respect to the verification and validation activities that will still be required in high-assurance development (see the High Assurance Agile Development in Medical Devices: Prove It post), we offer the following figure as a potential, general model for iterative and incremental (and, as we will see, increasingly agile) development in high assurance, regulated markets:

Iterative and Incremental High Assurance Lifecycle Model

It can be seen from the figure that development of software in this model does not follow a waterfall, sequential reqs-code-test timeline. Instead, we use a series of short iterations, each of which “defines|builds|verifies” some new and valuable user functionality. (Note: I described this as the Define|Build|Test atomic building block of agile development in Scaling Software Agility.)

Periodically (typically after 3-4 iterations), this new increment of software can then be validated and reviewed, prior to being made available for alpha or beta testing, or general availability release. (The shorter the iterations and increments, the faster the feedback; the faster the feedback, the higher the quality. The shorter and faster we go, the more agile we become.)
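As a rough sketch of this cadence (the numbers and step names are illustrative placeholders, not a prescription), the rhythm is: verification happens inside every iteration, while validation and review gate each increment:

```python
# Illustrative sketch of the iterate-then-validate cadence: every iteration
# defines|builds|verifies; every few iterations, an increment is validated.

ITERATIONS_PER_INCREMENT = 4  # e.g., validate after every 3-4 iterations

def run_release(total_iterations):
    """Return the ordered sequence of lifecycle events for a release."""
    events = []
    for i in range(1, total_iterations + 1):
        events.append(f"iteration {i}: define|build|verify")  # every iteration
        if i % ITERATIONS_PER_INCREMENT == 0:  # increment boundary reached
            events.append(f"increment {i // ITERATIONS_PER_INCREMENT}: validate + review")
    return events

for event in run_release(8):
    print(event)
```

The point of the sketch is simply that verification is continuous and validation is periodic, rather than both being deferred to a single pass at the end of the project.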

With respect to additional elaboration of the model, it occurs to me that I’ve used the terms “verify” and “validate” here as if we knew what they actually meant. Indeed, there is much confusion on the topic, so I’ll try to clarify that in the next post as I go on to explain more about the model above. When we do so, we’ll also hit that nifty high assurance bugaboo called “traceability”. (And doesn’t that one already just sound like some fun?)

High Assurance Agile Software Development in Medical Devices: Prove It

In this series of posts, I’ve been laying the background for describing a set of practices that I believe can help development teams working in high assurance and regulated industries build the highest quality software possible…. using agile methods. It’s not an idle thought. In the last 3-4 years, I’ve had the privilege of working with some extraordinary agile teams, and in addition to their obvious market success, one thing stands out: they have the highest quality solutions and products that I’ve experienced. It isn’t accidental. Properly applied, agile methods build software quality in, rather than leaving gobs of untested code for a nasty waterfall test/triage/death march at the end. And we know that the late elimination of a large number of reported defects does not induce quality; it’s too late for that. In my new book, Agile Software Requirements, I cover a lot of the basic agile requirements practices and how they produce endemically higher quality. The book will be released in the next 30 days or so, and I hope it provides value to all agile development practitioners.

But in this environment, we must go one step further: we not only have to have quality, we have to prove it, as there are QA governance and regulatory affairs people watching our every step.

The Rules of “Prove It”.

In an earlier post, I introduced a useful briefing presented by SoftwareCPR at the recent GE Agile Conference. Based on their experience with, and interpretations of, FDA 820.30 and IEC 62304, Brian Pate and Mike Russell posit that there is a set of immutable rules, the “gotta haves”, if you will, that are necessary for development and deployment of medical device software. I agree with the assessment, and with their permission, I now don’t have to make those up myself! Figure 1 below provides their view of the “gotta haves” for medical device development (and by inference at least, a comparable set of gotta haves occurs in most other high reliability, regulated environments):

It’s probably obvious that most of these items would not be required in typical agile environments, even environments where the cost of errors is prohibitively high. But if we are going to use agile in this environment, we are going to have to make our peace with these requirements, and then use agile, iterative and incremental approaches to building the code we will eventually deploy. That means that many of the above artifacts (for example: Software Requirements Specifications, traceability, and verification and validation activities) will need to be iterative and incremental, too. We can’t just do them once at the beginning, or we will fall back into the waterfall trap.

I guess we expected that.

In the next post, I’ll describe one view of such an agile model, and then proceed to address these key requirements, without killing the goose that laid this golden agile egg.


Upcoming AAMI Technical Information Report on Agile and Medical Device Software

Mike Russell and Brian Lewis Pate of SoftwareCPR just informed me of an important new development with respect to the use of agile in medical device development (we are using medical device development as the exemplar for high assurance development in this blog series).

Specifically the AAMI (Association for the Advancement of Medical Instrumentation) is developing a Technical Information Report (TIR) which is devoted precisely to this topic. The description of the TIR is below. Mike and Brian tell me that it will likely be published sometime next year. (Hopefully, I might become involved as a reviewer). In any case, I’ll proceed apace with this blog series over the next few months, because Rally Software (my sponsor for some of this work) and I have some groundwork that we need to lay well prior to next year. Thanks for the tip Brian and Mike!

The foreword and abstract are included below:


Agile software development (hereafter referred to simply as “Agile”) has been evolving for many years. Agile began as a niche concept being used in small pockets of the software industry, and has since grown to be well established in many different software development contexts. As it has grown, it has been adapted to fit the unique needs of a specific context. For Agile to be established in the medical device software industry, guidance is needed to adapt it to fit that unique context. This TIR fulfills that need.

Agile was developed in response to quality and efficiency concerns posed by existing methods of software development. Too often, software projects would deliver software with low customer value and poor quality, and were late to market and costly to develop. Agile attempts to solve those problems that are common to any type of software development.

In addition, Agile brings many quality and safety risk management benefits that are good for the Medical Device world. Some of those include:

  • Continuous focus on safety and customer value through backlog prioritization/management and customer feedback
  • Continuous assessment of quality through continuous integration and testing
  • Continuous evolution of the product and the process that produces it
  • Continuous focus on getting to “done” to demonstrate the completion of deliverables and activities that satisfy the Quality Management System
  • Continuous participation in and collective ownership of risk management

Agile’s principles, practices, and life-cycle model have been viewed as incompatible with regulatory agency requirements and expectations for a medical device software quality management system. For example, the Agile Manifesto has value statements that seem contrary to the values of a quality management system. Fortunately, along with some strong and perhaps controversial values and principles, Agile also brings the concept of adapting to the needs of the context, taking into account all of their customers’ and shareholders’ needs.

Purpose of this TIR

Agile is compatible with, and can be used in, a regulated environment.

This TIR will examine Agile’s goals, values, principles, and practices, and provide guidance on how to apply Agile to medical device software development. It will:

  • Provide motivation for the use of Agile.
  • Clarify misconceptions about the suitability of Agile.
  • Provide direction on the application of Agile so as to meet Quality System Requirements.

This TIR will provide guidance on the proper way to apply Agile.


This TIR provides guidance on how to achieve the benefits of Agile while being compliant with medical device regulations.


This TIR provides perspectives on the application of Agile during medical device software development. It relates them to the following existing standards, regulations and guidance:

  • EN 62304 – Life Cycle Requirements for Medical Device Software
  • Design Controls as required by CFR – Quality System Regulations
  • FDA guidance on premarket software guidance and general principles of software validation

The following groups are the intended audience for this TIR:

  • Medical device manufacturers who are planning to use Agile techniques
  • Manufacturers who are currently practicing agile and are entering the regulated (medical device) space, for example, Electronic Health Record manufacturers
  • Software development teams, including software test and quality groups
  • Senior Management, Project Managers, Quality Managers
  • Quality systems and Regulatory affairs personnel
  • Internal and external auditors
  • Regulating bodies, agencies, and organizations responsible for overseeing the safety and effectiveness of medical devices

This TIR is not intended to be used as an educational tool or tutorial for the following:

  • Agile Development practice
  • Quality System Regulations

This TIR does not provide perspectives for the following standards:

  • 80002-1
  • ISO 14971

This Technical Information Report (TIR) should be regarded as a reference and as a guidance which is intended to provide recommendations and perspectives for complying with international standards and FDA guidance documents when using agile practices in the development of medical device software. This TIR is not intended to be a prescription for a specific situation or method.

More Agile Momentum in Healthcare: GE Agile Conference

Craig Langenfeld of Rally Software (Craig has been providing some thought leadership and has been working directly with me on this blog series) just brought me up to date on another leading indicator of agile’s march across the chasm into high assurance development. Last month, Craig attended the first-ever GE Agile Conference at the GE Advanced Manufacturing & Software Technology Center in Detroit, Michigan.

The conference was hosted by Carl Shaw, GE Healthcare Global IT Agile Champion (kind of a telling title, eh?). Craig noted that topics ranged from basic, introductory Agile concepts to advanced examples of using Agile to build medical devices at GE.

Ryan Martens, Rally Founder and CTO, was the keynote speaker on the last day of the three-day conference. He spoke on applying Agile concepts in a highly innovative world to get products into the hands of those who can better social wellbeing, faster.

Craig also noted that Brian Pate of SoftwareCPR gave a presentation on “Agile methods for medical device software,” which is available on their website.

It’s a solid presentation with some guidance on IEC 62304 (an IEC and ISO international standard for medical device software, one that is harmonized with FDA CFR 820.30; see my earlier post). It’s a great example of what this series is all about.

Brian notes three common misconceptions that can interfere with the adoption of agile methods in medical devices, including:

1)   lack of obvious rigorous treatment of requirements

2)   no formal verification and validation in normal agile practice

3)   no formal agile mechanisms for dealing with risk assessment, hazard analysis, and hazard mitigation.

But he goes on to illustrate how to address these misconceptions in a high-assurance development model as well. Refer directly to the presentation for his advice.

I’ll be dealing with these issues as well in this series. In the next post or two, I’ll suggest an agile lifecycle model that can support rigorously assured software quality, and provide recommendations for continuous verification and as-necessary validation practices. Then we’ll get into the meat of the specific practices that agile teams can use to develop such software in an agile, safe, and efficacious way.

Note: Special thanks to Craig Langenfeld of Rally, Carl Shaw of GE Healthcare, and Brian Pate of SoftwareCPR for providing content and permissions for this particular post.