Additional Perspective on Agile, V&V and FDA

Brian Shoemaker, Ph.D., who has clear expertise in software quality and FDA regulation (Brian has more than thirteen years’ experience with software quality and validation in the FDA-regulated industry), added an in-depth comment describing additional documentation guidance for doing software V&V. He also gives his opinion on the applicability of agile in this world (just do it?), as well as some implications for updates to a company’s QMS. Since WordPress comments aren’t particularly reader-friendly, and there is real value in his comments, I’ve reposted excerpts here:

“Thanks for putting together the comments you’ve published here… FDA has other guidances relevant to software besides the Design Control Guidance and General Principles of Software Validation, though those two are certainly the most important.

A more complete list would be:

  • Design Control Guidance For Medical Device Manufacturers (March 11, 1997)
  • General Principles of Software Validation (January 11, 2002)
  • Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (May 11, 2005)
  • Off-The-Shelf Software Use in Medical Devices (Sep. 9, 1999)
  • Cybersecurity for Networked Medical Devices Containing Off-the-Shelf (OTS) Software (Jan. 14, 2005)
  • Computerized Systems Used in Clinical Investigations (May 2007 – note this is not directly relevant to medical device software)
  • 21 CFR Part 11: Electronic Records and Signatures (Aug. 1997)
  • Draft Guidance: Radio-Frequency Wireless Technology in Medical Devices (PDF) (January 3, 2007)

The “Premarket Submissions” guidance (third in the list) is often misread as specifying what processes a firm needs to use based on the hazard classification of the device (termed “Level of Concern” in the guidance). This is not the intent at all – rather, the guidance states what documents should be SUBMITTED based on the Level of Concern; the documents should exist anyway, whether they’re submitted or not.

I’ve argued for several years, just as you do, that nothing in GPSV specifies that medical device developers are required to use a waterfall approach.
In the October 23 post, you interpret the various activities required for Design Control. I disagree that waterfall methodology is implied here, but I can see how that interpretation can slip in – and I’ve worked with plenty of clients who believed that a waterfall approach is required.
Notice that the terms in the Design Control section of Part 820 all come directly from ISO 9000 (or ISO 13485, its medical device half-brother). I’m convinced that one can match up every quality-related activity in a software process (i.e. quality planning, requirements development, design and coding, configuration management, defect tracking) with an appropriate subparagraph in part 820.
Also notice that “verify” and “validate” are ISO 9000 concepts.

In the projects I’ve worked on with client companies, the “Product Requirements Document” or “System Requirements Specification” originated with Marketing, for better or worse (the marketing folks are often harder to train than engineers when it comes to concisely and clearly expressing requirements).

The concept of a “define-build-verify” team is powerful enough that, in my opinion, it should be described in a planning document (Software Development Plan or Quality Plan, depending on how the company structures such things). Planning documents not only outline steps to be followed, but give specific responsibility for specific steps – describing the team’s structure and tasks is important not only for the participants but also for the reviewer who checks the documentation after the fact.”

End of Brian’s quote.


Software Verification and Validation in High Assurance Agile Development: Verification: SRS and User Stories

Series Background: In this series of posts, I’ve been using Medical Device Development (as regulated by the U.S. FDA via 21 CFR 820.30 and the international standard IEC 62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development. Thanks to Craig Langenfeld of Rally for his ongoing collaboration on this series.


In prior posts, I’ve introduced a suggested agile lifecycle model for verification and validation of software in high assurance development. In the last post, I defined a set of ground rules that agile teams will have to follow. For convenience, the lifecycle model graphic is reproduced here:


High Assurance Agile Software Development Lifecycle Model

I’ve also provided definitions of the key terms, verification, validation and traceability from the medical device exemplar.

This post describes the continuous verification activities implied by the model in additional detail.

Iterations and Increments

In the agile model, development of software assets occurs in a series of system iterations (or sprints). Iterations are short, timeboxed envelopes in which teams define (requirements and design), build (code), and test new software. In support of our high assurance strategy, we’ll replace “test” with a more robust “verification” process here, so the activity becomes define|build|verify.

Periodically, the iterations are validated so as to become potentially shippable increments (PSIs). They may or may not be shipped, but they are at least valuable and evaluable, and fully validated to their intended use.

Define|Build|Verify Teams

In Scaling Software Agility, I described a common development team model, based primarily on Scrum, which organizes the resources to more effectively do all these activities in a short time box. We called that the Define|Build|Test team. Here, our Define|Build|Verify teams consist of:

  • A product owner who has the responsibility to define and manage the backlog (the things we still need to have the system do). In our exemplar, this person must have the requisite medical domain knowledge and authority to make critical decisions on the safety and efficacy of the product.
  • Developers who implement the code and unit tests that test the code
  • Testers who develop and execute tests intended to assure that the system always meets its requirements
  • (and optionally) Quality assurance personnel who assist with verification and necessary documentation activities.

We mention the team construct here because it is assumed in this model (and in all agile development). However, for a high assurance company moving from waterfall to agile development, we also recognize that this step alone may be a big challenge, as it breaks down the functional silos and reorganizes along value delivery lines. Without this step, real agility is unlikely to occur. Also, because this is how we do what we do, formalization of the team construct may be important enough to be called out in the company’s quality management system (QMS) practice guidelines.

Software Verification in the Course of the Iteration

With such a team in place, the team has all the skills necessary to build the system incrementally. However, in order to avoid a large manual “end game” (and the inevitable delayed feedback loops that come from deferring quality and verification to the end), we’ll need to be able to verify that the software conforms to its intended uses as it is developed. Fortunately, with properly implemented agile, quality is built in, one user story at a time. That’s one of the prime benefits of agile: higher innate quality plus better fitness for use, so here we’ll just be leveraging what good agile teams already do. We’ll also have tooling to help us automate as much of this as possible.

Leveraging the Agile Requirements Backlog Model

Speaking of leveraging, in support of verification (and soon, validation) we’ll also be building on the agile requirements model I’ve described in the new Agile Software Requirements book and blog series. Relevant portions of that full model appear in the figure below:

High Assurance Requirements metamodel (portion of the Enterprise backlog model)

Capturing the User Stories as the Software Requirements Specification

For now, we are primarily concerned with the lower right portion (user stories, tasks, acceptance tests) of this model. (Note: we’ll return to Features, Feature Acceptance Tests, Nonfunctional Requirements, etc. when we elevate the discussion to Product Requirements and validation in later posts.)

Of course, in accordance with the ground rules we established, in order to actually do verification of the software assets, we’ll have to always maintain a current copy of the software requirements, which will consist primarily of the collection of user stories we use to evolve the system over the course of each iteration. We can picture that simply as follows:

User Stories

A user story is a form of requirements expression used in agile development (see the user story whitepaper). Each user story is a brief statement of intent that describes something the system needs to do for the user. Typically, each user story is expressed in the “user voice” form as follows:

As a <role> I can <activity> so that <value>


  • Role represents who is performing the action.
  • Activity represents the action to be performed.
  • Value represents the value to the user (often the patient in our exemplar) that results from the activity.

For a (fairly high level) example:

“As an EPAT (Extracorporeal Pulse Activation Technology) technician, (<role>) I can adjust the energy delivered (<what I do with the system>) so as to deliver higher or lower energy pulses to the patient’s treatment area (<value the patient receives from my action>).”
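Stories like this are typically captured in agile project management tooling. As a minimal sketch (the UserStory class and its fields are illustrative assumptions, not the schema of any particular tool), the user-voice form maps naturally onto a small structured record:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserStory:
    """One backlog item in the 'user voice' form."""
    role: str
    activity: str
    value: str

    def text(self) -> str:
        # Pick "a"/"an" so the rendered sentence reads naturally.
        article = "an" if self.role[:1].lower() in "aeiou" else "a"
        return f"As {article} {self.role} I can {self.activity} so that {self.value}."

story = UserStory(
    role="EPAT technician",
    activity="adjust the energy delivered",
    value="higher or lower energy pulses reach the patient's treatment area",
)
```

Keeping the three parts as separate fields, rather than as one free-text sentence, is what later lets tooling link each story to its tests and report against the backlog.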

User Story Verification

To assure that each new user story works as intended, we’ll have to verify it. To do that, we’ll use three built-in (and semi-automated) traceability paths:

  • User story to code – path to the SCM record that illustrates when and where the code was changed to achieve the story
  • User story to code-level unit test (the “white box” test which assures the new code works to its internal specifications, also maintained via a path to the SCM change set)
  • User Story to Story acceptance tests (black box testing of the user story to make sure the system functions as intended)
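To make the third path concrete, here is a sketch of what a story acceptance test for our EPAT example might look like. The EnergyController API, its safe range, and the test name are all hypothetical, invented only to illustrate black-box verification of a story:

```python
# Hypothetical device-software API, used only for illustration.
class EnergyController:
    MIN_MJ, MAX_MJ = 10, 180  # assumed safe energy range, millijoules

    def __init__(self):
        self.energy_mj = self.MIN_MJ

    def adjust_energy(self, energy_mj: int) -> None:
        if not (self.MIN_MJ <= energy_mj <= self.MAX_MJ):
            raise ValueError(f"energy {energy_mj} mJ outside safe range")
        self.energy_mj = energy_mj

# Story acceptance test: black-box check that the technician can raise
# and lower the delivered energy, and that unsafe settings are rejected.
def test_technician_can_adjust_energy():
    ctrl = EnergyController()
    ctrl.adjust_energy(120)          # raise energy
    assert ctrl.energy_mj == 120
    ctrl.adjust_energy(40)           # lower energy
    assert ctrl.energy_mj == 40
    try:
        ctrl.adjust_energy(999)      # outside the safe range: must be refused
        assert False, "expected rejection"
    except ValueError:
        assert ctrl.energy_mj == 40  # setting unchanged after rejection
```

A test runner picks this up in the regression suite, so the story stays verified in every subsequent iteration, not just the one that built it.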

To do this explicitly, we can use a standard tasking model for each new user story. Each task (maintained in our agile project management tooling, and directly associated with the story) can be used to establish a traceability link to the specific artifact we need as pictured below:

These artifacts (SRS, code and unit tests, story acceptance tests) must persist indefinitely (as long as our solution is used in the market), so we’ll need competent tooling to help manage this.
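As a sketch of what such tooling might record (the TraceRecord structure and the IDs are assumptions for illustration, not any vendor’s schema), each story can carry its three traceability paths explicitly:

```python
from dataclasses import dataclass, field

@dataclass
class TraceRecord:
    """Links one user story to the artifacts that verify it."""
    story_id: str
    changesets: list = field(default_factory=list)        # SCM change-set IDs
    unit_tests: list = field(default_factory=list)        # white-box tests
    acceptance_tests: list = field(default_factory=list)  # black-box tests

    def is_verified(self) -> bool:
        # A story counts as verified only when all three paths exist.
        return bool(self.changesets and self.unit_tests and self.acceptance_tests)

rec = TraceRecord("US-042")
rec.changesets.append("scm:7f3a2c")
rec.unit_tests.append("test_energy_bounds")
assert not rec.is_verified()  # acceptance-test link still missing
rec.acceptance_tests.append("test_technician_can_adjust_energy")
assert rec.is_verified()
```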

Taken together, these activities

1)   a defined and controlled statement of requirements (collection of user stories and other items)

2)   traceability to code that implements the user story

3)   traceability to a unit test that assures the code works as it is designed

4)   traceability to a story acceptance test which verifies that the system behaves as intended based on this new user story

should provide sufficient verification that the code fulfills the story as intended. After all, what more can we do at this level?

In this manner, the requirements evolve one iteration at a time, the system evolves with it, the SRS evolves too, and we can demonstrate throughout that the system always behaves as we intended it to.

User Story Informal Validation

In addition, at the end of the iteration, each user story is demonstrated to the product owner and other stakeholders. This provides a first, informal validation that the story does meet its intended requirements. And if by any chance it isn’t right, it can be addressed in the next iteration.

Looking Forward

This completes the first half of our agile, high assurance verification and validation story. Validation will be the subject of the next post in the series.

Software Verification and Validation in High Assurance Agile Development: Ground Rules

Series Background: In this series of posts, I’ve been using Medical Device Development (as regulated by the U.S. FDA via 21 CFR 820.30 and the international standard IEC 62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development.


In an earlier post, I noted that there is a set of immutable rules (driven by regulations, interpretations of regulations, or established auditing practices) associated with software development, and thereby agile software development, for high assurance systems. Within our medical device exemplar, for example, we noted that the folks at SoftwareCPR called out the set of things that teams must address in order to be in compliance with U.S. FDA CFR 820.30 and IEC 62304. Not surprisingly, it’s a pretty long list, as this graphic shows.

Many of these artifacts include the word “plan,” so it’s clear that plans for activities such as verification and validation, problem resolution, risk management, etc., must be developed, documented, and most importantly, followed. Since these plans form much of the body of the enterprise’s “Quality Management System” (QMS, the body of internal governing documents and procedures that define how quality is achieved and regulations are met), many of these will have to be modified as the company adopts agile methods.

We’ll return to that activity later, but initially, the primary objective of this series is to describe a suggested high assurance agile software engineering process that covers the definition, development, and verification of the code itself. Without that, we won’t have accomplished much, and once we’ve done that, the necessary changes to a QMS will be more obvious.

In order to do THAT, we’ll need to address a subset of this list, and we’ll need some specific artifacts and activities (our “ground rules”) to follow. We’ll highlight those in the balance of this post.

Software Requirements Specification

As “the software validation process cannot be completed without an established software requirements specification” (General Principles of Software Validation, FDA CDRH), this is the seminal document that will control most of our work. It must be correct, accurate, complete, consistent, version controlled, and signed. This document must cover (paraphrasing from the above):

–       All software system inputs, outputs, and functions
–       All performance requirements, including throughput, reliability, accuracy, and response times (i.e., all nonfunctional requirements)
–       Definition of external and user interfaces
–       User interactions
–       Operating environments (platforms, operating systems, other applications)
–       All ranges, limits, defaults, and specific values the software will accept
–       All safety-related requirements, specifications, features, or functions implemented in software

However, as we’ve noted before, the fact that we need such a document doesn’t mandate that we do it all Big and Up-Front. Indeed, to be agile, we can and will develop it incrementally. But prior to any release to users, we’ll have to spiff it up, put a bow on it, and make sure that, in aggregate, it correctly reflects the current state of the software system behavior as of that time. That’s real work, but it doesn’t mandate that we do it only once, and up-front.
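A minimal sketch of that incremental assembly (the story IDs and the dictionary-based SRS snapshot are illustrative assumptions only): each iteration contributes new or revised stories, and it is the aggregate snapshot, not any single iteration, that gets spiffed up and signed at release time.

```python
def build_srs(stories_by_iteration):
    """Fold per-iteration story sets into one current SRS snapshot.
    Later iterations may revise earlier stories; the latest text wins."""
    srs = {}
    for iteration in stories_by_iteration:
        for story_id, text in iteration.items():
            srs[story_id] = text
    return srs

iterations = [
    {"US-1": "As a technician I can power on the device so that treatment can begin."},
    {"US-1": "As a technician I can power on the device and run a self-test so that faults are caught before treatment.",
     "US-2": "As a technician I can adjust the energy delivered so that pulses match the treatment plan."},
]
srs = build_srs(iterations)
assert len(srs) == 2               # one current entry per story
assert "self-test" in srs["US-1"]  # the iteration-2 revision supersedes
```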

Product Requirements Document/System Requirements Specification

While the centerpiece of our world is the SRS, it doesn’t stand alone. Indeed, for most systems, including systems of systems and devices which contain both hardware and software, the governing document which resides above the SRS is usually a Product Requirements Document (we’ll use the generic acronym PRD) or System Requirements Specification, which contains the complete specifications for the device as a whole.

The development and maintenance of this document is just as important as the SRS, and we’ll return to it in future posts. Since it’s a fairly traditional document, one can refer to a variety of texts for descriptions of this document, including some of my own earlier works (Managing Software Requirements, First (1999) and Second (2003) Editions). In a fashion similar to the SRS, it will be developed incrementally, and the system will be validated against these governing requirements when necessary.

Verification (with traceability) Processes

Verification is used to illustrate how each phase of the process (PRD to SRS, SRS to code, etc.) meets the requirements imposed by the prior phase (see the definitions post). As we develop the code, we’ll have to both 1) make sure that it works absolutely correctly, and 2) be able to prove that it does via documentation and an audit trail. We’ll use agile methods to do it in real time, and just-in-time, but we’ll still have to verify that the code works as intended and be able to demonstrate traceability from:

  • Product Requirements Document to Software Requirements Specification
  • Software Requirements Specification (we’ll use small, agile “user stories” as objects which express necessary system behaviors) to “story acceptance tests”
  • Software Requirements Specification to code (via SCM change item)
  • Software Requirements Specification to code unit tests (via SCM change item)

We’ll also need to be able to prove that we’ve done all this via change control mechanisms, tracking and traceability matrices, and regression test automation. Thankfully, we’ll be able to automate much of this with the new breed of agile tooling we can apply in support of agile requirements management, coding practices, unit testing, acceptance test automation, and SCM and build management, but we’ll have to do it, automated or not.
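As a sketch of what that automation might check (the matrix layout and the requirement IDs are assumptions for illustration), a traceability audit reduces to scanning for requirements with any missing link:

```python
def trace_gaps(matrix):
    """Given {requirement_id: {"code": [...], "unit_tests": [...],
    "acceptance_tests": [...]}}, return the requirement IDs that are
    missing any of the three required traceability links."""
    required = ("code", "unit_tests", "acceptance_tests")
    return sorted(
        req_id for req_id, links in matrix.items()
        if any(not links.get(kind) for kind in required)
    )

matrix = {
    "SRS-001": {"code": ["scm:1a2b"], "unit_tests": ["t1"], "acceptance_tests": ["a1"]},
    "SRS-002": {"code": ["scm:3c4d"], "unit_tests": [], "acceptance_tests": ["a2"]},
}
assert trace_gaps(matrix) == ["SRS-002"]  # unit-test link missing
```

Run on every build, a check like this turns the traceability matrix from an end-game document into a continuously enforced gate.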

Validation Processes

The artifacts we described above and the code that delivers the end user value will evolve incrementally, one iteration at a time. However, before any software is released for use by the end user – be it alpha, beta or production code – the results of all this work will have to be validated. “Validation shall be performed under defined operating conditions … testing under actual or simulated usage scenarios…to ensure that the devices conform to defined user needs and intended uses” (CFR 820.30). Validation is the final “end run” on quality, where we evaluate the system increment against its product (system) level functional and nonfunctional requirements to make sure it does what we intended.

From the standpoint of software, we’ll need to include at least the following activities:

  • Aggregate increments of product requirements into a version controlled and definitive statement
  • Aggregate increments of software requirements (user stories) into an SRS document (traditional document, repository, database, etc.)
  • Finalize traceability from PRD to SRS
  • Run all unit testing and story level acceptance testing regression tests to assure that they all still pass
  • Finalize and execute PRD (system) level feature acceptance tests
  • Run all system qualities tests for nonfunctional requirements (reliability, accuracy, security)
  • Run any exploratory, usability, and user acceptance tests
  • Finalize and update traceability matrices to reflect current state
  • Finalize/update risk analysis/hazard mitigation
  • Conduct a design review

Design Review

Lastly, while it’s a little outside the scope of the agile development process per se, we’ll need to conduct a design review for each increment (“each manufacturer shall establish and maintain procedures to ensure that formal documented reviews of the design results are planned and conducted at appropriate stages of the device’s design development” (CFR 820.30)), to make sure that the work is complete, the system has the necessary quality, and that we have followed the practices, agile and otherwise, that we have defined in our quality system.

Moving Forward with the Agile High Assurance Software Engineering Practices

Finally, with all this context behind us, we can move forward to a full discussion of the model we posted earlier. For completeness of this post, here is the graphic of the iterative, incremental, agile model we have suggested:

High Assurance Agile Software Development Lifecycle Model

We’ll start an in depth discussion of the high assurance agile software engineering practices we are suggesting in the next post in this series.

Resource Flexibility in The Agile Enterprise

I received this interesting email from a colleague (who allowed me to share it) a few days back.

“I currently lead a project on how to increase our resource fluidity so that we can effectively assign sufficient manpower to where it matters the most, e.g. working on the highest priority items on the backlog. We acknowledge the need for deeply specialized teams in certain areas and that drastic competence shifts are unrealistic, so the change project aims at finding out how many scrum teams do we need to make more fluid? What competences should these teams have, if they are to be deployed on a wider range of tasks? We also need to address change resistance such as designers or managers being protective of their work domain.

I wonder if you have any advice on how to increase resource fluidity and thereafter managing it.”

Best regards,

— Dr. Mikkel Mørup, Nokia Mobile Phones, Experience Portfolio Management

The email also reminded me of a visual on the same topic that I saw recently, which went something like the following:

Matching project backlog to team competencies

Even if we have driven the excess WIP out of the system, even if we can match capacity to backlog, even if we have shorter queues, even if we can build working software in a short time box, we still have to rapidly adjust resources to match the current backlog; that’s a big part of what makes us agile, after all. But of course, it never really matches. So we constantly struggle to make the best of the situation, and yet who wants to be the epic owner (or project manager) for epics 7 and 8 above, or a team member on Team 4 or 5? A number of awkward conversations and suboptimal economic outcomes are likely to develop.

To address this problem generally, we need two things:

1)   Feature teams, which have the ability to work across the domains of interest (See feature teams/component teams category)

2)   Individuals with “T Skills”, i.e. people who are deep in one area and broad in many. (See Reinertsen: Principles of Product Development Flow, W12).
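The matching problem itself can be sketched in a few lines; the teams, competencies, and epic names below are all invented for illustration. A greedy pass over the priority-ordered backlog makes the mismatch visible: whatever lands in the unassigned list is exactly where feature teams and T skills are needed.

```python
def assign_epics(epics, teams):
    """Greedy matching: walk the backlog in priority order and give each
    epic to the first free team whose competencies cover its domain.
    Returns (assignments, unassigned)."""
    free = dict(teams)  # team name -> set of competencies
    assignments, unassigned = {}, []
    for epic, domain in epics:  # epics already in priority order
        match = next((t for t, skills in free.items() if domain in skills), None)
        if match:
            assignments[epic] = match
            del free[match]     # one epic per team per increment
        else:
            unassigned.append(epic)
    return assignments, unassigned

teams = {"Team 1": {"ui", "billing"}, "Team 2": {"firmware"}, "Team 3": {"ui"}}
epics = [("Epic 7", "firmware"), ("Epic 8", "analytics"), ("Epic 9", "ui")]
done, todo = assign_epics(epics, teams)
assert done == {"Epic 7": "Team 2", "Epic 9": "Team 1"}
assert todo == ["Epic 8"]  # no team covers analytics; hence the need for T skills
```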

As agile as we hope to be, however, this is a journey often measured in years, not weeks or months, and it is sometimes met with resistance from the enterprise, as Mikkel notes above. Resistance can come from:

–       Individuals who are highly specialized experts, and who may even see their knowledge of a domain and specialty as a form of job security

–       Managers who control resources associated with a domain and who can be reluctant to share their resources (and implied base of power)

–       Individuals or managers who may have lost their technical thirst for lifelong learning and are comfortable in their existing skills and knowledge

–       Logistics and infrastructure (CI and build environments, branching structures, etc.) that make it difficult to share code bases across teams

I’m posting this as I would like to hear some other opinions on the topic. As a kickstart, however, my initial response to Mikkel went something as follows:

1)   Make the objective clear. It is product development economics that drive us to this particular change vector, and in the end economics wins (or loses) the game for every enterprise. Make the business case based on economics of agility and flow.

2)   Make the value system clear. We value feature teams and T skills the most highly (yes, we value component teams too; but even there T skills are an asset). Embed the value system in the performance review/appraisal system to eliminate ambiguity about our expectations for individual career growth and advancement.

3)   Adopt XP-like practices and values (simplicity, pair programming, collective ownership, single code line, etc.). Hire people with experience in these areas.

4)   Attack the infrastructure unremittingly. The technical blocks must be eliminated, or the rest of the change program will be far less effective.

For you other enterprise agilists out there, do you have thoughts and experiences that you can share?