Software Verification and Validation in High Assurance Agile Development: Verification: SRS and User Stories

Series Background: In this series of posts, I’ve been using Medical Device Development (as regulated by the U.S. FDA via 21 CFR 820.30 and the international standard IEC 62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development. Thanks to Craig Langenfeld of Rally for his ongoing collaboration on this series.


In prior posts, I introduced a suggested agile lifecycle model for verification and validation of software in high assurance development. In the last post, I defined a set of ground rules that agile teams will have to follow. For convenience, the lifecycle model graphic is reproduced here:


High Assurance Agile Software Development Lifecycle Model

I’ve also provided definitions of the key terms (verification, validation, and traceability) from the medical device exemplar.

This post describes the continuous verification activities implied by the model in additional detail.

Iterations and Increments

In the agile model, development of software assets occurs in a series of system iterations (or sprints). Iterations are short, timeboxed envelopes in which teams define (requirements and design), build (code), and test new software. In support of our high assurance strategy, we’ll replace “test” with a more robust “verification” process here. So the activity becomes define|build|verify.

Periodically, the iterations are validated so as to become potentially shippable increments (PSIs). They may or may not be shipped, but they are at least valuable and evaluable, and fully validated to their intended use.

Define|Build|Verify Teams

In Scaling Software Agility, I described a common development team model, based primarily on Scrum, which organizes the resources to more effectively do all these activities in a short timebox. We called that the Define|Build|Test team. Here, our Define|Build|Verify team consists of:

  • A product owner who has the responsibility to define and manage the backlog (the things we still need to have the system do). In our exemplar, this person must have the requisite medical domain knowledge and authority to make critical decisions on the safety and efficacy of the product.
  • Developers who implement the code and unit tests that test the code
  • Testers who develop and execute tests intended to assure that the system always meets its requirements
  • (and optionally) Quality assurance personnel who assist with verification and necessary documentation activities.

We mention the team construct here because it is assumed in this model (and all agile development). However, for a high assurance company moving from waterfall to agile development, we also recognize that this step alone may be a big challenge, as it breaks down the functional silos and reorganizes along value delivery lines. Without this step, real agility cannot likely occur. Also, because this is how we do what we do, formalization of the team construct may be important enough to be called out in the company’s quality management system (QMS) practice guidelines.

Software Verification in the Course of the Iteration

With such a team in place, the team has all the skills necessary to build the system incrementally. However, in order to avoid a large manual “end game” (and the inevitable delayed feedback loops that come from deferring quality and verification to the end), we’ll need to be able to verify that the software conforms to its intended uses as it is developed. Fortunately, with properly implemented agile, quality is built in, one user story at a time. That’s one of the prime benefits of agile: higher innate quality and better fitness for use, so here we’ll just be leveraging what good agile teams already do. We’ll also have tooling to help us automate as much of this as possible.

Leveraging the Agile Requirements Backlog Model

Speaking of leveraging, in support of verification (and soon, validation) we’ll also be building on the agile requirements model I’ve described in the new Agile Software Requirements book and blog series. Relevant portions of that full model appear in the figure below:

High Assurance Requirements metamodel (portion of the Enterprise backlog model)

Capturing the User Stories as the Software Requirements Specification

For now, we are primarily concerned with the lower right portion (user stories, tasks, acceptance tests) of this model. (Note: we’ll return to Features, Feature Acceptance Tests, Nonfunctional requirements, etc. when we elevate the discussion to discuss Product Requirements and validation in later posts).

Of course, in accordance with the ground rules we established, in order to actually do verification of the software assets, we’ll have to always maintain a current copy of the software requirements, which will consist primarily of the collection of user stories we use to evolve the system over the course of each iteration. We can picture that simply as follows:

User Stories

A user story is a form of requirements expression used in agile development (see the user story whitepaper). Each user story is a brief statement of intent that describes something the system needs to do for the user. Typically, each user story is expressed in the “user voice” form as follows:

As a <role> I can <activity> so that <value>


  • Role represents who is performing the action.
  • Activity represents the action to be performed.
  • Value represents the value to the user (often the patient in our exemplar) that results from the activity.
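Since the collection of user stories will serve as our SRS, each story can be captured as a structured record rather than free text. Here is a minimal sketch in Python; the `UserStory` class and its identifier scheme are entirely hypothetical, not part of any specific tool:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """One entry in the evolving SRS (the collection of user stories)."""
    story_id: str   # hypothetical identifier scheme, e.g. "US-001"
    role: str       # who is performing the action
    activity: str   # the action to be performed
    value: str      # the value that results from the activity

    def as_user_voice(self) -> str:
        """Render the story in the canonical 'user voice' form."""
        return f"As a {self.role} I can {self.activity} so that {self.value}"

story = UserStory(
    "US-001",
    "EPAT technician",
    "adjust the energy delivered",
    "higher or lower energy pulses reach the patient's treatment area",
)
print(story.as_user_voice())
```

Keeping the three fields separate (rather than storing one prose sentence) makes it easy to render the SRS on demand and to attach traceability links to the story’s identifier.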

For a (fairly high level) example:

“As an EPAT (Extracorporeal Pulse Activation Technology) technician, (<role>) I can adjust the energy delivered (<what I do with the system>) so as to deliver higher or lower energy pulses to the patient’s treatment area (<value the patient receives from my action>).”
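A story acceptance test for this example would exercise the story’s observable behavior from the outside (black box). Below is a sketch; `EnergyController` and its safe operating range are hypothetical stand-ins for the real device interface, invented purely for illustration:

```python
class EnergyController:
    """Hypothetical stand-in for the device's energy-adjustment interface."""
    MIN_MJ, MAX_MJ = 10, 180  # assumed safe operating range, in millijoules

    def __init__(self):
        self.energy_mj = self.MIN_MJ

    def adjust(self, energy_mj: int) -> None:
        """Set the delivered energy, rejecting out-of-range values."""
        if not (self.MIN_MJ <= energy_mj <= self.MAX_MJ):
            raise ValueError("energy outside safe operating range")
        self.energy_mj = energy_mj

def test_technician_can_raise_and_lower_energy():
    # Black-box check of the story: higher and lower pulses are deliverable.
    ctrl = EnergyController()
    ctrl.adjust(100)
    assert ctrl.energy_mj == 100  # higher energy delivered
    ctrl.adjust(50)
    assert ctrl.energy_mj == 50   # lower energy delivered

def test_out_of_range_energy_is_rejected():
    # Safety-relevant negative case: an unsafe setting must be refused
    # and the previous setting must remain in effect.
    ctrl = EnergyController()
    try:
        ctrl.adjust(500)
        assert False, "expected the out-of-range value to be rejected"
    except ValueError:
        pass
    assert ctrl.energy_mj == EnergyController.MIN_MJ

test_technician_can_raise_and_lower_energy()
test_out_of_range_energy_is_rejected()
print("story acceptance tests passed")
```

Note that the tests only touch the public interface; they say nothing about how `adjust` is implemented, which is exactly the black-box posture a story acceptance test should take.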

User Story Verification

To assure that each new user story works as intended, we’ll have to verify it. To do that, we’ll use three built-in (and semi-automated) traceability paths:

  • User story to code – a path to the SCM record that illustrates when and where the code was changed to achieve the story
  • User story to code-level unit test – the “white box” test that assures the new code works to its internal specifications, also maintained via a path to the SCM change set
  • User story to story acceptance tests – black-box testing of the user story to make sure the system functions as intended
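The three paths above amount to a traceability matrix keyed by story ID. A minimal sketch follows; all story IDs, commit hashes, and test names are illustrative placeholders, and in practice the links would live in the agile project management and SCM tooling rather than a dict:

```python
# Hypothetical traceability matrix: each user story is linked to the SCM
# change set(s) that implemented it, the unit test(s) that verify the code,
# and the story acceptance test(s).
trace_matrix = {
    "US-001": {
        "changesets": ["a1b2c3d"],             # SCM commits implementing the story
        "unit_tests": ["test_energy_setter"],  # white-box tests
        "acceptance_tests": ["AT-001"],        # black-box story acceptance tests
    },
    "US-002": {
        "changesets": ["d4e5f6a"],
        "unit_tests": [],                      # gap: no unit test linked yet
        "acceptance_tests": ["AT-002"],
    },
}

def verification_gaps(matrix):
    """Return the story IDs missing any of the three traceability paths."""
    required = ("changesets", "unit_tests", "acceptance_tests")
    return sorted(
        story_id
        for story_id, links in matrix.items()
        if any(not links[key] for key in required)
    )

print(verification_gaps(trace_matrix))  # → ['US-002'] (missing a unit test)
```

A check like `verification_gaps` is the kind of thing competent tooling automates: before an iteration closes, every story must show all three links, or it is not verified.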

To do this explicitly, we can use a standard tasking model for each new user story. Each task (maintained in our agile project management tooling, and directly associated with the story) can be used to establish a traceability link to the specific artifact we need as pictured below:

These artifacts (SRS, code and unit tests, story acceptance tests) must persist indefinitely (so long as our solution is used in the market), so we’ll need competent tooling to help manage this.

Taken together, these elements

1)   a defined and controlled statement of requirements (collection of user stories and other items)

2)   traceability to code that implements the user story

3)   traceability to a unit test that assures the code works as it is designed

4)   traceability to a story acceptance test which verifies that the system behaves as intended based on this new user story

should provide sufficient verification that the code fulfills the story as intended. After all, what more can we do at this level?

In this manner, the requirements evolve one iteration at a time, the system and the SRS evolve with them, and we can demonstrate throughout that the system always behaves as we intended it to.

User Story Informal Validation

In addition, at the end of the iteration, each user story is demonstrated to the product owner and other stakeholders. This provides a first, informal validation that the story meets its intended requirements. And if by any chance it isn’t right, it can be addressed in the next iteration.

Looking Forward

This completes the first half of our agile, high assurance verification and validation story. Validation will be the subject of the next post in the series.

One thought on “Software Verification and Validation in High Assurance Agile Development: Verification: SRS and User Stories”

  1. Thanks for putting together the comments you’ve published here. Though I’ve seen some of the presentations you reference (e.g. the J.R. Jenks talk at Agile 2009), you’ve found other examples I wasn’t aware of.

    FDA has other guidances relevant to software besides the Design Control Guidance and General Principles of Software Validation, though those two are certainly the most important.
    A more complete list would be:
    •Design Control Guidance For Medical Device Manufacturers (March 11, 1997)
    •General Principles of Software Validation (January 11, 2002)
    •Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (May 11, 2005)
    •Off-The-Shelf Software Use in Medical Devices (Sep. 9, 1999)
    •Cybersecurity for Networked Medical Devices Containing Off-the-Shelf (OTS) Software (Jan. 14, 2005)
    •Computerized Systems Used in Clinical Investigations (May 2007 – note this is not directly relevant to medical device software)
    •21 CFR Part 11: Electronic Records and Signatures (Aug. 1997)
    •Draft Guidance: Radio-Frequency Wireless Technology in Medical Devices (PDF) (January 3, 2007)

    The “Premarket Submissions” guidance (third in the list) is often misread as specifying what processes a firm needs to use based on the hazard classification of the device (termed “Level of Concern” in the guidance). This is not the intent at all – rather, the guidance states what documents should be SUBMITTED based on the Level of Concern; the documents should exist anyway, whether they’re submitted or not.

    I’ve argued for several years, just as you do, that nothing in GPSV specifies that medical device developers are required to use a waterfall approach.
    In the October 23 post, you interpret the various activities required for Design Control. I disagree that waterfall methodology is implied here, but I can see how that interpretation can slip in – and I’ve worked with plenty of clients who believed that a waterfall approach is required.
    Notice that the terms in the Design Control section of Part 820 all come directly from ISO 9000 (or ISO 13485, its medical device half-brother). I’m convinced that one can match up every quality-related activity in a software process (i.e. quality planning, requirements development, design and coding, configuration management, defect tracking) with an appropriate subparagraph in part 820.
    Also notice that “verify” and “validate” are ISO 9000 concepts.

    In the projects I’ve worked on with client companies, the “Product Requirements Document” or “System Requirements Specification” originated with Marketing, for better or worse (the marketing folks are often harder to train than engineers when it comes to concisely and clearly expressing requirements).

    The concept of a “define-build-verify” team is powerful enough that, in my opinion, it should be described in a planning document (Software Development Plan or Quality Plan, depending on how the company structures such things). Planning documents not only outline steps to be followed, but give specific responsibility for specific steps – describing the team’s structure and tasks are important not only for the participants but also for the reviewer who checks the documentation after the fact.
