Series Background: In this series of posts, I’ve been using Medical Device Development (as regulated by the U.S. FDA via 21 CFR 820.30 and the international standard IEC 62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development. Thanks to Craig Langenfeld of Rally for his ongoing collaboration on this series.
In prior posts, I’ve introduced a suggested agile lifecycle model for verification and validation of software in high assurance development. In the last post, I defined a set of ground rules that agile teams will have to follow. For convenience, the lifecycle model graphic is reproduced here:
I’ve also provided definitions of the key terms verification, validation, and traceability from the medical device exemplar.
This post describes the continuous verification activities implied by the model in additional detail.
Iterations and Increments
In the agile model, development of software assets occurs in a series of system iterations (or sprints). Iterations are short, timeboxed envelopes in which teams define (requirements and design), build (code), and test new software. In support of our high assurance strategy, we’ll replace “test” with a more robust “verification” process here. So the activity becomes define|build|verify.
Periodically, the iterations are validated so as to become potentially shippable increments (PSIs). They may or may not be shipped, but they are at least valuable and evaluable, and fully validated to their intended use.
In Scaling Software Agility, I described a common development team model, based primarily on Scrum, which organizes the resources to do all these activities effectively within a short timebox. We called that the Define|Build|Test team. Here, our Define|Build|Verify team consists of:
- A product owner who has the responsibility to define and manage the backlog (the things we still need to have the system do). In our exemplar, this person must have the requisite medical domain knowledge and authority to make critical decisions on the safety and efficacy of the product.
- Developers who implement the code and the unit tests that exercise it
- Testers who develop and execute tests intended to assure that the system always meets its requirements
- (and optionally) Quality assurance personnel who assist with verification and necessary documentation activities.
We mention the team construct here because it is assumed in this model (and in all agile development). However, for a high assurance company moving from waterfall to agile development, we also recognize that this step alone may be a big challenge, as it breaks down the functional silos and reorganizes along value delivery lines. Without this step, real agility cannot likely occur. Also, because this is how we do what we do, formalization of the team construct may be important enough to be called out in the company’s quality management system (QMS) practice guidelines.
Software Verification in the Course of the Iteration
With such a team in place, the team has all the skills necessary to build the system incrementally. However, to avoid a large, manual “end game” (and the delayed feedback loops that inevitably come from deferring quality and verification to the end), we’ll need to be able to verify that the software conforms to its intended uses as it is developed. Fortunately, with properly implemented agile, quality is built in, one user story at a time. That’s one of the prime benefits of agile: higher innate quality and better fitness for use. So here we’ll just be leveraging what good agile teams already do, and we’ll have tooling to help us automate as much of this as possible.
Leveraging the Agile Requirements Backlog Model
Speaking of leveraging, in support of verification (and soon, validation) we’ll also be building on the agile requirements model I’ve described in the new Agile Software Requirements book and blog series. Relevant portions of that full model appear in the figure below:
Capturing the User Stories as the Software Requirements Specification
For now, we are primarily concerned with the lower right portion (user stories, tasks, acceptance tests) of this model. (Note: we’ll return to Features, Feature Acceptance Tests, Nonfunctional requirements, etc. when we elevate the discussion to discuss Product Requirements and validation in later posts).
Of course, in accordance with the ground rules we established, in order to actually do verification of the software assets, we’ll have to always maintain a current copy of the software requirements, which will consist primarily of the collection of user stories we use to evolve the system over the course of each iteration. We can picture that simply as follows:
A user story is a form of requirements expression used in agile development (see the user story whitepaper). Each user story is a brief statement of intent that describes something the system needs to do for the user. Typically, each user story is expressed in the “user voice” form as follows:
As a <role> I can <activity> so that <value>
- Role – represents who is performing the action.
- Activity – represents the action to be performed.
- Value – represents the value to the user (often the patient in our exemplar) that results from the activity.
For (a fairly high level) example:
“As an EPAT (Extracorporeal Pulse Activation Technology) technician, (<role>) I can adjust the energy delivered (<what I do with the system>) so as to deliver higher or lower energy pulses to the patient’s treatment area (<value the patient receives from my action>).”
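The user-voice form above can be captured as a small data structure, which is also how most agile project management tools hold stories. This is a minimal sketch; the `UserStory` class and its field names are illustrative, not taken from any particular tool.

```python
from dataclasses import dataclass


@dataclass
class UserStory:
    """A user story in the 'user voice' form: role, activity, value."""
    role: str       # who performs the action
    activity: str   # the action to be performed
    value: str      # the value the user (often the patient) receives

    def as_user_voice(self) -> str:
        # Render the story in the canonical "As a ... I can ... so that ..." form.
        return f"As a {self.role}, I can {self.activity} so that {self.value}."


# The EPAT example from the text, expressed in this structure:
story = UserStory(
    role="EPAT technician",
    activity="adjust the energy delivered",
    value="higher or lower energy pulses reach the patient's treatment area",
)
print(story.as_user_voice())
```

Keeping the three parts as separate fields (rather than one free-text string) makes it easier later to link each story to its tests and code changes by a stable identity.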
User Story Verification
To assure that each new user story works as intended, we’ll have to verify it. To do that, we’ll use three built-in (and semi-automated) traceability paths:
- User story to code – path to the SCM record that illustrates when and where the code was changed to achieve the story
- User story to code-level unit test (the “white box” test that assures the new code works to its internal specifications, also maintained via a path to the SCM change set)
- User Story to Story acceptance tests (black box testing of the user story to make sure the system functions as intended)
To do this explicitly, we can use a standard tasking model for each new user story. Each task (maintained in our agile project management tooling, and directly associated with the story) can be used to establish a traceability link to the specific artifact we need as pictured below:
These artifacts (SRS, code and unit tests, story acceptance tests) must persist indefinitely (as long as our solution is used in the market), so we’ll need competent tooling to help manage this.
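The three traceability paths can be sketched as a simple record per story. This is a hypothetical illustration only; the `TraceRecord` type and its field names are invented here, and in practice these links would live in the agile project management and SCM tooling, not in application code.

```python
from dataclasses import dataclass, field


@dataclass
class TraceRecord:
    """Hypothetical traceability record: one per user story."""
    story_id: str
    # Story -> code: SCM change sets that implemented the story
    scm_changesets: list = field(default_factory=list)
    # Story -> unit tests: white-box tests for the new code
    unit_tests: list = field(default_factory=list)
    # Story -> acceptance tests: black-box tests of intended behavior
    acceptance_tests: list = field(default_factory=list)

    def is_verified(self) -> bool:
        # A story counts as verified only when all three paths are populated.
        return bool(self.scm_changesets and self.unit_tests and self.acceptance_tests)


rec = TraceRecord(story_id="US-142")           # story ID is illustrative
rec.scm_changesets.append("changeset:9f3a2c")  # link to the SCM record
rec.unit_tests.append("test_energy_controller.py::test_set_level")
assert not rec.is_verified()                   # acceptance-test link still missing
rec.acceptance_tests.append("AT-142-01")
assert rec.is_verified()
```

A query over such records (manual or automated) is what lets an auditor confirm that no story shipped without its code, unit test, and acceptance test links in place.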
Taken together, these elements
1) a defined and controlled statement of requirements (collection of user stories and other items)
2) traceability to code that implements the user story
3) traceability to a unit test that assures the code works as it is designed
4) traceability to a story acceptance test which verifies that the system behaves as intended based on this new user story
should provide sufficient verification that the code fulfills the story as intended. After all, what more can we do at this level?
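To make the black-box side of this concrete, here is a sketch of what a story acceptance test for the EPAT example might look like. The `EnergyController` class, its level range, and the test names are all hypothetical stand-ins for the real device software under test, assuming a simple integer energy scale with safety limits.

```python
class EnergyController:
    """Hypothetical stand-in for the EPAT energy-adjustment software."""
    MIN_LEVEL, MAX_LEVEL = 1, 10  # assumed safe treatment range

    def __init__(self):
        self.level = self.MIN_LEVEL

    def adjust(self, level: int) -> None:
        # Reject settings outside the safe treatment range.
        if not (self.MIN_LEVEL <= level <= self.MAX_LEVEL):
            raise ValueError(f"unsafe energy level: {level}")
        self.level = level


def test_technician_can_raise_and_lower_energy():
    # Black-box check of the story's intent: energy can go up and down.
    ctrl = EnergyController()
    ctrl.adjust(7)
    assert ctrl.level == 7
    ctrl.adjust(3)
    assert ctrl.level == 3


def test_unsafe_levels_are_rejected():
    # The safety-relevant negative case: out-of-range settings must fail
    # and leave the prior setting untouched.
    ctrl = EnergyController()
    try:
        ctrl.adjust(99)
        assert False, "expected ValueError for unsafe level"
    except ValueError:
        pass
    assert ctrl.level == EnergyController.MIN_LEVEL


test_technician_can_raise_and_lower_energy()
test_unsafe_levels_are_rejected()
```

Note that the acceptance test exercises the story through its external behavior only; the white-box unit tests for the controller’s internals would be separate artifacts, traced through their own link.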
In this manner, the requirements evolve one iteration at a time, the system and the SRS evolve with them, and we can demonstrate throughout that the system always behaves as we intended it to.
User Story Informal Validation
In addition, at the end of the iteration, each user story is demonstrated to the product owner and other stakeholders. This provides a first, informal validation that the story does meet its intended requirements. And if by any chance it isn’t right, it can be addressed in the next iteration.
This completes the first half of our agile, high assurance verification and validation story. Validation will be the subject of the next post in the series.