New Whitepaper: Agile and High Assurance

Finally.

For those who have been following the High Assurance and Regulated Environments series, I am pleased to announce the completion and publication of the whitepaper: Agile Software Development with Verification and Validation in High Assurance and Regulated Environments.

(Click on the image below to download a copy.)

This document was the result of a year-long research project and collaboration with a number of other experts in the field. I’d like to particularly thank Craig Langenfeld of Rally, who persistently moved the project forward and built all the examples, and Michael Messier, VP for Software R&D at Omnyx, who provided in-depth domain expertise and helpful reviews along the way.

High Assurance Series Update

A few quick notes for those interested in the Agile in High Assurance and Regulated Environments series.

Whitepaper Update

A draft of our whitepaper Agile Software Development with Verification and Validation in High Assurance and Regulated Environments was made available to attendees at Agile 2011 the week before last. If you’d like a copy, please contact Craig Langenfeld at Rally Software (clangenfeld@rallydev.com). The final paper will be released in mid to late September. When it is, I’ll post a copy here.

Upcoming Webinar

Rally Software just announced a webinar on this very topic:

Agile in High Assurance and Regulated Environments:

  • September 27th – EMEA: 15:00 (GMT+01:00) / 10am ET / 7am PT
  • September 27th – Americas: 1pm ET / 10am PT

Craig and I, along with another industry expert, will be covering this topic in a one hour webinar. You can register here:

Upcoming AAMI report: Guidance on the use of agile practices in the development of medical device software.

A month or so back, Mike Russell sent me a draft of the upcoming AAMI (Association for the Advancement of Medical Instrumentation) Technical Information Report with the title above.

I reviewed the report, and it is exceedingly well done. It provides a comprehensive view of adapting agile practices in the context of regulated medical device development, including an analysis of the impact on the company’s quality management system. Whether intended or not, it also provides a detailed look at high-quality agile practices, so it can serve as a bit of a primer for those in the industry who are not familiar with agile development. I suspect this will be a pivot point in the movement to agile in this industry, as it is specific enough to provide excellent guidance for those making the transition.

Upcoming AAMI TIR

Mike notes that “the report is currently in the working draft stage, preceding the formal first committee draft, that is due out probably early September.”

Scaled Agility Framework for High Assurance Development

Finally, in prepping for the upcoming webinar, I put together this version of the Big Picture Scaled Agility Framework. The changes aren’t that significant, but they may be material (and perhaps helpful) to anyone following this blog series.

Draft of a High Assurance Scaled Agility Framework Diagram

High Assurance Series Update

I haven’t posted much in this category lately as I’ve been busy drafting and redrafting the new whitepaper: Agile Software Development with Verification and Validation in High Assurance and Regulated Environments.

(Ok, I’ve been goofing off some, too.) But finally, Craig Langenfeld and I finished the final draft and it’s now in Rally’s hands for release. (Note: the body of the whitepaper is solely method guidance and is tool agnostic; but Craig has provided 8-10 screenshots of using Rally’s app in the Appendix. As I have mentioned, it’s foolish to attempt enterprise agility, much less high assurance agility, without proper tooling of some sort. Pick your vendor, but pick a competent tool).

I’d expect the whitepaper will be released by Rally next week at Agile 2011, and as soon as it is, I’ll post a copy here, too. I’ll be presenting at Agile 2011 on Wednesday, though not on this topic. My presentation topic will be Advanced Practices in Enterprise Agility: Five Keys to building the Lean|Agile Enterprise. I’ll post that briefing here this weekend.

In the meantime, those tooling high assurance will want to take a look at Craig’s latest post, High Assurance Agile Software Development Traceability Matrix Examples, which illustrates the traceability options inherent in a well-formed, well-tooled requirements hierarchy.

Tooling for User Story Verification, Part 2

Note: This post is a continuation of the series on Agile in High Assurance and Regulated Environments.

In the last post in this series, Tooling to Automate User Story Verification, I pointed to a post, Tools to Automate User Story Verification – Part 1: Rally and Quality Center, by my collaborator Craig Langenfeld, wherein he describes how HP Quality Center can be used to perform Verification Testing on User Stories that are planned, tracked, and managed within Rally. That post describes one of the three potential user-story-level verification paths that I described in an earlier post, Software Verification and Validation in High Assurance Agile Development: SRS and User Story Verification, as illustrated below.

High Assurance User Story Traceability Paths

This user-story to story-acceptance-test verification/traceability path is probably the most critical verification step, as it illustrates that the story has been tested and accepted into the baseline via the story acceptance test. For many in high assurance, that’s as far as they need to go with verification at the story level. However, if the risk analysis determines that certain pieces of code are extremely critical for safety and efficacy, then one or both of the other two paths:

  • user-story to code-that-implements-the-story, and
  • user-story to unit-tests-that-test-the-code-that-implements-the-story,

may also be necessary.

In this new post, Tools to Automate User Story Verification – Part 2: Incorporating SCM, Craig shows how to link the user story to a changeset in the SCM system, in this case Subversion. And since it’s likely that the source file will be revised a number of times through the course of an iteration, Craig illustrates how, with the addition of a small custom program provided by Barry Mullen, “an Iteration Diff Report App can be used to pull all those revisions together and, in the case of viewvc users, create a link to the diff for all changes. This is useful for conducting code and/or security audits of the code changed for that iteration.”
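
The mechanics behind this kind of story-to-changeset linkage are straightforward in principle. Here is a minimal sketch of the underlying idea (not Craig’s or Rally’s actual implementation), assuming a hypothetical commit-message convention in which each commit names the Rally story ID it implements, e.g. “US1234: clamp energy knob at 5 bar”:

```python
# Sketch: group Subversion revisions by the story IDs named in their
# log messages. The "USnnnn" commit-message convention is an assumption.
import re
import subprocess
from collections import defaultdict

STORY_ID = re.compile(r"\bUS\d+\b")

def changesets_by_story(repo_url, rev_range="1:HEAD"):
    """Build a story -> [revision] traceability map from the svn log."""
    log = subprocess.run(
        ["svn", "log", "-r", rev_range, repo_url],
        capture_output=True, text=True, check=True,
    ).stdout
    trace = defaultdict(list)
    current_rev = None
    for line in log.splitlines():
        if line.startswith("r") and " | " in line:
            current_rev = line.split(" | ", 1)[0]   # header line, e.g. "r1042"
        elif current_rev:
            for story in STORY_ID.findall(line):
                trace[story].append(current_rev)
    return trace  # e.g. {"US1234": ["r1042", "r1057"], ...}
```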

This part completes the user-story-verification-via-tooling part of our high assurance story.

However, the second part of the post illustrates another important step: an in-sprint validation step. Since the inclusion of the new story into the baseline changes the behavior of the system, it is necessary to assure that the whole system still works as intended. To help make this a largely automated function, Craig shows how an integration from the agile project management tool (in this case Rally) to the automated build system (in this case Hudson) monitors build health during the course of the iteration, thereby assuring that all the build verification tests (new and old) still pass when the new story is added to the baseline.
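
To give a feel for what such build-health monitoring involves, here is a minimal sketch (not the actual Rally integration) that polls Hudson’s standard remote-access API for the result of the last completed build; the job URL is an assumption:

```python
# Sketch: poll a Hudson/Jenkins job via its JSON remote-access API and
# flag the iteration if the build verification tests are not passing.
import json
import urllib.request

HUDSON_JOB_URL = "http://build.example.com/job/device-software"  # assumed URL

def last_build_result(job_url=HUDSON_JOB_URL):
    """Return the result of the last completed build, e.g. 'SUCCESS'."""
    with urllib.request.urlopen(job_url + "/lastCompletedBuild/api/json") as resp:
        return json.load(resp).get("result")  # 'SUCCESS', 'UNSTABLE', 'FAILURE'

if __name__ == "__main__":
    result = last_build_result()
    if result != "SUCCESS":
        print("Build verification tests failing:", result)
```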

In the next post in this series, we’ll look at tooling to support the higher-level traceability path, from the feature in the Product Requirements Document to the User Stories in the Software Requirements Specification (or equivalent repository).

Tooling to Automate User Story Verification

In a recent post, I noted that Craig Langenfeld, my co-contributor on the Agile in High-Assurance and Regulated Environments series, was beginning to describe how tooling can be used to automate (or semi-automate) much of the formal verification activities we’ll need, both to a) assure that our software works exactly as intended, and b) leave a traceability/audit trail for the device history records, as called out by the company’s quality management system (QMS).

In earlier posts in this series, I described an important sub-thread, which is the verification of user stories in the course of each iteration. I pointed out that in order to assure that each new user story works as intended, we’ll want to verify it via three traceability paths (sketched in code after the list):

  • User story to code – path to the SCM record that illustrates when and where the code was changed to implement the story
  • User story to code-level unit test (the “white box” test which helps assure the new code works to its internal, code-level specifications)
  • User story to story acceptance tests (black box testing of the user story to make sure the system functions as intended)
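
A minimal data sketch of these three paths (the names are illustrative, not drawn from any particular tool’s API) might look like this:

```python
# Sketch: the three verification/traceability paths captured per user story.
from dataclasses import dataclass, field

@dataclass
class StoryTrace:
    story_id: str
    changesets: list = field(default_factory=list)        # story -> code (SCM records)
    unit_tests: list = field(default_factory=list)        # story -> white-box tests
    acceptance_tests: list = field(default_factory=list)  # story -> black-box tests

    def verification_gaps(self):
        """Name any path still missing evidence for this story."""
        paths = {
            "code": self.changesets,
            "unit tests": self.unit_tests,
            "acceptance tests": self.acceptance_tests,
        }
        return [name for name, links in paths.items() if not links]

# Example: a story verified on two of the three paths.
trace = StoryTrace("US1234", changesets=["r1042"], acceptance_tests=["SAT-88"])
assert trace.verification_gaps() == ["unit tests"]
```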

Craig has recently published his first two posts on this important sub-thread in our larger verification and validation picture. In his first post he describes two things:

1)   How built-in story-to-task relationships increase the rigor and traceability of the TASKS we’ll use to define|build|verify each user story, and

2)   How you can use Rally to persist the story, the story acceptance tests, and the story-to-acceptance-test verification paths (bullet #3 above).

However, for many practitioners Rally is not the only tool in the project environment, so in a second post he describes how HP Quality Center can be used to perform Verification Testing on User Stories that are planned, tracked, and managed within Rally.

As I mentioned in a prior post in this series, effective, scalable agile project management tooling is a critical component of developing and documenting high-assurance software, so I am happy to see these tooling examples and look forward to the next few posts in Craig’s series. As usual, I’ll keep you “post”ed.

Agile Tooling for High Assurance Development Series: Intro

For those who have been following the High Assurance and Regulated Environments and Agile and FDA series on this blog, you’ll be interested to note that my co-conspirator, Craig Langenfeld of Rally Software, has committed to describing some best practices for tooling high assurance projects using Rally’s Agile Lifecycle Management and compatible tools such as HP Quality Center and Apache Subversion (SVN). In this series of upcoming posts on Rally’s agile blog, Craig will be illustrating various tooling solutions for artifact management, version control, test management and traceability. Even more interestingly, our collaboration and feedback from the broader community has incented Rally to provide some tooling extensions which will materially improve the automation and record keeping required in these more formal process systems.

Craig will be authoring posts on topics including:

1)   Use Rally to track and report on the basics – Define | Verify | Build

2)   Achieve tight traceability and reporting from Product Requirements to System Requirements to Verification & Validation

3)   Don’t overlook the implementation of the Validation Sprint

4)   Testing and test traceability in High Assurance Environments

Readers may note that I’ve never really discussed tooling in this blog. There are a number of reasons for this: 1) I’ve simply assumed it; 2) tools tend to limit definition and discussion of the abstract methods to that which the tools can actually support; 3) tools change faster than the underlying principles and practices do, so it’s hard to keep up; and 4) I am a stakeholder in Rally.

I’m also well aware of the idiom “a fool with a tool is still a fool”, and yet it is also true that those trying to scale agile development, much less high assurance development, without appropriate enterprise-class, agile-native tooling may well be the biggest fools of all.

To that end, I really look forward to this series, as it puts some “beef” into the abstractions we’ve described so far and will materially help those who head down the agile, high assurance path. You might want to subscribe directly to Rally’s blog, and add your voice to the community input that is driving these new practices and tooling enhancements. I’ll be posting parallel comments on this blog as well.

And, oh yeah, Craig’s introductory post is here.

Software V&V in High Assurance Agile: Validation: Nonfunctional Requirements

Background: In this series of posts, I’ve been using medical device development (as Regulated by U.S. FDA via CFR 820.30 and international standard IEC62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development, with special thanks to Craig Langenfeld for his contribution.

=====================================

A note on V&V: As we described in earlier posts, while there is no perfect discriminator between verification and validation activities, in agile, verification (…providing objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase) occurs primarily and continuously during the course of each iteration. Validation (…confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled) typically occurs in the course of special iterations dedicated to this purpose. Generally, the testing of nonfunctional requirements occurs during these special iterations, and it is therefore probably sensible to think of this testing as primarily a validation activity.

=====================================

The first 90% of the software takes 90% of the development time. The remaining 10% of the code takes up the other 90% of the time.
— Tom Cargill, Bell Labs

In the last post, we described testing the features described in the Product Requirements Document as an important verification activity, one which is largely independent of the testing we perform for the individual stories. In this post, we’ll describe the other dimension of product requirements: the nonfunctional requirements (which are typically also contained in the PRD) that describe the “ilities” of the subject device or system. If we are not careful, these special “quality requirements” will take up the “other 90%” of our total development time.

In Agile Software Requirements, I described nonfunctional requirements, and their even uglier stepsister, design constraints, as the “URPS” part of our FURPS (Functionality, Usability, Reliability, Performance and Supportability) acronym and noted the following discriminators:

Functional requirements – Express how the system interacts with its users: its inputs, its outputs, the functions and features it provides

Nonfunctional requirements – Criteria used to judge the operation or qualities of a system

Design constraints – Restrictions on the design of a system, or on the process by which a system is developed, that must be fulfilled to meet technical, business, or contractual obligations

It can be useful to think about the major categories of NFRs via the “URPS” acronym:

Usability – Includes elements such as expected training times, task times, number of control activities required to accomplish a function, help systems, compliance with usability standards, usability safety features

Reliability – Includes such things as availability, mean time between failures (MTBF), mean time to repair (MTTR), accuracy, precision, security, safety, and override features

Performance – Response time, throughput, capacity, scalability, degradation modes, resource utilization

Supportability (Maintainability) – The ability of the software to be easily modified to accommodate planned enhancements and repairs

Note: A more complete list of potential nonfunctional requirement considerations can be found in my book, Agile Software Requirements, and even on Wikipedia (http://en.wikipedia.org/wiki/Non-functional_requirement).

Design constraints can also be particularly relevant in the development of high assurance systems. These can refer to items such as: following all internal processes per the company’s Quality Management System; using only components which themselves have been validated; and adhering to comprehensive safety standards such as IEC 60601-1 (which covers generic safety requirements for medical devices, including a list of hazards and their tolerable limits of risk), as well as a potentially long list of other such requirements.

No matter their nature or source, these requirements are just as critical as the functional requirements we’ve described in user stories and features, for if a system isn’t reliable (becomes unavailable on occasion), or isn’t marketable (fails to meet some regulatory requirement), or isn’t as accurate as the patient/user requires, then, agile or not, we will fail just as badly as if we had forgotten some critical functional requirement.

As seen in the requirements metamodel below, we’ve modeled them a little differently than we did features and user stories, which from an agile implementation perspective were all modeled as transient backlog items.

This is because the implementation of features and stories tends to be somewhat transient in nature, and (given automated testing and verification infrastructure) you can discover-them, implement-them, auto-test-them and “almost-forget-them”.

Such is rarely the case with NFRs, as many of these must be revisited at every increment to make sure the system — with all its new features — still meets this set of imposed quality requirements.

Testing Nonfunctional Requirements

To assure that the system works as intended, and as illustrated by the metamodel below:

Nonfunctional requirements and systems qualities tests

most identified NFRs must typically (0..*) be associated with some “System Qualities Test”, which is used to validate that the system is still in compliance with that specific NFR. (And perhaps some system qualities tests may test multiple NFRs (1..*).)

These types of requirements are indeed “special” in the way we need to think about them, for example:

  • some can be tested by inspection only. Example: use only components which themselves have been validated
  • some must be tested with special harnessing or equipment, and therefore may not be practical to test in each iteration. Example: application pressure must be accurate to within +/- 50 millibars across the entire operating range.
  • some require continuous reasoning and analysis each time the system behavior changes (at each increment). Example: adhere to IEC 60601-1 device safety standard.

As we can see, the testing of many NFRs is simply not automatable, and therefore some amount of manual NFR testing is left to the validation sprint. Where automation is possible, the teams should surely automate; but either way, comprehensive regression testing (and documentation of results) of these “system qualities” is an integral part of the validation process.
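
Where a quality requirement can be automated, the test itself is conceptually simple. Here is a minimal sketch for the pressure-accuracy example above; the harness class is a placeholder standing in for the real instrument driver:

```python
# Sketch: automated system-qualities (NFR) regression test for
# "pressure accurate to within +/- 50 millibars across the operating range".
TOLERANCE_MILLIBAR = 50

class DeviceUnderTest:                      # placeholder, not a real driver
    def set_pressure_bar(self, bar):
        self._bar = bar
    def read_pressure_millibar(self):
        return self._bar * 1000             # an ideal device; real readings vary

def test_pressure_accuracy():
    device = DeviceUnderTest()
    for setpoint_bar in (1, 2, 3, 4, 5):    # the 1-5 bar range from the PRD
        device.set_pressure_bar(setpoint_bar)
        error = abs(device.read_pressure_millibar() - setpoint_bar * 1000)
        assert error <= TOLERANCE_MILLIBAR, f"{error} mbar off at {setpoint_bar} bar"

test_pressure_accuracy()
```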

Conclusion

In this post, we’ve described how full regression testing of a system’s nonfunctional (quality) requirements is a validation activity typically required at each increment (PSI) boundary, and how some of this effort will likely be manual. To this point, we’ve described most of the testing activities required for building systems of high assurance. In the next post, we’ll take an even broader look at agile testing practices in the context of high assurance development.

Software V&V in High Assurance Agile: Verification: Testing Features

Background: In this series of posts, I’ve been using medical device development (as Regulated by U.S. FDA via CFR 820.30 and international standard IEC62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development, with special thanks to Craig Langenfeld for his ongoing contribution.

====================================

As we described in earlier posts, verification occurs continuously during the course of each iteration. Validation also occurs on a cadence (see the earlier post), primarily in the course of special iterations dedicated to this purpose. This is the third in a series of verification posts, covering the testing of the features found in the PRD.

In the last post, we described the Product Requirements Document as the higher-level (superordinate to the SRS) governing document for our device or system. Its content includes the purpose of the device, as well as a set of features and nonfunctional requirements that describe the behavior of the system at a fairly high level. As I described in Agile Software Requirements, features are primary elements of the agile requirements metamodel, as the graphic below illustrates:

Features in the requirements metamodel

Since features are implemented by stories, and we have already described a pretty thorough approach to testing those:

Stories are unit and acceptance tested

the question arises as to whether features must be independently tested as well. As implied by the model, the answer is yes. This is a key part of our verification strategy, as “software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase”. While an argument can be made that this is accomplished solely through story testing, since the stories were derived from the features, the facts are often otherwise:

  1. It can take a large number of stories to implement a single feature. Even if they all deliver value, most stories do not stand alone.
  2. There are a large number of paths through the stories, which cannot naturally be tested by the independent story acceptance tests.
  3. In systems of scale and complexity, features often require full system integration with other system components and features, and in many cases a single team may not even be able to access the resources necessary to fully test the feature they contributed to.

For these reasons, we have indicated a separate Feature Acceptance Test in the requirements model.

Each time a new feature from the PRD is implemented by the team, in addition to the story implementations and story tests, a FAT must typically be developed and persisted by the team in the system-level regression test repository. Whenever these tests are automated (typically with some business-logic-level, FIT-like (Framework for Integrated Tests) custom framework), the FATs can be executed continuously in the course of each iteration. Where automation is not possible, these tests build up manual overhead that must be executed, at the least, at each system increment; often this latter effort becomes part of the formal validation process, which, as we have described, typically occurs in special hardening sprints dedicated to that and similar purposes. Because feature testing is continuous, it is integral to verification and mitigates the late discovery of changes which cause defects. Running the full set of feature regression tests, including all manual tests, is an integral part of our system validation process.
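
To make the FIT-like idea concrete, here is a minimal table-driven sketch for the pulse-frequency feature of our EPAT exemplar. The clamping behavior and the system facade are illustrative assumptions, not behavior taken from any actual device:

```python
# Sketch: FIT-style, table-driven Feature Acceptance Test for
# "adjustable pulse wave frequency from 1-21 hertz".
FAT_PULSE_FREQUENCY = [
    # (requested_hz, expected_hz)
    (1, 1),
    (11, 11),
    (21, 21),
    (25, 21),   # out-of-range requests clamp to the maximum (assumed behavior)
]

class EpatSystem:                        # stand-in facade over the real system
    def set_pulse_frequency(self, hz):
        self._hz = min(max(hz, 1), 21)
    def get_pulse_frequency(self):
        return self._hz

def run_fat(system, table):
    """Return the rows whose observed behavior differs from the table."""
    failures = []
    for requested, expected in table:
        system.set_pulse_frequency(requested)
        actual = system.get_pulse_frequency()
        if actual != expected:
            failures.append((requested, expected, actual))
    return failures

assert run_fat(EpatSystem(), FAT_PULSE_FREQUENCY) == []   # feature passes
```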

Conclusion

For the moment, that’s it for the verification discussion. However, in a like manner to features, the teams will also need to address the nonfunctional requirements for the system, and those get handled a little bit differently. That will be the subject of the next post in the series.

Software V&V in High Assurance Agile Development: Verification: PRD, Features and PRD to SRS Traceability

Background: In this series of posts, I’ve been using medical device development (as Regulated by U.S. FDA via CFR 820.30 and international standard IEC62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development, with a special thanks to Craig Langenfeld for his ongoing contribution.

As we described earlier, verification occurs continuously during the course of each iteration. Validation occurs primarily in the course of special iterations dedicated to this purpose. This is the second in a series of verification posts, covering the Product Requirements Document (PRD), product features, and relationships to the Software Requirements Specification (SRS). (Note: this post may appear out of sequence as I initially wrote it as a validation post, but on reflection, these activities are really part of continuous verification.)

===============================

In an earlier post, we described the SRS as a mandatory document, the center of much of our development activity, which consists primarily of a set (a database, really) of user stories that describe the behavior of the system in specific detail. Though we didn’t mention it there, in agile we use the INVEST (Independent, Negotiable, Valuable, Estimable, Small and Testable) model to help us keep our user stories small and independent. Applying this agile construct helps us build more resilient systems incrementally, which is indeed the core construct in agility. However, since our iterations are short, our stories must be small. And even when we express them in user voice form (As an operator, rotating the energy knob past the point where the system is delivering 5 bar will have no further effect), we are likely to end up with a LARGE number of user stories, potentially numbering in the thousands. That’s OK, because these systems are complex, and it takes a lot of little correct behaviors to build a system that works in aggregate to deliver the overall system value we need.
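
The “Testable” in INVEST is worth dwelling on: even a knob-behavior story this small implies a concrete, black-box story acceptance test. Here is a minimal sketch, with an illustrative stub standing in for the real device:

```python
# Sketch: story acceptance test for "rotating the energy knob past the
# point where the system is delivering 5 bar will have no further effect".
class Device:                                 # illustrative stand-in
    MAX_BAR = 5
    def __init__(self):
        self.output_bar = 0
    def rotate_energy_knob(self, clicks):
        self.output_bar = min(self.output_bar + clicks, self.MAX_BAR)

def test_knob_has_no_effect_past_five_bar():
    device = Device()
    device.rotate_energy_knob(5)              # drive output up to 5 bar
    assert device.output_bar == 5
    device.rotate_energy_knob(3)              # keep rotating past the limit
    assert device.output_bar == 5             # no further effect

test_knob_has_no_effect_past_five_bar()
```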

We also noted that there was another governing document, the Product Requirements Document, that provided the overall definition for the subject system.

Product Requirements Document

From the standpoint of the overall product (as opposed to the software that runs much of the product), the PRD is the document that defines the external behavior of the system in clear and unambiguous terms. As such, it is a precedent document (a design input in FDA terms) to the SRS, and is under the purview of those marketing and clinical personnel who have the relevant authority to decide what the system is intended to do. These are often described as the “labeling claims” for a device, i.e. the set of things the device does to deliver its therapy.

From the perspective of the market, management and the FDA, the PRD is probably the most critical document, because it describes exactly what the system is intended to do for the users, and it does so at a level of abstraction intended to make sure that it is understandable by all these key stakeholders. Keeping the level of abstraction high enough to make sure the document is understandable, and yet specific enough to drive development, is the art of good product definition. The PRD typically contains at least these major elements:

  • statement of device purpose
  • system context diagram
  • descriptions of users and operators
  • features of the device
  • nonfunctional requirements
  • operation and maintenance requirements
  • safety features and hazard mitigation

_____________________________________________________
Note: While some of these items are specific to medical devices, guidance for the content of the PRD closely mimics the generic “Vision” document from RUP and other methods. I’ve provided a template/outline for a Vision document in most of my books, starting with a more classical representation in Managing Software Requirements: First Edition, A Unified Approach (1999), moving towards a more Use Case-driven view in Managing Software Requirements: Second Edition, A Use-Case Approach (2003), and a slightly repurposed, more current version in Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise (2011). (If there is interest, I’ll likely post one or more of these on this blog at some point.)

_________________________________________________________

Product Features

Fortunately, making the document understandable by managing the level of detail, and simultaneously not overconstraining the software development with over-specificity, are compatible goals, so keeping the descriptions in the PRD high level serves both purposes.

In any case, from the standpoint of system behavior, the primary content of this document is the set of features that the system provides to deliver its efficacy and safety. Features are high-level descriptions of some system behavior that addresses a user need. They can be expressed in user voice form (as a <role>…) or, more typically, in a short keyword phrase or sentence. For example, some claims for our exemplary EPAT (Extracorporeal Pulse Activation Therapy) device might include:

  • Adjustable pulse wave frequency from 1-21 hertz
  • Pressure amplitudes from 1-5 bar
  • Acoustic energy may be concentrated to a focal area of 2-8 mm in diameter

Tracing from PRD to SRS

While the PRD makes the “labeling claims” for the device via these features, the actual work of implementation is left to the software (and hardware, of course), so part of our verification effort is to assure that every feature traces to one or more user stories (or other forms of software requirements expression), as illustrated below.

PRD to SRS Traceability

In this way, we can be assured that every feature of the system traces to (is implemented by) one or more user stories. In addition, if through analysis we discover stories that do not trace to a feature, we’ll need to at least understand their purpose (some stories needn’t have a parent feature to justify their existence) to make sure that the system behaves only as intended. This particular traceability path gives us a way to reason about the impact of potential feature-level changes, and to trace approved changes to features into implementation by the child stories.
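
This audit is easy to mechanize once the feature-to-story links live in a tool. A minimal sketch of the check follows; the mapping is shown inline for illustration, but in practice it would be exported from the agile project management tool:

```python
# Sketch: find PRD features with no implementing stories, and SRS stories
# with no parent feature (which must then be justified individually).
feature_to_stories = {
    "FEAT-1 Adjustable pulse frequency 1-21 Hz": ["US1201", "US1202"],
    "FEAT-2 Pressure amplitudes 1-5 bar": ["US1210"],
    "FEAT-3 Focal area 2-8 mm": [],            # untraced: flag for analysis
}
all_stories = {"US1201", "US1202", "US1210", "US1299"}

untraced_features = [f for f, stories in feature_to_stories.items() if not stories]
traced_stories = {s for stories in feature_to_stories.values() for s in stories}
orphan_stories = all_stories - traced_stories

print("Features lacking implementing stories:", untraced_features)
print("Stories to justify individually:", sorted(orphan_stories))
```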

Conclusion

That’s it for this second post on Verification, describing the PRD, features and PRD to SRS traceability. Coming next in the series: testing features.

Note: Some readers might (have!) noted the ambiguous “agile project management/document management” label in the above graphic. Managing the details of features, nonfunctional requirements, user stories, traceability, etc. in agile project management tools, PRD and SRS documents, spreadsheets and the like is a significant challenge in a regulated environment, where these documents take on the role of prima facie evidence of efficacy and safety. We’ll return to this topic in future posts as well, but first we must continue to lay out the overall method.

Software Verification and Validation in High Assurance Agile Development: Validation Sprint

Series Background: In this series of posts, I’ve been using medical device development (as Regulated by U.S. FDA via CFR 820.30 and international standard IEC62304) as an exemplar for suggesting ways to develop high quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development. Thanks to Craig Langenfeld of Rally for his ongoing collaboration on this series.

In prior posts, I’ve introduced a suggested agile lifecycle model for verification and validation of software and a set of ground rules that agile teams will have to follow. I’ve also referenced an agile requirements metamodel which defines the requirements artifacts and relationships we’ll use to implement and document required system behavior. For convenience, these two graphics are reproduced as thumbnails here:

We can see from the model, and as we described in an earlier verification post, that verification occurs continuously during the course of each iteration. Validation, however, is a different matter entirely. This is the first in a short series of validation posts.

=============================================

In Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise (which was released just ten days ago, YES!), I described a “big picture” of enterprise agility which illustrated a series of development iterations punctuated periodically by a “hardening iteration” (sprint). This is pictured below, (with the iteration cadence highlighted in red):

This picture illustrates four development iterations (indicated by full backlogs) followed by a “hardening” iteration. This pattern is arbitrary but fairly typical: with two-week iterations, four development iterations plus a hardening iteration produce a Potentially Shippable Increment (PSI) (valuable and evaluate-able in our medical device setting) every 10 weeks or so. (Shorter than that, and the overhead of the hardening iteration may be too high; longer than that, and the teams will have deferred final system integration and validation too long, resulting in delayed risk discovery, and delayed delivery.)

The hardening iteration has an empty backlog, implying no new user stories, and yes, somewhat of a (note: agilists, prepare thyself for the following heretical words…) “code freeze” for new functionality. Since, ideally, we could ship our product every day, the dedication of time to “hardening” (or stabilization, or pick your word <here>) is not ideal from an agile purist standpoint. But it gives credence to a practical reality: development of the code itself is only part of the job, especially for systems of complexity and scale, and some of the work to prepare the code for release is likely to be most efficiently done outside a normal development iteration.

Therefore, in our experience, the hardening sprint is a high-value, dedicated time for focusing on some of these remaining activities. Teams use the hardening sprint to do whatever they have to do to prepare the system increment for potential use. This can include:

  • Paying down technical debt, such as fixing defects and minor code refactoring
  • Final, full, system-wide integration and regression testing
  • Testing of system qualities (nonfunctional requirements) such as performance, reliability, or standards compliance
  • Exploratory testing
  • Finalization of user documentation, release notes, internationalization, etc.
  • Preparation of assets for distribution and deployment

The Validation Sprint

In our medical device exemplar, this particular iteration takes on additional significance, as it is often used to perform the full “system validation” necessary to assure efficacy and safety for human use. Performing full validation for a high assurance system is no trivial effort, and will likely include at least the activities identified below:

  • Resolve/close/note outstanding defects
  • Final integration with other components and systems
  • Finalize trace of all Product Requirements Specifications to System Requirements Specifications
  • Validate (and document) that the software works as intended
    • Final regression test all Unit Tests and Story Acceptance Tests (note: this should be accomplished normally in the course of each dev iteration)
    • Regression test all Feature Acceptance tests (same comment as above)
    • Regression test all System Qualities tests
    • Perform any exploratory testing
    • Perform all internal acceptance (development, clinical, product marketing) testing
    • Perform any/all user acceptance testing
  • Update quality systems documentation
  • Update, version control and “sign” PRD
  • Update, version control and “sign” SRS
  • Update and finalize all traceability matrices
    • PRD to SRS
    • PRD to Feature Acceptance Tests
    • SRS to Story Acceptance Test
    • SRS to code
    • SRS to unit tests
  • Finalize user documentation
    • Manuals and on-line help
    • Release notes and any other user guidance
    • Installation and maintenance

Validation iteration length

Depending on the levels of test automation and other tooling, accomplishing this in a single, two-week iteration may not be practical. In that case, some teams treat this iteration as “fixed scope”, not fixed length, allowing them to take the time necessary to do a proper job. However, there is a negative impact to this, as cadence (cadence and synchronization are our primary tools to manage product development variability [see Reinertsen 2009]) is lost and the schedule becomes unpredictable. Instead, it is generally better to have an additional iteration dedicated to this purpose, or perhaps a longer one than normal. Then, if the teams fail to get the job done, they can always add another iteration for this purpose, and the “failure” will indicate where they need to invest in additional automation and documentation tooling, or address whatever other impediments remain. (After all, completing critical work in a quality fashion in a short timebox is what good agile teams do.)

All of the activities we have described above are necessary for effective validation, but three are of particular interest to our agile approach:

  • PRD to SRS validation and traceability
  • Validation that all features are tested
  • Validation that the system conforms to its non-functional requirements

These will be the subjects of the next posts in this series.