Software V&V in High Assurance Agile: Verification: Testing Features

Background: In this series of posts, I’ve been using medical device development (as regulated by the U.S. FDA via 21 CFR 820.30 and the international standard IEC 62304) as an exemplar for suggesting ways to develop high-quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development, with special thanks to Craig Langenfeld for his ongoing contribution.

====================================

As we described in earlier posts, verification occurs continuously during the course of each iteration. Validation also occurs on a cadence (see this post), primarily in the course of special iterations dedicated to that purpose. This is the third in a series of verification posts, covering the testing of the features found in the PRD.

In the last post, we described the Product Requirements Document as the higher-level (superordinate to the SRS) governing document for our device or system. Its content includes the purpose of the device, as well as a set of features and nonfunctional requirements that describe the behavior of the system at a fairly high level. As I described in Agile Software Requirements, features are primary elements of the agile requirements metamodel, as the graphic below illustrates:

Features in the requirements metamodel

Since features are implemented by stories, and we have already described a fairly thorough approach to testing those:

Stories are unit and acceptance tested

the question arises as to whether or not features must be independently tested as well. As implied by the model, the answer is yes. This is a key part of our verification strategy, as, in the FDA’s terms, “software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase”. While an argument can be made that this is accomplished solely through story testing, since the stories were derived from the features, the facts are often otherwise:

  1. It can take a large number of stories to implement a single feature. Even if they all deliver value, most stories do not stand alone.
  2. There are a large number of paths through the stories, which cannot naturally be tested by the independent story acceptance tests.
  3. In systems of scale and complexity, features often require full system integration with other system components and features, and in many cases a single team may not even be able to access the resources necessary to fully test the feature they contributed to.

For these reasons, we have indicated a separate Feature Acceptance Test (FAT) in the requirements model.

Each time a new feature from the PRD is implemented by the team, in addition to the story implementations and story tests, a FAT must typically be developed and persisted by the team in the system-level regression test repository. Whenever these tests are automated (typically with some business-logic-level, FIT-like (Framework for Integrated Test) custom framework), the FATs can be executed continuously in the course of each iteration. Where automation is not possible, these tests accumulate as manual testing overhead that must be executed, at a minimum, at each system increment. Often this latter effort becomes part of the formal validation process, which, as we have described, typically occurs in special hardening sprints dedicated to that and similar purposes. Because feature testing is continuous, it is integral to verification and mitigates the late discovery of defects introduced by change. Running the full set of feature regression tests, including all manual tests, is an integral part of our system validation process.
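To make this concrete, here’s a minimal sketch of what one such automated FAT might look like, expressed as a table-driven test in Python. Everything here is an assumption for illustration: the EpatDevice class is a hypothetical stand-in for the integrated system under test, and a real FIT-like framework would express the decision table in a business-readable form rather than in code.

```python
# Sketch of a table-driven, FIT-style feature acceptance test for the
# labeling claim "adjustable pulse wave frequency from 1 to 21 Hz".
# EpatDevice is a hypothetical stand-in for the integrated device (or a
# high-fidelity simulator); it is not a real API.
import pytest


class EpatDevice:
    MIN_HZ, MAX_HZ = 1, 21

    def __init__(self):
        self.frequency_hz = self.MIN_HZ

    def set_frequency(self, hz):
        # Reject settings outside the claimed 1-21 Hz range.
        if not (self.MIN_HZ <= hz <= self.MAX_HZ):
            raise ValueError(f"frequency {hz} Hz out of range")
        self.frequency_hz = hz


# The "table" of the test: inputs and expected outcomes, exercising the
# feature across its claimed range, including both boundaries.
FREQUENCY_CASES = [
    (1, True),    # lower boundary
    (11, True),   # mid-range
    (21, True),   # upper boundary
    (0, False),   # below claimed range
    (22, False),  # above claimed range
]


@pytest.mark.parametrize("hz,expected_ok", FREQUENCY_CASES)
def test_adjustable_pulse_frequency(hz, expected_ok):
    device = EpatDevice()
    if expected_ok:
        device.set_frequency(hz)
        assert device.frequency_hz == hz
    else:
        with pytest.raises(ValueError):
            device.set_frequency(hz)
```

Persisted in the system-level regression repository, a test like this can run in every iteration’s build, which is exactly the continuous-verification property we’re after.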

Conclusion

For the moment, that’s it for the verification discussion. However, in a like manner to features, the teams will also need to address the nonfunctional requirements for the system, and those are handled a little bit differently. That will be the subject of the next post in the series.

Software V&V in High Assurance Agile Development: Verification: PRD, Features and PRD to SRS Traceability

Background: In this series of posts, I’ve been using medical device development (as regulated by the U.S. FDA via 21 CFR 820.30 and the international standard IEC 62304) as an exemplar for suggesting ways to develop high-quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development, with special thanks to Craig Langenfeld for his ongoing contribution.

As we described earlier, verification occurs continuously during the course of each iteration. Validation occurs primarily in the course of special iterations dedicated to this purpose. This is the second in a series of verification posts, covering the Product Requirements Document (PRD), product features, and relationships to the Software Requirements Specification (SRS). (Note: this post may appear out of sequence as I initially wrote it as a validation post, but on reflection, these activities are really part of continuous verification.)

===============================

In an earlier post, we described the SRS as a mandatory document, the center of much of our development activity, that consists primarily of a set (a database, really) of user stories that describe the behavior of the system in specific detail. Though we didn’t mention it there, in agile we use the INVEST (Independent, Negotiable, Valuable, Estimable, Small and Testable) model to help us keep our user stories small and independent. Applying this agile construct helps us build more resilient systems incrementally, which is indeed the core construct of agility. However, since our iterations are short, our stories must be small. And even when we express them in user voice form (As an operator, rotating the energy knob past the point where the system is delivering 5 bar will have no further effect), we are likely to end up with a LARGE number of user stories, potentially numbering in the thousands. That’s OK, because these systems are complex, and it takes a lot of little correct behaviors to build a system that works in aggregate to deliver the overall system value we need.
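As an aside, a story that specific is also directly testable (the T in INVEST). Here’s a minimal sketch of what its acceptance test might look like; the EnergyController class and its API are hypothetical stand-ins, since the real behavior spans the integrated device:

```python
# Sketch of a story acceptance test for: "As an operator, rotating the
# energy knob past the point where the system is delivering 5 bar will
# have no further effect." EnergyController is a hypothetical stand-in.


class EnergyController:
    MAX_BAR = 5.0

    def __init__(self):
        self.pressure_bar = 0.0

    def rotate_knob_to(self, requested_bar):
        # Rotation past the 5 bar point has no further effect: clamp.
        self.pressure_bar = min(requested_bar, self.MAX_BAR)


def test_rotation_past_five_bar_has_no_further_effect():
    controller = EnergyController()
    controller.rotate_knob_to(5.0)
    assert controller.pressure_bar == 5.0
    controller.rotate_knob_to(7.5)  # rotate past the limit
    assert controller.pressure_bar == 5.0  # no further effect
```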

We also noted that there was another governing document, the Product Requirements Document, that provided the overall definition for the subject system.

Product Requirements Document

From the standpoint of the overall product (as opposed to the software that runs much of the product), the PRD is the document that defines the external behavior of the system in clear and unambiguous terms. As such, it is a precedent document (a design input in FDA terms) to the SRS and is under the purview of those marketing and clinical personnel who have the relevant authority to decide what the system is intended to do. These are often described as the “labeling claims” for a device, i.e., the set of things the device does to deliver its therapy.

From the perspective of the market, management and the FDA, the PRD is probably the most critical document because it describes exactly what the system is intended to do for the users, and it does so at a level of abstraction intended to make sure that it is understandable by all these key stakeholders. Keeping the level of abstraction sufficiently high to make sure the document is understandable, and yet specific enough to drive development, is the art of good product definition. The PRD typically contains at least these major elements:

  • statement of device purpose
  • system context diagram
  • descriptions of users and operators
  • features of the device
  • nonfunctional requirements
  • operation and maintenance requirements
  • safety features and hazard mitigation

_____________________________________________________
Note: While some of these items are specific to medical devices, guidance for the content of the PRD closely mimics the generic “Vision” document from RUP and other methods. I’ve provided a template/outline for a Vision document in most of my books, starting with a more classical representation in Managing Software Requirements: A Unified Approach (1999), moving toward a more use-case-driven view in Managing Software Requirements, Second Edition: A Use Case Approach (2003), and a slightly repurposed, more current version in Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise (2011). (If there is interest, I’ll likely post one or more of these on this blog at some point.)

_________________________________________________________

Product Features

Fortunately, making the document understandable by managing the level of detail, and simultaneously not overconstraining the software development with over-specificity, are compatible goals, so keeping the descriptions in the PRD at a high level serves both purposes.

In any case, from the standpoint of system behavior, the primary content of this document is the set of features that the system provides to deliver its efficacy and safety. Features are high-level descriptions of some system behavior that addresses a user need. They can be expressed in user voice form (as a <role>…) or, more typically, as a short keyword phrase or sentence. For example, some claims for our exemplary EPAT (Extracorporeal Pulse Activation Therapy) device might include:

  • Adjustable pulse wave frequency from 1 to 21 Hz
  • Pressure amplitudes from 1 to 5 bar
  • Acoustic energy may be concentrated to a focal area of 2 to 8 mm in diameter

Tracing from PRD to SRS

While the PRD makes the “labeling claims” for the device via these features, the actual work of implementation is left to the software (and hardware, of course), so part of our verification effort is to assure that every feature traces to one or more user stories (or other forms of software requirements expression), as illustrated below.

PRD to SRS Traceability

In this way we can be assured that every feature of the system traces to (is implemented by) one or more user stories. In addition, if through analysis we discover stories that do not trace to a feature, we’ll need to at least understand their purpose (some stories needn’t have a parent feature to justify their existence) to make sure that the system behaves only as intended. This particular traceability path gives us a way to reason about the impact of potential feature-level changes, and to trace approved changes to features into implementation by the child stories.
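A minimal sketch of such a trace analysis appears below. The feature and story records are hypothetical; in practice this data would come from the agile project management tooling, but the two checks (untraced features, orphan stories) are the heart of it:

```python
# Sketch of a PRD-to-SRS trace check. The records below are hypothetical
# illustrations; real data would be exported from the agile PM tool.

features = {
    "F1": "Adjustable pulse wave frequency from 1 to 21 Hz",
    "F2": "Pressure amplitudes from 1 to 5 bar",
}

# Each story optionally names its parent feature.
stories = {
    "S101": {"text": "Operator adjusts frequency via keypad", "feature": "F1"},
    "S102": {"text": "System rejects frequency above 21 Hz", "feature": "F1"},
    "S103": {"text": "Log calibration events", "feature": None},
}

traced = {s["feature"] for s in stories.values() if s["feature"]}

# Every feature must trace to (be implemented by) at least one story.
untraced_features = [fid for fid in features if fid not in traced]

# Orphan stories aren't necessarily wrong, but each needs a rationale.
orphan_stories = [sid for sid, s in stories.items() if s["feature"] is None]

print("Features with no implementing story:", untraced_features)
print("Stories with no parent feature (review needed):", orphan_stories)
```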

Conclusion

That’s it for this second post on Verification, describing the PRD, features and PRD to SRS traceability. Coming next in the series: testing features.

Note: Some readers might have noted (some have!) the ambiguous “agile project management/document management” label in the above graphic. Managing the details of features, nonfunctional requirements, user stories, traceability, etc. in agile project management tools, PRD and SRS documents, spreadsheets, and the like is a significant challenge in a regulated environment, where these documents take on the role of prima facie evidence of efficacy and safety. We’ll return to this topic in future posts as well, but first we must continue to lay out the overall method.

Software Verification and Validation in High Assurance Agile Development: Validation Sprint

Series Background: In this series of posts, I’ve been using medical device development (as regulated by the U.S. FDA via 21 CFR 820.30 and the international standard IEC 62304) as an exemplar for suggesting ways to develop high-quality software in regulated (and other high assurance, high economic cost of failure) environments in an agile manner. This series is sponsored, in part, by Rally Software Development. Thanks to Craig Langenfeld of Rally for his ongoing collaboration on this series.

In prior posts, I’ve introduced a suggested agile lifecycle model for verification and validation of software and a set of ground rules that agile teams will have to follow. I’ve also referenced an agile requirements metamodel, which defines the requirements artifacts and relationships we’ll use to implement and document required system behavior. For convenience, these two graphics are reproduced as thumbnails here:

We can see from the model, and as we described in an earlier verification post, that verification occurs continuously during the course of each iteration. Validation, however, is a different matter entirely. This is the first in a short series of validation posts.

=============================================

In Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise (which was released just ten days ago, YES!), I described a “big picture” of enterprise agility, which illustrated a series of development iterations punctuated periodically by a “hardening” iteration (sprint). This is pictured below (with the iteration cadence highlighted in red):

This picture illustrates four development iterations (indicated by full backlogs) followed by a “hardening” iteration. This pattern is arbitrary but fairly typical, as five two-week iterations produce a Potentially Shippable Increment (PSI) (valuable and evaluate-able in our medical device setting) every 10 weeks or so. (Shorter than that, and the overhead of the hardening iteration may be too high; longer than that, and the teams will have deferred final system integration and validation too long, resulting in delayed risk discovery, and delayed delivery.)

The hardening iteration has an empty backlog, implying no new user stories and, yes, somewhat of a (note: agilists, prepare thyself for the following heretical words…) “code freeze” for new functionality. Since, ideally, we could ship our product every day, dedicating time to a “hardening” (or stabilization, or pick your word <here>) iteration is not ideal from an agile purist standpoint. The fact is, however, that it gives credence to a practical reality: development of the code itself is only part of the job, especially for systems of complexity and scale, and some of the work to prepare the code for release is likely to be most efficiently done outside a normal development iteration.

Therefore, in our experience, the hardening sprint is high-value, dedicated time for focusing on some of these remaining activities. Teams use the hardening sprint to do whatever they have to do to prepare the system increment for potential use. This can include:

  • Addressing technical debt, such as defects and minor code refactoring
  • Final, full, system-wide integration and regression testing
  • Testing of system qualities (nonfunctional requirements) such as performance, reliability, or standards compliance
  • Exploratory testing
  • Finalization of user documentation, release notes, internationalization, etc.
  • Preparation of assets for distribution and deployment

The Validation Sprint

In our medical device exemplar, this particular iteration takes on additional significance, as it is often used to perform the full “system validation” necessary to assure efficacy and safety for human use. Performing full validation for a high assurance system is no trivial effort and will likely include at least the activities identified below:

  • Resolve/close/note outstanding defects
  • Final integration with other components and systems
  • Finalize traces of all Product Requirements Specifications to Software Requirements Specifications
  • Validate (and document) that the software works as intended
    • Final regression test of all Unit Tests and Story Acceptance Tests (note: this should normally be accomplished in the course of each dev iteration)
    • Regression test all Feature Acceptance tests (same comment as above)
    • Regression test all System Qualities tests
    • Perform any exploratory testing
    • Perform all internal acceptance (development, clinical, product marketing) testing
    • Perform any/all user acceptance testing
  • Update quality systems documentation
  • Update, version control and “sign” PRD
  • Update, version control and “sign” SRS
  • Update and finalize all traceability matrices (a small export sketch follows this list)
    • PRD to SRS
    • PRD to Feature Acceptance Tests
    • SRS to Story Acceptance Test
    • SRS to code
    • SRS to unit tests
  • Finalize user documentation
    • Manuals and online help
    • Release notes and any other user guidance
    • Installation and maintenance
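
As one small, mechanical example of what finalizing a traceability matrix can mean, here’s a minimal sketch of exporting the PRD-to-SRS matrix as a version-controllable artifact; the trace pairs and file layout are hypothetical assumptions, not a prescribed format:

```python
# Sketch: export the PRD-to-SRS traceability matrix as a CSV artifact
# suitable for version control and signature. The (feature, story) pairs
# are hypothetical; real pairs would be exported from the agile PM tool.
import csv

prd_to_srs = [
    ("F1", "S101"),
    ("F1", "S102"),
    ("F2", "S201"),
]

with open("prd_to_srs_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["PRD Feature", "SRS Story"])
    writer.writerows(sorted(prd_to_srs))
```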

Validation iteration length

Depending on the levels of test automation and other tooling, accomplishing this in a single, two-week iteration may not be practical. In that case, some teams treat this iteration as “fixed scope,” not fixed length, allowing them to take the time necessary to do a proper job. However, there is a negative impact to this, as cadence (cadence and synchronization are our primary tools to manage product development variability [see Reinertsen 2009]) is lost and the schedule becomes unpredictable. Instead, it is generally better to have an additional iteration dedicated to this purpose, or perhaps a longer one than normal. Then, if the teams fail to get the job done, they can always add another iteration for this purpose, and the “failure” will indicate where they need to invest in additional automation and documentation tooling, or address whatever other impediments remain. (After all, completing critical work in a quality fashion in a short timebox is what good agile teams do.)

All of the activities we have described above are necessary for effective validation, but three are of particular interest to our agile approach:

  • PRD to SRS validation and traceability
  • Validation that all features are tested
  • Validation that the system conforms to its non-functional requirements

These will be the subjects of the next posts in this series.

Refactoring and Software Complexity Variability: A Whitepaper

My friend and colleague Alex Yakyma, from Kiev, Ukraine, wrote this interesting whitepaper, which describes how, based on the underlying mathematics, software complexity tends to be inherently higher than one might think. He also describes how refactoring, our key weapon in this battle, can be used to continuously manage complexity over time and thereby keep our maintenance burden within controlled bounds.

Here’s the abstract.

The inherent complexity of software design is one of the key bottlenecks affecting speed of development. The time required to implement a new feature, fix defects, or improve system qualities like performance or scalability depends dramatically on how complex the system design is. In this paper we will build a probabilistic model for design complexity and analyze its fundamental properties. In particular, we will show the asymmetry of design complexity, which implies its high variability. We will explain why this variability is important and why it can, and must, be efficiently exploited by refactoring techniques to considerably reduce design complexity.

Here’s the whitepaper: Refactoring and Software Complexity Variability _v1_35_. Feedback would be very welcome. Also, you can ping Alex directly at alex@yakyma.com.

Agile Software Requirements is Now “Released”!

Phew! As with any good agile release, where the availability of incremental and valuable content precedes the big event, the actual release of my new book, Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise, feels a bit anticlimactic. However, after a period of, well, basically what felt like forever, the book is now available on technical bookshelves across the country. And as with every good release, one is always reminded to take a pause to celebrate, and especially to thank the contributors.

Well, that was the pause. Now, I sure hope some people read it.

To (hopefully) initiate a bit of that, and for overview purposes, here is some of the front matter that describes the book.

Acknowledgments

Praise Quotes

Foreword by Don Reinertsen

How to Read This Book

Preface

Table of Contents

Agile Crosses the Chasm to….DoD?

I recently received an email from Dr. David F. Rico (PMP, CSM) regarding the recent AFEI DoD Agile Development Conference, held on Tuesday, December 14, 2010 in Alexandria, Virginia. “The purpose of the conference was to promote agile acquisition and IT development practices in the U.S. DoD. AFEI is a non-profit organization that helps the U.S. DoD develop contemporary acquisition, systems, and software practices, among other valuable services.” Dr. Rico (author of “Business Value of Agile Software Methods”) wrote a synopsis of the conference and sent me a copy (permanent link: http://davidfrico.com/afei-2010.doc). Thanks David!

It is clear from his notes that agile is making an increasing impact on the acquisition of software within the DoD. A few excerpts below:

It reinforced the U.S. DoD’s commitment to the use of Agile Methods. Furthermore, it was interesting to see that Agile Methods are in widespread use by the U.S. DoD, and that no individual organization, project, group, or person is practicing them in isolation.

Prior to AFEI’s DoD Agile Development Conference, both the commercial industry and DoD contractors believed the U.S. DoD was not committed to Agile Methods, which is an enormously incorrect assumption. It’s a popular urban legend or urban myth that the U.S. DoD uses traditional methods such as DoD 5000, CMMI, PMBoK, and other waterfall-based paradigms to develop IT-intensive systems (and that no one is using Agile Methods in the U.S. DoD).

The AFEI DoD Agile Development Conference shattered that myth completely. Furthermore, it served as a forum for reinforcing the use of Agile Methods in the U.S. DoD. Psychological reinforcement or affirmation of a desired behavior is a key ingredient to successful organization change (i.e., adoption of Agile Methods and the abandonment of traditional ones).

Of course, agile has its critics within DoD (don’t we all?) as this little vignette illustrates:

This was a panel of five industry experts on Agile Methods, hosted by Chris Gunderson of the Naval Postgraduate School. Chris, an outspoken critic of Agile Methods, challenged the panel of industry experts on a variety of flash points. These included organizational change and adoption issues, scalability to large U.S. DoD programs, and empirical evidence to indicate whether they were any better or worse than traditional, waterfall-based methods. The industry experts challenged the moderator to prove traditional methods were any more scalable or applicable to large programs, citing the 67% failure rate among DoD programs using traditional methods over the last 40 years.

I suspect that many of us considered the DoD to be the last bastion of rigorous and mandated waterfall development. If the DoD can move on (at least in some instances) to more effective, agile methods for development of high assurance software systems, I’d guess most everyone else developing such systems should be able to move on too!

2010 SSA Blog Stats – Year in Review

Hi,

This year-end summary of Scaling Software Agility blog stats from WordPress might be interesting to some… (or it might not!)

-Dean

=============================================

The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads Wow.

Crunchy numbers


The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 88,000 times in 2010. If it were an exhibit at The Louvre Museum (Dean’s personal note: yeah, interesting little market blurb, but that’s pretty unlikely!), it would take 4 days for that many people to see it.

In 2010, there were 37 new posts, growing the total archive of this blog to 224 posts. There were 68 pictures uploaded, taking up a total of 57 MB. That’s about 1 picture per week.

The busiest day of the year was November 29th with 573 views. The most popular post that day was Software Verification and Validation in High Assurance Agile Development: Definitions.

Where did they come from?

The top referring sites in 2010 were Google Reader, leffingwell.org, en.wordpress.com, agileproductowner.com, and twitter.com.

Some visitors came searching, mostly for product roadmap, product management, release management, release planning, and product owner.

Attractions in 2010

These are the posts and pages that got the most views in 2010.

1. Software Verification and Validation in High Assurance Agile Development: Definitions (November 2010), 4 comments

2. Agile Software Requirements (the book) (November 2009), 19 comments

3. Agile Product Manager in the Enterprise (5): Responsibility 3 – Maintain the Product Roadmap (June 2009), 2 comments

4. Agile Product Manager in the Enterprise (2): A Contemporary Framework (May 2009), 5 comments

5. Enterprise Agility-The Big Picture (8): The Roadmap (August 2008), 4 comments