2014 Model-based Testing User Survey

If you have evaluated, used, or are using any model-based testing approach, please take a few minutes to respond to the 2014 Model-based Testing User Survey.

The 2014 survey is a collaboration of Dr. Anne Kramer (sepp.med), Bruno Legeard (Smartesting), and Robert Binder (System Verification Associates). It is a follow-up to the 2012 MBT User Survey (http://robertvbinder.com/real-users-of-model-based-testing/) and includes questions from a survey distributed at last year’s User Conference on Advanced Automated Testing (UCAAT 2013).

The purpose is to collect data about present-day usage of Model-based Testing. We want to learn how MBT users view its efficiency and effectiveness, what works, and what does not work. Some new questions are more technical and aim at validating a common MBT classification scheme.

All responses to the survey are strictly confidential. Respondents will receive an advance copy of the report, which will be released at the 2014 UCAAT meeting in September: http://ucaat.etsi.org/2014/

To participate, please go to

https://www.surveymonkey.com/s/MBTsurvey2014


System Verification Associates Launches High Assurance Services

I founded System Verification Associates (SVA) in 2009.  The last five years have been full of new challenges that led to remarkable client wins. Some highlights include:

  • Improved the software process at several FDA-regulated product companies. A particular challenge was to find the right balance between quality management system structure and the creative energy that Agile practices unleash.
  • Developed model-based testing solutions for the high-frequency trading and aerospace domains.
  • Helped software service and product companies articulate unique high-value messaging for innovative services.
  • Published the first Model-based Testing user survey.
  • Developed a testing and virtualization framework for the “Internet of Things.”
  • And of course, for three years I led an SVA team supporting Microsoft’s Open Protocols project and served as its process architect.

All my work for the last five years has resulted from recommendations and established relationships. I didn’t feel a need to market SVA’s growing capabilities. But with the rapid disruption of the entire information technology landscape, it is time to reintroduce myself and System Verification Associates.

This relaunch of the SVA web site is a first step, focused on three branded offerings.

  • 360 Degree Process Improvement
  • Advanced API Verification
  • Multidimensional Testing

Stay tuned …


What kind of contract is best for Agile Development?

The basic structure of development contracts is all about risk management and creating incentives – for both sides. Fixed fee minimizes certain risks for the customer and maximizes them for the contractor. Time and expenses (T&E) minimizes certain risks for the contractor and maximizes them for the customer. There are many variations between these poles.

Although there are some exceptions, most projects are funded with a finite budget of time and money, with considerable consequences for not meeting expectations. Telling a check signer they’ll know what they get when they get it usually leads to an interesting discussion.

We all know that the sponsors of bespoke software development often don’t know (exactly) what they want, and that developers don’t anticipate all of the ways that complex software can get really screwed up. Some kind of iteration is necessary to resolve this. Techniques for managing software development risk, including Agile practices, are well known and have been successfully applied (and ignored, with predictable results) for over 40 years.

The essential risks of software development don’t change as a result of using an Agile process. (For a primer on software risk, I like Higuera and Haimes’ Software Risk Management.)

So, I think “Agile versus Waterfall” is a misleading criterion for contract terms; it can’t really provide much help in identifying and managing risks. Choosing a process model first and a risk management/contracting approach second is putting the cart before the horse.

Should ISO 9001 drop the Quality Policy?

Christopher Paris recently posed a provocative question about revising the ISO 9001 standard. He argues that the “Quality Policy” requirement is unnecessary and counterproductive: It’s Time to Dump the Quality Policy.

The gist of the argument is that this requirement has led to a lot of meaningless slogans that just get in the way of a good quality management system.

But, I think dropping the requirement to fix this problem is throwing out the baby with the bathwater.

Deming didn’t like slogans, but he did insist on clarity from top management. From his Fourteen Points:

#10. “Eliminate slogans, exhortations, and targets for the work force asking for zero defects and new levels of productivity.”

#11b. “Eliminate management by objective. Eliminate management by numbers, numerical goals. Substitute leadership.”

I think we can all agree that bad policy is bad – although it has been a very deep well for Scott Adams.

But, is it too much to ask that management articulates meaningful goals, their rationale, and an implementation strategy? Is doing that with regard to quality necessarily a bad thing? I don’t think so.  The quality policy should stay, but guidance is needed about what it should/shouldn’t be.

The following vignette from The Right Stuff shows how a shared understanding about the importance of quality can be genuine, simple, and effective. The original seven Mercury astronauts were each tasked to be involved with an aspect of the Mercury spacecraft or the Redstone and Atlas rockets. Gus Grissom was assigned to manual and automatic control systems. Here’s how Tom Wolfe describes Grissom’s visit to the Convair plant where the Atlas rocket was being built.

“Gus Grissom was out in San Diego in the Convair plant, where they were working on the Atlas rocket, and Gus was as uneasy at this stuff as Cooper was. Asking Gus to “just say a few words” was like handing him a knife and asking him to open a main vein. But hundreds of workers are gathered in the main auditorium of the Convair plant to see Gus and the other six, and they’re beaming at them, and the Convair brass say a few words and then the astronauts are supposed to say a few words, and all at once Gus realizes it’s his turn to say something, and he is petrified. He opens his mouth and out come the words: “Well… do good work!” It’s an ironic remark, implying: “… because it’s my ass that’ll be sitting on your freaking rocket.” But the workers started cheering like mad. They started cheering as if they had just heard the most moving and inspiring message of their lives: Do good work! After all, it’s Little Gus’s ass on top of our rocket! They stood there for an eternity and cheered their brains out while Gus gazed blankly upon them from the Pope’s balcony. Not only that, the workers—the workers, not the management but the workers!—had a flag company make up a huge banner, and they strung it up high in the main work bay, and it said: DO GOOD WORK.”

Tom Wolfe, The Right Stuff (Farrar, Straus and Giroux, 1979)


What’s the best way to set up a database in a test environment?

What’s the best way to initialize a database during test setup/teardown?

There are two main approaches:

  • Establish a “golden” database that corresponds with your test cases. Drop the old database in the test environment, then load the entire golden copy, just as if you were restoring it.
  • Set up the database once, then simply use it throughout your testing.

The restore/drop approach is cleanest, but even that can have some gnarly problems: date- and time-sensitive test cases might require database dates within a certain range of the test run; there may be external databases/services that require live sync; and so on.
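
To make the drop/restore approach concrete, here is a minimal sketch using pytest and SQLite. The file names (golden.db, test.db), the accounts table, and the expected count are hypothetical, and with a server-based DBMS you would drop the schema and restore from a dump rather than copy a file:

import shutil
import sqlite3

import pytest

GOLDEN_DB = "golden.db"  # pristine copy matching the test cases (hypothetical path)
TEST_DB = "test.db"      # working copy used by the code under test (hypothetical path)

@pytest.fixture
def db_connection():
    # Setup: discard whatever the last run left behind and restore the
    # golden copy, just as if restoring from a backup.
    shutil.copyfile(GOLDEN_DB, TEST_DB)
    conn = sqlite3.connect(TEST_DB)
    yield conn
    # Teardown: close the working copy; the next test restores a fresh one.
    conn.close()

def test_active_accounts(db_connection):
    # Relies on rows known to be in the golden database.
    (count,) = db_connection.execute(
        "SELECT COUNT(*) FROM accounts WHERE status = 'active'"
    ).fetchone()
    assert count == 42  # value baked into the golden copy

Because every test starts from the same restored state, tests can run in any order – which is exactly what the set-up-once approach gives up.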

If DB fields contain confidential data, you may have to obfuscate certain fields (and their dependent values), or generate and use a synthetic population. It might be easier to copy/subset a live database, parameterize your test cases (i.e., make them data-driven), then generate a consistent test suite from the extracted data. You’d drop this DB after each test run, unless needed as archival evidence.
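
As a sketch of that second idea, under assumed names (a customers table in copy_of_live.db), a deterministic one-way mask keeps dependent values consistent while hiding the originals, and the extracted rows drive parameterized, data-driven test cases:

import hashlib
import sqlite3

import pytest

def mask(value: str) -> str:
    # Deterministic one-way masking: the same input always yields the same
    # token, so dependent values stay consistent without exposing real data.
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def extract_customers(db_path):
    # Subset a copy of the live database and obfuscate confidential fields.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT name, email, balance FROM customers LIMIT 100"
    ).fetchall()
    conn.close()
    return [(mask(name), mask(email), balance) for name, email, balance in rows]

# Each extracted row becomes one data-driven test case.
@pytest.mark.parametrize("name,email,balance", extract_customers("copy_of_live.db"))
def test_balance_is_non_negative(name, email, balance):
    assert balance >= 0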

From a LinkedIn discussion