Rock-solid RESTful APIs and the Testing Backblob

I’ll be presenting “Testing Twofer: how to Release Rock-solid RESTful APIs and Ice the Testing Backblob,” on Tuesday, September 9, courtesy of SQUAD – the Software Quality Association of Denver.

REST APIs are a key enabling technology for the cloud. Mobile applications, service-oriented architecture, and the Internet of Things depend on reliable and usable REST APIs. Unlike browser, native, and mobile apps, REST APIs can only be tested with software that drives them. And unlike developer-centric, hand-coded unit testing, adequate testing of REST APIs is well suited to advanced automated testing.

As most web service applications are developed following an Agile process, effective testing must also avoid the testing backblob, in which work to maintain hand-coded BDD-style test suites exceeds available time after several sprints.

This talk will present a methodology for developing and testing REST APIs using model-based automation, and explain how this has the beneficial side effect of shrinking the testing backblob.

The testing backblob is my riff on out-of-control Agile testing backlogs. I’m seeing this situation quite often as Agile development teams become swamped with a test maintenance problem that doesn’t have any attractive solutions.

I’ll explain some new strategies I’ve developed for Spec Explorer to test all aspects of any REST API, including functionality, security, and performance.
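
Spec Explorer models are written in C#, but the underlying idea carries over to any stateful testing framework. As a rough, hypothetical sketch of what model-based API testing looks like, here is the same idea expressed with Python's hypothesis library: a model tracks what the API should contain, randomized rule sequences drive the API, and an invariant checks that the system agrees with the model. The endpoint, base URL, and response shapes are all assumptions for illustration, and each run assumes a freshly reset API instance.

    # A minimal model-based test of a hypothetical REST resource, using
    # Python's hypothesis stateful testing (illustrative only; Spec
    # Explorer models are C# programs with a similar rule structure).
    import requests
    from hypothesis import strategies as st
    from hypothesis.stateful import RuleBasedStateMachine, rule, invariant

    BASE_URL = "http://localhost:8080/api/widgets"  # hypothetical API under test

    class WidgetApiModel(RuleBasedStateMachine):
        def __init__(self):
            super().__init__()
            self.expected = {}  # widget id -> name, according to the model

        @rule(name=st.text(min_size=1, max_size=20))
        def create_widget(self, name):
            # Allowed usage: creating a widget should succeed with 201.
            resp = requests.post(BASE_URL, json={"name": name})
            assert resp.status_code == 201
            self.expected[resp.json()["id"]] = name

        @rule()
        def delete_missing_widget(self):
            # Unallowed usage: deleting a nonexistent id should yield 404.
            resp = requests.delete(BASE_URL + "/no-such-id")
            assert resp.status_code == 404

        @invariant()
        def api_agrees_with_model(self):
            # After every step, the API's contents must match the model.
            listed = {w["id"]: w["name"] for w in requests.get(BASE_URL).json()}
            assert listed == self.expected

    TestWidgetApi = WidgetApiModel.TestCase  # collected and run by pytest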

To register for the meetup, go to http://www.meetup.com/SQUADCO/

#MoreModelsLessTests

Buggy APIs are Eating the World

Try searching “Buggy API” on Google or Bing. When I wrote this post, I got 16,600 and 2,660 hits, respectively. For some entertaining commentary on the offenders, try it on Twitter.

This would be funny if it weren’t so pathetic and completely unnecessary.

In this, I see the results of the “fail fast” mantra, superficial testing, the presumption that “I coded it, therefore it’s good,” and hostility to documentation worn as a badge of honor. These aren’t new software development problems, but the speed with which they reach a large user base is a novel side effect of present-day technology.

Marc Andreessen famously claimed that “Software is Eating the World.” Judging from the API quality complaints that Google, Bing, and Twitter surface, it looks like more has been bitten off than can be chewed.

What does it take to develop a robust and usable API? Technical tricks matter, but they can’t save a bad design. Design theory and general requirements for correctness were established long ago with Abstract Data Types and Meyer’s Design by Contract. Web APIs must also support a conversation between sender and receiver: a protocol. Getting the rules right for all possible conversations is a hard design problem. Security and performance flaws can be found and corrected, but not with happy-path testing. Putting all this together takes a certain kind of artistry, much the same way that good music is more than just notes.
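
To make the Design by Contract point concrete, here is a minimal sketch (my own illustration, not drawn from any particular API) of an operation guarded by explicit pre- and postconditions:

    # Design-by-Contract style: preconditions state the caller's
    # obligations, postconditions state the implementation's guarantee.
    def withdraw(account: dict, amount: int) -> dict:
        # Preconditions: what the caller must ensure before calling.
        assert amount > 0, "precondition: amount must be positive"
        assert account["balance"] >= amount, "precondition: sufficient funds"

        old_balance = account["balance"]
        account["balance"] -= amount

        # Postcondition: what this operation guarantees on return.
        assert account["balance"] == old_balance - amount, "postcondition violated"
        return account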

But even if you do all that, you’re not done, at least not if you care about attracting users and growing usage. Code can’t tell the whole story. I learned this firsthand reading and critiquing thousands of pages of API documentation released through the Microsoft Open Specifications process. You have to explain how your API works so that any third party with basic programming skills can use it with minimal effort. I haven’t yet analyzed the buggy API complaints to see how many are about documentation, but it’s clear this is a common pain point.

So, releasing robust and usable APIs isn’t impossible, but it isn’t trivial. It takes thinking through and validating a design. It takes thorough testing of allowed and unallowed usage and data conditions. And it takes complete, consistent, and accurate documentation.

If all this seems like a lot, well, it is—especially when this kind of work isn’t your strong suit. That’s why we’re offering Advanced API Verification.

2014 Model-based Testing User Survey

If you have evaluated, used, or are using any model-based testing approach, please take a few minutes to respond to the 2014 Model-based Testing User Survey.

The 2014 survey is a collaboration of Dr. Anne Kramer (sepp.med), Bruno Legeard (Smartesting), and Robert Binder (System Verification Associates). It is a follow-up to the 2012 MBT User Survey (http://robertvbinder.com/real-users-of-model-based-testing/) and includes questions from a survey distributed at last year’s User Conference on Advanced Automated Testing (UCAAT 2013).

The purpose is to collect data about present-day usage of Model-based Testing. We want to learn how MBT users view its efficiency and effectiveness, what works, and what does not work. Some new questions are more technical and aim at validating a common MBT classification scheme.

All responses to the survey are strictly confidential. Respondents will receive an advance copy of the report, which will be released at the 2014 UCAAT meeting in September (http://ucaat.etsi.org/2014/).

To participate, please go to

https://www.surveymonkey.com/s/MBTsurvey2014


Should ISO 9001 drop the Quality Policy?

Christopher Paris recently posed a provocative question about revising the ISO 9001 standard. He argues that the “Quality Policy” requirement is unnecessary and counterproductive: It’s Time to Dump the Quality Policy.

The gist of the argument is that this requirement has led to a lot of meaningless slogans that just get in the way of a good quality management system.

But, I think dropping the requirement to fix this problem is throwing out the baby with the bathwater.

Deming didn’t like slogans, but he did insist on clarity from top management. From his Fourteen Points:

#10. “Eliminate slogans, exhortations, and targets for the work force asking for zero defects and new levels of productivity.”

#11b. “Eliminate management by objective. Eliminate management by numbers, numerical goals. Substitute leadership.”

I think we can all agree that bad policy is bad, except that it has been a very deep well for Scott Adams.

But is it too much to ask that management articulate meaningful goals, their rationale, and an implementation strategy? Is doing that with regard to quality necessarily a bad thing? I don’t think so. The quality policy should stay, but guidance is needed about what it should and shouldn’t be.

The following vignette from The Right Stuff shows how a shared understanding about the importance of quality can be genuine, simple, and effective. The original seven Mercury astronauts were each tasked to be involved with an aspect of the Mercury spacecraft or the Redstone and Atlas rockets. Gus Grissom was assigned to manual and automatic control systems. Here’s how Tom Wolfe describes Grissom’s visit to the Convair plant where the Atlas rocket was being built.

“Gus Grissom was out in San Diego in the Convair plant, where they were working on the Atlas rocket, and Gus was as uneasy at this stuff as Cooper was. Asking Gus to “just say a few words” was like handing him a knife and asking him to open a main vein. But hundreds of workers are gathered in the main auditorium of the Convair plant to see Gus and the other six, and they’re beaming at them, and the Convair brass say a few words and then the astronauts are supposed to say a few words, and all at once Gus realizes it’s his turn to say something, and he is petrified. He opens his mouth and out come the words: “Well… do good work!” It’s an ironic remark, implying: “… because it’s my ass that’ll be sitting on your freaking rocket.” But the workers started cheering like mad. They started cheering as if they had just heard the most moving and inspiring message of their lives: Do good work! After all, it’s Little Gus’s ass on top of our rocket! They stood there for an eternity and cheered their brains out while Gus gazed blankly upon them from the Pope’s balcony. Not only that, the workers—the workers, not the management but the workers!—had a flag company make up a huge banner, and they strung it up high in the main work bay, and it said: DO GOOD WORK.”

Tom Wolfe, The Right Stuff (Farrar, Straus and Giroux, 1979)


What’s the best way to set up a database in a test environment?

What’s the best way to initialize a database during test setup/teardown?

There are two main approaches:

  • Establish a “golden” database that corresponds with your test cases. Drop the old database in the test environment, then load the entire golden copy, just as if you were restoring it.
  • Set up the database once, then simply use it throughout your testing.

The restore/drop approach is cleanest, but even it can have some gnarly problems: date- and time-sensitive test cases might require database dates within a certain range of the test run; there may be external databases/services that require live sync; and so on.
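
As one illustration of the restore/drop approach, here is a minimal pytest sketch assuming a file-based SQLite database and hypothetical paths; a server database would use its native dump/restore tooling instead:

    # Drop-and-restore as a pytest fixture: each test starts from a
    # pristine copy of the golden database (paths are hypothetical).
    import shutil
    import sqlite3
    import pytest

    GOLDEN_DB = "tests/golden/app.db"   # curated copy matching the test cases
    WORKING_DB = "tests/tmp/app.db"     # database the tests actually hit

    @pytest.fixture
    def db():
        # Setup: overwrite the working DB with a fresh golden copy,
        # the file-level equivalent of drop and restore.
        shutil.copyfile(GOLDEN_DB, WORKING_DB)
        conn = sqlite3.connect(WORKING_DB)
        yield conn
        # Teardown: discard the possibly dirtied working copy.
        conn.close()

    def test_customer_count(db):
        # Hypothetical test keyed to the golden data set.
        (count,) = db.execute("SELECT COUNT(*) FROM customers").fetchone()
        assert count == 42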

If DB fields contain confidential data, you may have to obfuscate certain fields (and their dependent values), or generate and use a synthetic population. It might be easier to copy or subset a live database, parameterize your test cases (i.e., make them data-driven), then generate a consistent test suite from the extracted data. You’d drop this DB after each test run, unless it’s needed as archival evidence.
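
For the obfuscation case, deterministic masking is one way to keep dependent values consistent across tables, so joins and equality checks in the extracted data still hold. The sketch below is hypothetical, with made-up field names:

    # Deterministic masking: the same input always maps to the same
    # token, so foreign keys and equality relationships survive.
    import hashlib

    def mask(value: str, salt: str = "test-env") -> str:
        return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

    def obfuscate_row(row: dict) -> dict:
        # Field names are made up for illustration.
        out = dict(row)
        out["ssn"] = mask(row["ssn"])
        out["email"] = mask(row["email"]) + "@example.test"
        return out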

From a LinkedIn discussion