Test Engineering

Creating test cases by hand takes a lot of effort. It takes time, and therefore costs plenty of money: on average, testing is estimated to consume roughly 50% of the project budget. So maybe we could just skip it? Well, we still need to test and, among other things, make sure that the system behaves the way we specified. But maybe we can develop an automatic method for creating tests? And this is the core idea: why not use the specification to generate the tests?

Automatically generating test cases from requirements?

We show you how. For this, we created a new way to build lightweight models from requirements. The advantage of lightweight models over plain text: test cases can be generated from them automatically. How awesome is that? Check out our YouTube demo to see the system in action: https://www.youtube.com/watch?v=PlaOzUmVIcM. You can find more information in two blog posts: Part 1 and Part 2. Or check out our live demo and try it yourself.
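To make the idea a bit more concrete, here is a minimal sketch in Python. It is my own illustration, not Specmate's actual algorithm or data format: a tiny cause-effect style model, where every combination of condition values becomes a test case and the rules of the model yield the expected results.

```python
from itertools import product

# A toy "lightweight model": hypothetical rules of the form
# (set of conditions that must all hold) -> expected effect.
# Names and structure are illustrative only.
rules = [
    ({"customer is registered", "cart is not empty"}, "checkout button is enabled"),
    ({"customer is registered", "cart is empty"}, "checkout button is disabled"),
]

# Collect all conditions mentioned anywhere in the model.
conditions = sorted({c for causes, _ in rules for c in causes})

def expected_effects(assignment):
    """Return the effects whose causes are all satisfied by the assignment."""
    return [effect for causes, effect in rules
            if all(assignment[c] for c in causes)]

# Enumerate every combination of condition values and derive a test case
# from each one: the assignment is the test's input, the triggered effects
# are its expected results.
for i, values in enumerate(product([True, False], repeat=len(conditions)), start=1):
    assignment = dict(zip(conditions, values))
    print(f"Test case {i}:")
    for condition, value in assignment.items():
        print(f"  - ensure that {condition}" if value else f"  - ensure that NOT ({condition})")
    print(f"  expected: {expected_effects(assignment) or ['no effect']}")
```

A real tool would of course prune infeasible combinations and apply coverage criteria instead of brute-force enumeration, but the principle is the same: once the requirement is a model, deriving tests is mechanical.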

In this blog post I am going to introduce Specmate, the result of a research project I have been involved in. It is an open-source tool that, among other things, automates test design. This is the first post of a series in which I am going to show you some of the ideas behind Specmate.

What is test-design and why does it matter?

Test design is the activity of coming up with the right test cases for a piece of functionality. But what are the right test cases? There are many criteria, depending on your focus. For me, there are two main points:
  • First, they should test the right content. That means they relate to the requirements for this functionality and cover every aspect the requirements talk about. They should hence be able to find faults: deviations of the implementation from the specification.
  • Second, they should be feasible. That means it should be possible to execute the test cases without wasting resources.

There are many reasons to do automated testing on the GUI level. Automated tests are fast, repeatable, and (hopefully) provide reliable test results. In the long run, they might even be cheaper than manual testing (the only alternative for GUI testing). Done the right way, you can even integrate them into your build system, giving you a final verdict on whether each build behaves as you expect.

One Can’t Go Wrong With Test Automation, Right?

All that sounds very promising. However, it’s quite easy to waste all these beautiful advantages of test automation by having a test suite of poor quality:
  • If your automated test suite is fragile and breaks every second time it is executed, testing becomes annoying quickly. And what is the benefit of a test suite that is unreliable? Would you trust the outcome of such a test suite?
  • If your test cases are labor-intensive to change, you will slow down your development pace. Put differently: the only way to keep your test suite up to date at the same speed as you develop new or changed features is to spend more effort on changing your test suite. And soon you will ask yourself: “Why am I spending so much effort on test automation? What has my automated test suite ever done for me?”

At Qualicen, it's often my job to check other people's system test cases and tell the team what I think about these tests. So what do I look for? Well, in principle it is simple: after tests are written down, they are "only" executed and maintained. So this is where tests can be bad, and I try to spot things that make execution and maintenance harder. For maintenance, the largest problem is clones, which we covered in our last blog post. For test execution, the main problem you want to avoid is that different testers test different things. This is called ambiguity, and it comes in many flavors. In this blog post, I want to explain what structural ambiguity is and why it is bad, and this way help you to create better test cases. (Scroll to the summary if you don't care about the details.) ;)

Ambiguous test flow

The problem for test execution that I want to discuss here is an ambiguous test flow. This means that for a single test case, there are multiple paths a tester can follow when she executes the test. Let’s look at an example.

[Figure: A simple, straight-forward natural language system test case.]
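To make "multiple paths" tangible, here is a small, purely illustrative Python sketch (my own invention, not an excerpt from any reviewed test suite): each test step lists the actions a tester could take at that point, and the number of possible executions is simply the product over the steps. As soon as one step offers a choice, two testers running "the same" test case may end up testing different things.

```python
# Illustrative only: a made-up mini-format for manual test steps.
# Each step lists the actions a tester could take at that point;
# more than one option means the flow is ambiguous.

def count_paths(steps):
    """Number of distinct ways a tester could walk through the test case."""
    paths = 1
    for options in steps:
        paths *= len(options)
    return paths

# A straight-forward test: every step has exactly one possible action.
linear_test = [
    ["Open the login page"],
    ["Enter valid credentials and press 'Login'"],
    ["Check that the dashboard is shown"],
]

# An ambiguous test: step 2 leaves the tester a choice, so two testers
# executing "the same" test case may actually test different behaviour.
ambiguous_test = [
    ["Open the login page"],
    ["Log in with a registered account",
     "Log in as a guest user"],
    ["Check that the dashboard is shown"],
]

print(count_paths(linear_test))     # 1
print(count_paths(ambiguous_test))  # 2
```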

I recently reviewed a manual test suite of one of our customers. One of the first things I check very early in a review is the number of clones (i.e. duplicated parts of a test suite, usually created by copy and paste). In this recent case, I discovered that nearly 70% of the test suite is duplicated. That means, when I pick some arbitrary test step, the chance is 70% that this step is a 1:1 copy of another step. At the top of the post is a tree map that visualizes the amount of clones I found: each rectangle represents a test, and the redder a rectangle is, the more cloning it contains.

In my experience, cloning in test suites is the biggest problem with regard to the maintainability of a test suite. Cloning causes considerable costs, as the effectiveness of the test suite decreases and the effort for maintenance skyrockets. In this post I take a closer look at cloning in test suites. I show you an example to illustrate what clones can look like and explain where clones come from. Later, I give you good reasons why you should care about clones in tests and discuss strategies you can employ to avoid, or at least deal with, clones.
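For the curious, here is roughly how such a number can be computed. This is a minimal sketch of my own (the actual review used a clone-detection tool, which also finds longer copied sequences, not just single steps): treat every step that occurs more than once, verbatim, as cloned and report the fraction of cloned steps.

```python
from collections import Counter

def clone_coverage(test_suite):
    """Fraction of test steps that are a 1:1 copy of some other step."""
    steps = [step.strip().lower()
             for test in test_suite
             for step in test]
    counts = Counter(steps)
    cloned = sum(1 for s in steps if counts[s] > 1)
    return cloned / len(steps) if steps else 0.0

# A tiny, made-up suite: each test is a list of steps.
suite = [
    ["Open the order form", "Enter customer data", "Press 'Save'"],
    ["Open the order form", "Enter customer data", "Press 'Cancel'"],
    ["Open the reporting view", "Export the report as PDF"],
]

print(f"{clone_coverage(suite):.0%} of the steps are exact copies of another step")
```

Real clone detection works on sequences of steps and tolerates small edits, but even this naive measure gives a first impression of how much copy and paste a test suite contains.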