User stories and acceptance criteria are the backbone of agile development. Everyone knows that badly written user stories provide little value; in extreme cases, they even do more harm than good. Therefore, many best practices and templates exist to guide us toward writing better user stories.
Test smell detection improves your manual test cases. Automatically detecting test smells helps make your test suite easy to understand and easy to maintain, and it also leads to consistent, reproducible test results. The best way to find test smells is our Qualicen Scout. Scout can detect test smells in textual test descriptions automatically. Configured once, it immediately shows where you can improve your test descriptions. What kind of improvements, you ask? Well, there is a wide variety of so-called "Test Smells" Scout can detect automatically! Test smells are words, phrases, or language constructs that harm your test quality. For example (a simplified sketch of such a check follows the list below), ...
- ambiguous phrases in your test descriptions (a threat to reproducible test results).
- sentences/paragraphs that are difficult to comprehend (so your colleagues don't have to ask: "eh, what?").
- tests having multiple flows (shouldn’t a test focus on just one case?).
- steps that have been copied between test cases (super annoying to keep in sync when things change).
- and many more ...
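To make this a bit more concrete, here is a minimal sketch of what a rule-based smell check over textual test steps could look like. This is not Scout's actual implementation; the phrase list, the `Finding` structure, and the `find_smells` function are purely illustrative assumptions.

```python
import re
from dataclasses import dataclass

# Illustrative phrase list; a real tool uses far richer, configurable
# rule sets (this is an assumption, not Scout's actual rule set).
AMBIGUOUS_PHRASES = ["if necessary", "as appropriate", "etc.", "and so on", "some"]

@dataclass
class Finding:
    step_no: int
    smell: str
    evidence: str

def find_smells(test_steps):
    """Scan textual test steps for simple smells: ambiguous wording
    and overly long sentences that are hard to follow."""
    findings = []
    for i, step in enumerate(test_steps, start=1):
        lowered = step.lower()
        for phrase in AMBIGUOUS_PHRASES:
            if phrase in lowered:
                findings.append(Finding(i, "ambiguous phrase", phrase))
        # Very rough readability heuristic: flag sentences over 30 words.
        for sentence in re.split(r"[.!?]", step):
            if len(sentence.split()) > 30:
                findings.append(Finding(i, "hard to comprehend", sentence.strip()[:60] + "..."))
    return findings

if __name__ == "__main__":
    steps = [
        "Open the settings dialog and adjust the options as appropriate.",
        "Click 'Save' and verify that the confirmation message is shown.",
    ]
    for f in find_smells(steps):
        print(f"Step {f.step_no}: {f.smell} -> '{f.evidence}'")
```

A real detector of course needs configurable rule sets and smarter language analysis, but the basic idea of checking test text against quality rules stays the same.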
This article is a sequel to our blog post Structured Test-Design With Specmate – Part 1: Requirements-Based Testing, published by my colleague Maximilian. So far, Maximilian has introduced you to our tool Specmate, a tool that helps you automate your test design. He explained how to model requirements using cause-effect models and how to automatically generate test specifications from them. However, Maximilian told you only half the story (I'm sure you already guessed that from the "Part 1" in the title ;-) ): not all requirements are like the ones in his examples. Many requirements are of a different nature and can't easily be specified using cause-effect models. In this post, I'll demonstrate the second way of modeling requirements in Specmate: business processes. Furthermore, I'll show how to automatically generate end-to-end tests based on these business processes.
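To give a rough idea of what "generating tests from a business process" means: a business process can be viewed as a directed graph of steps and decisions, and each path from start to end through that graph yields one end-to-end test case. The sketch below illustrates this with a hypothetical order process; the graph format and the `enumerate_paths` function are assumptions for illustration, not Specmate's actual model format or generation algorithm.

```python
# A business process viewed as a directed graph: nodes are process steps,
# edges are transitions, optionally labeled with a decision outcome.
# Hypothetical example process, not taken from Specmate.
PROCESS = {
    "Start":           [("Enter order", None)],
    "Enter order":     [("Check stock", None)],
    "Check stock":     [("Ship order", "in stock"), ("Notify customer", "out of stock")],
    "Ship order":      [("End", None)],
    "Notify customer": [("End", None)],
    "End":             [],
}

def enumerate_paths(graph, node="Start"):
    """Yield all start-to-end paths as lists of (step, branch_condition) pairs.
    Each complete path corresponds to one end-to-end test case."""
    if not graph[node]:          # terminal node reached: one complete test
        yield [(node, None)]
        return
    for successor, condition in graph[node]:
        for rest in enumerate_paths(graph, successor):
            yield [(node, condition)] + rest

if __name__ == "__main__":
    for n, path in enumerate(enumerate_paths(PROCESS), start=1):
        steps = " -> ".join(f"{step} [{cond}]" if cond else step
                            for step, cond in path)
        print(f"Test case {n}: {steps}")
```

Note that this naive enumeration only works for acyclic processes; models with loops need a coverage criterion to bound the number and length of paths, which is one of the things a dedicated tool handles for you.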
There are many reasons to do automated testing at the GUI level. Automated tests are fast, repeatable, and (hopefully) provide reliable test results. In the long run, they might even be cheaper than manual testing (the only alternative for GUI testing). Done right, you can even integrate them into your build system, giving you the final verdict that each build behaves as you expect.
One Can’t Go Wrong With Test Automation, Right?

All that sounds very promising. However, it’s quite easy to waste all these beautiful advantages of test automation by having a test suite of poor quality:
- If your automated test suite is fragile and breaks every second time it is executed, testing quickly becomes annoying. And what is the benefit of a test suite that is unreliable? Would you trust the outcome of such a test suite?
- If your test cases are labor-intensive to change, you will slow down your development pace. Put differently: the only way to keep your test suite up to date at the same speed as you develop new or changed features is to spend more effort on changing your test suite. And soon you will ask yourself: “Why am I spending so much effort on test automation? What has my automated test suite ever done for me?”
Team Foundation Server (TFS) and its software-as-a-service counterpart, Microsoft Visual Studio Team Services (VSTS), are widely used application lifecycle management (ALM) and test management tools. They offer many great facilities to create tests, manage test plans, and execute them. Consequently, many of our clients as well as prospective customers wanted to use our test improvement software Test Scout along with TFS/VSTS to improve their test case quality. So, here is the question we always face: how do we get the data from a testing tool into Test Scout? As always in life, there is a straightforward solution and a fancy one. Let me show you what I mean.
First: a simple integration

Test Scout is able to process almost any kind of text format, so integrating test management tools such as TFS/VSTS is quite straightforward: for each test management tool, we create an export that we import into the Scout. For HP ALM, for example, we use a simple script to create a database dump containing all currently existing test cases. We then automatically import and process this data in Test Scout to evaluate the test case quality. Since Test Scout keeps a version of each import in its database, the history of all test cases is available in Test Scout. Therefore, all features, such as comparing different versions of test cases and viewing the historical development of test cases, still work out of the box.
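To illustrate what such a "simple integration" export could look like for TFS/VSTS, here is a rough sketch that pulls test cases via the work item REST API and writes them to plain text files for import. The organization and project names, the API version, the selected fields, and the output layout are illustrative assumptions; the exact endpoints should be checked against your own TFS/VSTS installation.

```python
# Rough sketch of an export script: pull all test case work items from
# TFS/VSTS via the REST API and dump them as plain text files that a
# text-based tool like Test Scout can import. Names, api-version, and
# file layout are illustrative assumptions, not a supported integration.
import os
import requests
from requests.auth import HTTPBasicAuth

BASE = "https://myaccount.visualstudio.com/MyProject/_apis"  # hypothetical account/project
AUTH = HTTPBasicAuth("", os.environ["VSTS_PAT"])             # personal access token
API = {"api-version": "5.0"}

# 1) Query the IDs of all test case work items via WIQL.
wiql = {"query": "SELECT [System.Id] FROM WorkItems "
                 "WHERE [System.WorkItemType] = 'Test Case'"}
ids = [item["id"] for item in
       requests.post(f"{BASE}/wit/wiql", json=wiql, auth=AUTH, params=API)
               .json()["workItems"]]

# 2) Fetch each test case and write its title and steps as plain text.
os.makedirs("export", exist_ok=True)
for wi_id in ids:
    wi = requests.get(f"{BASE}/wit/workitems/{wi_id}", auth=AUTH, params=API).json()
    fields = wi["fields"]
    with open(f"export/testcase_{wi_id}.txt", "w", encoding="utf-8") as f:
        f.write(fields.get("System.Title", "") + "\n\n")
        # Steps are stored as XML in Microsoft.VSTS.TCM.Steps; a real export
        # would parse this XML into readable step/expected-result lines.
        f.write(fields.get("Microsoft.VSTS.TCM.Steps", ""))
```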