Conference Report: QS-Tag Frankfurt
[Photo: Benedikt and Henning at our booth]
Last week, we were at QS-Tag in Frankfurt, Germany. QS-Tag is a great venue for testers and everyone else who cares about quality in software engineering. This year's official topic was "Expanding Horizons", but the de-facto topic was AI and automation. We presented two talks, and I was invited to a panel discussion on the future of AI in testing. Here are our key takeaways:
Lightweight Requirements Modelling and Test Case Generation: Try it yourself!
Automatically generating test cases from requirements?
We show you how. For this, we developed a new way to build lightweight models of requirements. The advantage of lightweight models over plain text: from these models, test cases can be generated automatically. How awesome is that? Check out our YouTube demo to see the system in action: https://www.youtube.com/watch?v=PlaOzUmVIcM You can find more information in two blog posts: Part 1 and Part 2. Or check out our live demo and try it yourself.
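To make the idea concrete, here is a minimal Python sketch of what "model in, test cases out" can look like. It is not our actual tool; the requirement, the condition names, and the generation rule (one test case per combination of condition values) are all made up for illustration.

```python
# A minimal sketch of "model in, test cases out" (not our actual tool;
# the requirement and all names are made up for illustration): the
# requirement is modelled as conditions with possible values plus an
# expected outcome, and one test case is generated per combination.
from itertools import product

# Requirement: "If the door is closed and the start button is pressed,
# the microwave starts heating."
conditions = {
    "door": ["closed", "open"],
    "start button": ["pressed", "not pressed"],
}

def expected(state: dict) -> str:
    # The expected outcome, derived from the requirement.
    if state["door"] == "closed" and state["start button"] == "pressed":
        return "the microwave starts heating"
    return "the microwave does not start"

# Generate one test case per combination of condition values.
for values in product(*conditions.values()):
    state = dict(zip(conditions, values))
    givens = " and ".join(f"{k} is {v}" for k, v in state.items())
    print(f"GIVEN {givens}, THEN {expected(state)}")
```

Even this toy version shows the appeal: change the model, and the test cases update for free.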
Moving in at GATE Garching (with Pictures!)
As most of you know, we recently moved to the GATE in Garching (a university town, 20 minutes out of Munich). So in order to celebrate our new offices, I wanted to share a few pictures from those days with you. We moved here in December. Engineers as we are, we loved all the assembling! And not too many things broke, actually ;)
[Gallery: photos from the move]
Updates from Qualicen
Hey there! If you haven't heard from us at Qualicen in a while, it is because we are fortunately(!) very busy right now. Lots of cool projects all over Germany and even up in Sweden. Contact us if you would like to hear more about these projects, or meet us at one of the following venues.
Requirements quality is quality-in-use
I've worked for quite some time on understanding and detecting quality defects in requirements documents, and on requirements quality in general. All along, I have been dissatisfied with the current state of both research and practice on this topic. I think the problem behind this is a lack of guidance: in times of rapid change and delivery, where every project looks different, we still have no good rule for what a good requirements document is. A few years ago, we came up with such a rule and tried it in various applications. And, so far, it seems to work! We've collected this experience and are now ready to tell you about it, because we really believe it should change how you view requirements engineering and what you consider a good requirements document.
Efficiently control requirements quality: the best of two worlds
For high requirements quality, we need quality assurance. In a previous post, I explained why automatic methods cannot replace manual methods; instead, I suggested combining both worlds. The ugly truth is that in both system testing and requirements engineering, we need manual and automatic quality assurance to control requirements quality and test quality. Now you wonder: how? I've got you covered. In this brief post, I want to point out how you can combine the two worlds and how you benefit from the combination.
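As a teaser for what such a combination can look like, here is a minimal, hypothetical sketch: a tool pass catches mechanical defects on every change, and everything a tool cannot judge is explicitly routed to a human reviewer. The rules, word lists, and function names are invented for illustration, not Qualicen's actual process.

```python
# A minimal sketch (not our actual process) of combining the two worlds:
# automatic checks run on every change and catch mechanical defects
# cheaply, while questions a tool cannot judge are routed to a human.
import re

# Automatic: cheap, repeatable surface checks (made-up rules).
VAGUE = re.compile(r"\b(appropriate|user.friendly|as soon as possible)\b", re.I)

def automatic_findings(requirement: str) -> list[str]:
    findings = []
    if VAGUE.search(requirement):
        findings.append("vague wording")
    if len(requirement.split()) > 30:
        findings.append("very long sentence")
    return findings

def qa_pipeline(requirements: list[str]) -> None:
    for req in requirements:
        for finding in automatic_findings(req):
            print(f"TOOL  : {finding}: {req!r}")
        # Manual: content questions no tool can answer.
        print(f"REVIEW: is {req!r} correct, complete, and testable?")

qa_pipeline(["The UI shall be user-friendly.",
             "The pump stops within 2 s after the stop signal."])
```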
The ugly truth about automatic methods for requirements engineering quality.
Ok, after I publish this blog post, I will probably get some angry calls from my sales department... Well, the truth must be told. There are many crazy defects in requirements, and, as I wrote in my last post, you can detect quite a bunch of them automatically (and you should do so!). When I present our automatic methods for natural language requirement smells or for detecting defects in tests to our customers, I'm proud to say that they are usually very excited. Sometimes, however, they are too excited, and that can turn into a problem: I explain all the amazing things that you can detect with tools, and suddenly people think that the tool will solve all the problems they face. Spoiler alert: it doesn't. And because we're a company that is interested in happy customers, I want to briefly summarize all the problems (that come to my mind) that a tool can't solve. And because I don't want to leave you in despair, I will also suggest how I personally would work on each of them.
Which quality defects can you automatically detect in system tests and requirements?
I am a strong advocate of modern, automatic methods to improve our day-to-day work. I really don't want to check by hand whether my tests and requirements fit my template, or whether my sentences are readable. So quality assurance and defect detection, for example reviews or inspections, should use automation as far as possible. BUT: when I speak to clients, people sometimes get so hooked on the idea of automatic smell detection that I need to slow them down. Therefore, this post tries to give a rough overview of what can be detected automatically. The answer basically depends on two questions (see the sketch after this list):
- How much syntax (or structure) do your artifacts and tests have?
- Which language do you use?
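To illustrate the two dimensions, here is a small hypothetical sketch: a purely syntactic check that only needs structure (an agreed Given/When/Then step template), and a language-dependent check that is hard-wired to English. Neither is our actual analysis; the template and word patterns are made up.

```python
# A hypothetical sketch of the two questions (not our actual analyses).
# First dimension: structure. If test steps follow an agreed template,
# conformance can be checked purely syntactically.
import re

STEP = re.compile(r"^(Given|When|Then)\b")

def structure_findings(test_case: str) -> list[str]:
    return [f"step breaks the template: {line!r}"
            for line in test_case.splitlines()
            if line.strip() and not STEP.match(line)]

# Second dimension: language. This naive passive-voice hint only works
# for English; a German document would need different rules entirely.
PASSIVE = re.compile(r"\b(is|are|was|were)\s+\w+ed\b")

def language_findings(text: str) -> list[str]:
    return (["possible passive voice: the acting component is unclear"]
            if PASSIVE.search(text) else [])

case = ("Given the user is logged in\n"
        "When the form is submitted\n"
        "the result appears")  # the last step violates the template
print(structure_findings(case) + language_findings(case))
```

The more structure your artifacts have, the more the first kind of check can do; the second kind always depends on the language you write in.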
Conditionals: Why you should avoid these two letters for better test case execution
At Qualicen, it's often my job to check other people's system test cases and tell the team what I think about them. So what do I look for? In principle, it is simple: after tests are written down, they are "only" executed and maintained. So this is where tests can be bad, and I try to spot things that make execution and maintenance harder. For maintenance, the largest problem is clones, which we covered in our last blog post. For test execution, the main problem you want to avoid is that different testers test different things. This is called ambiguity, and it comes in many flavors. In this blog post, I want to explain what structural ambiguity is and why it is bad, and this way help you create better test cases.
(Scroll to the summary if you don't care about the details) ;)
Ambiguous test flow
The problem for test execution that I want to discuss here is an ambiguous test flow. This means that for a single test case, there are multiple paths a tester can follow when she executes the test. Let's look at an example.
[Figure: A simple, straight-forward natural language system test case.]
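The example above is the good case: one linear flow. As a hypothetical counter-example, here is a tiny Python sketch (the steps are invented) of what a conditional "if" step does to a test case: one written test suddenly has several execution paths, and different testers may walk different ones.

```python
# A hypothetical illustration (steps invented) of why "if" hurts: one
# written test case with a conditional step expands into several
# execution paths, and different testers may walk different ones.
from itertools import product

# Each step is either a fixed instruction or a pair of alternatives
# that a conditional step offers to the tester.
steps = [
    "Open the settings dialog",
    ("If a profile exists, select it", "If not, create a new profile"),
    "Save and close the dialog",
]

branches = [s for s in steps if isinstance(s, tuple)]
paths = list(product(*branches))
print(f"{len(paths)} ways to execute ONE written test case:")
for chosen in paths:
    it = iter(chosen)
    print(" -> ".join(next(it) if isinstance(s, tuple) else s
                      for s in steps))
```

With two conditional steps you would already get four paths, and so on: every "if" doubles the ways your testers can read the same test.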