Last week, we were at QS-Tag in Frankfurt, Germany. QS-Tag is a great venue for testers and everyone else who cares about quality in software engineering. This year’s official topic was Expanding Horizons, but the de facto topic was AI and automation. We gave two talks, and I was invited to a panel discussion on the future of AI in testing. Here are our key takeaways:
Since this is my first blog article, I would like to take the opportunity to introduce myself. My name is Jannik Fischbach, and I started my PhD in the middle of May this year. My research deals with test automation in agile software environments and with keeping requirements and tests consistent despite frequent changes. However, this blog article is not about my PhD topic, but about my first experience at a conference. More specifically, the RE Conference on Jeju Island, South Korea, last week. But first, let’s jump back a few months.
To get a glimpse of writing a paper, I converted my master’s thesis into a paper at the beginning of my PhD (strongly shortened, of course 😉). While searching for a suitable venue, I came across the AIRE Workshop. This workshop deals with the application of Artificial Intelligence to Requirements Engineering, and since my master’s thesis deals with a similar topic, I thought it might fit quite well. One month later I received the feedback that I had been accepted (you can find the paper here: https://arxiv.org/abs/1908.08810). Last week (on 24 September, to be precise) I presented my paper in South Korea. While the journey to Jeju was rather sluggish due to a typhoon, the presentation at the workshop went well, and overall I liked the workshop and the other presented papers very much. I did not stay in South Korea for just that one day, though; I also wanted to experience how a scientific conference works. Since I had never attended a conference before, I didn’t know exactly what to expect. Looking back, however, I can say that my expectations were fulfilled and that I liked the whole conference very much. Everybody at the conference (no matter whether doctoral student, student volunteer, or professor) was always open and willing to help. There was always an opportunity to get in touch and talk about your research project. I also enjoyed being able to attend the various events at the conference: from panel discussions to tutorials to regular paper presentations. Thanks to the wealth of different topics, there was something for everyone.
All in all, the conference was a complete success and I would like to thank the organizers once again. I hope to be able to participate again next year. I already have an idea for a new paper. 😊
See you next time,
Requirements documentation is mainly done either in Natural Language (NL) or in formal models such as UML or SysML. NL offers the lowest learning curve and the most flexibility, which for many companies means: “Everyone can start writing requirements without formal training.”
In contrast, formal modeling languages require considerable effort to learn and are very restrictive. The flexibility of NL, however, comes with ambiguity and inconsistency. These are two major downsides that formal modeling languages aim to eliminate.
Our customers often ask: “Is there something in between that keeps the benefits of NL but reduces the downsides?” Our answer: “Yes, a requirement syntax.”
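To make the idea concrete, here is a minimal sketch of what enforcing a requirement syntax can look like in code. The template below (“The &lt;system&gt; shall &lt;action&gt;.”) and the function name are illustrative assumptions for this post, not Qualicen’s actual syntax or tooling:

```python
import re

# Hypothetical, deliberately simple requirement template:
#   "The <system> shall <action>."
# A real requirement syntax would offer several such templates.
TEMPLATE = re.compile(r"^The (?P<system>[A-Za-z ]+?) shall (?P<action>.+)\.$")

def matches_syntax(requirement: str) -> bool:
    """Return True if the requirement follows the fixed sentence template."""
    return TEMPLATE.match(requirement) is not None

# Usage:
matches_syntax("The login service shall lock the account after three failed attempts.")  # True
matches_syntax("Accounts get locked sometimes.")  # False
```

The point of such a template is exactly the middle ground described above: authors still write plain English sentences, but the fixed structure removes most room for ambiguity.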
But what does that children’s puzzle have to do with writing requirements?
Have you ever read a text and suddenly had a feeling of déjà vu? Maybe this happened because you came across a sentence that was very similar to one you had read before. We call these semantic duplicates.
Semantic duplicates can arise because we think one specific instruction is so important that we simply have to repeat it. Often, however, they are the result of plain copy-pasting. Semantic duplicates cause two problems. First, they can lead to inconsistency within the requirements: if two similar sentences describe the same requirement, that requirement can be interpreted in two different ways. Second, if the sentences are not just similar but exact copies of each other, the copy is simply superfluous. In either case, semantic duplicates are redundant, which is why we decided to tackle this problem.
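A minimal sketch of the underlying idea: compare every pair of sentences and flag pairs that are nearly identical. This toy version uses plain string similarity from Python’s standard library; the function name and threshold are assumptions for illustration, and real semantic-duplicate detection would also need to catch paraphrases, which surface similarity alone misses:

```python
import difflib
from itertools import combinations

def find_duplicates(sentences, threshold=0.8):
    """Flag sentence pairs whose surface similarity exceeds the threshold.

    Deliberately simple: SequenceMatcher measures character overlap, so it
    catches copy-paste variants but not true paraphrases.
    """
    pairs = []
    for a, b in combinations(sentences, 2):
        ratio = difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio >= threshold:
            pairs.append((a, b, round(ratio, 2)))
    return pairs

# Usage: the first two requirements differ only in one word.
reqs = [
    "The system shall log every failed login attempt.",
    "The system shall log each failed login attempt.",
    "The user can export reports as PDF.",
]
find_duplicates(reqs)  # flags the first two sentences as near-duplicates
```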
Anyone who has used a recent version of Specmate may have already noticed an amazing new feature. The overview screen of any Cause-Effect-Graph now contains an inconspicuous little button titled “Generate Model”. Clicking this button triggers a chain of systems that uses natural language processing to turn a sentence like
If the user has no login, and login is needed, or an error is detected, a warning window is shown and a signal is emitted.
directly into a CEG, without any additional work from the user:
In this article we will do a deep dive into this feature, have a look at the natural language processing (NLP) pipeline required to make such a system work, see different approaches to implementing it, and figure out how to harness the power of natural language processing for our goals.
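To give a first intuition for what such a pipeline has to do, here is a toy heuristic that splits the example sentence above into cause clauses and effect clauses. This is an illustrative sketch only, with assumed function names; Specmate’s actual pipeline relies on proper NLP parsing rather than comma splitting:

```python
import re

def split_conditional(sentence):
    """Toy heuristic: split an "If <causes>, <effects>" sentence into
    cause clauses and effect clauses.

    Clauses are separated at commas; leading clauses joined by "and"/"or"
    count as causes, everything after them as effects. Real cause-effect
    extraction needs a dependency parse to be robust.
    """
    body = re.sub(r"^if\s+", "", sentence.rstrip("."), flags=re.IGNORECASE)
    parts = [p.strip() for p in body.split(",")]
    causes, effects = [parts[0]], []
    for part in parts[1:]:
        m = re.match(r"^(and|or)\s+", part)
        if not effects and m:
            causes.append(part[m.end():])  # drop the connective
        else:
            effects.append(part)
    # Effect clauses may themselves be conjoined with "and".
    effects = [e for chunk in effects for e in re.split(r"\s+and\s+", chunk)]
    return causes, effects
```

Applied to the sentence above, this yields three cause nodes (“the user has no login”, “login is needed”, “an error is detected”) and two effect nodes (“a warning window is shown”, “a signal is emitted”), which is exactly the node structure a CEG needs.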
Test Smell Detection improves your manual test cases. The automatic detection of test smells helps make your test suite easy to understand and easy to maintain. In addition, automatic detection also leads to consistent, reproducible test results.
The best way to find test smells is our Qualicen Scout. Scout can detect test smells in textual test descriptions automatically. Configured once, it immediately shows where you can improve your test descriptions.
What kind of improvements, you ask? Well, there is a wide variety of so-called “test smells” Scout can automatically detect! Test smells are words, phrases, or language constructs that harm the quality of your tests. For example, …
- ambiguous phrases in your test descriptions (a threat to reproducible test results).
- sentences/paragraphs that are difficult to comprehend (so your colleagues don’t have to ask: “eh, what?”).
- tests having multiple flows (shouldn’t a test focus on just one case?).
- steps that have been copied between test cases (super annoying to keep in sync when things change).
- and many more …
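The first smell in the list, ambiguous phrases, can be sketched in a few lines. The word list and function below are illustrative assumptions for this post; Scout’s actual rules are more sophisticated and configurable:

```python
# Hypothetical mini word list of ambiguous phrases; a real checker would
# ship a curated, configurable catalogue.
AMBIGUOUS = {"appropriate", "quickly", "if necessary", "several"}

def find_ambiguous_phrases(step: str):
    """Return the ambiguous phrases found in one test step (lowercased scan)."""
    text = step.lower()
    return sorted(p for p in AMBIGUOUS if p in text)

# Usage: this test step leaves timing and target undefined.
find_ambiguous_phrases("Wait quickly and press the appropriate button.")
# -> ["appropriate", "quickly"]
```

Each hit points at a spot where two testers could legitimately execute the step differently, which is precisely what threatens reproducible results.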
You might have seen the pictures of green swans on our Twitter feed from time to time. You have probably been wondering: What the heck is going on at Qualicen? How is this related to requirements engineering?
Automatically generating test cases from requirements? We show you how. For this, we created a new way to c...
We observe that cyber-physical systems are rapidly gaining functionality, and their development is therefore becoming more and more complex. In many areas, innovation is enabled by a complex interplay of sensor systems and software. Consider autonomous driving, where a multitude of different system functions must interact safely with one another to make complex, high-quality decisions and transport people safely. To master this complexity, the classical, document-centered approaches of systems engineering are no longer sufficient; they are increasingly being replaced by model-based systems engineering (MBSE) approaches.
The SPES modeling framework provides a comprehensive, tool- and modeling-language-independent method for MBSE. It offers a whole range of concrete models, modeling techniques, and activities. In this blog post, I will gently introduce you to SPES. I will explain the basic principles of SPES and give some pointers to where you can find more.
How we investigated whether our Qualicen Scout is a useful tool for companies in the domains of software and systems engineering.
Why we wanted to answer this question
As research has shown, the quality of the requirements documentation influences all subsequent activities of the software engineering process. Errors detected late in the process lead to very expensive changes in every activity carried out up to that point. Accordingly, we at Qualicen help our customers assure the quality of requirements specifications before they are used in other activities.