If you have ever had quality defects in your requirements suite or test suite, you know how time-consuming and expensive they can become. However, due to the sheer size of requirements suites and test suites, manually assessing the quality of the contained artifacts is almost impossible. So is there no way out of this mess, or are you stuck knee-deep in it? There is help: Scout, the automated requirements and test analysis tool by Qualicen, is now available in a new and improved version: Scout 4.6 (LTS)!
Since this is my first blog article, I would like to take the opportunity to introduce myself. My name is Jannik Fischbach, and I started my PhD in the middle of May this year. I am working on test automation in agile software environments and on how to keep requirements and tests in line despite the high rate of change. However, this blog article is not about my PhD topic, but rather about my first experience at a conference: the RE Conference on Jeju Island, South Korea, last week. But first, let’s jump back a few months.
To get a glimpse of writing a paper, I converted my master’s thesis into a paper at the beginning of my PhD (strongly shortened, of course 😉). While searching for a suitable venue, I came across the AIRE Workshop. This workshop deals with the application of Artificial Intelligence to Requirements Engineering, and since my master’s thesis deals with a similar topic, I thought it might fit quite well. One month later I got the feedback that the paper was accepted (you can find it here: https://arxiv.org/abs/1908.08810). Last week (on 24 September, to be exact) I presented my paper in South Korea. While the journey to Jeju was rather sluggish due to a typhoon, the presentation at the workshop went well, and overall I liked the workshop and the other presented papers very much.

Nevertheless, I was not in South Korea for just that one day; I also wanted to experience how a scientific conference works. Since I had never attended a conference before, I didn’t know exactly what to expect. Looking back, however, I can say that my expectations were fulfilled and that I liked the whole conference very much. Everybody at the conference (no matter whether doctoral student, student volunteer, or professor) was open and willing to help. There was always an opportunity to get in touch and talk about your research project. I also enjoyed being able to attend the various events at the conference: from panel discussions to tutorials to regular paper presentations. With the wealth of different topics, there was something for everyone.
All in all, the conference was a complete success and I would like to thank the organizers once again. I hope to be able to participate again next year. I already have an idea for a new paper. 😊
See you next time,
Requirements are mainly documented either in natural language (NL) or in formal models such as UML or SysML. NL offers the lowest learning curve and the most flexibility, which for many companies means: “Everyone can start writing requirements without formal training”.
In contrast, formal modeling languages require considerable effort to learn and are very restrictive. However, the flexibility of NL comes at the price of ambiguity and inconsistency, two major downsides that formal modeling languages aim to eliminate.
Our customers often ask: “Is there something in the middle that keeps the benefits of NL but reduces the downsides?” Our answer: “Yes, a requirement syntax.”
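A requirement syntax is essentially a sentence template that every requirement must follow. As a rough illustration, here is a minimal checker for a simplified template of my own invention (the keywords and structure are an assumption for this sketch, not Qualicen’s actual syntax): an optional trigger clause, followed by “the &lt;system&gt; shall &lt;response&gt;.”

```python
import re

# Simplified sentence template (assumed for illustration):
#   [When/While/If <condition>,] the <system> shall <response>.
TEMPLATE = re.compile(
    r"^(?:(?:When|While|If)\s+[^,]+,\s+)?"   # optional trigger/condition clause
    r"[Tt]he\s+\w[\w ]*\s+shall\s+.+\.$"     # "the <system> shall <response>."
)

def follows_syntax(requirement: str) -> bool:
    """Return True if the requirement matches the sentence template."""
    return bool(TEMPLATE.match(requirement.strip()))

reqs = [
    "When the door opens, the controller shall switch on the light.",
    "The light should probably turn on somehow.",
]
for r in reqs:
    print(follows_syntax(r), "-", r)
```

The first requirement matches the template; the second is flagged because it avoids the mandatory “shall” and names no clear system reaction. A real syntax definition would of course cover more patterns than this single regular expression.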
But what does that children’s puzzle have to do with writing requirements?
We observe that many cyber-physical systems are rapidly gaining functionality, and their development is becoming more and more complex. In many areas, innovation is enabled by a complex interplay of sensor systems and software. Consider autonomous driving, where a multitude of different system functions must interact safely with one another to make complex, high-quality decisions and transport people safely. To master this complexity, the classical, document-centered approaches of systems engineering are no longer sufficient and are increasingly being replaced by model-based systems engineering (MBSE) approaches.
The SPES modeling framework provides a comprehensive method for MBSE that is independent of specific tools and modeling languages. It offers a whole range of concrete models, modeling techniques, and activities. In this blog post, I will give you a gentle introduction to SPES: I will explain its basic principles and give some pointers on where to find more.
How we investigated whether our Qualicen Scout is a useful tool for companies in the domains of software and systems engineering.
Why we wanted to answer this question
As research has shown, the quality of the requirements documentation influences the subsequent activities of the software engineering process. Detecting errors late in the process leads to very expensive changes in every previously executed activity. Accordingly, we at Qualicen help our customers assure the quality of requirements specifications before they are used in other activities.
This article is a sequel to our blog post Structured Test-Design With Specmate – Part 1: Requirements-Based Testing, published by my colleague Maximilian. So far, Maximilian has introduced you to our tool Specmate – a tool that helps you automate your test design. He explained how to model requirements using cause-effect models and how to automatically generate test specifications from them.
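To recap the idea from Part 1 in a few lines: a cause-effect model links boolean causes to effects, and test cases can be derived by enumerating combinations of cause values. The model below is a toy example of my own, and real tools like Specmate use smarter coverage criteria than full enumeration:

```python
from itertools import product

# Toy cause-effect model (assumed example): two causes, one effect rule
# ("the warning beeps if the door is open while the engine is running").
causes = ["door_open", "engine_running"]
def effect(door_open, engine_running):
    return door_open and engine_running

# Derive one test case per combination of cause values.
test_cases = []
for values in product([False, True], repeat=len(causes)):
    inputs = dict(zip(causes, values))
    test_cases.append({"inputs": inputs, "expected_warning": effect(**inputs)})

for tc in test_cases:
    print(tc)
```

Full enumeration yields 2^n test cases for n causes, which is why practical approaches prune combinations while still covering each cause’s influence on the effect.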
However, Maximilian told you only half the story (I’m sure you already guessed that from the “Part 1” in the title 😉): not all requirements are like the ones in his examples. Many requirements are of a different nature and can’t be specified easily using cause-effect models.
In this post, I’ll demonstrate the second way of modeling requirements in Specmate: business processes. Furthermore, I’ll show how to automatically generate end-to-end tests from these business processes.
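The core idea behind generating end-to-end tests from a business process can be sketched as graph traversal: every path from the start node to the end node becomes one test. The order process below is a made-up example; Specmate’s actual models and coverage criteria are richer than this:

```python
# Toy business process as a directed graph (made-up example).
process = {
    "start":         ["check order"],
    "check order":   ["order valid", "order invalid"],
    "order valid":   ["ship goods"],
    "order invalid": ["reject order"],
    "ship goods":    ["end"],
    "reject order":  ["end"],
}

def all_paths(graph, node="start", path=None):
    """Enumerate every start-to-end path; each path is one end-to-end test."""
    path = (path or []) + [node]
    if node == "end":
        return [path]
    paths = []
    for successor in graph.get(node, []):
        paths.extend(all_paths(graph, successor, path))
    return paths

for test in all_paths(process):
    print(" -> ".join(test))
```

For this process, two end-to-end tests result: the happy path through shipping and the rejection path. Processes with loops need an extra bound on path length, which this sketch omits.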
In this blog post I am going to introduce Specmate, the result of a research project I have been involved in. It is an open-source tool that, among other things, automates test design. This is the first post of a series in which I am going to show you some of the ideas behind Specmate.
What is test-design and why does it matter?
Test design is the activity of coming up with the right test cases for a piece of functionality. But what are the right test cases? There are many criteria, depending on your focus. For me, there are two main points:
- First, they should test the right content. That means they relate to the requirements for this functionality and cover every aspect the requirements talk about. They should hence be able to find faults: deviations of the implementation from the specification.
- Second, they should be feasible. That means it should be possible to execute the test cases without wasting resources.
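The first criterion can be made measurable by tracing each test case to the requirements it exercises and reporting any requirement left uncovered. A minimal sketch with invented requirement and test-case IDs:

```python
# Invented trace data: which requirements each test case covers.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
test_traces = {
    "TC-1": {"REQ-1"},
    "TC-2": {"REQ-1", "REQ-3"},
}

# A requirement is covered if at least one test case traces to it.
covered = set().union(*test_traces.values())
uncovered = requirements - covered
print("uncovered requirements:", sorted(uncovered))
```

Here REQ-2 has no test tracing to it, so the suite cannot find faults in that part of the specification, which is exactly the gap a coverage report should surface.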
When we look at requirements documents that are new to us, we often need some help with terms and abbreviations. Creating a glossary to explain these important domain terms and abbreviations is a fine idea. It helps new team members get going, improves the readability of a requirements specification, and helps avoid misunderstandings. The main problem with glossaries is that we create them once and update them only rarely. As a consequence, the majority of glossaries are not particularly useful. In this article, Qualicen consultant Maximilian Junker shows how you can get more out of your glossary and keep it always up to date.