Running empirical experiments is not so simple. It takes months to prepare, conduct, and analyze an experiment properly. In this tutorial we will provide hands-on insights on how to run an empirical experiment for comparing RE methods, based on the experience we have gained over the past four years in evaluating and comparing security requirements engineering methods during the E-RISE challenge.
In the first part of the tutorial, we will introduce a general experimental protocol for performing empirical evaluations of (security) RE methods. Then, we will illustrate it with examples of solutions that worked, both in our own controlled experiments and in other experiments from the key literature on evaluating RE methods. We will also share lessons learned during the execution of our experiments, as there are many pitfalls and traps that are simply not reported in the research papers describing empirical experiments.
In the second part, in order to show participants how the guidelines can be put into practice, we will engage them in a short experiment to evaluate the effectiveness of a security requirements elicitation method. At the end of the experiment we will look back at the experiment just run and reflect with the audience on what really happened behind the scenes and how different choices could have led to different experiments, or to outright failure.
Experimenting is a difficult and time-consuming process. It takes months to set up research goals, plan the experiment, and then conduct it properly. As a result, most researchers and practitioners have "proved" that their method works simply by reporting its application by the method's own authors.
One of the difficulties faced by many people is that there is very little guidance on how to actually perform experiments. Empirical papers report only the tip of the iceberg: the clean and shiny part of the study, not the bumps, crevices, and sheer volume of activities below the surface that keep the whole thing afloat. Further, they report only their particular research questions, which offers limited guidance whenever one's own research question is different.
At the end of the day, the purpose of an experiment is to show that method X works in practice (or at least works better than method Y). But what exactly does that mean? Anybody can apply any technique to any problem given enough time… but slicing beef with a prehistoric stone knife is not going to be quick, and the result is surely not a dish worth a Michelin star.
Over the last four years, we have run the E-RISE challenge (http://securitylab.disi.unitn.it/doku.php?id=erise), a series of empirical experiments in which researchers working in security requirements engineering across the world have seen their methodologies applied by a number of students and practitioners to case studies provided by industry.
What we will illustrate in the tutorial is a distilled, concrete protocol that tries to validate whether a method works in practice, i.e., whether it can be used by people other than its own authors.
We will discuss the various steps of the protocol, the data collection issues, and the various challenges that one may face during the execution of a study. Examples will come from the literature and from our experience in conducting controlled experiments to evaluate RE methods and techniques.
This tutorial should therefore be of interest to graduate students and to academic and industrial researchers who want to learn how to conduct controlled experiments to evaluate their favorite RE methods and techniques.
The tutorial will be divided into four parts: