How to Empirically Evaluate Your Pet (Security but Not Only) Requirements Engineering Method

Running empirical experiments is not simple: it takes months to prepare, conduct and analyze an experiment properly. In this tutorial we will provide hands-on insights on how to run empirical experiments for comparing RE methods, based on the experience we gained over the past four years of evaluating and comparing security requirements engineering methods during the E-RISE challenge.

In the first part of the tutorial, we will introduce a general experimental protocol to perform empirical evaluations of (security) RE methods. We will then illustrate it with examples of solutions that worked, both in our controlled experiments and in other experiments from the key literature on evaluating RE methods. We will also share lessons learned during the execution of our experiments: there are many pitfalls and traps, and they are simply not written down in the research papers reporting empirical experiments.

In the second part, to show participants how the guidelines can be put into practice, we will engage them in a short experiment to evaluate the effectiveness of a security requirements elicitation method. At the end of the experiment we will look back at the experiment just run and reflect with the audience on what really happened behind the scenes and how different choices could have led to different experiments, or to outright failure.

Please visit our page on the RE'14 official website or download our flyer.
Early-bird registration is open!

Relevance for the RE Conference and Intended Audience of the Tutorial

Experimenting is a difficult and time-consuming process. It takes months to set up research goals, plan an experiment and then conduct it properly. Thus, most researchers and practitioners "prove" that their method works by reporting applications carried out by the method's own authors.

One of the difficulties faced by many people is that there is very little guidance on how to actually perform experiments. Empirical papers report only the tip of the iceberg: the clean and shiny part of the study, not the bumps, crevices and sheer volume of activity below the surface that keeps the whole thing afloat. Further, they only report their particular research questions, which is of limited guidance whenever one's own research question is different.

At the end of the day, the purpose of an experiment is to show that method X works in practice (or at least works better than method Y). But what exactly does that mean? Everybody can use any technique on any problem given enough time… but slicing beef with a prehistoric stone knife is not going to be quick, and the result is surely not a dish worth a Michelin star.

Over the last four years, we have run the E-RISE challenge (http://securitylab.disi.unitn.it/doku.php?id=erise), a series of empirical experiments in which researchers working in security requirements engineering across the world have seen their methodologies applied by students and practitioners to case studies provided by industry.

What we will illustrate in the tutorial is a distilled, concrete protocol that tries to validate whether a method works in practice, i.e. whether it can be used

  • effectively,
  • efficiently,
  • by somebody other than the method's own inventors, and
  • on a real-world problem.

We will discuss the various steps of the protocol, the data collection issues, and the various challenges that one may face during the execution of a study. Examples will come from the literature and from our own experience of conducting controlled experiments to evaluate RE methods and techniques.

This tutorial should therefore be of interest to graduate students and to academic and industrial researchers who want to learn how to conduct controlled experiments to evaluate their favorite RE methods and techniques.

Structure of the Tutorial

The tutorial will be divided into four parts:

  1. Experimental Protocol (2 hrs). We will introduce a general experimental protocol that we developed and refined during a series of controlled experiments. The protocol is specifically targeted at assessing the performance of elicitation methods. We will discuss its organization along
    • the temporal development (training, execution, evaluation)
    • the conceptual dimension (execution of the experiment, and measurements of the results)
    • the data dimensions (quantitative vs qualitative feedback, perceived vs actual results)
    • the analysis dimensions (some simple statistical tests; see the sketch after this list).
  2. Experimental Instantiation (1.5 hrs). We will discuss how each step can be instantiated to achieve your desired results, with a series of examples (a semester-long, a two-week, a two-day and a two-hour experiment).
  3. Experiment (2.5 hrs). We will show participants how the guidelines can be put into practice. We will involve them in an experiment to evaluate the effectiveness of a security requirements elicitation method.
  4. Reflection (1 hr). After the practical exercise, we will dissect the experiment to explain and discuss
    • how it complies with the theoretical aspects presented,
    • some of the tricky issues behind its organization,
    • what seemed to work but didn't,
    • the issues raised during its execution, and
    • what was really necessary during the preparation.
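
As an illustration of the "simple statistical tests" mentioned under the analysis dimensions in part 1, here is a minimal sketch in Python (assuming SciPy is installed) that compares the effectiveness scores of two groups of participants, each applying a different method, with a non-parametric Mann-Whitney U test. All scores, group sizes and method names below are invented for illustration; they are not data from E-RISE.

  # Minimal sketch: comparing two elicitation methods with a simple
  # statistical test. All scores are hypothetical, not E-RISE data.
  from scipy.stats import mannwhitneyu

  # Hypothetical effectiveness scores (e.g., number of valid security
  # requirements elicited by each participant), one list per method.
  scores_method_x = [12, 15, 9, 14, 11, 13, 10, 16]
  scores_method_y = [8, 11, 7, 10, 9, 12, 6, 10]

  # Mann-Whitney U is non-parametric: it makes no normality assumption,
  # which suits the small samples typical of controlled RE experiments.
  statistic, p_value = mannwhitneyu(scores_method_x, scores_method_y,
                                    alternative="two-sided")
  print(f"U = {statistic}, p = {p_value:.3f}")

A p-value below the chosen significance level (conventionally 0.05) would suggest that the two methods lead to different effectiveness scores; with samples this small, such a test will in practice only detect fairly large differences.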