One of the research topics of the Security Group is to understand whether security and risk assessment methodologies can actually work in practice.
There are many risk assessment methodologies and many security requirements methods, both from industry (COBIT, ISO 2700x, ISECOM's OSSTMM) and from academia (CORAS, SecureTropos, SI*, SREP, etc.).
To answer this question we first need to ask what "in practice" actually means. The usual interpretation among researchers is that the researchers tackle a real-world problem.
But this is just the first mile of a long road. An anecdote explains it well: V.M., a former air traffic controller with 30+ years of experience in evaluating controller software, was evaluating our tool for safety patterns (see Security Requirements Engineering). He told us our software had generated a Windows error message (the kind that reads `error at 0xAB91A'). It was not an error: it was a window showing that a logical formula did not hold on his proposed safety pattern!
A methodology works in practice if
it can be used effectively and
efficiently
by somebody other than the methodology's own inventors
on a real-world problem.
Everybody can of course use any technique on any problem, given sufficient time and patience… but slicing beef with a stone knife is not going to be quick, and the result is surely not a fine carpaccio.
In this research stream of empirical analysis we run experiments that evaluate how "normal" people (auditors, domain experts, even students) use researchers' and consultants' tools and methods, in order to understand what the problem in practice really is.
Since our research questions are exploratory in nature, we applied a mixed-method experimental methodology combining both qualitative and quantitative data collection and analysis techniques. We evaluate a method's effectiveness based on the reports delivered by the participants, while we investigate why methods are effective by means of questionnaires, focus group interviews, and post-it notes.
One of our goals is to investigate whether the methods under evaluation could be used
effectively by users who have no prior knowledge of the methods. Therefore we have designed a protocol
to conduct comparative empirical studies in this setting. The protocol consists of three main phases:
Training. First, participants are administered a questionnaire to collect information about their level of expertise in requirements engineering, in security, and in other methods they may know. Then, they are divided into groups, each composed of one master's student and two professionals. Each group is assigned a security requirements or risk analysis method and an industrial case to be analyzed using that method. The participants attend lectures about the method and about the industrial application scenario. At the end of the Training phase, participants are administered a questionnaire to determine their level of understanding of the method and of the industrial application scenario.
Application. Participants work in groups and apply the method to analyze the application scenario. Group collaboration takes place both face-to-face and remotely, using multiple communication channels (e.g., mail, chat, video conferencing). At the end of this phase, participants are involved in focus group interviews and are asked to fill in post-it notes and a questionnaire about their impressions of the method. To document the application of the method, the groups are audio-video recorded. In addition, each group has to deliver a final report.
Evaluation. On one side, participants assess the method's effectiveness through the focus group interviews, post-it notes, and questionnaires collected at the end of the Application phase. On the other side, method designers assess whether the participants have followed the method, while customers evaluate whether the groups have identified security requirements or countermeasures that are specific to the application scenario, and whether they are able to justify their results based on the method's application.
Our Experimental Protocol involves five types of actors:
Method Designer is the researcher who proposed one of the methods under evaluation. S/he is mainly responsible for training participants in the method and for answering participants' questions during the Application phase. S/he also contributes to the assessment of the method's effectiveness by analyzing the groups' reports.
Customer is an industrial partner who introduces the industrial application scenario to the participants. S/he also has to be available during the Application phase to answer the questions that participants may raise during the analysis.
Observer plays an important role during the Application phase: observers supplement the audio-video recordings with information about the behavior of participants (e.g., whether they work as a group or alone) and about the difficulties they face during the application of the method. The observer also interviews the groups and leads the post-it note sessions.
Researcher takes care of the organization, sets the research questions, selects the participants, invites the method designers and the customers, and analyzes the data collected during the study.
Participant is the most important role. Participants work in groups and apply a method provided by one of the method designers to analyze the risk and security issues of the scenario provided by the customer.
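Purely as an illustration of how the pieces above fit together, the phases, actor types, and per-phase artifacts can be summed up in a small Python sketch. Every name in it (Phase, Role, Group, collect, and the example values such as CORAS or the air-traffic scenario) is our own choice for the illustration, not part of the protocol itself.

from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    # the three main phases of the protocol
    TRAINING = "training"
    APPLICATION = "application"
    EVALUATION = "evaluation"

class Role(Enum):
    # the five types of actors involved in the protocol
    METHOD_DESIGNER = "method designer"
    CUSTOMER = "customer"
    OBSERVER = "observer"
    RESEARCHER = "researcher"
    PARTICIPANT = "participant"

@dataclass
class Group:
    members: list          # one master's student and two professionals
    method: str            # the security requirements or risk analysis method assigned
    scenario: str          # the industrial application scenario assigned
    artifacts: dict = field(default_factory=dict)

    def collect(self, phase, artifact):
        # record an artifact (questionnaire, report, recording, post-it notes) under its phase
        self.artifacts.setdefault(phase, []).append(artifact)

# hypothetical usage with made-up example values
group = Group(members=["student", "professional 1", "professional 2"],
              method="CORAS", scenario="air traffic management case")
group.collect(Phase.TRAINING, "pre- and post-training questionnaires")
group.collect(Phase.APPLICATION, "audio-video recordings and final report")
group.collect(Phase.EVALUATION, "focus group interview, post-it notes, questionnaire")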
We collected a huge pile of data as evidence of the methods' effectiveness: questionnaires, final reports, hours of audio recordings of focus group interviews, hundreds of post-it notes, and more than 300 hours of video of the training and application phases. If you would like to have access to the raw data you are welcome to do so, but we would need to discuss the conditions of access.