Empirical Validation of Risk and Security Methodologies
One of the research topics of the Security Group is to understand whether security and risk assessment methodologies can actually work in practice.
There are many risk assessment methodologies and security requirements methods, both from industry (COBIT, ISO 2700x, ISECOM's OSSTMM) and from academia (CORAS, SecureTropos, SI*, SREP, etc.).
To answer this question we first need to ask what "in practice" actually means. The usual interpretation among researchers is that the researchers tackle a real-world problem.
But this is just the first mile of a long road. We can explain it with an anecdote: V.M., a former air traffic controller with 30+ years of experience in evaluating controller software, was evaluating our tool for safety patterns (see Security Requirements Engineering). He told us our software had generated a Windows error message (the kind of "error at exAB91A"). It was not an error: it was a window showing that a logical formula did not hold on his proposed safety pattern!
A methodology works in practice if
it can be used effectively and
efficiently
by somebody else besides the methodology's own inventors
on a real-world problem.
Everybody can of course use any technique on any problem given sufficient time and patience… but slicing beef with a stone knife is not going to be quick, and the result is surely not a fine carpaccio.
In this research stream of empirical analysis we run experiments that evaluate how "normal" people (auditors, domain experts, even students) use researchers' and consultants' tools and methods, to understand what the real problems are in practice.
Research Approach and Experimental Protocol
Since our research questions are exploratory in nature, we apply a mixed-method experimental methodology combining qualitative and quantitative data collection and analysis techniques. We evaluate methods' effectiveness based on the reports delivered by the participants, while we investigate why methods are effective by means of questionnaires, focus group interviews, and post-it notes.
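To make the quantitative side concrete, below is a minimal sketch of how effectiveness scores derived from two groups' reports could be compared. The numbers are invented placeholders, and the choice of SciPy's Mann-Whitney U test is ours for illustration (a common option for small samples with no normality guarantee), not a fixed part of the protocol.

```python
# Illustrative sketch only: the scores below are made-up placeholders,
# not data from our studies.
from scipy import stats

# Hypothetical counts of scenario-specific security requirements found
# by groups using two different methods.
method_a_scores = [12, 15, 9, 14, 11, 13]
method_b_scores = [8, 10, 7, 12, 9, 6]

# Small samples with unknown distribution suggest a non-parametric test.
u_stat, p_value = stats.mannwhitneyu(method_a_scores, method_b_scores,
                                     alternative='two-sided')
print(f"Mann-Whitney U = {u_stat}, p = {p_value:.3f}")
```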
One of our goals is to investigate whether the methods under evaluation could be used effectively by users who have no prior knowledge of the methods. Therefore we have designed a protocol to conduct comparative empirical studies in this setting. The protocol consists of three main phases:
Training. First, participants are administered a questionnaire to collect information about their level of expertise in requirements engineering, in security, and in other methods they may know. Then, they are divided into groups, each composed of one master student and two professionals. Each group is assigned a security requirements or risk analysis method and an industrial case to be analyzed using the method. The participants attend lectures on the method and on the industrial application scenario. At the end of the Training phase, participants are administered a questionnaire to determine their level of understanding of the method and of the industrial application scenario.
Application. Participants work in groups and apply the method to analyze the application scenario. Group collaboration takes place both face-to-face and remotely, using multiple communication channels (e.g., mail, chat, video conferencing). To document the application of the methods, the groups are audio and video recorded. In addition, each group has to deliver a final report.
Evaluation. On one side, participants assess the methods' effectiveness: they take part in focus group interviews and are asked to fill in post-it notes and a questionnaire about their impressions of the method. On the other side, method designers assess whether the participants have followed the method, while customers evaluate whether the groups have identified security requirements or countermeasures that are specific to the application scenario, and whether they can justify their results based on the method's application.
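As a compact summary of the three phases, the sketch below models the data each phase yields, based on the description above. The data model itself is our illustration, not an artifact produced by the protocol.

```python
# Hypothetical sketch summarizing what the protocol collects per phase.
# Phase names and instruments come from the description above; the data
# model is purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    instruments: list[str] = field(default_factory=list)

PROTOCOL = [
    Phase("Training", ["entry questionnaire (expertise)",
                       "comprehension questionnaire (method, scenario)"]),
    Phase("Application", ["audio-video recordings", "observer notes",
                          "final report"]),
    Phase("Evaluation", ["focus group interviews", "post-it notes",
                         "impression questionnaire",
                         "designer and customer assessments"]),
]

for phase in PROTOCOL:
    print(f"{phase.name}: {', '.join(phase.instruments)}")
```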
Our Experimental Protocol involves five types of actors:
Method Designer is the researcher who has proposed one of the methods under evaluation. His/her main responsibility is to train participants in the method and to answer participants' questions during the Application phase. S/he also contributes to the assessment of the method's effectiveness by analyzing the groups' reports.
Customer is an industrial partner who introduces the industrial application scenario to the participants. S/he also has to be available during the Application phase to answer the questions that participants may raise during the analysis.
Observer plays an important role during the Application phase because s/he supplements the audio-video recordings with information about the participants' behavior (e.g., whether they work as a group or alone) and the difficulties they face while applying the method. The observer also interviews the groups and leads the post-it note sessions.
Researcher takes care of the organization, sets the research questions, selects the participants, invites the method designers and the customers, and analyzes the data collected during the study.
Participant is the most important role. Participants work in groups and apply a method provided by one of the method designers to analyze the risk and security issues of the scenario provided by the customer.
Experiments
Within this main research stream we have covered a number of themes; details can be found in the publications listed below.
Empirical Validation of Risk and Security Requirements Methodologies
The eRISE challenge. eRISE is an annual challenge that aims to compare the effectiveness of academic methods for the elicitation and analysis of threats and security requirements, and to investigate why these methods are effective. Four editions of the eRISE challenge have been held.
An Experimental Comparison of Tabular vs. Graphical Security Methods. We have conducted several experiments on this topic.
The Role of Catalogues of Threats and Security Controls in Security Risk Assessment. On this topic we have conducted three controlled experiments.
Risk Models Comprehension: An Empirical Comparison of Tabular vs. Graphical Representations. We have conducted seven experiments on this topic.
Empirical Evaluation of CVSS Environmental Metrics.
Nov 2016 at the University of Trento, Italy (29 participants). A worked scoring example is sketched below.
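For readers unfamiliar with CVSS environmental metrics, the sketch below shows how an analyst's environmental context changes a vulnerability's score. It is a minimal illustration assuming the open-source `cvss` Python package and made-up vector strings; it is not the tooling or data of the experiment.

```python
# Minimal sketch, assuming the open-source "cvss" Python package
# (pip install cvss). The vector strings are made-up examples, not
# vulnerabilities or data from our experiment.
from cvss import CVSS3

# Base vector only: the score as published, with no environmental context.
base = CVSS3('CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H')

# Same vulnerability with environmental metrics: the analyst states that
# confidentiality matters most here (CR:H) and that local conditions make
# the attack harder (MAC:H) and require high privileges (MPR:H).
env = CVSS3('CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/'
            'CR:H/IR:L/AR:L/MAC:H/MPR:H')

# scores() returns a (base, temporal, environmental) tuple.
print('Base score:         ', base.scores()[0])
print('Environmental score:', env.scores()[2])
```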
People
The following is a list of people who have been involved in the project at some point in time.
Projects
This activity was supported by a number of projects.
Publications
Working papers
M. de Gramatica, K. Labunets, F. Massacci, F. Paci, M. Ragosta, and A. Tedeschi. On the Effectiveness of Sourcing Knowledge from Catalogues in Security Risk Assessment. To be submitted to a journal.
K. Labunets, F. Massacci, and F. Paci. An Empirical Comparison of Security Risk Assessment Methods. To be submitted to a journal.
Published papers
K. Labunets, F. Massacci, F. Paci, S. Marczak, and F. Moreira de Oliveira. Model Comprehension for Security Risk Assessment: An Empirical Comparison of Tabular vs. Graphical Representations. Empirical Software Engineering (2014). Available at SSRN: https://ssrn.com/abstract=2906745
K. Labunets, F. Massacci, and F. Paci. On the Equivalence Between Graphical and Tabular Representations for Security Risk Assessment. In Proceedings of REFSQ'17. Authors' Draft PDF.
K. Labunets, F. Paci, and F. Massacci. Which Security Catalogue Is Better for Novices? In Proc. of EmpiRE Workshop at IEEE RE'15. PDF (preprint).
M. de Gramatica, K. Labunets, F. Massacci, F. Paci, and A. Tedeschi. The Role of Catalogues of Threats and Security Controls in Security Risk Assessment: An Empirical Study with ATM Professionals. In Proc. of REFSQ'15. PDF.
M. Giacalone, R. Mammoliti, F. Massacci, F. Paci, R. Perugino, and C. Selli. Security Triage: A Report of a Lean Security Requirements Methodology for Cost-Effective Security Analysis. A short summary appears in Proc. of EmpiRE Workshop at IEEE RE'14 (3 pages, PDF); a longer industry report appears in Proc. of ESEM'14. PDF (preprint).
K. Labunets, F. Paci, F. Massacci, and R. Ruprai. An Experiment on Comparing Textual vs. Visual Industrial Methods for Security Risk Assessment. In Proc. of EmpiRE Workshop at IEEE RE'14. PDF.
K. Labunets, F. Massacci, F. Paci, and L.M.S. Tran. An Experimental Comparison of Two Risk-Based Security Methods. In Proceedings of the 7th International Symposium on Empirical Software Engineering and Measurement (ESEM), 163–172, 2013. PDF.
F. Massacci, F. Paci, L.M.S. Tran, and A. Tedeschi. Assessing a Requirements Evolution Approach: Empirical Studies in the Air Traffic Management Domain. Journal of Systems and Software 95: 70–88 (2014). PDF at publisher.
R. Scandariato, F. Paci, L.M.S. Tran, K. Labunets, K. Yskout, F. Massacci, and W. Joosen. Empirical Assessment of Security Requirements and Architecture: Lessons Learned. In Engineering Secure Future Internet Services and Systems, 2014: 35–64.
F. Massacci and F. Paci. How to Select a Security Requirements Method? A Comparative Study with Students and Practitioners. In Proceedings of the 17th Nordic Conference on Secure IT Systems (NordSec), 2012. PDF.
F. Massacci, D. Nagaraj, F. Paci, L.M.S. Tran, and A. Tedeschi. Assessing a Requirements Evolution Approach: Empirical Studies in the Air Traffic Management Domain. In Proceedings of the International Workshop on Empirical Requirements Engineering (EmpiRE), 49–56, 2012. PDF.
Talks and Tutorials
Federica Paci. An experimental comparison of two risk-based security methods. International Symposium on Empirical Software Engineering and Measurement (ESEM), Baltimore, Maryland, USA, October 11, 2013. PDF.
Fabio Massacci. How do you know that a security methodology works? An empirical approach to evaluate security design methods. ISTD, Singapore University of Technology and Design, September 13, 2012. PDF.
Federica Paci. How do you know that a security requirements method actually works? ITT Trust and Security Seminar (TSS), University of Illinois, Urbana-Champaign, IL, USA, September 26, 2012. PDF.
Dataset
We collected a huge pile of data as evidence of the methods' effectiveness: questionnaires, final reports, hours of audio recordings of focus group interviews, hundreds of post-it notes, and more than 300 hours of video of the training and application phases. If you would like to access the raw data you are welcome to do so, but we would need to discuss the terms of access.