====== Vulnerability Discovery Models ======
Among the [[research_activities|research topics]] we investigate:

  * Which vulnerable dependencies really matter?
  * How to quickly check whether a disclosed vulnerability affects the (possibly ten years old) version you shipped to the customers?
  * How to automatically test vulnerabilities (when you get a report)?
  * Which vulnerabilities are actually exploited in the wild?
  * Which vulnerability scanning tool performs best on your particular project?
  * How to empirically validate Vulnerability Discovery Models?

Most importantly, …

===== Bringing order to the dependency hell: which vulnerable dependencies really matter =====
Vulnerable dependencies are a known problem in today's open-source software ecosystems.

You may want to read first our thematic analysis study ({{:...}}).

In {{:...}} we propose a methodology for counting only those vulnerable dependencies that actually matter for your project.

To achieve this, we carefully analysed the deployed dependencies, aggregated them by their projects, and distinguished dependencies whose development has halted.

To understand the industrial impact of the proposed methodology, we considered the most popular OSS Java libraries used by a large software vendor in its own software.

We found that about 20% of the dependencies affected by a known vulnerability are not deployed, and therefore do not represent a danger to the analyzed library because they cannot be exploited in practice. Developers of the analyzed libraries are able to fix (and are actually responsible for) 82% of the deployed vulnerable dependencies. The vast majority (81%) of vulnerable dependencies can be fixed by simply updating to a new version, while 1% of the vulnerable dependencies in our sample are halted and therefore potentially require a costly mitigation strategy.

Our methodology allows software development companies to obtain actionable information about their vulnerable dependencies, and to avoid spending effort on those that cannot be exploited in practice.

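To make the first step concrete, here is a minimal sketch (not the tooling used in the paper) that approximates the deployed/not-deployed distinction with the Maven dependency scope: test and provided dependencies never ship with the artifact. The ''KNOWN_VULNERABLE'' coordinates are hypothetical placeholders for real advisory data.

<code python>
import re
import subprocess

# Hypothetical advisory data: in practice, load real group:artifact:version
# coordinates from a vulnerability database.
KNOWN_VULNERABLE = {
    "org.example:legacy-parser:1.2.3",
    "org.example:old-http-client:4.0.1",
}

# Dependencies in these scopes end up in the shipped artifact; test and
# provided dependencies do not, so they cannot be exploited in production.
DEPLOYED_SCOPES = {"compile", "runtime"}

# `mvn dependency:list` prints lines such as:
# [INFO]    org.example:legacy-parser:jar:1.2.3:compile
DEP_LINE = re.compile(r"([\w.\-]+):([\w.\-]+):[\w\-]+:([\w.\-]+):(\w+)")

def classify_vulnerable_deps(project_dir):
    """Split a Maven project's vulnerable dependencies into deployed / not deployed."""
    out = subprocess.run(["mvn", "dependency:list"], cwd=project_dir,
                         capture_output=True, text=True, check=True).stdout
    deployed, not_deployed = [], []
    for line in out.splitlines():
        m = DEP_LINE.search(line)
        if not m:
            continue
        group, artifact, version, scope = m.groups()
        gav = f"{group}:{artifact}:{version}"
        if gav in KNOWN_VULNERABLE:
            (deployed if scope in DEPLOYED_SCOPES else not_deployed).append(gav)
    return deployed, not_deployed
</code>

In practice you would feed in real advisory coordinates and, as in the paper, additionally check whether the affected projects are halted.
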
Do you want to check if your project actually uses some vulnerable dependencies? Let us know!

===== A Screening Test for Disclosed Vulnerabilities in FOSS Components =====

Our {{:...}} paper proposes a fast //screening test// for deciding whether a disclosed vulnerability is present in the version of a FOSS component that you actually consume.

Why should you worry about a disclosed vulnerability? After all, the code is old code. That may be true for your free browser, which updates itself every few weeks; a software vendor, however, must still support the (possibly ten years old) versions of FOSS components it shipped to the customers, and must quickly decide whether they are affected.

To address this challenge, we propose a //screening test//: a novel, automatic method based on thin slicing for quickly estimating whether a given vulnerability is present in a consumed FOSS component by looking across its entire repository. We have applied it to large open source projects (e.g., Apache Tomcat, Spring Framework, Jenkins) that are routinely used by large software vendors, scanning thousands of commits and hundreds of thousands of lines of code in a matter of minutes.

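The screening test itself relies on thin slicing; purely to illustrate the underlying intuition, the sketch below approximates the "vulnerability evidence" with the plain source lines deleted by the fix commit, and then checks whether a shipped revision still contains them. All function names here are ours, not the paper's.

<code python>
import subprocess

def lines_deleted_by_fix(repo, fix_commit):
    """Collect the source lines removed by the fix commit: a crude stand-in
    for the vulnerability evidence (the paper uses thin slicing instead)."""
    diff = subprocess.run(
        ["git", "-C", repo, "show", "--unified=0", "--pretty=format:", fix_commit],
        capture_output=True, text=True, check=True).stdout
    return {line[1:].strip() for line in diff.splitlines()
            if line.startswith("-") and not line.startswith("---") and line[1:].strip()}

def version_looks_vulnerable(repo, shipped_ref, path, evidence):
    """True if the shipped revision of `path` still contains all evidence lines."""
    try:
        blob = subprocess.run(["git", "-C", repo, "show", f"{shipped_ref}:{path}"],
                              capture_output=True, text=True, check=True).stdout
    except subprocess.CalledProcessError:
        return False  # the file does not exist at all in that version
    shipped = {l.strip() for l in blob.splitlines()}
    return bool(evidence) and evidence <= shipped
</code>

Requiring all evidence lines to be present (''evidence <= shipped'') is deliberately strict; a real triage would tolerate partial matches and follow the code across renames, which is where slicing pays off.
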
Further, we provide insights on the empirical probability that, in the above projects, a potentially vulnerable component turns out not to be vulnerable after all.

A previous [[https://...|version]] of this work is also available online.

If you are interested in getting the code for the analysis, please let us know.

===== Effort of security maintenance of FOSS components =====

In our paper we investigated publicly available factors (from the number of active users to commits, from code size to the usage of popular programming languages, etc.) to identify which ones impact three potential effort models: Centralized (the company checks each component and propagates changes to the product groups), Distributed (each product group is in charge of evaluating and fixing its consumed FOSS components), and Hybrid (only the least used components are evaluated and fixed by the individual product groups).

We use Grounded Theory to extract the factors from a six-month study at the vendor, and report the results on a sample of 152 FOSS components used by the vendor.

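As a toy illustration of how the three set-ups aggregate costs differently (all numbers below are invented; the paper fits empirical effort factors rather than these formulas):

<code python>
# Toy cost comparison of the three maintenance set-ups; the per-assessment
# costs and the usage data are hypothetical, for illustration only.
usage = {                      # component -> product groups consuming it
    "libA": {"crm", "erp", "hr"},
    "libB": {"crm"},
    "libC": {"erp", "hr"},
}
COST_CENTRAL = 1.0             # one central assessment per component
COST_LOCAL = 0.7               # one assessment per consuming product group
WIDELY_USED = 2                # hybrid threshold: centralize popular components

centralized = COST_CENTRAL * len(usage)
distributed = COST_LOCAL * sum(len(g) for g in usage.values())
hybrid = sum(COST_CENTRAL if len(g) >= WIDELY_USED else COST_LOCAL * len(g)
             for g in usage.values())

print(f"centralized={centralized}, distributed={distributed:.1f}, hybrid={hybrid:.1f}")
</code>
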
===== Which static analyzer performs best on a particular FOSS project? =====

Our {{:...}} paper shows how to answer this question automatically.

We propose **Delta-Bench**, a novel approach for the automatic construction of benchmarks for SAST tools, based on differencing vulnerable and fixed versions in Free and Open Source (FOSS) repositories. In other words, Delta-Bench allows SAST tools to be automatically evaluated on real-world historical vulnerabilities, using only the findings that a tool produced for the analyzed vulnerability.

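A minimal sketch of the scoring idea under our own assumptions (the real Delta-Bench pipeline is more involved): the ground truth is the set of lines that the CVE fix touched in the vulnerable revision, and a finding of a SAST tool counts as a true positive if it flags one of those lines.

<code python>
import subprocess

def fixed_lines(repo, fix_commit):
    """Ground truth: for each file, the line numbers that the CVE fix
    touched in the vulnerable (parent) revision."""
    diff = subprocess.run(
        ["git", "-C", repo, "diff", "--unified=0", f"{fix_commit}^", fix_commit],
        capture_output=True, text=True, check=True).stdout
    truth, current = {}, None
    for line in diff.splitlines():
        if line.startswith("--- a/"):
            current = line[6:]
            truth.setdefault(current, set())
        elif line.startswith("@@") and current:
            # Hunk header: @@ -start,count +start,count @@
            old = line.split()[1]              # e.g. "-120,3"
            start, _, count = old[1:].partition(",")
            truth[current].update(range(int(start), int(start) + int(count or 1)))
    return truth

def score(findings, truth):
    """findings: (file, line) pairs a SAST tool reported on the vulnerable revision."""
    findings = list(findings)
    tp = sum(1 for path, line in findings if line in truth.get(path, set()))
    return tp, len(findings) - tp  # true positives, non-matching findings
</code>
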
We applied our approach to test 7 state-of-the-art SAST tools against 70 revisions of four major versions of Apache Tomcat, spanning 62 distinct Common Vulnerabilities and Exposures (CVE) fixes and vulnerable files totalling over 100K lines of code as the source of ground-truth vulnerabilities.

Our most interesting finding: the relative performance of the tools depends on the selected benchmark.

Delta-Bench was awarded the silver medal in the ESEC/FSE 2017 Graduate Student Research Competition.

Let us know if you want us to select a SAST tool that suits your needs.

===== People =====

The following is a list of people who have been involved in the project at some point in time.

  * [[http://...|Ivan Pashchenko]]
  * [[http://...]]
  * Viet Hung Nguyen
===== Publications =====

  * I. Pashchenko, S. Dashevskyi, F. Massacci. **Delta-Bench: Differential Benchmark for Static Analysis Security Testing Tools**. In //Proceedings of the 11th International Symposium on Empirical Software Engineering and Measurement (ESEM 2017)//.
  * I. Pashchenko. **FOSS Version Differentiation as a Benchmark for Static Analysis Security Testing Tools**. In //Proceedings of the 2017 11th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE 2017)//.
  * V.H. Nguyen, S. Dashevskyi, and F. Massacci. **An Automatic Method for Assessing the Versions Affected by a Vulnerability**. //Empirical Software Engineering//, 2016.
  * L. Allodi. **The Heavy Tails of Vulnerability Exploitation**. In //Proceedings of ESSoS 2015//. {{:...}}
===== Talks and Tutorials =====

  * Ivan Pashchenko, Stanislav Dashevskyi, Fabio Massacci. //...//
  * Ivan Pashchenko, Stanislav Dashevskyi, Fabio Massacci. //Design of a benchmark for static analysis security testing tools.// Presentation at the ESSoS Doctoral Symposium 2016. London, UK, Apr 2016. {{https://...}}
  * Luca Allodi. //...//
  * Luca Allodi. //...//