====== How to Find and Assess (Dangerous) Vulnerabilities? ======

Among the [[research_activities|research topics]] of the group, we investigate the following questions:
  * How to find vulnerabilities in the past versions of software
  * Which vulnerable dependencies really matter
  * How to automatically test them (when you get a report)
  * Which vulnerabilities are actually exploited
  * Which vulnerability scanning tool performs best on your particular project?
  * Empirical Validation of Vulnerability Discovery Models

See also our sections on [[security_economics|Security Economics]] and [[malware_analysis|Malware Analysis]].

Most importantly,

===== Bringing order to the dependency hell: which vulnerable dependencies really matter =====

Vulnerable dependencies are a known problem in today’s open-source software ecosystems, because FOSS libraries are highly interconnected and developers do not always update their dependencies.

You may want to first read a thematic analysis study ({{:

In {{:

To achieve this, we carefully analysed the deployed dependencies,

To understand the industrial impact, we considered the 200 most popular FOSS Java libraries used by SAP in its own software. Our analysis included 10905 distinct GAVs (group, artifact, version) in Maven when considering all the library versions.

We found that about 20% of the dependencies affected by a known vulnerability are not deployed, and therefore do not represent a danger to the analyzed library because they cannot be exploited in practice. Developers of the analyzed libraries are able to fix (and are actually responsible for) 82% of the deployed vulnerable dependencies. The vast majority (81%) of vulnerable dependencies may be fixed by simply updating to a new version, while 1% of the vulnerable dependencies in our sample come from halted projects and therefore potentially require a costly mitigation strategy.

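To make the deployed versus not-deployed distinction concrete, here is a minimal sketch (not our actual toolchain) that lists a Maven project's resolved dependencies with their scopes and flags the ones appearing in a user-supplied CSV of known-vulnerable GAVs. The file name ''vulnerable_gavs.csv'', its column layout, and the classifier-less coordinate format are assumptions for illustration only.

<code python>
import csv
import re
import subprocess

# Maven scopes whose artifacts ship with the application; test- and
# provided-scope dependencies are not deployed, so a vulnerability in
# them cannot be exploited in the running product.
DEPLOYED_SCOPES = {"compile", "runtime"}

def resolved_dependencies(project_dir):
    """Yield (group, artifact, version, scope) from `mvn dependency:list`."""
    out = subprocess.run(
        ["mvn", "dependency:list"],
        cwd=project_dir, capture_output=True, text=True, check=True,
    ).stdout
    # Typical line: "[INFO]    org.example:lib:jar:1.2.3:compile"
    # (coordinates with classifiers are not handled by this sketch)
    pattern = r"\[INFO\]\s+([\w.\-]+):([\w.\-]+):[\w.\-]+:([\w.\-]+):(\w+)"
    for m in re.finditer(pattern, out):
        yield m.group(1), m.group(2), m.group(3), m.group(4)

def load_vulnerable_gavs(path):
    """CSV with header group,artifact,version listing known-vulnerable GAVs."""
    with open(path, newline="") as f:
        return {(r["group"], r["artifact"], r["version"])
                for r in csv.DictReader(f)}

def report(project_dir, vuln_csv):
    vulnerable = load_vulnerable_gavs(vuln_csv)
    for g, a, v, scope in resolved_dependencies(project_dir):
        if (g, a, v) in vulnerable:
            status = "DEPLOYED" if scope in DEPLOYED_SCOPES else "not deployed"
            print(f"{g}:{a}:{v} ({scope}) -> {status}")

if __name__ == "__main__":
    report(".", "vulnerable_gavs.csv")
</code>
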
Our methodology allows software development companies to receive actionable information about their library dependencies,

Do you want to check if your project actually uses some vulnerable dependencies?

===== A Screening Test for Disclosed Vulnerabilities =====

Our {{:

Why should you worry about a disclosed vulnerability?

To address this challenge, we propose a //screening test//: a novel, automatic method based on thin slicing for quickly estimating whether a given vulnerability is present in a consumed FOSS component by looking across its entire repository. We have applied our test to large open source projects (e.g., Apache Tomcat, Spring Framework, Jenkins) that are routinely used by large software vendors, scanning thousands of commits and hundreds of thousands of lines of code in a matter of minutes.

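The screening test itself relies on thin slicing; purely to illustrate the underlying intuition, the sketch below implements a much cruder heuristic: collect the lines deleted by a known fix commit (the "evidence" of the vulnerable code) and measure how many of them still survive at a given release tag. The repository path, commit hash, and tag in the usage comment are placeholders you would substitute.

<code python>
import subprocess

def fix_evidence(repo, fix_commit):
    """Source lines deleted by the fix commit; if they survive in a
    release, the vulnerable code is likely still present there."""
    diff = subprocess.run(
        ["git", "-C", repo, "show", "--unified=0", "--pretty=format:", fix_commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line[1:].strip() for line in diff.splitlines()
            if line.startswith("-") and not line.startswith("---")
            and line[1:].strip()}

def screen_release(repo, release_tag, evidence):
    """Fraction of evidence lines still present at release_tag (0.0-1.0)."""
    present = sum(
        subprocess.run(
            ["git", "-C", repo, "grep", "-q", "-F", "-e", line, release_tag],
            capture_output=True,
        ).returncode == 0
        for line in evidence
    )
    return present / len(evidence) if evidence else 0.0

# Hypothetical usage (substitute a real checkout, fix commit, and tag):
#   ev = fix_evidence("tomcat", "abc123")
#   print(screen_release("tomcat", "TOMCAT_9_0_0", ev))
</code>
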
Further, we provide insights on the empirical probability that, on the above-mentioned projects, a potentially vulnerable component might not actually be vulnerable after all (e.g., entries in a vulnerability database such as the NVD that claim a version is vulnerable when the code is not even there),

A previous [[https://

If you are interested in getting the code for the analysis, please let us know.

===== Effort of security maintenance of FOSS components =====

In our paper we investigated publicly available factors (from the number of active users to commits, from code size to the usage of popular programming languages, etc.) to identify which ones impact three potential effort models: Centralized (the company checks each component and propagates changes to the product groups), Distributed (each product group is in charge of evaluating and fixing its consumed FOSS components),

We use Grounded Theory to extract the factors from a six-month study at the vendor and report the results on a sample of 152 FOSS components used by the vendor.

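As a purely illustrative back-of-the-envelope comparison (every number except the 152 components is made up; the paper derives the real factors empirically), the two models whose descriptions survive above scale quite differently with the number of product groups:

<code python>
# Toy arithmetic contrasting the Centralized and Distributed effort models.
# All parameters except n_components are hypothetical.
n_components = 152   # FOSS components in the studied sample
n_groups = 10        # hypothetical number of product groups
share = 0.3          # hypothetical fraction of components each group consumes
check = 1.0          # effort units to security-check one component once
propagate = 0.1      # effort units to push one assessed change to one group

# Centralized: one check per component, then fan the result out to all groups.
centralized = n_components * (check + n_groups * propagate)

# Distributed: every product group independently re-checks what it consumes.
distributed = n_groups * n_components * share * check

print(f"centralized ~ {centralized:.0f} units, distributed ~ {distributed:.0f} units")
</code>
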
===== Which static analyzer performs best on a particular FOSS project? =====

Our {{:

We propose **Delta-Bench**, a novel approach for the automatic construction of benchmarks for SAST tools based on differencing vulnerable and fixed versions in Free and Open Source (FOSS) repositories. That is, Delta-Bench allows SAST tools to be automatically evaluated on real-world historical vulnerabilities using only the findings that a tool produced for the analyzed vulnerability.

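The sketch below conveys the differencing idea only (it is not the actual Delta-Bench implementation): run a SAST tool on the vulnerable and on the fixed revision, restrict findings to the files the fix touched, and credit the findings that disappear after the fix. The ''run_tool'' callback is a hypothetical adapter you would write for your tool's report format.

<code python>
import subprocess

def changed_files(repo, vuln_rev, fix_rev):
    """Files touched by the security fix; findings elsewhere are ignored."""
    out = subprocess.run(
        ["git", "-C", repo, "diff", "--name-only", vuln_rev, fix_rev],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(out.split())

def delta_findings(repo, vuln_rev, fix_rev, run_tool):
    """Findings reported at the vulnerable revision but gone at the fixed one.

    run_tool(repo, rev) is a user-supplied adapter returning the set of
    (file_path, rule_id) pairs the SAST tool reports at revision rev.
    """
    touched = changed_files(repo, vuln_rev, fix_rev)
    before = {f for f in run_tool(repo, vuln_rev) if f[0] in touched}
    after = {f for f in run_tool(repo, fix_rev) if f[0] in touched}
    return before - after
</code>
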
We applied our approach to test 7 state-of-the-art SAST tools against 70 revisions of four major versions of Apache Tomcat, spanning 62 distinct Common Vulnerabilities and Exposures (CVE) fixes and vulnerable files totalling over 100K lines of code as the source of ground-truth vulnerabilities.

Our most interesting finding: tools perform differently depending on the selected benchmark.

Delta-Bench was awarded the silver medal in the ESEC/FSE 2017 Graduate Student Research Competition:

Let us know if you want us to select a SAST tool that suits your needs.

===== Which vulnerabilities are actually exploited =====

Vulnerability exploitation is, reportedly, a major threat to system and software security. Assessing the risk represented by a vulnerability has therefore been at the center of a long debate. Eventually, the security community widely adopted the Common Vulnerability Scoring System (or CVSS in short) as the reference methodology for vulnerability risk assessment. The CVSS is used in reference vulnerability databases such as [[http://

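For reference, the CVSS v2 base score reported by such databases is a closed-form function of six base metrics. The sketch below transcribes the base equation and metric weights from the official CVSS v2 specification:

<code python>
# CVSS v2 base equation and metric weights from the official specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf./Integ./Avail. impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# AV:N/AC:L/Au:N/C:P/I:P/A:P scores 7.5, one of the most common NVD vectors.
print(cvss2_base("N", "L", "N", "P", "P", "P"))  # 7.5
</code>
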
===== People =====

The following is a list of people who have been involved in the project at some point in time.

  * [[http://
  * [[http://
  * Viet Hung Nguyen
===== Publications =====

  * I. Pashchenko, S. Dashevskyi, F. Massacci. **Delta-Bench: Differential Benchmark for Static Analysis Security Testing Tools**. To appear
  * I. Pashchenko. **FOSS Version Differentiation as a Benchmark for Static Analysis
  * V.H. Nguyen, S. Dashevskyi, and F. Massacci. **An Automatic Method for Assessing the Versions Affected by a Vulnerability**,
  * L. Allodi. **The Heavy Tails of Vulnerability Exploitation** //In the Proceedings of ESSoS 2015// {{:
===== Talks and Tutorials =====

  * Ivan Pashchenko, Stanislav Dashevskyi, Fabio Massacci //
  * Ivan Pashchenko, Stanislav Dashevskyi, Fabio Massacci //Design of a benchmark for static analysis security testing tools.// Presentation at ESSoS Doctoral Symposium 2016. London, UK, Apr 2016. {{https://
  * Luca Allodi //
  * Luca Allodi. //