Among the research topics of the Security Group, the main stream of this line of work is to understand how certain types of malware actually work.
That may seem obvious, but we would like to find a general approach to testing malware and identifying commonalities. We have focused on exploit kits and, more recently, on server-side web applications.
See also our section on Predictive Models for Vulnerabilities
Over the past couple of years, the source code of a number of private exploit kits has leaked to the public. We identified information on more than 70 exploit kits, and out of those we were able to successfully download and deploy 33 instances of 24 web malware families (such as Crimepack, Eleonore and Fragus).
In our analysis we pursued the following goals:
The results of our study are quite surprising. We expected exploit delivery mechanisms to be sophisticated, working like snipers: performing a careful study of the remote machine and then delivering only the right exploit to the right victim. While most kits do perform such a study, its results are not used in any significant way to select the exploit. Instead, the attack is carried out machine-gun style. It seems that the main purpose of victim fingerprinting is to build statistics and “dashboards” of traffic and malware installations; in other words, the exploit kits' main goal is to “improve the customer experience”. A large number of successful infections is expected to come from large volumes of traffic rather than from sophisticated attacks.
Exploit Kits are attack tools traded on the black markets for cybercrime and are responsible for hundreds of millions of system infections worldwide. As you may have already read in the Security Economics section of this wiki, we have infiltrated and are currently monitoring some of the most important black markets in the cybercrime scene. As part of our investigations, we have gathered more than 40 Exploit Kits leaked from the markets. After a thorough technical analysis of their capabilities and characteristics (see "Anatomy of Exploit Kits: Preliminary Analysis of Exploit Kits as Software Artefacts"), we started testing them.
The goal of this experimental approach is to estimate exploit kits' capabilities in terms of infection potential and the value returned to the attacker. To do so, we simulate traffic arriving at the Exploit Kits and measure metrics such as the kits' infection rates. To this end we randomly generate plausible machine configurations spanning from 2006 to 2013, in moving windows of two years, and test them against our Exploit Kits. An example is given in the figure at the bottom. Configurations are installed (with windows calculated on release dates) on the following operating systems:
The installed software includes:
We create 30 random samples of application versions per time window and automatically deploy each of them on a machine (see the sketch below). This way we generate almost 50,000 system configurations that are then attacked by the Exploit Kits. Upon successful exploitation, our Exploit Kits deliver Casper, our own good-ghost-in-the-browser malware. If Casper is successfully executed on the attacked machine, it pings back our Malware Distribution Server, which registers the successful exploitation. Overall, our infrastructure keeps a record of configurations, instance runs, attacked operating systems, and of course successful and unsuccessful infections.
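As a concrete illustration of this sampling procedure, here is a minimal Python sketch, assuming a hypothetical version catalogue with made-up release dates; it is not our actual harness or dataset.

```python
# Illustrative sketch: sampling plausible machine configurations in
# moving two-year windows based on release dates, as described above.
# Component names, versions and dates are hypothetical placeholders.
import random
from datetime import date

# Hypothetical catalogue: component -> list of (version, release_date)
CATALOGUE = {
    "browser": [("Firefox 2.0", date(2006, 10, 24)), ("Firefox 12", date(2012, 4, 24))],
    "flash":   [("Flash 9", date(2006, 6, 28)), ("Flash 11", date(2011, 10, 4))],
    "java":    [("JRE 6", date(2006, 12, 11)), ("JRE 7", date(2011, 7, 28))],
}

SAMPLES_PER_WINDOW = 30  # as in the experiment described above

def versions_in_window(releases, start, end):
    """Versions whose release date falls inside the time window."""
    return [v for v, released in releases if start <= released <= end]

def sample_configuration(start, end):
    """Pick one plausible version of each component for the window."""
    config = {}
    for component, releases in CATALOGUE.items():
        candidates = versions_in_window(releases, start, end)
        if candidates:
            config[component] = random.choice(candidates)
    return config

# Moving two-year windows covering 2006 to 2013
for year in range(2006, 2012):
    window = (date(year, 1, 1), date(year + 2, 1, 1))
    configs = [sample_configuration(*window) for _ in range(SAMPLES_PER_WINDOW)]
    # each config would then be deployed on a machine and attacked by the kits
```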
Here's a very black and malicious picture of the MalwareLab:
If you are curious about the results… stay tuned! (or contact us)
Web applications may be exploitable not only because of vulnerabilities in their source code, but also because of the environments in which they are deployed and run. Execution environments usually consist of application servers, databases and other supporting applications. In order to test whether known exploits can be reproduced in different settings, to better understand their effects, and to facilitate the discovery of new vulnerabilities, we need a reliable testbed. This is TestREx, a testbed for repeatable exploits that can pack and run applications together with their environments, inject exploits, monitor their success, and generate security reports (a sketch of such an exploit script follows).
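To give an idea of what a scripted exploit looks like in practice, here is a minimal, self-contained sketch of an exploit script with a success oracle. The target URL, the vulnerable login form, and the report format are hypothetical illustrations, not TestREx's actual API.

```python
# Minimal sketch of a scripted exploit with a success oracle
# (hypothetical target and report format, not TestREx's actual API).
import json
import urllib.parse
import urllib.request

TARGET = "http://localhost:8080/login"  # hypothetical deployed application

def run_exploit():
    """Attempt a classic SQL injection login bypass and check whether it worked."""
    payload = urllib.parse.urlencode({
        "user": "admin' OR '1'='1' -- ",
        "password": "x",
    }).encode()
    with urllib.request.urlopen(TARGET, data=payload) as response:
        body = response.read().decode(errors="replace")
    # Success oracle: did the response show an authenticated page?
    return "Welcome, admin" in body

if __name__ == "__main__":
    succeeded = run_exploit()
    # A report entry of the kind a testbed could aggregate across runs
    print(json.dumps({"exploit": "sqli-login-bypass",
                      "target": TARGET,
                      "success": succeeded}))
```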
The TestREx design was inspired by the following empirical observations: (1) software systems are constantly evolving, so a given exploit might work only for certain versions of an application; (2) an application might be vulnerable only when deployed within a certain software environment. Therefore, apart from the possibility of running scripted exploits against chosen applications, TestREx is able to answer the following questions:
TestREx employs Docker containers in order to create isolated, reproducible execution environments in which each application is packed and run together with its software environment (a sketch of this workflow follows).
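As an illustration of this container workflow, the following sketch packs an application with its environment into a Docker image and injects a scripted exploit against the running instance, using the docker-py SDK. The build path, image tag, port mapping, and exploit path are assumptions for the example, not TestREx's actual interface.

```python
# Illustrative sketch of packing and running an application with its
# environment in a Docker container; names and paths are assumptions.
import docker

client = docker.from_env()

# Build an image bundling the application with its environment
# (the Dockerfile installs the app server, database, and so on)
image, _ = client.images.build(path="./apps/wordpress-3.2",
                               tag="corpus/wordpress:3.2")

# Run the packed application, exposing its web port on the host
container = client.containers.run(image, detach=True,
                                  ports={"80/tcp": 8080})
try:
    # Inject the scripted exploit inside the running container and
    # monitor its outcome (cf. the exploit-script sketch above)
    exit_code, output = container.exec_run("python /exploits/sqli.py")
    print("exploit exit code:", exit_code)
    print("exploit output:", output.decode(errors="replace"))
finally:
    container.remove(force=True)  # tear down for a clean, repeatable run
```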
A typical use case of TestREx is shown in the figure to the right:
We also provide a corpus of example applications, taken from related work or implemented by us.
The following is a list of people who have been involved in the project at some point in time.
This activity was supported by a number of projects.
Authors: