Among the research topics of the Security Group, we focus here on enforcement mechanisms that are practical in two senses: they tolerate the fact that humans make mistakes in good faith, and they can be programmed without a course in type theory.
Recently we have been looking into the Secure Multi-Execution technique for enforcing non-interference in a browser (check out the NSS-2011 paper for more details), and we have devised a general mechanism based on the MAP-REDUCE idea that leads to a programmable model of a whole range of information flow policies (essentially generalizing Secure Multi-Execution to a property of your choice).
The main idea is to execute multiple ``local'' instances of the original program, feeding different inputs to each instance. The local inputs are produced from the original program inputs by the MAP component, depending on the set of security levels defined in the framework and the input channels available. Upon receiving the necessary data (for instance, after each individual instance has terminated, or when a request for output arrives from an authorized instance), the REDUCE component collects the local outputs and generates the common output, thus ensuring that the overall execution is secure. MAP and REDUCE are customizable: by changing their programs the user can easily change the enforced property. See our ArXiv technical report.
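To make the idea concrete, here is a minimal sketch of the MAP/REDUCE scheme, assuming a program modeled as a pure function from per-channel inputs to per-channel outputs over a two-level lattice. All names (`smap`, `sreduce`, `DEFAULT`, the channels) are illustrative and not taken from the framework itself.

```python
# A minimal sketch of the MAP/REDUCE multi-execution idea, assuming a
# program modeled as a pure function from per-channel inputs to
# per-channel outputs. The two-level lattice and all names are illustrative.

LEVELS = ["low", "high"]           # security lattice: low <= high
DEFAULT = 0                        # default value fed to unauthorized reads

def smap(inputs, level):
    """MAP: build the local input for one program instance.
    An instance at `level` sees real values only on channels it may read
    (here: channels at or below its level); others are replaced by DEFAULT.
    """
    can_read = {"low": {"low"}, "high": {"low", "high"}}[level]
    return {ch: (v if ch in can_read else DEFAULT) for ch, v in inputs.items()}

def sreduce(local_outputs):
    """REDUCE: assemble the common output.
    The output on a channel is taken from the instance running at that
    channel's level, so low outputs never depend on real high inputs.
    """
    return {ch: local_outputs[ch][ch] for ch in LEVELS}

def multi_execute(program, inputs):
    local = {lvl: program(smap(inputs, lvl)) for lvl in LEVELS}
    return sreduce(local)

# An insecure program: it tries to leak the high input on the low channel.
def leaky(inp):
    return {"low": inp["high"], "high": inp["high"]}

print(multi_execute(leaky, {"low": 1, "high": 42}))
# → {'low': 0, 'high': 42}: the low output is computed from DEFAULT,
#   not from the real high input, so the leak is cut.
```

Changing the enforced property then amounts to swapping in different `smap`/`sreduce` programs, which is exactly the programmability the framework is after.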
So far we can show that the framework works for Non-Interference (NI) from (Devriese and Piessens, 2010), and for Removal of Inputs (RI) and Deletion of Inputs (DI) from (Mantel, 2000). We have formally proven soundness and precision of these enforcement mechanisms with respect to the corresponding properties, for a model programming language with simple I/O instructions.
In the past we have also looked at the idea of mimicking human flexibility in access control. If we look at the way human organizations manage security, we appreciate their flexibility: a police officer unsatisfied by our torn driving licence will explicitly ask for another document of his liking; a project officer will not launch a major review of an EU project if a single deliverable is sent with a week's delay, though she might do it after continued violation of deadlines. Current formal models for enforcement and authentication don't distinguish between small and big infringements.
The starting point is that a server should be able to compute, and communicate to a client, the credentials that are missing to obtain a service, and that it should be possible for either the server or the client to disclose such missing credentials in a piecewise fashion (a generalization of the trust negotiation of Winslett, Yu, Winsborough and others). We have specified this theoretically using abduction and fully implemented it as web services, using PKI and PMI for credentials. It also works well: the logic only takes a fraction of the time taken by the cryptographic verification of the credentials. You can check the TAAS paper for the details and have a look at the architecture.
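The negotiation loop can be sketched as follows. This is a deliberately flat toy model: the real system computes missing credentials by logical abduction over a policy theory, whereas here a set difference stands in for that reasoning, and all names (`negotiate`, the credential strings) are hypothetical.

```python
# A toy sketch of the "tell the client what is missing" step, assuming
# policies are flat sets of required credential names. The real system
# computes missing credentials by abduction over a policy theory; the
# set difference below is only a stand-in for that reasoning.

def missing_credentials(required, presented):
    """Return the credentials the client still has to disclose."""
    return set(required) - set(presented)

def negotiate(required, client_credentials, willing_to_disclose):
    """Piecewise disclosure: the client reveals one missing credential
    per round, as long as it holds it and is willing to show it."""
    presented = set()
    while True:
        need = missing_credentials(required, presented)
        if not need:
            return True, presented            # access granted
        offer = [c for c in need
                 if c in client_credentials and c in willing_to_disclose]
        if not offer:
            return False, presented           # negotiation stuck
        presented.add(sorted(offer)[0])       # disclose one credential

granted, shown = negotiate(
    required={"id_card", "proof_of_residence"},
    client_credentials={"id_card", "proof_of_residence", "passport"},
    willing_to_disclose={"id_card", "proof_of_residence"},
)
print(granted, sorted(shown))
# → True ['id_card', 'proof_of_residence']
```

Note that the client only ever reveals credentials that are actually needed, which is the point of having the server compute and communicate the missing set instead of demanding everything up front.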
Yet this is not enough because, once access has been granted, security monitors suffer from the same lack of flexibility and do not capture the real working of human organizations. Most papers (Schneider with Erlingsson, Hamlen and Morrisett; Ligatti with Bauer and Walker) characterize in theorems the good traces potentially enforceable with this or that enforcement mechanism (safety properties, renewal properties, etc.). In collaboration with researchers from the San Raffaele hospital in Milano (who were interested in the practical aspects of enforcement) we showed that safety and renewal properties are not what you want. The key observation is that most real-life tasks are repetitions of sub-tasks. We called these iterative properties; you can see the difference from classical security properties such as safety and renewal in the figure on the side. As an example, consider a drug dispensation process (a process running hundreds of times and lasting for tens of steps in the hospital IT system). Safety says that as soon as one single process goes wrong you halt the system. Renewal says that until the first mistake is corrected the system will silently gobble all other actions. Hardly appealing behaviors for any practical purpose…
Yet many of their proponents have actually implemented systems that enforce those properties. There is a catch here that many people overlook. What distinguishes an enforcement mechanism is not what happens when traces are good, because nothing should happen! The interesting part is how precisely bad traces (those that don't satisfy the policy P) are converted into good ones (those that do). The picture on the side shows a classification of edit automata which enforce a renewal property P, from Bauer, Ligatti and Walker. Implemented systems, being by definition implemented, will actually take care of correcting bad traces that are not in P in some way. But this part is simply not reflected in the current theories, which sit at the bottom of the pile (the Ligatti automata on the left).
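The point about correcting bad traces per iteration, rather than halting or suppressing everything, can be sketched as a small monitor. This is not the edit automata formalism itself, only an illustration under simplifying assumptions: the run is a stream of sub-task iterations separated by an "end" marker, and the per-iteration policy and all action names are invented for the example.

```python
# A small sketch of how an implemented monitor might correct bad traces,
# assuming the run is a stream of sub-task iterations separated by an
# "end" marker (the iterative-property view). Instead of halting the whole
# system at the first bad iteration (safety-style) or silently gobbling
# everything after it (renewal-style), the monitor edits out only the bad
# iteration and keeps processing the rest. All names are illustrative.

def valid(iteration):
    # toy per-iteration policy: a dispensation must be approved before done
    return iteration == ["request", "approve", "dispense"]

def iterative_monitor(stream):
    output, buffer = [], []
    for action in stream:
        if action == "end":
            if valid(buffer):
                output.extend(buffer)     # emit the good iteration
            # a bad iteration is dropped; later iterations still run
            buffer = []
        else:
            buffer.append(action)         # suppress until the iteration ends
    return output

trace = ["request", "approve", "dispense", "end",
         "request", "dispense", "end",            # bad: never approved
         "request", "approve", "dispense", "end"]
print(iterative_monitor(trace))
# both good dispensations survive; only the unapproved one is edited out
```

The interesting design decision is entirely in the "bad branch": dropping, repairing, or escalating a failed iteration are all plausible corrections, and it is exactly this branch that the classical theorems about enforceable properties leave unspecified.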
Within the main stream of the project we covered a number of themes.
The following is a list of people who have been involved in the project at some point in time.
Contact us via email firstname.lastname@example.org