Multi-level Security

Lecturer: Professor Fred B. Schneider

Lecture notes by Lynette I. Millett


Our access control discussion has been primarily about discretionary access control (DAC) policies and mechanisms. The underlying philosophy has been that subjects can determine who has access to their objects. There is a difference, though, between trusting a person and trusting a program. For example, A gives B a program that A trusts; since B trusts A, B trusts the program too, while neither of them is aware that the program is buggy. Avoiding such problems leads to a different approach to access control, mentioned in the last lecture: mandatory access control (MAC). In MAC, the system imposes an access control policy, and object owners cannot change that policy.

Today we will discuss multi-level security (MLS), as an example of MAC. Consider a typical system that uses DAC. Suppose a subject S has access to some highly secret object O. Moreover, suppose that another subject S' does not have access to O, but would like to. What can S' do to gain access? S' can write a program that does two things, the first of which is the following sequence of commands:
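    create a new object O'
    copy the contents of O into O'
    grant S' read access to O'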

The second thing the program does is to act like a video game (or some other application that S might be interested in running). If the program is then run by S, S' will get access to the contents of O (now in O'). The upshot is that programs cannot be trusted, and it is not possible to trust S (or any other user) to know whether a given program is trustworthy. This type of program is referred to as a Trojan horse. Discretionary access control is insufficient to protect against Trojan horse attacks, since not all programs can be analyzed for all possible unexpected behaviors.

A solution to the above problem was proposed by Bell & LaPadula. Their goal was to provide access control for military systems. In the military, documents are labeled according to their sensitivity levels, which range from unclassified (anyone can see this) to confidential to secret and finally (we believe) to top secret.

The military does physical access control on a 'need to know' basis. We have seen this before--it is just the principle of least privilege.

It is not sufficient to use sensitivity labels alone to classify objects if one wants to comply with the need to know principle. A given subject may warrant top-secret clearance for cryptology files, but only secret clearance for files concerning nuclear weapons. To handle this decomposition of information, we introduce compartments. Every object is associated with a set of compartments (e.g. crypto, nuclear, biological, reconnaissance, etc.). For example, each paragraph in a document could have pairs associated with it, each pair consisting of a sensitivity and a compartment. This provides us with a full classification of a given paragraph for our purposes. The classification of a file might then be the most restrictive classification given to any paragraph in that file; a file containing a (secret, crypto) paragraph and a (confidential, nuclear) paragraph would, for instance, be classified (secret, {crypto, nuclear}).

People are also classified according to their security clearance for each given area/compartment.

Classification labels are of the form (Sr, Sc), where Sr is a sensitivity and Sc is a set of compartments. We say that (Or, Oc) dominates (Sr, Sc) if (Sr, Sc) <= (Or, Oc). This <= relation holds exactly when Sr <= Or in the sensitivity ordering (unclassified < confidential < secret < top secret) and Sc is a subset of Oc.

Notice that <= is a partial order. Moreover, it is possible to have two labels that are incomparable (e.g. (secret, {crypto}) vs. (top secret, {nuclear})) according to <=.

We can extend this structure to a lattice. (Recall that in a lattice any two elements have a least upper bound and a greatest lower bound.)
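To make the definitions concrete, here is a minimal Python sketch of labels, the dominates relation, and the two lattice operations. The class and the encoding of levels are our own illustration, not part of the original model:

    # A sketch of classification labels with their ordering and lattice
    # operations. Level and compartment names are illustrative.
    from dataclasses import dataclass

    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

    @dataclass(frozen=True)
    class Label:
        sensitivity: str         # a key of LEVELS
        compartments: frozenset  # e.g. frozenset({"crypto"})

        def __le__(self, other):
            # (Sr, Sc) <= (Or, Oc) iff Sr <= Or and Sc is a subset of Oc.
            return (LEVELS[self.sensitivity] <= LEVELS[other.sensitivity]
                    and self.compartments <= other.compartments)

        def join(self, other):
            # Least upper bound: higher sensitivity, union of compartments.
            hi = max(self.sensitivity, other.sensitivity, key=LEVELS.get)
            return Label(hi, self.compartments | other.compartments)

        def meet(self, other):
            # Greatest lower bound: lower sensitivity, intersection.
            lo = min(self.sensitivity, other.sensitivity, key=LEVELS.get)
            return Label(lo, self.compartments & other.compartments)

    a = Label("secret", frozenset({"crypto"}))
    b = Label("top secret", frozenset({"nuclear"}))
    print(a <= b, b <= a)  # False False: the two labels are incomparable
    print(a.join(b))       # least upper bound: (top secret, {crypto, nuclear})

Note that join also captures the file-classification rule above: the classification of a file is the join of the labels on its paragraphs.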

We are interested in ensuring that information flow between objects accords with the need to know principle. We do this as follows. A user that logs in is given a clearance level; therefore, we can assume that a subject is always executing with some clearance. A subject can log in with any clearance that is dominated by some bound associated with that subject (typically resulting from a character investigation, lie-detector test, etc.). We postulate the following rules about what objects can be read or written. Let C(X) denote the clearance/label associated with a subject or object X.
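- A subject S may read an object O only if C(O) <= C(S) (the simple security property: "no read up").
- A subject S may write an object O only if C(S) <= C(O) (the *-property: "no write down").

In terms of the Label sketch above, the reference monitor's checks are one-liners (again, our own illustration):

    def can_read(subject: Label, obj: Label) -> bool:
        # Simple security property: the subject's label must dominate
        # the object's label ("no read up").
        return obj <= subject

    def can_write(subject: Label, obj: Label) -> bool:
        # *-property: the object's label must dominate the subject's
        # label ("no write down").
        return subject <= obj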

Do these rules enforce our need to know principle? We claim that if these rules are followed, then information does not flow from a higher security clearance to a lower one or between incomparable levels. The latter is easy to see, since subjects can only read or write objects whose labels are comparable to their own. To demonstrate the former, first note that information flows only if something is first read and then another thing is written. Suppose that object O is read and then object O' is written. This contradicts what we need to prove only if C(O') < C(O), so we must show that C(O') can never be less than C(O). If a subject S reads O, we can conclude (from the rules above) that C(O) <= C(S). Similarly, if S writes O', we can conclude that C(S) <= C(O'). The dominates relation is transitive, and therefore C(O) <= C(O'), as required.
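For a small universe of labels, this argument can even be checked by brute force. The following sketch, building on the code above, enumerates every (subject, object, object) triple and confirms that a permitted read followed by a permitted write never moves information downward or across incomparable labels:

    from itertools import product

    # Every label over two compartments: 4 levels x 4 compartment sets.
    universe = [Label(s, frozenset(c))
                for s in LEVELS
                for c in [(), ("crypto",), ("nuclear",), ("crypto", "nuclear")]]

    # If S may read O and may write O', then C(O) <= C(O') must hold.
    for s, o, o2 in product(universe, repeat=3):
        if can_read(s, o) and can_write(s, o2):
            assert o <= o2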

The above was a significant result when it first was proved. But there are still some problems with this entire scheme.

First, in practice, read and write operations are not atomic, contrary to our assumptions. So it is possible that the security level of a subject could change in mid-operation, and this change could violate the information flow constraints we wish to preserve. For example, suppose that a subject has top secret clearance and performs an operation that involves first reading an object labeled (top secret, {x}). The subject then downgrades to the secret clearance level and, as part of the same operation, writes an object labeled (secret, {y}). Information has then flowed from top secret to secret: the subject has effectively 'written down', which is forbidden by the rules. We thus need to make sure that subjects do not downgrade security levels in the middle of an operation. Two solutions have been proposed:
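- Tranquility: a subject's label is not allowed to change while an operation is in progress (or, more strongly, for the life of the subject), so no mid-operation downgrade can occur.
- High-water mark: a subject's label may change, but only upward, so it always dominates the label of every object the subject has read.

The high-water-mark approach is easy to sketch in terms of the join operation above (our illustration, not a prescribed mechanism):

    class HighWaterSubject:
        # A subject whose label only floats upward: each read raises the
        # label to dominate the label of the object read.
        def __init__(self, label: Label):
            self.label = label

        def read(self, obj: Label):
            self.label = self.label.join(obj)

        def can_write(self, obj: Label) -> bool:
            # "No write down" is checked against the raised label, so a
            # mid-operation downgrade is impossible.
            return self.label <= obj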

A second problem with Bell-LaPadula is that the model allows "blind writes." That is, the policy is more concerned with preventing inadvertent disclosure of information to insecure subjects than with maintaining the integrity of that information. Subjects can "write up" without being able to read what they have written; thus a subject deemed unsuitable for reading an object is nevertheless permitted to make changes to that object. Moreover, because a subject cannot read back what it has written up, end-to-end checks of such writes are impossible. One solution is to insist that writes be restricted to the same security level as the subject.

Remote reads are a third source of problems. In a distributed system, communication is done through message passing, and sends and receives are the only way for a user A at one site to read a file on a remote disk B. In this scenario, sends are equivalent to writes and receives to reads. Suppose that A is labeled confidential and B is unclassified. When A sends B the message requesting the read, A is writing down, in violation of the rules above, even though A should be allowed to read disk B.

Fourth, our scheme also does not say anything about execute privileges. Read and execute are not the same types of access. Complete freedom to execute anything is, again, an instance of trading secrecy for integrity.

Finally, some processes must be allowed to violate the "no read up" and "no write down" rules. For example, an encryption program, by definition, takes secret information and outputs encrypted but unclassified information. Similarly, accounting programs may read confidential data yet produce summary (billing) information that is not confidential. Such programs would seem to violate the "no write down" restriction. To handle these cases, we postulate the existence of trusted subjects, which are exempt from the "no read up" and "no write down" rules. This introduces a potential vulnerability into the system, however: a program masquerading as a trusted subject could, in fact, be a Trojan horse, which is what we were trying to avoid in the first place.

Domain Type Enforcement

The problem with the scheme above is that some desired information flow restrictions do not fit into the lattice structure we have been using. To address this, we introduce a new kind of matrix. It looks like an access control matrix, but the resemblance is only superficial. The rows of this matrix correspond to domains and the columns to types; each entry contains a set of access rights, where entry [s,t] gives the maximum permissions that domain s has with respect to an object of type t. In contrast to an access control matrix, a type enforcement matrix has no commands associated with it, and it cannot be manipulated by the owners of objects; instead, it is controlled by system administrators (MAC as opposed to DAC).

Note that a type enforcement matrix allows us to encode more than a lattice. For example, information flow is not necessarily transitive, and the matrix lets us express this, whereas the lattice does not. Consider the following example: a program sends data to an encryption routine that then sends encrypted data to the network. We would like an application program to be able to write to the encryption routine. The encryption routine should be able to read from the application program and write encrypted data to the network, and the network should be able to read from the encryption routine. The matrix is as follows:
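                   application    encryption    network
    application         -             w            -
    encryption          r             -            w
    network             -             r            -

(Rows are domains; columns are the types of each component's data; r and w denote read and write rights, and - denotes no right.)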

This matrix provides stronger constraints than simply making the encryption routine a trusted subject. A trusted subject can do whatever it wants; here, the encryption program's access rights are more restrictive. Thus, if we still wish to analyze the encryption program (e.g. to make sure any data that it writes is encrypted), we don't need to worry about it writing anywhere other than to the network, so the scope of the analysis is narrowed (and therefore the analysis is easier).
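A type-enforcement matrix is straightforward to represent and consult. Here is a minimal sketch of the example above; the domain and type names are our own, and anything not granted is denied:

    # The type-enforcement matrix, as a map from (domain, type) pairs to
    # sets of rights. Pairs not listed carry no rights at all.
    TE_MATRIX = {
        ("application", "encryption"): {"w"},
        ("encryption", "application"): {"r"},
        ("encryption", "network"):     {"w"},
        ("network", "encryption"):     {"r"},
    }

    def allowed(domain: str, obj_type: str, right: str) -> bool:
        # The matrix is the complete policy: absence of an entry means
        # no access.
        return right in TE_MATRIX.get((domain, obj_type), set())

    print(allowed("encryption", "network", "w"))   # True: ciphertext may go out
    print(allowed("application", "network", "w"))  # False: plaintext may not
                                                   # bypass the encryption routine

Note how this encodes a non-transitive flow: information may move from the application to the encryption routine and from the encryption routine to the network, but never directly from the application to the network.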