Lecture notes by
Lynette I. Millett
Revised by
Borislav Deianov
We start by defining policy. A policy is a set of acceptable behaviors. Note that we have not formally defined "behavior," but it can be modeled as, for example, the sequence of instructions the computer executes or the sequence of images that appear on the screen.
Policies are defined with respect to abstract models. Consider the following example: a student walks to the entrance of a room and shows his or her ID to the guard. The guard looks up the student's name in a list of students and lets the student into the room if the name is on the list. This model ignores all sorts of details. What if the student's ID is dirty and illegible? What if the student borrows somebody else's ID, or enters through the window instead of the door? Models are abstractions, and in choosing to deal with abstractions we ignore some aspects of reality. It is important to keep in mind that anything ignored by the model may constitute a vulnerability.
In this course we discuss policies and mechanisms for enforcing those policies. Ideally, policies and mechanisms would be completely disjoint. We would like to believe that any policy can be enforced by the set of mechanisms our computer systems provide. In practice, this is not true: in choosing a security mechanism, we restrict our choice of security policies. For example, with only a list of student names at the door, we cannot restrict admission to figure skaters, since we have no way of determining who is a figure skater.
We now describe a particular model of access control. Our model will be concerned with subjects and objects. The subjects are the active entities (a.k.a. principals) that do things; the objects are the passive entities to which things are done. Examples of subjects are a person or a process on a computer; examples of objects are a file or a subroutine. The sets of subjects and objects need not be disjoint (and there is good reason for them not to be, as we will see below).
Our model of access control is illustrated as follows:
It turns out that with this model we have actually ruled out some interesting policies. Consider the following example: we might want a computer system that restricts the ability to learn the professor's salary (say that it is $20,000). If a subject attempts the operation "read the salary file," the mediation mechanism can deny that request. But suppose some program attempts to print "$20,000". Should this action be allowed? If we disallow the action because this happens to be the professor's salary, then we have problems. First, there might be many salaries in the salary file, and we would have to prevent any program from printing any of those numbers. Second, a user might note that requests to print certain numbers are blocked and thereby infer something about professorial salaries. On the other hand, there are many ways a program might learn a salary (for example, by looking in some buffer in memory or observing the timing and frequency of disk accesses), so if printing $20,000 is allowed then the program could reveal the salary. A mechanism to enforce this sort of information flow policy must be able to identify the source of the information being printed, and this is not something that a mechanism as outlined above can do (because analyzing the program text is required). Conclusion: some useful policies are sacrificed by choosing the model we have.
The system state with respect to access control can be represented in a matrix, as follows:
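As a rough sketch (using illustrative subject and object names, not anything prescribed by the model), such a matrix can be represented as a dictionary mapping each subject to the rights it holds over each object:

```python
# A minimal sketch of an access matrix: rows are subjects, columns are
# objects, and each cell holds a set of rights. Names are illustrative.
matrix = {
    "user":   {"file1": {"own", "read", "write"}, "file2": {"read"}},
    "editor": {"file1": {"read"}, "dictionary": {"read", "write"}},
}

def has_right(subject, obj, right):
    """Return True if `subject` holds `right` over `obj`."""
    return right in matrix.get(subject, {}).get(obj, set())
```

A mediation mechanism would consult `has_right` before allowing any operation of a subject on an object.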
Systems are not static, and there will often be changes in access rights of subjects to objects. We therefore specify commands that will change state as follows:
Using this language, we can postulate protection commands that model, for example, creating a file, conferring read rights and revoking read rights, as follows:
In the proposed scheme, any right a subject S has to an object can be conferred on any other subject S' provided S has the "own" right over that object.
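Such protection commands can be sketched as guarded changes to the matrix state. The sketch below is one possible rendering, with hypothetical names and rights; the guard on "own" reflects the rule just described:

```python
# Illustrative protection commands as guarded state changes on an
# access matrix (subject -> object -> set of rights). All names are
# hypothetical, not part of the formal model.
matrix = {"S": {}, "S2": {}}

def create_file(s, f):
    # Creating a file confers "own", "read", and "write" on the creator.
    for row in matrix.values():
        row.setdefault(f, set())          # add a column for the new object
    matrix[s][f] = {"own", "read", "write"}

def confer_read(s, s2, f):
    # S may confer "read" on S' only if S holds "own" over f.
    if "own" in matrix[s].get(f, set()):
        matrix[s2].setdefault(f, set()).add("read")

def revoke_read(s, s2, f):
    # Likewise, S may revoke S'-held "read" only if S holds "own" over f.
    if "own" in matrix[s].get(f, set()):
        matrix[s2].get(f, set()).discard("read")
```

Note that a confer or revoke attempted by a subject without "own" simply has no effect; the command's guard fails.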
Subjects can be processes or users. Recall the principle of least privilege from the last lecture: we wish to give subjects only enough access to do their intended jobs, and no more. Suppose that a row in the access matrix corresponds to a process. This can be inconsistent with the least privilege principle, because a process has a definite lifetime and may not need all of its access rights throughout that entire lifetime. We therefore would like to find a way to maintain small protection domains for subjects.
The issue now is to identify criteria for a domain change, and to make sure that causing these domain changes does not end up adding work for the programmer. We solve this problem by overloading procedure call with a domain change, obtaining protected procedures. Each protected procedure executes in its own protection domain; thus, execution of a protected procedure carries certain inalienable rights. Some of these rights come from arguments in the call; others come from information obtained statically.
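One way to picture a protected procedure is the following sketch, in which a call switches to the procedure's own domain rather than running with the caller's rights. The domain names, objects, and the `spell_check` procedure are all hypothetical:

```python
# Sketch of domain switching via protected procedure call.
# Each domain maps objects to the rights held in that domain.
domains = {
    "user":   {"draft.txt": {"read", "write"}},
    "editor": {"draft.txt": {"read", "write"}, "dictionary": {"read"}},
}

def check(domain, obj, right):
    """Mediation: raise unless `right` on `obj` is held in `domain`."""
    if right not in domains[domain].get(obj, set()):
        raise PermissionError(f"{domain} lacks {right} on {obj}")

def spell_check(word):
    # Protected procedure: it executes in the "editor" domain no matter
    # which domain the caller runs in, so it may read the dictionary.
    check("editor", "dictionary", "read")
    return word in {"hello", "world"}     # stand-in for the dictionary
```

A caller running in the "user" domain cannot read the dictionary directly (the check would fail), yet it obtains the spell-checking service through the call, passing only the word as an argument.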
We give an example of a protected domain. Imagine that subjects consist of a user and an editor, and that the objects are some files and a spelling-checker dictionary. The matrix may look as follows (without the dotted arrows):
Now, suppose there are two users that wish to invoke the editor:
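Such a configuration might be sketched as a matrix like the one below. The names and rights are illustrative; note that the editor appears both as a subject (a row) and as an object (a column) that each user has the right to call, which is why the sets of subjects and objects are not disjoint:

```python
# Sketch: two users share the editor, which is both a subject and an
# object. Rights and names are illustrative.
matrix = {
    "user1":  {"file1": {"read", "write"}, "editor": {"call"}},
    "user2":  {"file2": {"read", "write"}, "editor": {"call"}},
    "editor": {"file1": {"read", "write"},
               "file2": {"read", "write"},
               "dictionary": {"read"}},
}
```

Neither user holds any right over the dictionary; only the editor's row does, so each user can benefit from spell checking only by calling the editor.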