Introduction: Problem Scope and Vulnerabilities

Lecturer: Professor Fred B. Schneider

Lecture notes by Lynette I. Millett


We begin the course today with a discussion of some nontechnical aspects of system security. Much of the course will be spent on technical issues, but unlike the subject matter of many other computer science courses, security is not necessitated by purely technical problems: computers do not attack other computers. Thus, to appreciate viable technical solutions, we need to understand the nontechnical landscape.

We start with the question: Why are we here? Aside from the mildly cynical reasons, such as academic credit, a lack of other courses, or learning how to break into systems (NB: such knowledge is not likely to be transmitted), many people, the instructor included, have come to believe that the security problems the United States and the world face must not be ignored.

Network information systems (NISs) are becoming more and more prevalent; examples include the public telephone system and the power grid. At the same time, these systems are neither trustworthy nor dependable, and the alarming trend is for people, companies, and governments to become more and more dependent on them. As computer scientists, we are in a position to help.

For the purposes of our discussion, we define a NIS to be trustworthy when the system does what is intended in spite of:

- environmental disruption,
- operator error,
- software bugs, and
- hostile (malevolent) attacks.

Contrary to what might be assumed, hostile attacks are today actually the least significant cause of system crashes and problems. It is important to keep this in mind. Systems need to be able to tolerate squirrels and backhoes (chewing through and slicing wires, respectively, as has happened) as well as ill-intentioned hackers. Environmental disruption and operator error are the biggest sources of problems, followed by buggy software, and only then external attacks.

Trustworthiness of NIS

Building software to be trustworthy is qualitatively different from more typical software development. Most of your experience in building software has been concerned with functional requirements: what outputs must be produced for given inputs. Requirements involving attacks, on the other hand, are decidedly nonfunctional. We are not told what attacks to expect, so the specification of the problem is inherently incomplete. By their very nature, attacks are unpredictable and resist formalization: any attempt at formalization could rule out possible attacks and would therefore be incorrect.

Consider the four trustworthiness dimensions discussed previously (malevolent attacks, software bugs, operator error, and environmental disruption). These are intrinsically different from functional requirements and are sometimes referred to as "negative properties." The problem is that we have an open system, one in which some components are unspecified, and yet we are required to reason about all instantiations of those unspecified components. (For example, we must reason about how the system would behave under a hostile attack without knowing what form that attack will take.)
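One way to make the contrast concrete (the notation below is ours, not from the lecture) is to compare the logical shape of the two kinds of requirements:

```latex
% Closed, functional requirement: quantify only over inputs.
% For every input x, system S must produce an acceptable output.
\forall x \in \mathit{Inputs} :\; \mathit{Spec}\bigl(x,\, S(x)\bigr)

% Open, "negative" requirement: quantify over unspecified components.
% For every environment or attacker A that S might be composed with,
% property P must still hold of the composition.
\forall A \in \mathit{Environments} :\; (S \parallel A) \models P
```

The second quantifier ranges over a set we cannot enumerate, which is exactly why the specification is inherently incomplete.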

Trustworthiness is a multi-dimensional problem. Is it possible that these four dimensions are all really the same thing? After all, an environmental disruption can be seen as a random perturbation of the system, and each of the other dimensions also produces perturbations, so are they not all closely related? At a very coarse level they can be seen that way, but closer study reveals that they are very different.

Environmental disruptions are events that are uncorrelated. When failures are independent, it makes sense to use replication to build a system that tolerates some number of them. Hostile attacks, on the other hand, are correlated: whatever defeats one replica is likely to defeat them all, so replication does not work for correlated failures. Operator error is in some ways even worse than a hostile attack, because operators are often trusted users who have privileges that outside attackers do not. Software bugs are worse still, because the buggy software may run with arbitrarily high levels of privilege.
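A minimal simulation sketch (the failure probabilities and function names here are made up for illustration) of why replication helps against independent failures but not against correlated ones:

```python
# Compare a 3-way replicated system under independent vs. correlated
# failures.  The replicated system fails only if every replica fails.
import random

def all_replicas_fail(n: int, p: float, correlated: bool) -> bool:
    """One trial: do all n replicas fail?"""
    if correlated:
        # A single shared cause (common bug, coordinated attack) takes
        # out every replica at once with probability p.
        return random.random() < p
    # Independent causes (separate environmental disruptions): each
    # replica fails on its own with probability p.
    return all(random.random() < p for _ in range(n))

def estimate_failure_rate(n: int, p: float, correlated: bool,
                          trials: int = 100_000) -> float:
    failures = sum(all_replicas_fail(n, p, correlated)
                   for _ in range(trials))
    return failures / trials

if __name__ == "__main__":
    n, p = 3, 0.1
    print("independent:", estimate_failure_rate(n, p, False))  # about p**n = 0.001
    print("correlated: ", estimate_failure_rate(n, p, True))   # about p = 0.1
```

Under independence, three replicas drive the failure rate from 10% down to roughly 0.1%; under a correlated cause, adding replicas buys nothing.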

Increasing Prevalence of NISs

We said that network information systems are untrustworthy and are becoming more and more prevalent. It is useful to ask why such systems are becoming more prevalent and what is driving that process.

In the private sector, organizations today seem to be driven by the need to operate faster and more efficiently. Profit margins are thinner and expectations are high. (For example, consider "just in time" manufacturing, wherein inventory and material are not warehoused but instead shipped to arrive exactly when needed.) In this kind of environment, timely information (who needs what, and when?) becomes essential; hence the need for network information systems.

In the quasi-public sector there is a new climate of deregulation. Less regulation produces competition, which produces a need for low prices. Companies thus need to lower expenses, and one way to do this is to decrease excess capacity (power reserves, bandwidth, etc.). Lower excess capacity results in the need for finer control over the existing capacity, which requires a good information system. Lower excess capacity also results in less trustworthiness by creating a less stable system. Excess capacity can, in some cases, take up the slack in the event of a failure. With less "slack" it becomes more likely that a "small" failure could have large repercussions.

Another result of the need to lower expenses and attract customers in deregulated industries is the introduction of new and complicated features on top of existing services (e.g., in the telephone industry, consider call forwarding and *69). The more complicated a system becomes, the less reliable it will be. The addition of features increases complexity, which may well result in unanticipated and undesirable behavior. In telephony, this is known as the "feature interaction problem," illustrated by the toy sketch below.
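Here is a toy sketch of a feature interaction (the feature rules and names are invented for illustration): call forwarding and call screening are each sensible in isolation, yet their composition produces surprising behavior.

```python
# Two telephone features that interact badly when composed.

def call_forwarding(callee: str, forwarding_table: dict) -> str:
    """Reroute the call if the callee has forwarding enabled."""
    return forwarding_table.get(callee, callee)

def call_screening(caller: str, callee: str, blocklist: dict) -> bool:
    """Allow the call unless the callee blocks this caller."""
    return caller not in blocklist.get(callee, set())

forwarding = {"bob": "carol"}       # Bob forwards his calls to Carol.
blocking   = {"carol": {"alice"}}   # Carol screens out calls from Alice.

# Alice dials Bob.  Forwarding silently reroutes the call to Carol ...
target = call_forwarding("bob", forwarding)
print(call_screening("alice", target, blocking))  # False: call rejected
# ... yet without forwarding, Alice's call to Bob would have connected:
print(call_screening("alice", "bob", blocking))   # True
```

Neither feature is wrong by itself; the unanticipated behavior arises only from their combination, which is exactly what makes feature interactions hard to predict.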

The development of new industries exploiting NIS, such as electronic commerce, is a third reason for the growing prevalence of such systems.

In short, it seems as if we are heading toward a situation in which there will be many untrustworthy network information systems. This problem will need to be fixed, but one might ask: How bad is it? What are its dimensions? The consequences of untrustworthiness include denial of service, and availability can be extremely important: telephone and power outages can result in loss of life and civil unrest. Information disclosure is another consequence; it can result in personal embarrassment, revelation of corporate strategies, and disclosure of government information. Information alteration is yet another, and it can affect everything from a student's grades to national economic health.

All of this adds up to the potential for a new form of warfare, termed "information warfare." Such warfare can be overt or subtle, ranging from interfering with military communications to planting sleeper programs that manipulate the stock market. Information-warfare opportunities exist only because we as a society are so dependent on information systems; with such systems, it is possible to attack anonymously, without ever being physically present.

Why Don't Trustworthy Systems Exist?

The next obvious question, given all of the above, is: Why are network information systems not built to be trustworthy? One answer is that it is not clear in all instances how to do so. But the real reason is cost, both direct and indirect. The software and systems market today is dominated by COTS (commercial, off-the-shelf) products, and there are huge economies of scale in building COTS components. Imagine someone in charge of integrating a large system that must be completed on time and on budget: using COTS components is faster and cheaper, and it also reduces project risk. Another incentive for using COTS is interoperability. Upgrading from one version of a COTS product to the next is usually straightforward and is the easiest thing for users to do, even though better products may well be available. As an example, the government, even the NSA, uses COTS equipment for all but its most secure communications.

Those who provide COTS products (such as Microsoft and Intel) know that the market prefers features over trustworthiness. The market and individual consumers may not be conscious of this preference, but it seems to hold just the same. It is generally not clear to consumers what trustworthiness would provide, and the market is not aware of the risks of its absence. (One counterexample is hardware failure: fault tolerance is much appreciated by the market, perhaps because the failure of a machine is obvious and immediately impacts productivity.)

On the other hand, there have been instances in the past where the market did not appreciate something and yet manufacturers provided it anyway. So why don't COTS producers provide trustworthiness? The rule of thumb in the COTS market is that the earliest entrant is the most likely to succeed; in other words, time to market dictates success. Implementing trustworthiness increases time to market: it requires extra functionality, fault tolerance, better debugging, ways to provide assurance, and so on, all of which add to development time. In short, there is every incentive for COTS producers not to provide trustworthiness, and given the current climate of deregulation, it is unlikely that the government will legislate trustworthiness requirements any time soon.

Another reason for the lack of trustworthy NISs is the existing communication infrastructure. Ultimately, the telephone companies have control, and they still operate under a very old tariff system, one that does not encourage them to offer services such as path-disjoint (and hence more fault-tolerant) routing. The Internet today is very easy to "crash" with denial-of-service attacks. U.S. government policy also discourages the production of trustworthy products, particularly through its restrictions on the export of cryptographic equipment and its advocacy of key escrow. The consequence is that manufacturers have no incentive to build equipment with strong cryptography.

All of the above paints a depressing picture. Are there any glimmers of hope to be found? In fact, there are a few factors in favor of change: