Friday, October 26, 2007

Trusted Computer Systems

Control Objectives

Trusted computer systems must satisfy three basic control objectives for computer security; none can be overlooked. These control objectives deal with:

  • Security Policy

  • Accountability

  • Assurance

This chapter provides a discussion of these general control objectives and their implication in terms of designing trusted systems. These control objectives lay the foundation for the requirements outlined in the criteria.

Security Policy

In the most general sense, computer security is concerned with controlling the way in which a computer can be used, i.e., controlling how information processed by it can be accessed and manipulated. On closer examination, however, computer security can refer to a number of areas. Symptomatic of this, FIPS Publication 39, Glossary For Computer Systems Security, does not have a unique definition for computer security. Instead, there are eleven separate definitions of security, among them ADP systems security, administrative security, and data security. A common thread running through these definitions is the word "protection."

In summary, protection requirements must be defined in terms of the perceived threats, risks, and goals of an organization. This is often stated in terms of a security policy. It has been pointed out in the literature that it is external laws, rules, regulations, etc. that establish what access to information is to be permitted, independent of the use of a computer. In particular, a given system can only be said to be secure with respect to its enforcement of some specific policy. Thus, the control objective for security policy is:

Security Policy Control Objective:

A statement of intent with regard to control over access to and dissemination of information, to be known as the security policy, must be precisely defined and implemented for each system that is used to process sensitive information. The security policy must accurately reflect the laws, regulations, and general policies from which it is derived.

Accountability

The second basic control objective addresses one of the fundamental principles of security, i.e., individual accountability. Individual accountability is the key to securing and controlling any system that processes information on behalf of individuals or groups of individuals. A number of requirements must be met in order to satisfy this objective. The first requirement is for individual user identification. Second, there is a need for authentication of the identification. Identification is functionally dependent on authentication. Without authentication, user identification has no credibility. Without a credible identity, neither mandatory nor discretionary security policies can be properly invoked because there is no assurance that proper authorizations can be made. The third requirement is for dependable audit capabilities. That is, a trusted computer system must provide authorized personnel with the ability to audit any action that can potentially cause access to, generation of, or release of classified or sensitive information. The audit data will be selectively acquired based on the auditing needs of a particular installation and/or application. However, there must be sufficient granularity in the audit data to support tracing the auditable events to a specific individual who has taken the actions or on whose behalf the actions were taken. The control objective is:

Accountability Control Objective:

Systems that are used to process or handle classified or other sensitive information must assure individual accountability whenever either a mandatory or discretionary security policy is invoked. Furthermore, to assure accountability, the capability must exist for an authorized and competent agent to access and evaluate accountability information by a secure means, within a reasonable amount of time, and without undue difficulty.
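
To make these accountability requirements concrete, here is a minimal sketch in Python of an audit record that binds each security-relevant event to an authenticated individual with enough granularity to trace the action back to the user on whose behalf it was taken. All field and function names are illustrative assumptions, not drawn from the criteria.

    # Sketch only: field names and helpers are illustrative, not from the criteria.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class AuditRecord:
        """One auditable event, traceable to a single authenticated individual."""
        timestamp: datetime   # when the action occurred
        user_id: str          # authenticated identity, not merely a claimed name
        subject_id: str       # process acting on the user's behalf
        object_id: str        # data item, device, or resource affected
        action: str           # e.g. "read", "write", "create", "delete"
        outcome: str          # "granted" or "denied"

    def record_event(log, user_id, subject_id, object_id, action, outcome):
        """Append an audit record; a real TCB would write to protected storage."""
        log.append(AuditRecord(datetime.now(timezone.utc), user_id,
                               subject_id, object_id, action, outcome))

    # Usage: trace a denied write back to the individual who attempted it.
    audit_log = []
    record_event(audit_log, "jdoe", "proc-1142", "payroll.db", "write", "denied")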

Assurance

The third basic control objective is concerned with guaranteeing or providing confidence that the security policy has been implemented correctly and that the protection-relevant elements of the system do, indeed, accurately mediate and enforce the intent of that policy. By extension, assurance must include a guarantee that the trusted portion of the system works only as intended. To accomplish these objectives, two types of assurance are needed: life-cycle assurance and operational assurance.

Life-cycle assurance refers to steps taken by an organization to ensure that the system is designed, developed, and maintained using formalized and rigorous controls and standards. Computer systems that process and store sensitive or classified information depend on the hardware and software to protect that information. It follows that the hardware and software themselves must be protected against unauthorized changes that could cause protection mechanisms to malfunction or be bypassed completely. For this reason, trusted computer systems must be carefully evaluated and tested during the design and development phases and reevaluated whenever changes are made that could affect the integrity of the protection mechanisms. Only in this way can confidence be provided that the hardware and software interpretation of the security policy is maintained accurately and without distortion.

While life-cycle assurance is concerned with procedures for managing system design, development, and maintenance, operational assurance focuses on features and system architecture used to ensure that the security policy is uncircumventably enforced during system operation. That is, the security policy must be integrated into the hardware and software protection features of the system. Examples of steps taken to provide this kind of confidence include: methods for testing the operational hardware and software for correct operation, isolation of protection-critical code, and the use of hardware and software to provide distinct domains. The control objective is:

Assurance Control Objective:

Systems that are used to process or handle classified or other sensitive information must be designed to guarantee correct and accurate interpretation of the security policy and must not distort the intent of that policy. Assurance must be provided that correct implementation and operation of the policy exists throughout the system's life-cycle.


The Reference Monitor Concept

In October of 1972, the Computer Security Technology Planning Study, conducted by James P. Anderson & Co., produced a report for the Electronic Systems Division (ESD) of the United States Air Force.[1] In that report, the concept of "a reference monitor which enforces the authorized access relationships between subjects and objects of a system" was introduced. The reference monitor concept was found to be an essential element of any system that would provide multilevel secure computing facilities and controls.

The Anderson report went on to define the reference validation mechanism as "an implementation of the reference monitor concept ... that validates each reference to data or programs by any user (program) against a list of authorized types of reference for that user." It then listed the three design requirements that must be met by a reference validation mechanism:

1. The reference validation mechanism must be tamper proof.

2. The reference validation mechanism must always be invoked.

3. The reference validation mechanism must be small enough to be subject to analysis and tests, the completeness of which can be assured.

Extensive peer review and continuing research and development activities have sustained the validity of the Anderson Committee's findings. Early examples of the reference validation mechanism were known as security kernels. The Anderson Report described the security kernel as "that combination of hardware and software which implements the reference monitor concept." In this vein, it will be noted that the security kernel must support the three reference monitor requirements listed above.
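
As a rough illustration of the reference validation idea, the following Python sketch funnels every access by a subject to an object through a single mediation function that consults a table of authorized types of reference, mirroring the requirement that the mechanism always be invoked. The subjects, objects, and authorization table are hypothetical, not taken from the Anderson report.

    # Illustrative reference validation mechanism; the authorization table
    # and all names are hypothetical.
    AUTHORIZED = {
        ("editor", "report.txt"): {"read", "write"},
        ("viewer", "report.txt"): {"read"},
    }

    def validate_reference(subject, obj, mode):
        """Mediate a single reference: permit it only if explicitly authorized."""
        return mode in AUTHORIZED.get((subject, obj), set())

    def access(subject, obj, mode):
        """Every access is funneled through validate_reference (always invoked)."""
        if not validate_reference(subject, obj, mode):
            raise PermissionError(f"{subject} may not {mode} {obj}")
        return f"{subject} performed {mode} on {obj}"

    print(access("viewer", "report.txt", "read"))   # permitted
    # access("viewer", "report.txt", "write")       # would raise PermissionError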

A Formal Security Policy Model

Following the publication of the Anderson report, considerable research was initiated into formal models of security policy requirements and of the mechanisms that would implement and enforce those policy models as a security kernel. Prominent among these efforts was the ESD-sponsored development of the Bell and LaPadula model, an abstract formal treatment of DoD security policy.[2] Using mathematics and set theory, the model precisely defines the notion of secure state, fundamental modes of access, and the rules for granting subjects specific modes of access to objects. Finally, a theorem is proven to demonstrate that the rules are security-preserving operations, so that the application of any sequence of the rules to a system that is in a secure state will result in the system entering a new state that is also secure. This theorem is known as the Basic Security Theorem.

A subject can act on behalf of a user or of another subject. The subject is created as a surrogate for the cleared user and is assigned a formal security level based on that user's clearance. The state transitions and invariants of the formal policy model define the invariant relationships that must hold between the clearance of the user, the formal security level of any process that can act on the user's behalf, and the formal security level of the devices and other objects to which any process can obtain specific modes of access. The Bell and LaPadula model, for example, defines a relationship between the formal security levels of subjects and objects, now referred to as the "dominance relation."

From this definition, the accesses permitted between subjects and objects are explicitly defined for the fundamental modes of access, including read-only access, read/write access, and write-only access. The model defines the Simple Security Condition to control granting a subject read access to a specific object, and the *-Property (read "Star Property") to control granting a subject write access to a specific object. Both the Simple Security Condition and the *-Property include mandatory security provisions based on the dominance relation between the formal security level of the subject (derived from the clearance of the user) and that of the object (its classification). The Discretionary Security Property is also defined; it requires that a specific subject be authorized for the particular mode of access required for the state transition. In its treatment of subjects (processes acting on behalf of a user), the model distinguishes between trusted subjects (i.e., those not constrained within the model by the *-Property) and untrusted subjects (those that are constrained by the *-Property).

From the Bell and LaPadula model there evolved a model of the method of proof required to formally demonstrate that all arbitrary sequences of state transitions are security-preserving. It was also shown that the *-Property is sufficient to prevent the compromise of information by Trojan Horse attacks.
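
The Python sketch below illustrates, under simplifying assumptions, the dominance relation and the mandatory portions of the Simple Security Condition (no read up) and of the *-Property for untrusted subjects (no write down). The level names and labels are illustrative; the discretionary property and the model's state-transition machinery are omitted.

    # Simplified illustration of dominance and the mandatory parts of the
    # Simple Security Condition and *-Property; not the full model.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def dominates(label_a, label_b):
        """label_a dominates label_b if its level is at least as high and its
        category set is a superset of label_b's."""
        (level_a, cats_a), (level_b, cats_b) = label_a, label_b
        return LEVELS[level_a] >= LEVELS[level_b] and cats_a >= cats_b

    def may_read(subject_label, object_label):
        """Simple Security Condition (mandatory part): no read up."""
        return dominates(subject_label, object_label)

    def may_write(subject_label, object_label):
        """*-Property (mandatory part) for untrusted subjects: no write down."""
        return dominates(object_label, subject_label)

    secret_nato = ("SECRET", {"NATO"})
    confidential = ("CONFIDENTIAL", set())

    print(may_read(secret_nato, confidential))   # True: reading down is permitted
    print(may_write(secret_nato, confidential))  # False: writing down is blocked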


The Trusted Computing Base (TCB)

In order to encourage the widespread commercial availability of trusted computer systems, these evaluation criteria have been designed to address those systems in which a security kernel is specifically implemented as well as those in which a security kernel has not been implemented. The latter case includes those systems in which the third reference monitor design requirement, that the mechanism be small enough to be analyzed and tested, is not fully supported because of the size or complexity of the reference validation mechanism. For convenience, these evaluation criteria use the term Trusted Computing Base to refer to the reference validation mechanism, be it a security kernel, front-end security filter, or the entire trusted computer system.

The heart of a trusted computer system is the Trusted Computing Base (TCB) which contains all of the elements of the system responsible for supporting the security policy and supporting the isolation of objects (code and data) on which the protection is based. The bounds of the TCB equate to the "security perimeter" referenced in some computer security literature. In the interest of understandable and maintainable protection, a TCB should be as simple as possible consistent with the functions it has to perform. Thus, the TCB includes hardware, firmware, and software critical to protection and must be designed and implemented such that system elements excluded from it need not be trusted to maintain protection. Identification of the interface and elements of the TCB along with their correct functionality therefore forms the basis for evaluation.

For general-purpose systems, the TCB will include key elements of the operating system and may include all of the operating system. For embedded systems, the security policy may deal with objects in a way that is meaningful at the application level rather than at the operating system level. Thus, the protection policy may be enforced in the application software rather than in the underlying operating system. The TCB will necessarily include all those portions of the operating system and application software essential to the support of the policy. Note that, as the amount of code in the TCB increases, it becomes harder to be confident that the TCB enforces the reference monitor requirements under all circumstances.

Assurance

The third reference monitor design requirement is currently interpreted as meaning that the TCB "must be of sufficiently simple organization and complexity to be subjected to analysis and tests, the completeness of which can be assured."

Clearly, as the perceived degree of risk increases (e.g., the range of sensitivity of the system's protected data, along with the range of clearances held by the system's user population) for a particular system's operational application and environment, so also must the assurances be increased to substantiate the degree of trust that will be placed in the system. The hierarchy of requirements that are presented for the evaluation classes in the trusted computer system evaluation criteria reflect the need for these assurances.

The systems to which security enforcement mechanisms have been added, rather than built in as fundamental design objectives, are not readily amenable to extensive analysis since they lack the requisite conceptual simplicity of a security kernel. This is because their TCB extends to cover much of the entire system. Hence, their degree of trustworthiness can be ascertained only by obtaining test results. Since no test procedure for something as complex as a computer system can be truly exhaustive, there is always the possibility that a subsequent penetration attempt could succeed. It is for this reason that such systems must fall into the lower evaluation classes. On the other hand, those systems that are designed and engineered to support the TCB concepts are more amenable to analysis and structured testing. Formal methods can be used to analyze the correctness of their reference validation mechanisms in enforcing the system's security policy. Other methods, including less-formal arguments, can be used to substantiate claims for the completeness of their access mediation and their degree of tamper-resistance. More confidence can be placed in the results of this analysis and in the thoroughness of the structured testing than can be placed in the results for less methodically structured systems. For these reasons, it appears reasonable to conclude that these systems could be used in higher-risk environments. Successful implementations of such systems would be placed in the higher evaluation classes.

The Classes

It is highly desirable that there be only a small number of overall evaluation classes. Three major divisions have been identified in the evaluation criteria with a fourth division reserved for those systems that have been evaluated and found to offer unacceptable security protection. Within each major evaluation division, it was found that "intermediate" classes of trusted system design and development could meaningfully be defined. These intermediate classes have been designated in the criteria because they identify systems that:

· are viewed to offer significantly better protection and assurance than would systems that satisfy the basic requirements for their evaluation class; and

· could, there is reason to believe, eventually be evolved to satisfy the requirements for the next higher evaluation class.

Except within division A, it is not anticipated that additional "intermediate" evaluation classes satisfying the two characteristics described above will be identified.

Distinctions between evaluation classes, in terms of system architecture, security policy enforcement, and evidence of credibility, have been defined such that the "jump" between evaluation classes would require a considerable investment of effort on the part of implementers. Correspondingly, systems in the higher evaluation classes are expected to be exposed to significantly greater degrees of risk.

Summary Of Evaluation Criteria Classes

The classes of systems recognized under the trusted computer system evaluation criteria are as follows. They are presented in the order of increasing desirability from a computer security point of view.

Class (D): Minimal Protection

This class is reserved for those systems that have been evaluated but that fail to meet the requirements for a higher evaluation class.

Class (C1): Discretionary Security Protection

The Trusted Computing Base (TCB) of a class (C1) system nominally satisfies the discretionary security requirements by providing separation of users and data. It incorporates some form of credible controls capable of enforcing access limitations on an individual basis, i.e., ostensibly suitable for allowing users to protect project or private information and to keep other users from accidentally reading or destroying their data. The class (C1) environment is expected to be one of cooperating users processing data at the same level(s) of sensitivity.
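
As a loose illustration of discretionary control on an individual basis, the Python sketch below lets the owner of an object decide which named users may access it, the kind of user-controlled sharing a class (C1) system is expected to support. The object, user names, and access modes are hypothetical and not part of the criteria.

    # Illustrative discretionary access control: the owner decides who may
    # access the object. All names and modes are made up.
    class ProtectedObject:
        def __init__(self, name, owner):
            self.name = name
            self.owner = owner
            self.acl = {owner: {"read", "write"}}   # user -> permitted modes

        def grant(self, requester, user, mode):
            """Only the owner may change the ACL (the discretionary decision)."""
            if requester != self.owner:
                raise PermissionError("only the owner may grant access")
            self.acl.setdefault(user, set()).add(mode)

        def check(self, user, mode):
            """Return True if the named user holds the requested mode."""
            return mode in self.acl.get(user, set())

    doc = ProtectedObject("project-notes", owner="alice")
    doc.grant("alice", "bob", "read")
    print(doc.check("bob", "read"))    # True: alice chose to share
    print(doc.check("bob", "write"))   # False: never granted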

Class (C2): Controlled Access Protection

Systems in this class enforce a more finely grained discretionary access control than (C1) systems, making users individually accountable for their actions through login procedures, auditing of security-relevant events, and resource isolation.

Class (B1): Labeled Security Protection

Class (B1) systems require all the features required for class (C2). In addition, an informal statement of the security policy model, data labeling, and mandatory access control over named subjects and objects must be present. The capability must exist for accurately labeling exported information. Any flaws identified by testing must be removed.

Class (B2): Structured Protection

In class (B2) systems, the TCB is based on a clearly defined and documented formal security policy model that requires that the discretionary and mandatory access control enforcement found in class (B1) systems be extended to all subjects and objects in the ADP system. In addition, covert channels are addressed. The TCB must be carefully structured into protection-critical and non-protection-critical elements. The TCB interface is well-defined and the TCB design and implementation enable it to be subjected to more thorough testing and more complete review. Authentication mechanisms are strengthened, trusted facility management is provided in the form of support for system administrator and operator functions, and stringent configuration management controls are imposed. The system is relatively resistant to penetration.

Class (B3): Security Domains

The class (B3) TCB must satisfy the reference monitor requirements that it mediate all accesses of subjects to objects, be tamperproof, and be small enough to be subjected to analysis and tests. To this end, the TCB is structured to exclude code not essential to security policy enforcement, with significant system engineering during TCB design and implementation directed toward minimizing its complexity. A security administrator is supported, audit mechanisms are expanded to signal security-relevant events, and system recovery procedures are required. The system is highly resistant to penetration.

Class (A1): Verified Design

Systems in class (A1) are functionally equivalent to those in class (B3) in that no additional architectural features or policy requirements are added. The distinguishing feature of systems in this class is the analysis derived from formal design specification and verification techniques and the resulting high degree of assurance that the TCB is correctly implemented. This assurance is developmental in nature, starting with a formal model of the security policy and a formal top-level specification (FTLS) of the design. In keeping with the extensive design and development analysis of the TCB required of systems in class (A1), more stringent configuration management is required and procedures are established for distributing the system to sites. A system security administrator is supported.


A Guideline On Covert Channels

A covert channel is any communication channel that can be exploited by a process to transfer information in a manner that violates the system's security policy. There are two types of covert channels: storage channels and timing channels. Covert storage channels include all vehicles that would allow the direct or indirect writing of a storage location by one process and the direct or indirect reading of it by another. Covert timing channels include all vehicles that would allow one process to signal information to another process by modulating its own use of system resources in such a way that the change in response time observed by the second process would provide information.

From a security perspective, covert channels with low bandwidths represent a lower threat than those with high bandwidths. However, for many types of covert channels, techniques used to reduce the bandwidth below a certain rate (which depends on the specific channel mechanism and the system architecture) also have the effect of degrading the performance provided to legitimate system users. Hence, a trade-off between system performance and covert channel bandwidth must be made. Because of the threat of compromise that would be present in any multilevel computer system containing classified or sensitive information, such systems should not contain covert channels with high bandwidths. This guideline is intended to provide system developers with an idea of just how high a "high" covert channel bandwidth is. A covert channel bandwidth that exceeds a rate of one hundred (100) bits per second is considered "high" because 100 bits per second is the approximate rate at which many computer terminals are run. It does not seem appropriate to call a computer system "secure" if information can be compromised at a rate equal to the normal output rate of some commonly used device.

In any multilevel computer system there are a number of relatively low-bandwidth covert channels whose existence is deeply ingrained in the system design. Faced with the large potential cost of reducing the bandwidths of such covert channels, it is felt that those with maximum bandwidths of less than one (1) bit per second are acceptable in most application environments.

Though maintaining acceptable performance in some systems may make it impractical to eliminate all covert channels with bandwidths of 1 or more bits per second, it is possible to audit their use without adversely affecting system performance. This audit capability provides the system administration with a means of detecting -- and procedurally correcting -- significant compromise. Therefore, a Trusted Computing Base should provide, wherever possible, the capability to audit the use of covert channel mechanisms with bandwidths that may exceed a rate of one (1) bit in ten (10) seconds.
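
The guideline's three numeric thresholds can be summarized in a short Python sketch; the classification function and the example channels are illustrative assumptions, not part of the guideline itself.

    # Illustrative classification of covert channel bandwidths against the
    # guideline's thresholds; channel names and measured rates are made up.
    HIGH_BPS = 100.0        # "high" bandwidth: roughly a terminal's output rate
    ACCEPTABLE_BPS = 1.0    # below this, acceptable in most environments
    AUDIT_BPS = 0.1         # one bit in ten seconds: audit use where possible

    def classify(bandwidth_bps):
        if bandwidth_bps > HIGH_BPS:
            return "high-bandwidth channel: should not exist in a multilevel system"
        if bandwidth_bps >= ACCEPTABLE_BPS:
            return "may be impractical to eliminate: audit its use"
        if bandwidth_bps > AUDIT_BPS:
            return "acceptable, but provide the capability to audit its use"
        return "acceptable in most application environments"

    for name, bps in [("disk-arm timing", 250.0), ("file-lock storage", 2.0),
                      ("quota exhaustion", 0.05)]:
        print(f"{name}: {classify(bps)}")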
