Friday, October 26, 2007

Understanding Computer Security

Table of Contents

1. Introduction to Computer Security

2. Security Principles to Know and Consider

3. Good Security Practices for Computer Users

4. Threats to Computer Systems

5. Computer Security Policy

6. Computer Security Risk Management

7. Developing IT System Security Plans

8. Behavior

9. Logical Access Controls

10. Passwords, Memory Tokens, and Biometrics

11. Audit Trails

12. Cryptography

13. Intrusion Detection Systems

14. Trusted Computer Systems

15. Mitigating Hacker Threats

16. Routers and Firewalls

17. Security for Unix Networks and Web Servers

18. Security for Microsoft Windows NT, 2000 and Applications

19. Computer Security Glossary and Acronyms

20. Good Security Resources on the Web

21. References


Feel free to pass this computer security e-book on to your friends and co-workers.

Introduction to Computer Security

Computers today are important, even integral, to all aspects of the activities and operations of organizations and individuals. As we become critically dependent upon computer information systems, we must recognize that computers and computer-related problems have to be understood and managed, the same as any other resource.

Adequately secure systems deter, prevent, or detect unauthorized disclosure, modification, or use of information. Much of today’s information and data requires protection from intruders, as well as from individuals with authorized computer access privileges who attempt to perform unauthorized actions. Protection is achieved not only by technical, physical and personnel safeguards, but also by clearly articulating and implementing policy regarding authorized system use to information users and processing personnel at all levels.

This e-book introduces information and computer systems security concerns and outlines the issues that must be addressed by those responsible for protecting information systems within their organizations. It describes essential components of an effective information resource protection process that applies to a stand-alone personal computer or to a large data processing facility.

Security protects an information system from unauthorized attempts to access information or interfere with its operation. It is concerned with:

  • Confidentiality: Information is disclosed only to users authorized to access it.
  • Integrity: Information is modified only by users who have the right to do so, and only in authorized ways. It is transferred only between intended users and in intended ways.
  • Accountability: Users are accountable for their security-relevant actions.
  • Availability: Use of the system cannot be maliciously denied to authorized users.

Security is enforced using security functionality such as authentication, access control, auditing, encryption and associated administration. In addition, there are constraints on how the system is constructed, for example, to ensure adequate separation of data and functions so objects don't interfere with each other, and separation of users' duties so the damage an individual user can do is limited.

Security is pervasive, affecting many components of a system, including some that are not directly security related. Additional components, such as an authentication service, provide services that are specific to security.


Eight Elements of Computer Security

The eight elements of computer security are essential to understand and keep in mind when implementing security practices and procedures. Here they are:

1. Computer security should support the mission of the organization.

The purpose of computer security is to protect an organization's valuable resources, such as information, hardware, and software. Through the selection and application of appropriate safeguards, security helps the organization's mission by protecting its physical and financial resources, reputation, legal position, employees, and other tangible and intangible assets.

2. Computer security is an integral element of sound management.

Information and computer systems are often critical assets that support the mission of an organization. Protecting them can be as critical as protecting other organizational resources, such as money, physical assets, or employees.

3. Computer security should be cost-effective.

The costs and benefits of security should be carefully examined in both monetary and non-monetary terms to ensure that the cost of controls does not exceed expected benefits. Security should be appropriate and proportionate to the value of and degree of reliance on the computer systems and to the severity, probability and extent of potential harm. Requirements for security vary, depending upon the particular computer system. Security benefits do have both direct and indirect costs. Solutions to security problems should not be chosen if they cost more, directly or indirectly, than simply tolerating the problem.

4. Computer security responsibilities and accountability should be made explicit.

The responsibilities and accountability of owners, providers, and users of computer systems, and of other parties concerned with the security of computer systems, should be explicit and clearly defined.

5. System owners have computer security responsibilities outside their own organizations.

If a system has external users, its owners have a responsibility to share appropriate knowledge about the existence and general extent of security measures so that other users can be confident that the system is adequately secure.

6. Computer security requires a comprehensive and integrated approach.

Providing computer security requires a comprehensive approach that considers a variety of areas both within and outside of the computer security field. This includes interdependencies among security controls, as well as such factors as system management, legal issues, quality assurance, and internal and management controls. Computer security needs to work with traditional security disciplines, including physical and personnel security.

7. Computer security should be periodically reassessed.

System technology and users, data and information in the systems, risks associated with the system and, therefore, security requirements are always changing. In addition, security is never perfect when a system is implemented. System users and operators discover new ways to intentionally or unintentionally bypass or subvert security. Changes in the system or the environment can create new vulnerabilities.

8. Computer security is constrained by societal factors.

The ability of security to support the mission of the organization(s) may be limited by various factors, such as social issues. For example, security and workplace privacy can conflict. Security measures should be implemented recognizing the rights and legitimate interests of others. Rules and expectations change with regard to the appropriate use of security controls and these changes may either increase or decrease security. The relationship between security and societal norms is not necessarily antagonistic. Security can enhance the access and flow of data and information by providing more accurate and reliable information and greater availability of systems.

Security Principles to Know and Consider

This chapter will introduce the reader to several important principles concerning computer and information system security. These are important to understand when planning and implementing computer security frameworks and controls.

Achieve Cost-Effective Security

The dollars spent for security measures to control or contain losses should never be more than the projected dollar loss if something adverse happened to the information resource. Cost-effective security results when reduction in risk through implementation of safeguards is balanced with costs. The greater the value of information processed, or the more severe the consequences if something happens to it, the greater the need for control measures to protect it.

Maintain Integrity

Integrity of information means you can trust the data and the processes that manipulate it. Not only does this mean that errors and omissions are minimized, but also that the information system is protected from deliberate actions to wrongfully change the data. Information can be said to have integrity when it corresponds to the expectations and assumptions of the users.

Assure Confidentiality

Confidentiality of sensitive data is often a requirement of organizations' and individuals' computer systems. Privacy requirements for personal information are dictated by statute, while the confidentiality of other information is determined by the nature of that information, e.g., information submitted by bidders in procurement actions. The impact of wrongful disclosure must be considered when assessing confidentiality requirements.

Recoverability

An important design consideration is the ability to easily recover from troublesome events, whether minor problems or major disruptions of the system. From a design point of view, systems should be designed to easily recover from minor problems, and to be either transportable to another backup computer system or replaced by manual processes in case of major disruption or loss of computer facility.

Access Decisions

Decisions must be made regarding access to the system and the information it contains. For example, many individuals require the ability to access and view data, but not the ability to change or delete data. Even when computer systems have been designed to provide the ability to narrowly designate access authorities, a knowledgeable and responsible official must actually make those access decisions. The care that is taken in this process is a major determining factor of the level of security and control present in the system. If sensitive data is being transmitted over unprotected lines, it can be intercepted or passive eavesdropping can occur. Encrypting the files will make the data unintelligible and port protection devices will protect the files from unauthorized access, if warranted.

Protecting Against Malicious Software and Hardware

The recent occurrences of destructive computer viruses point to the need to ensure that agencies do not allow unauthorized software to be introduced to their computer environments. Unauthorized hardware can also contain hidden vulnerabilities. Management should adopt a strong policy against unauthorized hardware/software, inform personnel about the risks and consequences of unauthorized additions to computer systems, and develop a monitoring process to detect violations of the policy.

Data Security

Management must ensure that appropriate security mechanisms are in place that allow responsible officials to designate access to data according to individual computer users' specific needs. Security mechanisms should be sufficient to implement individual authentication of system users, allow authorization to specific information and transaction authorities, maintain audit trails as specified by the responsible official, and encrypt sensitive files if required by user management.

The Concept Of Least Privilege

Least privilege is a basic tenet of computer security that means users should be given only those rights required to do their job. Malicious code runs in the security context of the user launching the code. The more privileges the user has, the more damage the code can do. Recommendations pertaining to the least privilege principle include:

  • Keep the number of administrative accounts to a minimum
  • Administrators should use a regular account as much as possible instead of logging in as administrator or root to perform routine activities such as reading mail
  • Set resource permissions properly. Tighten the permissions on tools that an attacker might use once he has gained a foothold on the system, e.g., explorer.exe, regedit.exe, poledit.exe, taskman.exe, at.exe, cacls.exe, cmd.exe, finger.exe, ftp.exe, nbtstat.exe, net.exe, net1.exe, netsh.exe, rcp.exe, regedt32.exe, regini.exe, regsvr32.exe, rexec.exe, rsh.exe, runas.exe, runonce.exe, svrmgr.exe, sysedit.exe, telnet.exe, tftp.exe, tracert.exe, usrmgr.exe, wscript.exe, and xcopy.exe.
  • Unix tools or utilities that should be restricted include debuggers, compilers, and scripting languages such as gcc and perl.
  • The least privilege concept also applies to server applications. Where possible, run services and applications under a non-privileged account.
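The permission-tightening advice above can be sketched in a few lines. This is a minimal illustration, assuming a Unix-style permission model; the helper name too_permissive and the "no group/other write access" policy are assumptions, not a standard tool.

```python
import os
import stat
import tempfile

# Hypothetical helper: flag a file whose permission bits grant more access
# than least privilege requires (here: any write access for group or other).
def too_permissive(path):
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

# Demonstration with a temporary file standing in for a system tool.
tool = tempfile.NamedTemporaryFile(delete=False)
tool.close()

os.chmod(tool.name, 0o777)   # overly broad: anyone can modify the tool
loose = too_permissive(tool.name)

os.chmod(tool.name, 0o750)   # tightened: owner rw, group read/execute only
tight = too_permissive(tool.name)

os.unlink(tool.name)
```

A real audit would walk the directories holding the tools listed above and report every file the check flags.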

Monitoring and Review

Another aspect of information resource protection to be considered is the need for ongoing management monitoring and review. To be effective, a security program must be a continuous effort. Ideally, ongoing processes should be adapted to include information protection checkpoints and reviews. Information resource protection should be a key consideration in all major computer system initiatives.

Personnel Management

Managers must be aware that information security is more a people issue than a technical issue. Personnel are a vital link in the protection of information resources, as information is gathered by people, entered into information resource systems by people, and ultimately used by people. Security issues should be addressed with regard to:

  • People who use computer systems and store information in the course of their normal job responsibilities;

  • People who design, program, test, and implement critical or sensitive systems; and

  • People who operate computer facilities that process critical or sensitive data

Personnel Security

From the point of hire, individuals who will have routine access to sensitive information resources should be subject to special security procedures. More extensive background or reference checks may be appropriate for such positions, and security responsibilities should be explicitly covered in employee orientations. Position descriptions and performance evaluations should also explicitly reference unusual responsibilities affecting the security of information resources.

Individuals in sensitive positions should be subject to job rotation, and work flow should be designed in such a way as to provide as much separation of sensitive functions as possible. Upon decision to terminate or notice of resignation, expedited termination or rotation to less sensitive duties for the remainder of employment is a reasonable precaution.

Training

Most information resource security problems involve people. Problems can usually be identified in their earliest stages by people who are attuned to the importance of information protection issues. A strong training program will yield large benefits in prevention and early detection of problems and losses. To be most effective, training should be tailored to the particular audience being addressed, e.g., executives and policy makers; program and functional managers; IRM security and audit; ADP management and operations; end users.

Most employees want to do the right thing, once policy and expectations are clearly communicated. Internal policies can be enforced when staff have been made aware of their individual responsibilities. All people who access an organization's computer systems should be aware of their responsibilities, as well as obligations. Disciplinary actions and legal penalties should be communicated.

Security Attributes

There are some common security attributes that should be present in any system that processes valuable personal or sensitive information. System designs should include mechanisms to enforce the following security attributes.

Identification and Authentication of Users - Each user of a computer system should have a unique identification on the system, such as an account number or other user identification code. There must also be a means of verifying that the individual claiming that identity (e.g., by typing in that identifying code at a terminal) is really the authorized individual and not an imposter. The most common means of authentication is by a secret password, known only to the authorized user.
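The password mechanism described above can be sketched as follows, assuming Python's standard library. The key points are that the system stores only a salted hash of the password, never the password itself, and that comparison is done in constant time. The helper names (make_record, verify) and the iteration count are illustrative assumptions.

```python
import hashlib
import hmac
import os

# Create a stored credential: a random salt plus a salted PBKDF2 hash.
def make_record(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Verify a claimed identity by recomputing the hash and comparing digests
# in constant time, which avoids timing side channels.
def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = make_record("1REDDOG")
ok = verify("1REDDOG", salt, digest)    # authorized user
bad = verify("DUKE", salt, digest)      # imposter's guess
```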

Authorization Capability Enforcing the Principle of Least Possible Privilege - Beyond ensuring that only authorized individuals can access the system, it is also necessary to limit each user's access to information and transaction capabilities. Each person should be limited to only the information and transaction authority required by their job responsibilities. This concept, known as the principle of least possible privilege, is a long-standing control practice. There should be a way to easily assign each user just the specific access authorities needed.

Individual Accountability - From both a control and legal point of view, it is necessary to maintain records of the activities performed by each computer user. The requirements for automated audit trails should be developed when a system is designed. The information to be recorded depends on what is significant about each particular system. To be able to hold individuals accountable for their actions, there must be a positive means of uniquely identifying each computer user and a routinely maintained record of each user's activities.
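The accountability requirement above amounts to a routinely maintained record tying each action to a unique user id. A minimal sketch, with made-up field names, user ids, and file names:

```python
import datetime

# A minimal audit trail: each entry ties an action to a unique user id
# and a timestamp so individuals can be held accountable later.
audit_trail = []

def record(user_id, action, target):
    audit_trail.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "action": action,
        "target": target,
    })

def activity_for(user_id):
    """Return the recorded activity of one user, e.g., for an investigation."""
    return [entry for entry in audit_trail if entry["user"] == user_id]

record("jsmith", "read", "payroll.db")
record("jsmith", "update", "payroll.db")
record("adoe", "read", "inventory.db")
```

What is recorded (and for how long) should be decided when the system is designed, as the text notes, not bolted on afterward.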

Audit Mechanisms - Audit mechanisms detect unusual events and bring them to the attention of management. This commonly occurs by violation reporting or by an immediate warning to the computer system operator. The type of alarm generated depends on the seriousness of the event.

A common technique to detect access attempts by unauthorized individuals is to count attempts. The security monitoring functions of the system can automatically keep track of unsuccessful attempts to gain access and generate an alarm if the attempts reach an unacceptable number.
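The attempt-counting technique just described can be sketched as follows. The threshold of three failures and the alarm format are assumptions for illustration.

```python
# Count unsuccessful access attempts per user and raise an alarm once
# they reach an unacceptable number (threshold is an assumption).
MAX_ATTEMPTS = 3

failed = {}    # user -> consecutive failure count
alarms = []    # messages that would go to the operator or a report

def login_attempt(user, success):
    if success:
        failed[user] = 0          # a good login resets the counter
        return True
    failed[user] = failed.get(user, 0) + 1
    if failed[user] >= MAX_ATTEMPTS:
        alarms.append(f"possible intrusion: {failed[user]} failures for {user}")
    return False

login_attempt("guest", False)
login_attempt("guest", False)
login_attempt("guest", False)   # third failure triggers the alarm
```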

Good Security Practices for Computer Users

Ultimately, computer security is the user's responsibility. You, the user, must be alert to possible breaches in security and adhere to the security regulations that have been established within your agency. The security practices listed here are not all-inclusive; rather, they are designed to remind you of, and raise your awareness about, ways to secure your information resources:

Protect your equipment:

  • Keep it in a secure environment

  • Keep food, drink, and cigarettes AWAY from it

  • Know where the fire suppression equipment is located and know how to use it

Protect your area:

  • Keep unauthorized people AWAY from your equipment and data

  • Challenge strangers in your area

Protect your password:

  • Never write it down or give it to anyone

  • Don't use names, numbers or dates which are personally identified with you

  • Change it often, but change it immediately if you think it has been compromised

Protect your files:

  • Don't allow unauthorized access to your files and data. Never leave your equipment unattended with your password activated; sign off before leaving your computer workstation.

  • Activate your screen saver with a password logon required.

Protect against viruses:

  • Don't use unauthorized software

  • Back up your files before implementing ANY new software

  • Lock up storage media containing sensitive data: If the data or information is sensitive or critical to your operation, lock it up!

Back up your data:

  • Keep duplicates of your sensitive data in a safe place, out of your immediate area.

  • Back it up as often as necessary.
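The backup advice above can be sketched as a small routine that copies a data file into a separate location, stamping each copy with the date so several generations are kept. The directory layout, file names, and helper name are illustrative assumptions.

```python
import datetime
import os
import shutil
import tempfile

# Copy a file into a backup directory, stamping the copy with the date.
def back_up(path, backup_dir):
    stamp = datetime.date.today().isoformat()
    name = f"{os.path.basename(path)}.{stamp}.bak"
    dest = os.path.join(backup_dir, name)
    shutil.copy2(path, dest)   # copy2 preserves timestamps as well
    return dest

work = tempfile.mkdtemp()      # stands in for your working area
offsite = tempfile.mkdtemp()   # stands in for storage out of your area

data = os.path.join(work, "accounts.dat")
with open(data, "w") as f:
    f.write("sensitive records")

copy = back_up(data, offsite)
```

In practice the "offsite" directory would be a different building or facility, for the disaster-separation reason given above, and the copies themselves should be locked up like any other sensitive media.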

Report security violations:

  • Tell your manager if you see any unauthorized changes to your data

  • Immediately report any loss of data or programs, whether automated or hard copy

Network Printer:

Today's network printers contain built-in FTP, web, and Telnet services as part of their operating systems. Network printers with these services enabled can be readily exploited, and they are often overlooked by system administrators as a security threat. They can be, and often are, exploited as FTP bounce servers or Telnet jump-off platforms, or attacked through their web management services.

  • Change the default password to a complex password.

  • Explicitly block the printer ports at the boundary router/firewall and disable these services if not needed.

Simple Network Management Protocol (SNMP):

SNMP is widely used by network administrators to monitor and administer all types of computers (e.g., routers, switches, printers). SNMP uses an unencrypted "community string" as its only authentication mechanism. Attackers can use this vulnerability to gather information from, reconfigure, or shut down a computer remotely. If an attacker can collect SNMP traffic on a network, he can learn a great deal about the structure of the network as well as the systems and devices attached to it.

Disable all SNMP servers on any computer where it is not necessary. However, if SNMP is a requirement, then consider the following.

  • Allow read-only access and not read-write access via SNMP.

  • Do not use standard community strings (e.g., public, private).

  • If possible, only allow a small set of computers access to the SNMP server on the computer.
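The three SNMP precautions above can be turned into a simple configuration audit. This is a sketch only: the device records, field names, and findings are made up for illustration, not drawn from any real SNMP management tool.

```python
# Well-known default community strings that should never survive deployment.
DEFAULT_COMMUNITIES = {"public", "private"}

# Hypothetical inventory of SNMP-enabled devices and their settings.
devices = [
    {"host": "printer1", "community": "public", "access": "rw"},
    {"host": "router1",  "community": "k3-Gh7x", "access": "ro"},
]

def snmp_findings(device):
    """Return a list of policy violations for one device's SNMP settings."""
    findings = []
    if device["community"].lower() in DEFAULT_COMMUNITIES:
        findings.append("default community string")
    if device["access"] != "ro":
        findings.append("read-write access enabled")
    return findings
```

Here the printer would be flagged twice (default string, read-write access) while the router passes; a real audit would also confirm that SNMP is reachable only from a small set of management hosts.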

Network Security Testing:

  • Regularly test the security of all of the following computers on the network: clients, servers, switches, routers, firewalls and intrusion detection systems.

  • Do this after any major configuration changes on the network.

Block Certain E-Mail Attachment Types:

There are numerous kinds of executable file attachments that many organizations do not need to routinely distribute via e-mail. If possible, block these at the perimeter as a countermeasure against the malicious code threat. The specific file types that can be blocked are:

.bas .hta .msp .url
.bat .inf .mst .vb
.chm .ins .pif .vbe
.cmd .isp .pl .vbs
.com .js .reg .ws
.cpl .jse .scr .wsc
.crt .lnk .sct .wsf
.exe .msi .shs .wsh

It may be prudent to add or remove file types from this list depending upon operational realities. For example, it may be practical to block files from the Microsoft Office family, all of which can contain an executable component. Most notable are Microsoft Access files, which, unlike other members of the Office family, have no intrinsic protection against malicious macros.
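A mail gateway's extension filter boils down to a simple set-membership check. The sketch below encodes the table above as a blocklist; the function name is illustrative, and real gateways must also handle tricks such as double extensions (e.g., "report.txt.exe", which this check does catch by looking only at the final extension).

```python
import os

# The extension table above, expressed as a blocklist.
BLOCKED = {
    ".bas", ".bat", ".chm", ".cmd", ".com", ".cpl", ".crt", ".exe",
    ".hta", ".inf", ".ins", ".isp", ".js",  ".jse", ".lnk", ".msi",
    ".msp", ".mst", ".pif", ".pl",  ".reg", ".scr", ".sct", ".shs",
    ".url", ".vb",  ".vbe", ".vbs", ".ws",  ".wsc", ".wsf", ".wsh",
}

def is_blocked(filename):
    """Return True if the attachment's final extension is on the blocklist."""
    return os.path.splitext(filename)[1].lower() in BLOCKED
```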


Guidelines to Protect Information

In the modern world of computer and information technology, personal computers and on-line and Internet access have placed the power of the computer into the hands of users. These users develop and use many different types of computer applications and perform other data processing functions that previously were done only by computer operations personnel. These advances have greatly improved our efficiency and effectiveness, but they have also presented a serious challenge in achieving adequate data security. This section will make you aware of some of the undesirable things that can happen to data and will provide some practical solutions for reducing your risk from these threats.

Some common-sense protective measures can reduce the risk of loss, damage, or disclosure of information. Following are the most important areas of information systems controls that help ensure the system is properly used, resistant to disruptions, and reliable.

Make certain no one can impersonate you. If a password is used to verify your identity, this is the key to system security. Do not disclose your password to anyone, or allow anyone to observe your password as you enter it during the sign-on process. If you choose your own password, avoid selecting a password with any personal associations, or one that is very simple or short. The aim is to select a password that would be difficult to guess or derive. "1REDDOG" would be a better password than "DUKE."
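The selection advice above can be sketched as a small check: reject passwords that are too short or that contain a personal association. The length threshold and the personal-word list are assumptions for illustration.

```python
# Reject short passwords or ones built from personal associations.
def acceptable_password(password, personal_words):
    if len(password) < 7:            # assumed minimum length
        return False
    lowered = password.lower()
    return not any(word.lower() in lowered for word in personal_words)

# Hypothetical personal associations for one user: pet name, surname, birth year.
personal = ["duke", "smith", "1975"]

weak = acceptable_password("DUKE", personal)       # short and personal
better = acceptable_password("1REDDOG", personal)  # longer, no association
```

A check like this is a floor, not a ceiling: a password can pass it and still be guessable, which is why the text also recommends periodic changes.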

If your system allows you to change your own password, do so regularly. Find out what your agency requires, and change passwords at least that frequently. Periodic password changes keep undetected intruders from continuously using the password of a legitimate user.

After you are logged on, the computer will attribute all activity to your user id. Therefore, never leave your terminal without logging off, even for a few minutes. Always log off or otherwise inactivate your terminal so no one can perform any activity under your user id while you are away from the area.

Safeguard sensitive information from disclosure to others. People often forget to lock up sensitive reports and computer media containing sensitive data when they leave their work areas. Information carelessly left on top of desks and in unlocked storage can be casually observed, or deliberately stolen. Every employee who works with sensitive information should have lockable space available for storage when information is not in use. If you aren't sure what information should be locked up or what locked storage is available, ask your manager.

While working, be aware of the visibility of data on your personal computer or terminal display screen. You may need to reposition equipment or furniture to eliminate over-the-shoulder viewing. Be especially careful near windows and in public areas. Label all sensitive diskettes and other computer media to alert other employees of the need to be especially careful. When no longer needed, sensitive information should be deleted or discarded in such a way that unauthorized individuals cannot recover the data. Printed reports should be finely shredded, while data on magnetic media should be overwritten. Files that are merely deleted are not really erased and can still be recovered.
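The overwrite-before-delete advice for magnetic media can be sketched as below; the function name and pass count are assumptions.

```python
import os
import tempfile

# Overwrite a file's contents with random bytes before removing it, since
# a merely-deleted file can often be recovered.
def overwrite_and_delete(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())   # push the overwrite to the device
    os.remove(path)

f = tempfile.NamedTemporaryFile(delete=False)
f.write(b"sensitive figures")
f.close()

overwrite_and_delete(f.name)
gone = not os.path.exists(f.name)
```

One caveat: on journaling, copy-on-write, or flash-based storage, overwriting in place may not destroy every copy of the data, so organizations with strict requirements use dedicated sanitization tools or physical destruction instead.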

Install physical security devices or software on personal computers.

The value and popularity of personal computers make theft a big problem, especially in low-security office areas. Relatively inexpensive hardware devices greatly reduce the risk of equipment loss. Such devices involve lock-down cables or enclosures that attach equipment to furniture. Another approach is to place equipment in lockable cabinets.

When data is stored on a hard disk, take some steps to keep unauthorized individuals from accessing that data. A power lock device only allows key-holders to turn on power to the personal computer. Where there is a need to segregate information between multiple authorized users of a personal computer, additional security in the form of software is probably needed. Specific files could be encrypted to make them unintelligible to unauthorized staff, or access control software can divide storage space among authorized users, restricting each user to their own files.
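On a multi-user machine, the simplest form of "restricting each user to their own files" is owner-only file permissions. A minimal sketch, assuming a Unix-style permission model; the helper name is illustrative:

```python
import os
import stat
import tempfile

# Restrict a file to owner read/write only (mode 0600), so other
# accounts on the same machine cannot read or modify it.
def restrict_to_owner(path):
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
    return stat.S_IMODE(os.stat(path).st_mode)

f = tempfile.NamedTemporaryFile(delete=False)
f.close()
mode = restrict_to_owner(f.name)
os.unlink(f.name)
```

Permissions protect against other users of the same system, not against someone who steals the disk; for that case, encrypting the files themselves, as the paragraph above suggests, is the stronger control.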

Avoid costly disruptions caused by data or hardware loss. Disruptions and delays are expensive. No one enjoys working frantically to re-enter work, do the same job twice, or fix problems while new work piles up. Most disruptions can be prevented, and the impact of disruptions can be minimized by advance planning.

Proper environmental conditions and power supplies minimize equipment outages and information loss. Many electrical circuits in office areas do not constitute an adequate power source, so dedicated circuits for computer systems should be considered. Make certain that your surroundings meet the essential requirements for correct equipment operation. Cover equipment when not in use to protect it from dust, water leaks, and other hazards.

For protection from accidental or deliberate destruction of data, regular data backups are essential. Complete system backups should be taken at intervals determined by how quickly information changes or by the volume of transactions. Backups should be stored in another location, to guard against the possibility of original and backup copies being destroyed by the same fire or other disaster.

Maintain the authorized hardware/software configuration. Some organizations have been affected by computer "viruses" acquired through seemingly useful or innocent software obtained from public access bulletin boards or other sources; others have been liable for software illegally copied by employees. The installation of unauthorized hardware can cause damage, invalidate warranties, or have other negative consequences. Install only hardware or software that has been acquired through normal acquisition procedures and comply with all software licensing agreement requirements.

Threats to Computer Systems

Computer systems are vulnerable to many threats that can inflict various types of damage resulting in significant losses. The effects of various threats vary considerably: some affect the confidentiality or integrity of data while others affect the availability of a system. This section discusses some of the more common threats in the risky environment in which systems operate today. The threats and associated losses listed here were selected based on their prevalence and significance in the current computing environment and their expected growth in the future.

Errors and Omissions

Errors and omissions are an important threat to data and system integrity. These errors are caused not only by data entry clerks processing hundreds of transactions per day, but also by all types of users who create and edit data. Users, data entry clerks, system operators, and programmers frequently make errors that contribute directly or indirectly to security problems. In some cases, the error is the threat, such as a data entry error or a programming error that crashes a system. In other cases, the errors create vulnerabilities. Programming and development errors, or "bugs," can range in severity from benign to catastrophic. While there have been great improvements in program quality, as reflected in decreasing errors per 1,000 lines of code, the concurrent growth in program size often seriously diminishes the beneficial effects of these program quality enhancements. Installation and maintenance errors are another source of security problems.

Fraud and Theft

Computer systems can be exploited for fraud and theft, both by "automating" traditional methods of fraud and by using new methods. For example, individuals may use a computer to skim small amounts of money from a large number of financial accounts, assuming that small discrepancies may not be investigated. Financial systems are not the only ones at risk. Systems that control access to any resource are targets. Computer fraud and theft can be committed by insiders or outsiders, but insiders (i.e., authorized users of a system) are responsible for the majority of fraud. Since insiders have both access to and familiarity with the victim computer system (including what resources it controls and its flaws), authorized system users are in a better position to commit crimes. In addition to the use of technology to commit fraud and theft, computer hardware and software may themselves be vulnerable to theft.

Employee Sabotage

Employees are most familiar with their employer's computers and applications, including knowing what actions might cause the most damage, mischief, or sabotage. The downsizing of organizations in both the public and private sectors has created a group of individuals with organizational knowledge, who may retain potential system access (e.g., if system accounts are not deleted in a timely manner). The number of incidents of employee sabotage is believed to be much smaller than the instances of theft, but the cost of such incidents can be quite high. The motivation for sabotage can range from altruism to revenge.

Loss of Physical and Infrastructure Support

The loss of supporting infrastructure includes power failures (outages, spikes, and brownouts), loss of communications, water outages and leaks, sewer problems, lack of transportation services, fire, flood, civil unrest, and strikes.

Malicious Hackers

The term malicious hackers, sometimes called crackers, refers to those who break into computers without authorization. They can include both outsiders and insiders. The hacker threat should be considered in terms of past and potential future damage. Although current losses due to hacker attacks are significantly smaller than losses due to insider theft and sabotage, the hacker problem is widespread and serious.

The hacker threat often receives more attention than more common and dangerous threats. The U.S. Department of Justice's Computer Crime Unit suggests three reasons for this. First, the hacker threat is a more recently encountered threat. Second, organizations do not know the purposes of a hacker -- some hackers browse, some steal, some damage. This inability to identify purposes can suggest that hacker attacks have no limitations. Third, hacker attacks make people feel vulnerable, particularly because their identity is unknown.

Industrial Espionage

Industrial espionage is the act of gathering proprietary data from private companies or the government for the purpose of aiding one or more other companies. Industrial espionage can be perpetrated either by companies seeking to improve their competitive advantage or by governments seeking to aid their domestic industries. Foreign industrial espionage carried out by a government is often referred to as economic espionage. Industrial espionage is on the rise. A 1992 study sponsored by the American Society for Industrial Security (ASIS) found that proprietary business information theft had increased 260 percent since 1985. The data indicated 30 percent of the reported losses in 1991 and 1992 had foreign involvement. The study also found that 58 percent of thefts were perpetrated by current or former employees.

Malicious Code

Malicious code refers to viruses, worms, trojan horses, logic bombs, and other "uninvited" software. Sometimes mistakenly associated only with personal computers, malicious code can attack other platforms as well.

Threats to Personal Privacy

The accumulation of vast amounts of electronic information about individuals by governments, credit bureaus, and private companies, combined with the ability of computers to monitor, process, and aggregate large amounts of information about individuals, has created a threat to individual privacy. As more of these cases come to light, many individuals are becoming increasingly concerned about threats to their personal privacy. To guard against such intrusion, Congress has enacted legislation over the years, such as the Privacy Act of 1974 and the Computer Matching and Privacy Protection Act of 1988, which defines the boundaries of the legitimate uses of personal information collected by the government.

Computer Security Policy

In discussions of computer security, the term policy has more than one meaning. Policy can mean senior management's directives to create a computer security program, establish its goals, and assign responsibilities. The term is also used to refer to the specific security rules for particular systems. Additionally, policy may refer to entirely different matters, such as the specific managerial decisions setting an organization's e-mail privacy policy or fax security policy.

In this chapter the term computer security policy is defined as the "documentation of computer security decisions", which covers all the types of policy described above, regardless of the level of manager who sets the particular policy. In making these decisions, managers face hard choices involving resource allocation, competing objectives, and organizational strategy related to protecting both technical and information resources as well as guiding employee behavior. Managers at all levels make choices that can result in policy, with the scope of the policy's applicability varying according to the scope of the manager's authority.

Managerial decisions on computer security issues vary greatly. To differentiate among various kinds of policy, this chapter categorizes them into three basic types:

  • Program policy is used to create an organization's computer security program. It establishes the security program and assigns program management and supporting responsibilities.
  • Issue-specific policies address specific issues of concern to the organization. Both new technologies and the appearance of new threats often require the creation of issue-specific policies.
  • System-specific policies focus on decisions taken by management to protect a particular system. An example would be the direction to be used in establishing an access control list or in training users on what actions are permitted. More information describing this policy type is given below.

Procedures, standards, and guidelines are used to describe how these policies will be implemented within an organization. Some organizations issue overall computer security manuals, regulations, handbooks, or similar documents. These may mix policy, guidelines, standards, and procedures, since they are closely linked. While manuals and regulations can serve as important tools, it is often useful if they clearly distinguish between policy and its implementation. This can help in promoting flexibility and cost-effectiveness by offering alternative implementation approaches to achieving policy goals.

It is helpful to consider a two-level model for system security policy: security objectives and operational security rules, which together comprise the system-specific policy. It is often accompanied by implementing procedures and guidelines. The implementation of the policy in technology is closely linked to the policy itself and often difficult to distinguish from it.

Security objectives consist of a series of statements that describe meaningful actions about explicit resources. These objectives should be based on system functional or mission requirements, but should state the security actions that support the requirements. An example would be: "Only individuals in the accounting and personnel departments are authorized to provide or modify information used in payroll processing".

After management determines the security objectives, the rules for operating a system can be laid out, for example, to define authorized and unauthorized modification. Who (by job category, organization placement, or name) can do what (e.g., modify, delete) to which specific classes and records of data, and under what conditions. An example would be: "Personnel specialists may update salary information. No employees may update their own records".
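An operational rule of the kind quoted above can be expressed directly in code. The sketch below is a hypothetical illustration -- the role names, employee identifiers, and function name are invented for this example, not part of any prescribed implementation:

```python
# A minimal sketch of an operational security rule: personnel
# specialists may update salary information, but no employee may
# update their own record. Role names and IDs are hypothetical.

def may_update_salary(actor_role: str, actor_id: str, record_owner_id: str) -> bool:
    """Return True if the actor is allowed to update this salary record."""
    if actor_role != "personnel_specialist":
        return False          # only personnel specialists may modify salaries
    if actor_id == record_owner_id:
        return False          # no employee may update their own record
    return True

# Example checks:
print(may_update_salary("personnel_specialist", "emp42", "emp07"))  # True
print(may_update_salary("personnel_specialist", "emp42", "emp42"))  # False: own record
print(may_update_salary("accountant", "emp13", "emp07"))            # False: wrong role
```

Writing the rule this explicitly makes it easy to see who (by job category) can do what to which records, which is exactly what the policy statement calls for.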

In general, good practice suggests a reasonably detailed formal statement of the access privileges for a system. Documenting access controls policy will make it substantially easier to follow and to enforce. Another area that normally requires a detailed and formal statement is the assignment of security responsibilities. Other areas that should be addressed are the rules for system usage and the consequences of noncompliance.

Effective policies ultimately result in the development and implementation of a better computer security program and better protection of systems and information.

To be effective, policy requires visibility. Visibility aids implementation of policy by helping to ensure policy is fully communicated throughout the organization. Management presentations, videos, panel discussions, guest speakers, question/answer forums, and newsletters increase visibility. The organization's computer security training and awareness program can effectively notify users of new policies. It also can be used to familiarize new employees with the organization's policies.

Computer security policies should be introduced in a manner that ensures that management's unqualified support is clear, especially in environments where employees feel inundated with policies, directives, guidelines, and procedures. The organization's policy is the vehicle for emphasizing management's commitment to computer security and making clear their expectations for employee performance, behavior, and accountability.


What Makes a Good Security Policy?

The characteristics of a good security policy are:

(1) It must be implementable through system administration procedures, publishing of acceptable use guidelines, or other appropriate methods.

(2) It must be enforceable with security tools, where appropriate, and with sanctions, where actual prevention is not technically feasible.

(3) It must clearly define the areas of responsibility for the users, administrators, and management.

The components of a good security policy include:

(1) Computer Technology Purchasing Guidelines which specify required, or preferred, security features. These should supplement existing purchasing policies and guidelines.

(2) A Privacy Policy which defines reasonable expectations of privacy regarding such issues as monitoring of electronic mail, logging of keystrokes, and access to users' files.

(3) An Access Policy which defines access rights and privileges to protect assets from loss or disclosure by specifying acceptable use guidelines for users, operations staff, and management. It should provide guidelines for external connections, data communications, connecting devices to a network, and adding new software to systems. It should also specify any required notification messages (e.g., connect messages should provide warnings about authorized usage and line monitoring, and not simply say "Welcome").

(4) An Accountability Policy which defines the responsibilities of users, operations staff, and management. It should specify an audit capability, and provide incident handling guidelines (i.e., what to do and who to contact if a possible intrusion is detected).

(5) An Authentication Policy which establishes trust through an effective password policy, and by setting guidelines for remote location authentication and the use of authentication devices (e.g., one-time passwords and the devices that generate them).

(6) An Availability statement which sets users' expectations for the availability of resources. It should address redundancy and recovery issues, as well as specify operating hours and maintenance down-time periods. It should also include contact information for reporting system and network failures.

(7) An Information Technology System & Network Maintenance Policy which describes how both internal and external maintenance people are allowed to handle and access technology. One important topic to be addressed here is whether remote maintenance is allowed and how such access is controlled. Another area for consideration here is outsourcing and how it is managed.

(8) A Violations Reporting Policy that indicates which types of violations (e.g., privacy and security, internal and external) must be reported and to whom the reports are made. A non-threatening atmosphere and the possibility of anonymous reporting will result in a greater probability that a violation will be reported if it is detected.

(9) Supporting Information which provides users, staff, and management with contact information for each type of policy violation; guidelines on how to handle outside queries about a security incident, or information which may be considered confidential or proprietary; and cross-references to security procedures and related information, such as company policies and governmental laws and regulations.
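Several of these components lend themselves to mechanical enforcement. For instance, the Authentication Policy in item (5) is usually backed by concrete password rules. The sketch below enforces a hypothetical set of rules (minimum length, a numeral, an upper-case letter); the specific thresholds are illustrative assumptions, not requirements from this document:

```python
import string

# A sketch of checking a candidate password against an illustrative
# password policy of the kind an Authentication Policy might require.
# The specific rules (8 characters, one numeral, one upper-case
# letter) are assumed for this example.

def meets_policy(password: str) -> bool:
    """Return True if the candidate password satisfies all the rules."""
    if len(password) < 8:
        return False                                   # minimum length
    if not any(c in string.digits for c in password):
        return False                                   # at least one numeral
    if password.lower() == password:
        return False                                   # at least one upper-case letter
    return True

print(meets_policy("Summer2024"))  # True
print(meets_policy("password"))    # False: no numeral, no upper case
```

A check like this can run at password-change time, turning a written policy clause into something the system itself enforces.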

There may be regulatory requirements that affect some aspects of your security policy (e.g., line monitoring). The creators of the security policy should consider seeking legal assistance in the creation of the policy. At a minimum, the policy should be reviewed by legal counsel.

Once your security policy has been established it should be clearly communicated to users, staff, and management. Having all personnel sign a statement indicating that they have read, understood, and agreed to abide by the policy is an important part of the process. Finally, your policy should be reviewed on a regular basis to see if it is successfully supporting your security needs.

Computer Security Risk Management

Risk is the possibility of something adverse happening. Risk management is the process of assessing risk, taking steps to reduce risk to an acceptable level, and maintaining that level of risk. People manage risks daily: they recognize various threats to their best interests and take precautions to guard against them or to minimize their effects.

The first step in assessing risk is to identify the system under consideration, the part of the system that will be analyzed, and the analytical method including its level of detail and formality. Risk has many different components: assets, threats, vulnerabilities, safeguards, consequences, and likelihood. This examination normally includes gathering data about the threatened area and synthesizing and analyzing the information to make it useful.

A risk management effort should focus on those areas that result in the greatest consequence to the organization (i.e., can cause the most harm). This can be done by ranking threats and assets. A risk management methodology does not necessarily need to analyze each of the components of risk separately. For example, assets/consequences or threats/likelihoods may be analyzed together.
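The ranking described above can be sketched as a simple scoring exercise, combining the threat/likelihood and asset/consequence components into a single score per threat. The threat names and the 1-5 scores below are hypothetical, chosen only to illustrate the mechanics:

```python
# A sketch of ranking threats by (likelihood x consequence) so the
# risk management effort can focus on the greatest exposures first.
# The threats and scores are hypothetical.

threats = [
    # (threat, likelihood 1-5, consequence 1-5)
    ("insider fraud",     4, 5),
    ("hacker break-in",   3, 3),
    ("power failure",     4, 2),
    ("employee sabotage", 2, 5),
]

# Score each threat and sort so the greatest exposure comes first.
ranked = sorted(threats, key=lambda t: t[1] * t[2], reverse=True)

for name, likelihood, consequence in ranked:
    print(f"{name}: {likelihood * consequence}")
```

Note that this analyzes likelihood and consequence together rather than separately, which, as the text observes, a risk management methodology is free to do.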

When analyzing risk, we should concentrate on those threats most likely to occur and affect important assets. The risk assessment is used to support two related functions: the acceptance of risk and the selection of cost-effective controls. To accomplish these functions, the risk assessment must produce a meaningful output that reflects what is truly important to the organization. Limiting the risk interpretation activity to the most significant risks is another way that the risk management process can be focused to reduce the overall effort while still yielding useful results.

Risk mitigation involves the selection and implementation of security controls to reduce risk to a level acceptable to management, within applicable constraints. The process of risk mitigation involves the following activities:

1. Selecting Safeguards - A primary function of computer security risk management is the identification of appropriate controls. In designing (or reviewing) the security of a system, it may be obvious that some controls should be added (e.g., because they are required by law or because they are clearly cost-effective). It may also be just as obvious that other controls may be too expensive (considering both monetary and nonmonetary factors).

2. Accepting Residual Risk - At some point, management needs to decide if the operation of the computer system is acceptable, given the kind and severity of remaining risks. This decision should take into account the limitations of the risk assessment.

3. Implementing Controls and Monitoring Effectiveness - The safeguards selected need to be effectively implemented. Moreover, to continue to be effective, risk management needs to be an ongoing process. This requires a periodic reassessment and improvement of safeguards and re-analysis of risks.

One method of selecting safeguards uses a "what if" analysis. With this method, the effect of adding various safeguards (and, therefore, reducing vulnerabilities) is tested to see what difference each makes with regard to cost, effectiveness, and other relevant factors. Another method is to categorize types of safeguards and recommend implementing them for various levels of risk. For example, stronger controls would be implemented on high-risk systems than on low-risk systems.

What Is a What If Analysis?

A what if analysis looks at the costs and benefits of various combinations of controls to determine the optimal combination for a particular circumstance. In this simple example (which addresses only one control), suppose that hacker break-ins alert agency computer security personnel to the security risks of using passwords. They may wish to consider replacing the password system with stronger identification and authentication mechanisms, or just strengthening their password procedures. First, the status quo is examined. The system in place puts minimal demands upon users and system administrators, but the agency has had three hacker break-ins in the last six months.

What if passwords are strengthened? Personnel may be required to change passwords more frequently or may be required to use a numeral or other non-alphabetic character in their password. There are no direct monetary expenditures, but staff and administrative overhead (e.g., training and replacing forgotten passwords) is increased. Estimates, however, are that this will reduce the number of successful hacker break-ins to three or four per year.

What if stronger identification and authentication technology is used? The agency may wish to implement stronger safeguards in the form of one-time cryptographic-based passwords so that, even if a password were obtained, it would be useless. Direct costs may be estimated at $45,000, and yearly recurring costs at $8,000. An initial training program would be required, at a cost of $17,500. The agency estimates, however, that this would prevent virtually all break-ins.

Computer security personnel use the results of this analysis to make a recommendation to their management officer, who then weighs the costs and benefits, takes into account other constraints (e.g., budget), and selects a solution.
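The comparison above can be made concrete with a little arithmetic. The break-in rates and the one-time and recurring dollar figures come from the example; the assumed cost per break-in ($20,000) and the password-overhead estimate are hypothetical, introduced only to complete the calculation:

```python
# A sketch of the what-if cost comparison. One-time and recurring
# costs for the one-time-password option come from the example above;
# COST_PER_BREAK_IN and the password-overhead figure are assumed.

COST_PER_BREAK_IN = 20_000   # assumed cost per incident
YEARS = 3                    # planning horizon for the comparison

options = {
    # name: (one-time cost, yearly recurring cost, expected break-ins/yr)
    "status quo":         (0,      0,     6),    # 3 break-ins per 6 months
    "stronger passwords": (0,      2_500, 3.5),  # admin overhead assumed
    "one-time passwords": (62_500, 8_000, 0),    # $45,000 + $17,500 training
}

totals = {}
for name, (one_time, recurring, break_ins) in options.items():
    totals[name] = one_time + YEARS * (recurring + break_ins * COST_PER_BREAK_IN)
    print(f"{name}: ${totals[name]:,.0f} over {YEARS} years")
```

Under these assumptions the stronger technology wins over a three-year horizon, but a different cost per break-in or a shorter horizon could change the answer, which is why management weighs the output against budget and other constraints rather than accepting it mechanically.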

Good documentation of risk assessments will make later risk assessments less time consuming and, if a question arises, will help explain why particular security decisions were made.

Developing IT System Security Plans

Introduction

The objective of system security planning is to improve protection of information technology (IT) resources. The purpose of the security plan is to provide an overview of the security requirements of the system and describe the controls in place or planned for meeting those requirements. A typical computer system security plan briefly describes the important security considerations for the system and provides references to more detailed documents, such as system security plans, contingency plans, training programs, accreditation statements, incident handling plans, or audit results. This enables the plan to be used as a management tool without requiring repetition of existing documents. For smaller systems, the plan may include all security documentation. As with other security documents, if a plan addresses specific vulnerabilities or other information that could compromise the system, it should be kept private. It also has to be kept up-to-date.

The recommended approach is to draw up the plan at the beginning of the computer system life cycle. Security, like other aspects of a computer system, is best managed if planned for throughout the computer system life cycle. It has long been a tenet of the computer community that it costs ten times more to add a feature in a system after it has been designed than to include the feature in the system at the initial design phase. The principal reason for implementing security during a system's development is that it is more difficult to implement it later (as is usually reflected in the higher costs of doing so). It also tends to disrupt ongoing operations.

The system security plan also delineates responsibilities and expected behavior of all individuals who access the system. The security plan should be viewed as documentation of the structured process of planning adequate, cost-effective security protection for a system. It should reflect input from various managers with responsibilities concerning the system, including information owners, the system operator, and the system security manager. Additional information may be included in the basic plan and the structure and format organized according to agency needs, so long as the major sections described in this document are adequately covered and readily identifiable. In order for the plans to adequately reflect the protection of the resources, a management official must authorize a system to process information or operate. The authorization of a system to process information, granted by a management official, provides an important quality control. By authorizing processing in a system, the manager accepts its associated risk.

Management authorization should be based on an assessment of management, operational, and technical controls. Since the security plan establishes and documents the security controls, it should form the basis for the authorization, supplemented by more specific studies as needed. In addition, a periodic review of controls should also contribute to future authorizations. Re-authorization should occur prior to a significant change in processing, but at least every three years. It should be done more often where there is a high risk and potential magnitude of harm.

System Analysis

Once completed, a security plan will contain technical information about the system, its security requirements, and the controls implemented to provide protection against its risks and vulnerabilities. You will need to perform an analysis of the system to determine the boundaries of the system and the type of system.

System Boundaries

Defining what constitutes a "system" for the purposes of this guide requires an analysis of system boundaries and organizational responsibilities. A system, as defined here, is identified by constructing logical boundaries around a set of processes, communications, storage, and related resources. The elements within these boundaries constitute a single system requiring a security plan. Each element of the system must:

  • Be under the same direct management control;

  • Have the same function or mission objective;

  • Have essentially the same operating characteristics and security needs; and

  • Reside in the same general operating environment.

All components of a system need not be physically connected (e.g., [1] a group of stand-alone personal computers (PCs) in an office; [2] a group of PCs placed in employees' homes under defined telecommuting program rules; [3] a group of portable PCs provided to employees who require mobile computing capability for their jobs; and [4] a system with multiple identical configurations that are installed in locations with the same environmental and physical safeguards).

Multiple Similar Systems

An organization may have systems that differ only in the responsible organization or the physical environment in which they are located (e.g., air traffic control systems). In such instances, it is appropriate and recommended to use plans that are identical except for those areas of difference. This approach provides consistent levels of protection for similar systems.


Confidentiality, Integrity and Availability in an Information Technology System Security Plan

Both information and information systems have distinct life cycles. It is important that the degree of sensitivity of information be assessed by considering the requirements for availability, integrity, and confidentiality of the information. This process should occur at the beginning of the information system's life cycle and be re-examined during each life cycle stage. The integration of security considerations early in the life cycle avoids costly retrofitting of safeguards. However, security requirements can be incorporated during any life cycle stage. The purpose of this section is to review the system requirements against the need for availability, integrity, and confidentiality. By performing this analysis, the value of the system can be determined. The value is one of the first major factors in risk management. A system may need protection for one or more of the following reasons:

  • Confidentiality - The system contains information that requires protection from unauthorized disclosure.
  • Integrity - The system contains information which must be protected from unauthorized, unanticipated, or unintentional modification.
  • Availability - The system contains information or provides services which must be available on a timely basis to meet mission requirements or to avoid substantial losses.

A security plan for an information technology system should describe, in general terms, the information handled by the system and the need for protective measures. It needs to relate the information handled to each of the three basic protection requirements above (confidentiality, integrity and availability). It includes a statement of the estimated risk and magnitude of harm resulting from the loss, misuse, or unauthorized access to or modification of information in the system. To the extent possible, it describes this impact in terms of cost, inability to carry out mandated functions, timeliness, etc. For each of the three categories (confidentiality, integrity and availability), it provides an evaluation, and indicates if the protection requirement is:

  • High - a critical concern of the system;

  • Medium - an important concern, but not necessarily paramount in the organization's priorities; or

  • Low - some minimal level of security is required, but not to the same degree as the previous two categories.
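The three ratings above can be captured in a simple, validated record as part of assembling the plan. The sketch below is a hypothetical illustration; the function name and the example system are invented for this example:

```python
# A sketch of recording and validating the three protection-requirement
# ratings (confidentiality, integrity, availability) for a system.
# The example ratings are hypothetical.

RATINGS = ("High", "Medium", "Low")

def rate_system(confidentiality: str, integrity: str, availability: str) -> dict:
    """Validate and collect the three protection-requirement ratings."""
    ratings = {"confidentiality": confidentiality,
               "integrity": integrity,
               "availability": availability}
    for requirement, level in ratings.items():
        if level not in RATINGS:
            raise ValueError(f"{requirement}: {level!r} is not one of {RATINGS}")
    return ratings

# Example: a hypothetical public weather-information system, where
# disclosure is not a concern but accuracy and uptime are critical.
print(rate_system("Low", "High", "High"))
```

Forcing each requirement to carry exactly one of the three levels keeps the evaluation consistent across systems and makes the plan easy to review.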

Examples of a General Protection Requirement Statement

A high degree of security for the system is considered mandatory to protect the confidentiality, integrity, and availability of information. The protection requirements for all applications are critical concerns for the system.

Or, confidentiality is not a concern for this system as it contains information intended for immediate release to the general public concerning severe storms. The integrity of the information, however, is extremely important to ensure that the most accurate information is provided to the public to allow them to make decisions about the safety of their families and property. The most critical concern is to ensure that the system is available at all times to acquire, process, and provide warning information immediately about life-threatening storms.

Example of Confidentiality Considerations

Evaluation: High, Medium, or Low

High - The application contains proprietary business information and other financial information, which, if disclosed to unauthorized sources, could cause unfair advantage for vendors, contractors, or individuals and could result in financial loss or adverse legal action to user organizations.

Medium - Security requirements for assuring confidentiality are of moderate importance. Having access to only small portions of the information has little practical purpose and the satellite imagery data does not reveal information involving national security.

Low - The mission of this system is to produce local weather forecast information that is made available to the news media forecasters and the general public at all times. None of the information requires protection against disclosure.

Example of Integrity Considerations

Evaluation: High, Medium, or Low

High - The application is a financial transaction system. Unauthorized or unintentional modification of this information could result in fraud, under or over payments of obligations, fines, or penalties resulting from late or inadequate payments, and loss of public confidence.

Medium - Assurance of the integrity of the information is required to the extent that destruction of the information would require significant expenditures of time and effort to replace. Although corrupted information would present an inconvenience to the staff, most information, and all vital information, is backed up by either paper documentation or on disk.

Low - The system mainly contains messages and reports. If these messages and reports were modified by unauthorized, unanticipated or unintentional means, employees would detect the modifications; however, these modifications would not be a major concern for the organization.

Example of Availability Considerations

Evaluation: High, Medium, or Low

High - The application contains personnel and payroll information concerning employees of the various user groups. Unavailability of the system could result in inability to meet payroll obligations and could cause work stoppage and failure of user organizations to meet critical mission requirements. The system requires 24-hour access.

Medium - Information availability is of moderate concern to the mission. Macintosh and IBM PC availability would be required within the four to five-day range. Information backups maintained at off-site storage would be sufficient to carry on with limited office tasks.

Low - The system serves primarily as a server for e-mail for the seven users of the system. Conference messages are duplicated between Seattle and D.C. servers. Should the system become unavailable, the D.C. users would connect to the Seattle server and continue to work with only the loss of old mail messages.


Physical and Environmental Protection

Physical and environmental security controls are implemented to protect the facility housing system resources, the system resources themselves, and the facilities used to support their operation. An organization's physical and environmental security program should address the topics explained below. This section briefly describes the physical and environmental security controls that should be in place for a major application.

Explanation of Physical and Environment Security

Access Controls

Physical access controls restrict the entry and exit of personnel (and often equipment and media) from an area, such as an office building, suite, data center, or room containing a local area network (LAN) server. Physical access controls should address not only the area containing system hardware, but also locations of wiring used to connect elements of the system, supporting services (such as electric power), backup media, and any other elements required for the system's operation. It is important to review the effectiveness of physical access controls in each area, both during normal business hours and at other times -- particularly when an area may be unoccupied.

Environmental Conditions

For many types of computer equipment, strict environmental conditions must be maintained. Manufacturer's specifications should be observed for temperature, humidity, and electrical power requirements.

Control of Media

The media upon which information is stored should be carefully controlled. Transportable media such as tapes and cartridges should be kept in secure locations, and accurate records kept of the location and disposition of each. In addition, media from an external source should be subject to a check-in process to ensure it is from an authorized source.

Control of Physical Hazards

Each area should be surveyed for potential physical hazards. Fire and water are two of the most damaging forces with regard to computer systems. Opportunities for loss should be minimized by an effective fire detection and suppression mechanism and by planning that reduces the danger of leaks or flooding. Other physical controls include reducing the visibility of the equipment and strictly limiting access to the area or equipment.

Fire Safety Factors

Building fires are a particularly important security threat because of the potential for complete destruction of both hardware and data, the risk to human life, and the pervasiveness of the damage. Smoke, corrosive gases, and high humidity from a localized fire can damage systems throughout an entire building. Consequently, it is important to evaluate the fire safety of buildings that house systems.

Failure of Supporting Utilities

Systems and the people who operate them need to have a reasonably well-controlled operating environment. Consequently, failures of electric power, heating and air-conditioning systems, water, sewage, and other utilities will usually cause a service interruption and may damage hardware. Organizations should ensure that these utilities, including their many elements, function properly.

Structural Collapse

Organizations should be aware that a building may be subjected to a load greater than it can support. Most commonly this results from an earthquake, a snow load on the roof beyond design criteria, an explosion that displaces or cuts structural members, or a fire that weakens structural members.

Plumbing Leaks

While plumbing leaks do not occur every day, they can be seriously disruptive. An organization should know the location of plumbing lines that might endanger system hardware and take steps to reduce risk (e.g., moving hardware, relocating plumbing lines, and identifying shutoff valves).

Interception of Data

Depending on the type of data a system processes, there may be a significant risk if the data is intercepted. Organizations should be aware that there are three routes of data interception: direct observation, interception of data transmission, and electromagnetic interception.

Mobile and Portable Systems

The analysis and management of risk usually has to be modified if a system is installed in a vehicle or is portable, such as a laptop computer. The system in a vehicle will share the risks of the vehicle, including accidents and theft, as well as regional and local risks. Organizations should:

  • Securely store laptop computers when they are not in use; and
  • Encrypt data files on stored media, when cost-effective, as a precaution against disclosure of information if a laptop computer is lost or stolen.

Computer Room Example

Appropriate and adequate controls will vary depending on the individual system requirements. The example list shows the types of controls for an application residing on a system in a computer room. The list is not intended to be all-inclusive or to imply that all systems should have all controls listed.

Production, Input/Output Controls

The information technology system security plan should provide a synopsis of the procedures in place that support the operations of the application. Below is a sampling of topics that should be reported.

For Computer Room:

In Place:

  • Card keys for building and work-area entrances
  • Twenty-four hour guards at all entrances/exits
  • Cipher lock on computer room door
  • Raised floor in computer room
  • Dedicated cooling system
  • Humidifier in tape library
  • Emergency lighting in computer room
  • Four fire extinguishers rated for electrical fires
  • One B/C-rated fire extinguisher
  • Smoke, water, and heat detectors
  • Emergency power-off switch by exit door
  • Surge suppressor
  • Emergency replacement server
  • Zoned dry pipe sprinkler system
  • Uninterruptible power supply for LAN servers
  • Power strips/suppressors for peripherals
  • Power strips/suppressors for computers
  • Controlled access to file server room

Planned:

  • Plastic sheets for water protection
  • Closed-circuit television monitors

Procedures to Use:

  • Procedures for recognizing, handling, and reporting incidents and/or problems.
  • Procedures to ensure unauthorized individuals cannot read, copy, alter, or steal printed or electronic information.
  • Procedures for ensuring that only authorized users pick up, receive, or deliver input and output information and media.
  • Audit trails for receipt of sensitive inputs/outputs.
  • Procedures for restricting access to output products.
  • Procedures and controls used for transporting or mailing media or printed output.
  • Internal/external labeling for appropriate sensitivity (e.g., Privacy Act, Proprietary).
  • External labeling with special handling instructions (e.g., log/inventory identifiers, controlled access, special storage instructions, release or destruction dates).
  • Audit trails for inventory management.
  • Media storage vault or library physical and environmental protection controls and procedures.
  • Procedures for sanitizing electronic media for reuse (e.g., overwrite or degaussing of electronic media).
  • Procedures for controlled storage, handling, or destruction of spoiled media or media that cannot be effectively sanitized for reuse.
  • Procedures for shredding or other destructive measures for hardcopy media when no longer required.
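The media-sanitization procedure above (overwriting electronic media for reuse) can be sketched in software. This is a minimal, hedged illustration only: the function name and pass count are assumptions, and a simple software overwrite is not adequate for all media types (e.g., media with wear leveling or bad-sector remapping may retain data, and degaussing or physical destruction may be required instead).

```python
import os
import tempfile

def overwrite_file(path, passes=3):
    """Overwrite a file's contents with random bytes before deletion --
    a simple software sanitization step, not sufficient for all media."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace every byte of the file
            f.flush()
            os.fsync(f.fileno())        # force the overwrite to disk
    os.remove(path)

# Example: sanitize a temporary file containing sensitive data
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sensitive data")
overwrite_file(path)
```

Whether overwriting suffices, and how many passes are needed, depends on the sensitivity of the data and the media type; that determination belongs in the sanitization procedure itself.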


Data Integrity/Validation Controls

Data integrity controls are used to protect data from accidental or malicious alteration or destruction and to provide assurance to the user that the information meets expectations about its quality and that it has not been altered. Validation controls refer to tests and evaluations used to determine compliance with security specifications and requirements.

Security controls should be in place providing assurance to users that the information has not been altered and that the system functions as expected. The following questions are examples of some of the controls that fit in this category:

  • Is virus detection and elimination software installed? If so, are there procedures for:
    • Updating virus signature files;
    • Automatic and/or manual virus scans (automatic scan on network log-in, automatic scan on client/server power on, automatic scan on diskette insertion, automatic scan on download from an unprotected source such as the Internet, scan for macro viruses); and
    • Virus eradication and reporting?
  • Are reconciliation routines used by the system (e.g., checksums, hash totals, record counts)? Include a description of the actions taken to resolve any discrepancies.
  • Are password crackers/checkers used?
  • Are integrity verification programs used by applications to look for evidence of data tampering, errors, and omissions? Techniques include consistency and reasonableness checks and validation during data entry and processing.
  • Describe the integrity controls used within the system.
  • Are intrusion detection tools installed on the system? Describe where the tool(s) are placed, the type of processes detected/reported, and the procedures for handling intrusions.
  • Is system performance monitoring used to analyze system performance logs in real time to look for availability problems, including active attacks, and system and network slowdowns and crashes?
  • Is penetration testing performed on the system? If so, what procedures are in place to ensure it is conducted appropriately?
  • Is message authentication used in the application to ensure that the sender of a message is known and that the message has not been altered during transmission?

State whether message authentication has been determined to be appropriate for your system. If so, describe the methodology.
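Message authentication, as described above, ensures that the sender of a message is known and that the message has not been altered in transit. A common methodology is a keyed hash (HMAC); the sketch below is illustrative only, and the shared key and message are made-up examples (in practice the key would be established out of band and managed under the key-management procedures discussed elsewhere in the plan):

```python
import hmac
import hashlib

SHARED_KEY = b"example-shared-secret"   # hypothetical key, agreed out of band

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer 100 units to account 42"
tag = sign(msg)
assert verify(msg, tag)                                       # unaltered message verifies
assert not verify(b"transfer 900 units to account 42", tag)   # tampering is detected
```

Only a holder of the shared key can produce a valid tag, which is what ties the message to a known sender.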


Documentation

Documentation is a security control in that it explains how software/hardware is to be used and formalizes security and operational procedures specific to the system. Documentation for a system includes descriptions of the hardware and software, policies, standards, procedures, and approvals related to automated information system security in the application and the support system(s) on which it is processed, to include backup and contingency activities, as well as descriptions of user and operator procedures. Documentation should be coordinated with the general support system and/or network manager(s) to ensure that adequate application and installation documentation is maintained to provide continuity of operations. List the documentation maintained for the application. The example list is provided to show the type of documentation that would normally be maintained for a system and is not intended to be all-inclusive or imply that all systems should have all items listed.

Examples of Documentation for a Major Application:

  • Vendor-supplied documentation of hardware
  • Vendor-supplied documentation of software
  • Application requirements
  • Application security plan
  • General support system(s) security plan(s)
  • Application program documentation and specifications
  • Testing procedures and results
  • Standard operating procedures
  • Emergency procedures
  • Contingency plans
  • Memoranda of understanding with interfacing systems
  • Disaster recovery plans
  • User rules of behavior
  • User manuals
  • Risk assessment
  • Backup procedures
  • Authorize-processing documents and statements


Technical Controls

Technical controls focus on security controls that the computer system executes. The controls can provide automated protection from unauthorized access or misuse, facilitate detection of security violations, and support security requirements for applications and data. The implementation of technical controls, however, always requires significant operational considerations and should be consistent with the management of security within the organization. In this section, describe the technical control measures (in place or planned) that are intended to meet the protection requirements of the major application.

Identification and Authentication

Identification and Authentication is a technical measure that prevents unauthorized people (or unauthorized processes) from entering an IT system. Access control usually requires that the system be able to identify and differentiate among users. For example, access control is often based on least privilege, which refers to the granting to users of only those accesses minimally required to perform their duties. User accountability requires the linking of activities on an IT system to specific individuals and, therefore, requires the system to identify users.

Identification

Identification is the means by which a user provides a claimed identity to the system. The most common form of identification is the user ID. In this section of your IT system security plan, briefly describe how the major application identifies access to the system.

Unique Identification

An organization should require users to identify themselves uniquely before being allowed to perform any actions on the system unless user anonymity or other factors dictate otherwise.

Correlate Actions to Users

The system should internally maintain the identity of all active users and be able to link actions to specific users.

Maintenance of User IDs

An organization should ensure that all user IDs belong to currently authorized users. Identification data must be kept current by adding new users and deleting former users.

Inactive User IDs

User IDs that are inactive on the system for a specific period of time (e.g., three months) should be disabled.
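The inactive-ID rule above can be sketched as a periodic maintenance job. This is an illustrative sketch only: the account structure, field names, and the 90-day limit are assumptions (the text itself suggests "e.g., three months" as the period).

```python
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=90)   # e.g., three months

def disable_inactive(accounts, now=None):
    """Disable accounts whose last login is older than the limit.
    `accounts` maps user ID -> {'last_login': datetime, 'enabled': bool}."""
    now = now or datetime.now()
    for uid, acct in accounts.items():
        if acct["enabled"] and now - acct["last_login"] > INACTIVITY_LIMIT:
            acct["enabled"] = False    # disable, rather than delete, the ID
    return accounts

users = {
    "alice": {"last_login": datetime(2007, 10, 1), "enabled": True},
    "bob":   {"last_login": datetime(2007, 1, 1),  "enabled": True},
}
disable_inactive(users, now=datetime(2007, 10, 26))
# bob, inactive for well over 90 days, is disabled; alice is not
```

Disabling rather than deleting preserves the link between past audit-trail entries and the user ID, which supports the accountability requirement discussed above.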

Authentication

Authentication is the means of establishing the validity of a user's claimed identity to the system. There are three means of authenticating a user's identity, which can be used alone or in combination: something the individual knows (a secret -- e.g., a password, Personal Identification Number (PIN), or cryptographic key); something the individual possesses (a token -- e.g., an ATM card or a smart card); and something the individual is (a biometric -- e.g., characteristics such as a voice pattern, handwriting dynamics, or a fingerprint).

For most applications, trade-offs will have to be made when evaluating the mode of authentication, including ease of use and ease of administration, especially in modern networked environments. While it may appear that any of these means could provide strong authentication, there are problems associated with each. People wanting to pretend to be someone else on a computer system can guess or learn that individual's password; they can also steal or fabricate tokens. Each method also has drawbacks for legitimate users and system administrators: users forget passwords and may lose tokens, and the administrative overhead of keeping track of I&A data and tokens can be substantial. Biometric systems have significant technical, user acceptance, and cost problems as well.


This section of your IT system security plan describes the major application's authentication control mechanisms. Below is a list of items that should be considered in the description:

  • Describe the method of user authentication (password, token, or biometrics).
  • If a password system is used, provide the following specific information:
    • Allowable character set,
    • Password length (minimum, maximum),
    • Password aging time frames and enforcement approach,
    • Number of generations of expired passwords disallowed for use,
    • Procedures for password changes,
    • Procedures for handling lost passwords, and
    • Procedures for handling password compromise.
    • Indicate the frequency of password changes, describe how password changes are enforced (e.g., by the software or System Administrator), and identify who changes the passwords (the user, the system, or the System Administrator).
    • Note: The recommended minimum password length is six to eight characters, using a combination of alphabetic, numeric, and special characters.
  • Describe any biometrics controls used. Include a description of how the biometrics controls are implemented on the system.
  • Describe any token controls used on the system and how they are implemented.
  • Are special hardware readers required?
  • Are users required to use a unique Personal Identification Number (PIN)?
  • Who selects the PIN, the user or System Administrator?
  • Does the token use a password generator to create a one-time password?
  • Is a challenge-response protocol used to create a one-time password?
  • Describe the level of enforcement of the access control mechanism (network, operating system, and application).
  • Describe how the access control mechanism supports individual accountability and audit trails (e.g., passwords are associated with a user identifier that is assigned to a single individual).
  • Describe the self-protection techniques for the user authentication mechanism (e.g., passwords are transmitted and stored with one-way encryption to prevent anyone [including the System Administrator] from reading the clear-text passwords, passwords are automatically generated, passwords are checked against a dictionary of disallowed passwords, passwords are encrypted while in transmission).
  • State the number of invalid access attempts that may occur for a given user identifier or access location (terminal or port) and describe the actions taken when that limit is exceeded.
  • Describe the procedures for verifying that all system-provided administrative default passwords have been changed.
  • Describe the procedures for limiting access scripts with embedded passwords (e.g., scripts with embedded passwords are prohibited, scripts with embedded passwords are allowed only for batch applications).
  • Describe any policies that provide for bypassing user authentication requirements, single-sign-on technologies (e.g., host-to-host, authentication servers, user-to-host identifier, and group user identifiers) and any compensating controls.
  • Describe any use of digital or electronic signatures. Address the following specific issues:
    • The security control provided,
    • Cryptographic key management procedures for key generation, distribution, storage, entry, use, destruction, and archiving, and
    • Procedures for training users and the materials covered.
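The password-system items above (minimum length, required character mix, disallowed reuse of expired passwords) can be sketched as a policy check run when a user proposes a new password. The function name, the exact length, and the specific rules are illustrative assumptions, not requirements from the text:

```python
import string

MIN_LENGTH = 8   # at the upper end of the recommended six-to-eight minimum

def check_password(candidate, history=()):
    """Return a list of policy violations for a proposed password."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append("too short")
    if not any(c.isalpha() for c in candidate):
        problems.append("needs a letter")
    if not any(c.isdigit() for c in candidate):
        problems.append("needs a digit")
    if not any(c in string.punctuation for c in candidate):
        problems.append("needs a special character")
    if candidate in history:                 # disallowed expired generations
        problems.append("reuses a recent password")
    return problems

assert check_password("Tr4ck-rail") == []                 # meets all rules
assert "too short" in check_password("ab1!")
assert "reuses a recent password" in check_password("Old-pass1", history=["Old-pass1"])
```

In a real system the history comparison would be made against one-way hashes of prior passwords, not stored clear-text values, consistent with the self-protection techniques described above.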


Logical Access Controls (Authorization/Access Controls)

Logical access controls are the system-based mechanisms used to specify who or what (e.g., in the case of a process) is to have access to a specific system resource and the type of access that is permitted. Here, your IT system security plan discusses the controls in place to authorize or restrict the activities of users and system personnel within the application. Describe hardware or software features that are designed to permit only authorized access to or within the application, to restrict users to authorized transactions and functions, and/or to detect unauthorized activities (e.g., access control lists). The following are areas that should be considered.

  • Describe formal policies that define the authority that will be granted to each user or class of users. Indicate if these policies follow the concept of least privilege which requires identifying the user's job functions, determining the minimum set of privileges required to perform that function, and restricting the user to a domain with those privileges and nothing more. Include in the description the procedures for granting new users access and the procedures for when the role or job function changes.
  • Identify whether the policies include separation of duties enforcement to prevent an individual from having all necessary authority or information access to allow fraudulent activity without collusion.
  • Describe the application's capability to establish an Access Control List or register of the users and the types of access they are permitted.
    Indicate whether a manual Access Control List is maintained.
  • Indicate if the security software allows application owners to restrict the access rights of other application users, the general support system administrator, or operators to the application programs, data, or files.
  • Describe how application users are restricted from accessing the operating system, other applications, or other system resources not needed in the performance of their duties.
  • Indicate how often Access Control Lists are reviewed to identify and remove users who have left the organization or whose duties no longer require access to the application.
  • Describe controls to detect unauthorized transaction attempts by authorized and/or unauthorized users.
  • Describe policy or logical access controls that regulate how users may delegate access permissions or make copies of files or information accessible to other users. This "discretionary access control" may be appropriate for some applications, and inappropriate for others.
  • Document any evaluation made to justify/support use of "discretionary access control."
  • Indicate after what period of user inactivity the system automatically blanks associated display screens and/or after what period of user inactivity the system automatically disconnects inactive users or requires the user to enter a unique password before reconnecting to the system or application.
  • Describe any restrictions to prevent users from accessing the system or applications outside of normal work hours or on weekends.
  • Discuss in-place restrictions.
  • Indicate if encryption is used to prevent unauthorized access to sensitive files as part of the system or application access control procedures. (If encryption is used primarily for authentication, include this information in the section above.) If encryption is used as part of the access controls, provide information about the following:
    • What cryptographic methodology (e.g., secret key and public key) is used?
    • If a specific off-the-shelf product is used, provide the name of the product.
    • If the product and the implementation method meet standards (e.g., Data Encryption Standard, Digital Signature Standard), include that information.
    • Discuss cryptographic key management procedures for key generation, distribution, storage, entry, use, destruction, and archiving.
  • If your application is running on a system that is connected to the Internet or other wide area network(s), discuss what additional hardware or technical controls have been installed and implemented to provide protection against unauthorized system penetration and other known Internet threats and vulnerabilities.
  • Describe any type of secure gateway or firewall in use, including its configuration (e.g., configured to restrict access to critical system resources and to disallow certain types of traffic to pass through to the system).
  • Provide information regarding any port protection devices used to require specific access authorization to the communication ports, including the configuration of the port protection devices, and if additional passwords or tokens are required.
  • Identify whether internal security labels are used to control access to specific information types or files, and if such labels specify protective measures or indicate additional handling instructions.
  • Indicate if host-based authentication is used. (This is an access control approach that grants access based on the identity of the host originating the request, instead of the individual user requesting access.)


Conducting a Sensitivity Assessment

A sensitivity assessment looks at the sensitivity of both the information to be processed and the system itself. The assessment should consider legal implications, organization policy, and the functional needs of the system. Sensitivity is normally expressed in terms of integrity, availability, and confidentiality. Such factors as the importance of the system to the organization's mission and the consequences of unauthorized modification, unauthorized disclosure, or unavailability of the system or data need to be examined when assessing sensitivity. To address these types of issues, the people who use or own the system or information should participate in the assessment.

A sensitivity assessment should answer the following questions:

  • What information is handled by the system?
  • What kind of potential damage could occur through error, unauthorized disclosure or modification, or unavailability of data or the system?
  • What laws or regulations affect security (e.g., the Privacy Act or the Fair Trade Practices Act)?
  • To what threats is the system or information particularly vulnerable?
  • Are there significant environmental considerations (e.g., hazardous location of system)?
  • What are the security-relevant characteristics of the user community (e.g., level of technical sophistication and training or security clearances)?
  • What internal security standards, regulations, or guidelines apply to this system?


Operational Assurance

Security is never perfect when a system is implemented. In addition, system users and operators discover new ways to intentionally or unintentionally bypass or subvert security. Changes in the system or the environment can create new vulnerabilities. Strict adherence to procedures is rare over time, and procedures become outdated. Thinking risk is minimal, users may tend to bypass security measures and procedures.

Operational assurance is one way of becoming aware of these changes whether they are new vulnerabilities (or old vulnerabilities that have not been corrected), system changes, or environmental changes. Operational assurance is the process of reviewing an operational system to see that security controls, both automated and manual, are functioning correctly and effectively.

Design and implementation assurance addresses the quality of security features built into systems. Operational assurance addresses whether the system's technical features are being bypassed or have vulnerabilities and whether required procedures are being followed. It does not address changes in the system's security requirements, which could be caused by changes to the system and its operating or threat environment.

Security tends to degrade during the operational phase of the system life cycle. System users and operators discover new ways to intentionally or unintentionally bypass or subvert security (especially if there is a perception that bypassing security improves functionality). Users and administrators often think that nothing will happen to them or their system, so they shortcut security. Strict adherence to procedures is rare, and they become outdated, and errors in the system's administration commonly occur.

Organizations use two basic methods to maintain operational assurance:

System Audit - A one-time or periodic event to evaluate security. An audit can vary widely in scope: it may examine an entire system for the purpose of re-accreditation or it may investigate a single anomalous event.

Monitoring - An ongoing activity that checks on the system, its users, or the environment.

These terms are used loosely within the computer security community and often overlap. In general, the more "real-time" an activity is, the more it falls into the category of monitoring. Daily or weekly review of the audit trail (for unauthorized access attempts) is generally monitoring, while a historical review of several months' worth of the trail (tracing the actions of a specific user) is probably an audit.

An audit conducted to support operational assurance examines whether the system is meeting stated or implied security requirements including system and organization policies. The essential difference between a self-audit and an independent audit is objectivity. Reviews done by system management staff, often called self-audits/assessments, have an inherent conflict of interest.

Automated security audit tools make it feasible to review even large computer systems for a variety of security flaws. There are two types of automated tools: (1) active tools, which find vulnerabilities by trying to exploit them, and (2) passive tests, which only examine the system and infer the existence of problems from the state of the system.

Automated tools can be used to help find a variety of threats and vulnerabilities, such as improper access controls or access control configurations, weak passwords, lack of integrity of the system software, or not using all relevant software updates and patches. These tools are often very successful at finding vulnerabilities and are sometimes used by hackers to break into systems. Not taking advantage of these tools puts system administrators at a disadvantage. Many of the tools are simple to use; however, some programs (such as access-control auditing tools for large mainframe systems) require specialized skill to use and interpret.

Several types of automated tools monitor a system for security problems. Some examples follow:

Virus scanners are a popular means of checking for virus infections. These programs test for the presence of viruses in executable program files.

Checksumming presumes that program files should not change between updates. Checksum programs work by generating a mathematical value based on the contents of a particular file. When the integrity of the file is to be verified, the checksum is generated on the current file and compared with the previously generated value. If the two values are equal, the integrity of the file is verified. Program checksumming can detect viruses, Trojan horses, accidental changes to files caused by hardware failures, and other changes to files. However, checksums may be subject to covert replacement by a system intruder. Digital signatures can also be used.
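A minimal checksumming sketch using a cryptographic hash follows. The file contents are illustrative; in practice the baseline value must be stored where an intruder cannot rewrite it (the covert-replacement risk noted above), which is why signed or offline baselines are preferred:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Generate a value mathematically derived from the file's contents."""
    return hashlib.sha256(data).hexdigest()

original = b"program file contents v1.0"
baseline = checksum(original)     # recorded when the file is known-good

# Later, verify integrity by recomputing and comparing with the baseline
assert checksum(b"program file contents v1.0") == baseline   # unchanged
assert checksum(b"program file contents v1.O") != baseline   # any change detected
```

A cryptographic hash such as SHA-256 is preferable to a simple arithmetic checksum because an attacker cannot feasibly craft a modified file that produces the same value.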

Password crackers check passwords against a dictionary (either a "regular" dictionary or a specialized one with easy-to-guess passwords) and also check if passwords are common permutations of the user ID.

Examples of special dictionary entries could be the names of regional sports teams and stars; common permutations could be the user ID spelled backwards.
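The password-cracker checks just described (dictionary words and permutations of the user ID, such as the ID spelled backwards) can be sketched as follows; the function name and the sample dictionary are illustrative:

```python
def weak_password_findings(user_id, password, dictionary):
    """Flag passwords found in a dictionary or derived from the user ID."""
    findings = []
    lowered = password.lower()
    if lowered in dictionary:
        findings.append("dictionary word")
    if lowered in (user_id.lower(), user_id.lower()[::-1]):
        findings.append("permutation of user ID")    # e.g., ID spelled backwards
    return findings

# A specialized dictionary might include regional sports teams and stars
dictionary = {"password", "wildcats", "letmein"}
assert weak_password_findings("jsmith", "wildcats", dictionary) == ["dictionary word"]
assert weak_password_findings("jsmith", "htimsj", dictionary) == ["permutation of user ID"]
assert weak_password_findings("jsmith", "T7#qvx9p", dictionary) == []
```

Real crackers test many more permutations (case changes, appended digits, substitutions), but the principle is the same: any password reachable by such rules should be rejected.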

Integrity verification programs can be used by applications to look for evidence of data tampering, errors, and omissions. Techniques include consistency and reasonableness checks and validation during data entry and processing. These techniques can check data elements, as input or as processed, against expected values or ranges of values; analyze transactions for proper flow, sequencing, and authorization; or examine data elements for expected relationships. These programs comprise a very important set of processes because they can be used to convince people that, if they do what they should not do, accidentally or intentionally, they will be caught. Many of these programs rely upon logging of individual user activities.

Intrusion detectors analyze the system audit trail, especially log-ons, connections, operating system calls, and various command parameters, for activity that could represent unauthorized activity.
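A very small intrusion-detection sketch in the spirit of the paragraph above: scan audit-trail records for repeated failed log-ons per user. The record format and the threshold are illustrative assumptions; real detectors also examine connections, operating system calls, and command parameters:

```python
from collections import Counter

FAILURE_THRESHOLD = 3   # assumed site policy for flagging a user

def suspicious_logons(audit_trail):
    """Return the user IDs with repeated failed log-ons in the audit trail."""
    failures = Counter(
        rec["user"] for rec in audit_trail if rec["event"] == "logon_failed"
    )
    return {user for user, count in failures.items() if count >= FAILURE_THRESHOLD}

trail = [
    {"user": "alice",   "event": "logon_failed"},
    {"user": "alice",   "event": "logon_ok"},
    {"user": "mallory", "event": "logon_failed"},
    {"user": "mallory", "event": "logon_failed"},
    {"user": "mallory", "event": "logon_failed"},
]
assert suspicious_logons(trail) == {"mallory"}
```

Because the detector depends entirely on the audit trail, protecting the trail itself from tampering is a precondition for this kind of monitoring.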

System performance monitoring analyzes system performance logs in real time to look for availability problems, including active attacks (such as the 1988 Internet worm) and system and network slowdowns and crashes.

An auditor can review controls in place and determine whether they are effective. The auditor will often analyze both computer and non-computer based controls. Techniques used include inquiry, observation, and testing (of both the controls themselves and the data). The audit can also detect illegal acts, errors, irregularities, or a lack of compliance with laws and regulations. Security checklists and penetration testing, discussed below, may be used.


Computer Security Incident Handling

Computer systems are subject to a wide range of mishaps -- from corrupted data files, to viruses, to natural disasters. Some of these mishaps can be fixed through standard operating procedures. For example, frequently occurring events (e.g., a mistakenly deleted file) can usually be readily repaired (e.g., by restoration from the backup file). More severe mishaps, such as outages caused by natural disasters, are normally addressed in an organization's contingency plan. Other damaging events result from deliberate malicious technical activity (e.g., the creation of viruses or system hacking).

A computer security incident can result from a computer virus, other malicious code, or a system intruder, either an insider or an outsider. Although the threats that hackers and malicious code pose to systems and networks are well known, the occurrence of such harmful events remains unpredictable. Security incidents on larger networks (e.g., the Internet), such as break-ins and service disruptions, have harmed various organizations' computing capabilities. It is cost-beneficial to develop a standing capability for quick discovery of and response to such events. This is especially true, since incidents can often "spread" when left unchecked thus increasing damage and seriously harming an organization. This chapter describes how organizations can address computer security incidents (in the context of their larger computer security program) by developing a computer security incident handling capability.

The primary benefits of an incident handling capability are containing and repairing damage from incidents, and preventing future damage. An incident handling capability provides a way for users to report incidents and the appropriate response and assistance to be provided to aid in recovery. Technical capabilities (e.g., trained personnel and virus identification software) are pre-positioned, ready to be used as necessary. Moreover, the organization will have already made important contacts with other supportive sources (e.g., legal, technical, and managerial) to aid in containment and recovery efforts. Intruder activity, whether hackers or malicious code, can often affect many systems located at many different network sites; thus, handling the incidents can be logistically complex and can require information from outside the organization. By planning ahead, such contacts can be pre-established and the speed of response improved, thereby containing and minimizing damage.

As in any set of pre-planned procedures, attention must be paid to a set of goals for handling an incident. These goals will be prioritized differently depending on the site. A specific set of objectives can be identified for dealing with incidents:

(1) Figure out how it happened.
(2) Find out how to avoid further exploitation of the same vulnerability.
(3) Avoid escalation and further incidents.
(4) Assess the impact and damage of the incident.
(5) Recover from the incident.
(6) Update policies and procedures as needed.
(7) Find out who did it (if appropriate and possible).

Due to the nature of the incident, there might be a conflict between analyzing the original source of a problem and restoring systems and services. Overall goals (like assuring the integrity of critical systems) might be the reason for not analyzing an incident. Of course, this is an important management decision; but all involved parties must be aware that without analysis the same incident may happen again.

It is also important to prioritize the actions to be taken during an incident well in advance of the time an incident occurs. Sometimes an incident may be so complex that it is impossible to do everything at once to respond to it; priorities are essential. Although priorities will vary from institution to institution, the following suggested priorities may serve as a starting point for defining your organization's response:

(1) Priority one -- protect human life and people's safety; human life always has precedence over all other considerations.

(2) Priority two -- protect classified and/or sensitive data. Prevent exploitation of classified and/or sensitive systems, networks or sites, and inform affected systems, networks or sites about penetrations that have already occurred. (Be aware of regulations imposed by your site or by government.)

(3) Priority three -- protect other data, including proprietary, scientific, managerial and other data, because loss of data is costly in terms of resources. Prevent exploitation of other systems, networks or sites, and inform already affected systems, networks or sites about successful penetrations.

(4) Priority four -- prevent damage to systems (e.g., loss or alteration of system files, damage to disk drives, etc.). Damage to systems can result in costly down time and recovery.

(5) Priority five -- minimize disruption of computing resources (including processes). It is better in many cases to shut a system down or disconnect from a network than to risk damage to data or systems. Sites will have to evaluate the trade-offs between shutting down or disconnecting, and staying up. There may be service agreements in place that require keeping systems up even in light of further damage occurring. However, the damage and scope of an incident may be so extensive that service agreements have to be overridden.

An important implication for defining priorities is that once human life and national security considerations have been addressed, it is generally more important to save data than system software and hardware. Although it is undesirable to have any damage or loss during an incident, systems can be replaced. However, the loss or compromise of data (especially classified or proprietary data) is usually not an acceptable outcome.
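One way to make the five priority levels operational is a small triage routine that orders the aspects of an incident from most to least urgent; the category names below simply mirror the list and are hypothetical, not standard terminology.

```python
# Priority levels from the list above: a lower number means higher priority.
PRIORITIES = {
    "human_safety": 1,
    "classified_or_sensitive_data": 2,
    "other_data": 3,
    "system_damage": 4,
    "service_disruption": 5,
}


def triage(aspects):
    """Return the aspects of an incident ordered from most to least urgent.

    `aspects` is an iterable of category names drawn from PRIORITIES;
    any unknown category is placed last so it is still acted on.
    """
    return sorted(aspects, key=lambda a: PRIORITIES.get(a, 99))
```

For example, an incident that threatens both service availability and sensitive data would have its data-protection work scheduled before efforts to restore service, exactly as the prose above prescribes.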

An incident handling capability also assists an organization in preventing (or at least minimizing) damage from future incidents. Incidents can be studied internally to gain a better understanding of the organization's threats and vulnerabilities so that more effective safeguards can be implemented. Additionally, outside contacts established by the incident handling capability can provide early warnings of threats and vulnerabilities, and mechanisms will already be in place to warn users of these risks.

The incident handling capability allows an organization to learn from the incidents that it has experienced. Data about past incidents (and the corrective measures taken) can be collected. The data can be analyzed for patterns -- for example, which viruses are most prevalent, which corrective actions are most successful, and which systems and information are being targeted by hackers. Vulnerabilities can also be identified in this process -- for example, whether damage is occurring to systems when a new software package or patch is used. Knowledge about the types of threats that are occurring and the presence of vulnerabilities can aid in identifying security solutions. This information will also prove useful in creating a more effective training and awareness program -- and thus help reduce the potential for losses. The incident handling capability assists the training and awareness program by providing information to users as to (1) measures that can help avoid incidents (e.g., virus scanning) and (2) what should be done in case an incident does occur.
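The kind of pattern analysis described above can start as something as simple as tallying fields in an incident log. The record layout here (a `cause` and a `target` per incident) is a hypothetical example, not a prescribed format.

```python
from collections import Counter


def summarize_incidents(records):
    """Tally incident records to reveal patterns.

    Each record is assumed to be a dict with a 'cause' (e.g., a virus
    name) and a 'target' (the system attacked). The summary answers the
    questions posed above: which causes are most prevalent, and which
    systems are being targeted.
    """
    causes = Counter(r["cause"] for r in records)
    targets = Counter(r["target"] for r in records)
    return {
        "most_common_cause": causes.most_common(1)[0][0],
        "most_targeted_system": targets.most_common(1)[0][0],
    }
```

Over time, the same tallies can be broken down by date or by corrective action taken, which is what makes the collected data useful for the training and awareness program mentioned above.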

A successful incident handling capability has several core characteristics:

  • an understanding of the constituency it will serve;
  • an educated constituency;
  • a means for centralized communications;
  • expertise in the requisite technologies; and
  • links to other groups to assist in incident handling (as needed).

Incident handling will be greatly enhanced by technical mechanisms that enable the dissemination of information quickly and conveniently. The technical ability to report incidents is of primary importance, since without knowledge of an incident, response is precluded. Fortunately, such technical mechanisms are already in place in many organizations.

For rapid response to constituency problems, a simple telephone "hotline" is practical and convenient. Some agencies may already have a number used for emergencies or for obtaining help with other problems; it may be practical (and cost-effective) to also use this number for incident handling. It may be necessary to provide 24-hour coverage for the hotline. This can be done by staffing the answering center, by providing an answering service for non-office hours, or by using a combination of an answering machine and personal pagers.

If additional mechanisms for contacting the incident handling team can be provided, it may increase access and thus benefit incident handling efforts. A centralized e-mail address that forwards mail to staff members would permit the constituency to conveniently exchange information with the team.

One way to establish a centralized reporting and incident response capability, while minimizing expenditures, is to use an existing Help Desk. Many agencies already have central Help Desks for fielding calls about commonly used applications, troubleshooting system problems, and providing help in detecting and eradicating computer viruses. By expanding the capabilities of the Help Desk and publicizing its telephone number (or e-mail address), an agency may be able to significantly improve its ability to handle many different types of incidents at minimal cost.
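An expanded Help Desk of the kind described above would need some rule for routing incoming reports to the right responders. This dispatch sketch uses invented category and team names purely for illustration; a real agency would substitute its own.

```python
def route_report(report_type):
    """Map a reported problem type to the team that should handle it.

    The categories and team names are hypothetical examples of how a
    central Help Desk might fan out different kinds of reports.
    """
    routing = {
        "virus": "malicious-code response team",
        "intrusion": "incident handling team",
        "application": "application support",
        "hardware": "system support",
    }
    # Anything unrecognized is escalated to the incident handling team
    # rather than dropped.
    return routing.get(report_type, "incident handling team")
```

The important property is the default: an unfamiliar or ambiguous report still reaches the incident handling team, so no incident goes unreported simply because the caller could not classify it.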