Content description
Part 1
Summary:
Each member of the community must be responsible for the security and protection of electronic
information resources over which he or she has control.
Resources to be protected include networks, computers, software, and data. The physical and logical
integrity of these resources must be protected against threats such as unauthorized intrusions, malicious
misuse, or inadvertent compromise. Activities outsourced to off-campus entities must comply with the
same security requirements as in-house activities.
Moreover, the security policy must be established and maintained with the collaboration of managers,
administrative officials, and users.
Objectives:
Upon completion of this part, the student will be able to understand:
What information security is: the definition of information security and the factors to consider when maintaining security.
Why security design is necessary: the objectives of security design and how security requirements should be established.
The definition of a security policy and the advantages/disadvantages of having one.
The general outline of a security policy.
The considerations and factors to take into account when writing a security policy.
August 1995
o A 24-year-old student accessed Citibank's computer system and illegally transferred
2.8 million US dollars to his bank account.
March 1999
o Melissa, a computer virus carried in Microsoft Word documents, spread through email
attachments.
February 2000
o Denial of Service attacks caused major websites such as Yahoo.com, Microsoft.com,
ebay.com, cnn.com, amazon.com to go offline.
March 2000
o Two 18-year-olds hacked into Internet shopping websites, stole 26,000 credit card
records, and made purchases totaling 3 million US dollars using the stolen credit card
information.
January 2003
o The computer worms Slammer (January 2003) and Blaster spread through the Internet,
attacking security holes in servers and client PCs.
[Figure: number of security incidents reported to CERT per year, 1988-2000; the count rises sharply toward roughly 20,000 by 2000.]
CERT is a center of Internet security expertise, located at the Software Engineering Institute, a
federally funded research and development center operated by Carnegie Mellon University.
The center studies Internet security vulnerabilities, researches long-term changes in networked systems,
and develops information and training to help improve security.
The figure above shows the number of security incidents in the United States reported to CERT between
1988 and 2000.
Confidentiality:
o Confidentiality is related to the READ action.
o It concerns part of the system, not necessarily the whole system.
Integrity:
o Integrity is related to the WRITE and MODIFY actions.
o It means that the current version is identical to a reference version.
Availability:
o Availability is related to the EXECUTE action.
o It is very difficult to guarantee, since DoS (Denial of Service) attacks are easier to mount than other
attacks.
o In practice, computer systems aim for 99.999% availability ("five nines").
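The cost of each extra "nine" can be made concrete with a small calculation. The sketch below (a plain illustration, not part of the course material) converts an availability target into the maximum downtime allowed per year:

```python
# Convert an availability target into maximum allowed downtime per year.
# Shows why 99.999% ("five nines") availability is so hard to achieve.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # average year, including leap days

def max_downtime_minutes(availability: float) -> float:
    """Maximum downtime per year, in minutes, at the given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability)

for target in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{target * 100:.3f}% -> {max_downtime_minutes(target):9.2f} min/year")
```

At 99.999% a system may be down only about five minutes per year, which is why availability is usually the hardest of the three properties to guarantee.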
Universal Knowledge Solutions S.A.L.
-3-
Access Control.
Authorization.
Fault tolerance.
Risk Management.
Backup Policy.
Disaster Recovery Policy.
Anticipate the types of threats that may occur, based on characteristics such as:
o System configuration:
Site distribution
Site connection to the networks
etc.
o Types of user:
How are the systems used and accessed?
Is the number of users limited or unlimited?
etc.
o Expected threats:
Viruses.
Password theft.
Denial of Service.
etc.
o Damage amount:
Direct loss in materials.
Indirect loss in reputation.
Risks to the organization (assessed by evaluating threats and vulnerabilities and by identifying all
possible risks).
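To make the risk-assessment step concrete, here is a minimal sketch that ranks threats by likelihood times impact; the 1-5 scales, the assets, and the scores are invented for illustration, not a prescribed methodology:

```python
# Illustrative risk assessment: score each (asset, threat) pair as
# likelihood x impact and rank the results. The 1-5 scales and the
# example entries are assumptions for this sketch.

threats = [
    # (asset, threat, likelihood 1-5, impact 1-5)
    ("web server", "Denial of Service", 4, 3),
    ("customer database", "password theft", 2, 5),
    ("desktop PCs", "virus infection", 5, 2),
]

def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

ranked = sorted(threats, key=lambda t: risk_score(t[2], t[3]), reverse=True)
for asset, threat, likelihood, impact in ranked:
    print(f"{asset:18s} {threat:18s} risk = {risk_score(likelihood, impact)}")
```

A ranking like this helps decide which security measures deserve higher priority.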
Examples:
o What is important and what should be protected?
o What threats exist inside and outside of your organization?
o From whom and from what do you want to protect them?
o What are your weaknesses (vulnerabilities)?
o Which security measure should have higher priority?
Success Factors
The following factors are often critical to the successful implementation of information security within
an organization:
Standards
Example:
Orange Book Summary
The following is a summary of the US Department of Defense Trusted Computer System Evaluation
Criteria, known as the Orange Book. Although originally written for military systems, the security
classifications are now broadly used within the computer industry.
In fact, the DoD security categories range from D (Minimal Protection) to A (Verified Protection).
D (Minimal Protection)
Any system that does not comply with any other category, or has failed to receive a higher classification.
D-level certification is very rare.
C (Discretionary Protection)
Discretionary protection applies to Trusted Computer Bases (TCBs) with optional object (i.e. file,
directory, devices etc.) protection.
C1 (Discretionary Security Protection)
Discretionary Access Control, for example Access Control Lists (ACLs), User/Group/World
protection.
Usually for users who are all on the same security level.
C1 certification is rare.
o Operating Systems: earlier versions of Unix, IBM RACF.
C2 (Controlled Access Protection) As C1, plus
Object protection can be on a single-user basis, e.g. through an ACL or Trustee database.
Full auditing of security events (i.e. date/time, event, user, success/failure, terminal ID)
Protected system mode of operation.
Added protection for authorization and audit data.
Documentation as C1 plus information on examining audit information.
This is one of the most common certifications.
o Operating Systems: VMS, IBM OS/400, Windows NT, Novell NetWare 4.11, Oracle
7, DG AOS/VS II.
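Discretionary access control, the defining feature of division C, can be illustrated with a toy sketch (this is not any real operating system's API): each object carries an ACL, and only the object's owner decides who may access it.

```python
# Toy discretionary access control: each object has an owner and an ACL
# mapping principals to permitted actions. "Discretionary" means the
# owner, at their discretion, grants or denies access.

from dataclasses import dataclass, field

@dataclass
class ProtectedObject:
    owner: str
    acl: dict = field(default_factory=dict)  # principal -> set of actions

    def grant(self, requester: str, principal: str, action: str) -> None:
        # Only the owner may change the ACL.
        if requester != self.owner:
            raise PermissionError("only the owner may modify the ACL")
        self.acl.setdefault(principal, set()).add(action)

    def check(self, principal: str, action: str) -> bool:
        # The owner always has full access; others need an ACL entry.
        return principal == self.owner or action in self.acl.get(principal, set())

doc = ProtectedObject(owner="alice")
doc.grant("alice", "bob", "read")
print(doc.check("bob", "read"))   # True
print(doc.check("bob", "write"))  # False
```

Mandatory access control (division B) differs precisely here: access decisions follow system-wide sensitivity labels rather than the owner's discretion.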
B (Mandatory Protection)
Division B specifies that the TCB protection systems should be mandatory, not discretionary.
B1 (Labelled Security Protection) As C2 plus
Mandatory security and access labeling of all objects, e.g. files, processes, devices etc.
Label integrity checking (e.g. maintenance of sensitivity labels when data is exported).
Auditing of labeled objects.
Mandatory access control for all operations.
Ability to specify security level printed on human-readable output (e.g. printers).
Ability to specify security level on any machine-readable output.
Enhanced auditing.
Enhanced protection of Operating System.
Improved documentation.
Operating Systems: HP-UX BLS, Cray Research Trusted Unicos 8.0, Digital SEVMS,
Harris CS/SX, SGI Trusted IRIX.
A (Verified Protection)
Division A is the highest security division.
A1 (Verified Design) As B3 plus:
These are the only A1-certified systems: Boeing MLS LAN, Gemini Trusted Network
Processor, Honeywell SCOMP.
Beyond A1:
Provision is made for security levels higher than A1, although these have not yet been
formally defined. No OSes are rated above A1.
Managing Security:
Business Continuity Planning
Develop an Incident Recovery Plan beforehand to minimize the effects of an incident and resume
operation in a timely manner.
Example:
o Verify the incident.
o Determine the type and magnitude of the incident (number of internal/external hosts).
o Assess damage.
o Protect the evidence (capture a system snapshot for further analysis).
o Determine whether to track or trace.
o Communicate the problem and actions taken to: management, operations group, all affected
sites, other response organizations (such as the CERT Coordination Center), appropriate
investigative agencies.
o Recover (restore programs and applications from vendor-supplied media, ensure programs and
applications are securely configured, restore data from periodic backups, install all relevant
patches)
o Assess time and resources used and damage incurred
o Prepare report and/or statistics
o Support prosecution activity (if appropriate)
o Conduct a post-mortem (review lessons learned, evaluate procedures, update response plan if
appropriate)
o Update policy and procedures as necessary
o Be prepared for media inquiries
Security Policy
A security policy is a set of rules that an organization establishes in order to maintain the necessary
security level. It becomes the basis for the whole security design procedure.
More precisely, to cope with changes in the computing environment, a security policy is a set of basic rules or
guidelines to be followed by system users. For example:
Privacy policy
Penalties for inappropriate behavior
Incident handling policy
Rules of user authentication
How security audit would be performed
Plans for maintenance and recovery
In the narrow sense, a security policy concerns only the settings of operating systems (such as UNIX),
firewalls, routers, etc.
In the broad sense, a security policy covers the entire system, including operation, audit,
management, etc.
Example:
o Confidential documents are left on an empty desk.
o There is no rule on accessing outside networks through modems.
o It is not clear who is responsible for patching software bugs.
As the number of divisions and employees within an organization increases, it becomes very
difficult to control the conduct of each employee.
Employees in different divisions have different opinions about security, and that can
become an obstacle when sharing information.
In the worst cases, these differences of opinion can even cause friction between
divisions.
Without a common rule, a security policy that all employees can relate to, there would always
be differences of opinion.
Security risks that cannot be countered using security tools can still be controlled.
o For example, security tools cannot prevent users from connecting a modem to their PCs, but a
security policy can deter them.
Technical Considerations
(When Writing Security Policy)
The security policy must:
Be easy to understand.
Be doable.
Be up-to-date.
Establish security guidelines, standards, and procedures for all the activities, in conformance with
the applicable policies and laws and with the assigned roles and privileges (administrative officials, users, others)
Provide detailed analysis of potential threats and the feasibility of various security measures in
order to provide recommendations to administrative officials;
Clarify the roles and responsibilities of users, administrators, and management. Also provide a clear
distribution of privileges and permissions in order to guarantee the privacy and confidentiality of the
various types of electronic data, in accordance with applicable laws and policies. Briefly, the security
policy must make clear:
who should do what, why, how, and when
Provide clear security measures that mitigate threats, consistent with the level of acceptable risk
established by administrative officials;
Establish procedures to ensure that privileged accounts are kept to a minimum and that privileged
users comply with privileged access agreements;
Ensure that the most recent software security patches are applied, commensurate with the identified
level of acceptable risk.
Provide suitable authentication and authorization functions, commensurate with appropriate use
and the acceptable level of risk.
Provide suitable security installations to protect the rooms or facilities where server machines are located.
Interaction Considerations
(When Writing a Security Policy)
A security policy must be established and maintained with the collaboration of administrative officials and
users. Hence:
Administrative Officials (individuals with administrative responsibility or individuals having
functional ownership of data) must:
Identify the electronic information resources within areas under their control;
Define the purpose and function of the resources and ensure that requisite documentation
are provided as needed;
Establish acceptable levels of security risk for resources by assessing factors such as:
o How sensitive the data is, such as personal data or information protected by law or policy,
o The level of criticality to the continuing operations for each individual activity;
o How negatively the operations of one or more units would be affected by unavailability or
reduced availability of the resources,
o How likely it is that a resource could be used as a platform for inappropriate acts towards other
entities.
Users (individuals who access and use campus electronic information resources) must:
Protect the resources under their control, such as access passwords, computers, and data they
download.
Availability: ensuring that authorized users have access to information and associated assets
when required.
Operators belong to the Information System Department and perform on-site tasks under
supervision of System administrators.
8.6 Security Personnel
Except for the Information System Department, the head of each department selects at least one
member of the department as Security Personnel. The tasks of the Security Personnel are
enhancing department security; registering employee complaints and problems with information
system management and against security measures; and reporting to the Information System
Department.
The Board of Directors appoints the Chairperson as the Director of Information Control.
The Chairperson must be a director of the company and must be responsible for Information
Security Management.
9.4 Vice Chairperson
The Vice Chairperson is the Head of the Information System Department, and acts as an aide to
the Chairperson. When the Chairperson is unable to perform his or her duties, the Vice Chairperson
performs them.
9.5 Regular members
Each department head is a regular member of the Information Security Committee.
The regular members are allowed to propose items for the meeting agendas (e.g. responses to
security issues inside and outside the company).
9.6 Organizer
The Information System Department serves as the Organizer and carries out administrative
tasks for the Information Security Committee. The Organizer manages documents developed by
the Information Security Committee, such as Information Security Management Plans and the
Information Security Policy.
9.7 Task force
The Information Security Committee may form a task force to execute specific tasks.
One of the regular members is the head of the task force. The responsibilities of the task force
include formulation of the Information Security Policy, auditing, and incident response.
10. Tasks and Responsibilities of the Information Security Committee
The tasks of the Information Security Committee are as follows:
10.1 Planning for Information Security Management
The Information Security Committee must draft and carry out plans for Information Security
Management. The plan must address risk management and risk assessment, as well as plans to raise
employee awareness of the Information Security Policy. Also, the plan must allow for review of the
policy.
10.2 Distribution of the Information Security Policy document
Once the Information Security Policy has been developed or revised, the Information Security
Committee must distribute it to employees without delay.
10.3 Employee training
The Information Security Committee shall offer continuous in-house training on information
security to raise awareness and improve skills.
10.4 Revision of the Information Security Policy and assessment of employee compliance
The Information Security Committee shall regularly review the Information Security Policy and
assess employee compliance with the policy. The Committee shall survey and evaluate employee
opinions on the Policy, and revise it accordingly.
10.5 Evaluation of auditing results and revision
The Information Security Committee shall use the auditing results to evaluate the Information
Security Policy, and revise it as necessary.
10.6 Report to the Board of Directors
The Information Security Committee must report to the Board of Directors on information
security maintenance, management, failures and problems, and on any revisions to the Information
Security Policy.
10.7 Penalty for violators of the Information Security Policy
The Information Security Committee shall take appropriate actions against violators of the
Information Security Policy upon discovery of the violation. Depending on the case, the
Information Security Committee can request the Personnel Department to impose a penalty
according to the personnel regulations.
11. Information Security Management
To protect information assets of the company, the following measures will be taken for
Information Security Management:
11.1 Risk analysis
The Information Security Committee shall undertake risk assessment and manage information
assets of the company.
11.2 Policy formulation
The Information Security Committee shall develop, evaluate, and review the Information
Security Policy, and shall formulate the Policy (4.1) and the Standard (4.2). The personnel in
charge of information systems are appointed by the Information Security Committee to develop the
Procedure.
11.3 Implementation of security measures
The security measures in the Information Security Policy of the company must be implemented
systematically. The Information System Department must develop a plan for implementing security
measures and get it approved by the Information Security Committee.
11.4 Training and awareness
The company shall proactively provide information security training for all information asset
users with the aim of increasing skill levels and understanding. All information asset users must
undergo this training. Also, should the occasion arise, the company shall inform the regular members
of the Information Security Committee of any recent developments in information security.
11.5 Auditing and evaluation
The Information Security Committee must evaluate vulnerabilities and threats to information
security on a regular basis, or whenever such problems arise. The Committee should evaluate
potential countermeasures for addition to the Information Security Policy. These actions by the
Committee are guided by auditing results, feedback from information asset users, and results from
surveys on information security vulnerabilities.
11.6 Document revision
The Board of Directors must approve revision of the Information Security Policy and the Policy
(4.1). The Information Security Committee has authority over the Standard and the Procedures.
12. Penalties for Violations
The company shall take strict measures against violators of the Information Security Policy.
The Information Security Committee shall take action consistent with the severity of the violation
of the Information Security Policy.
13. Response to Information Security Breach
Responses to information system security breaches should be timely and should follow the pre-established procedure.
14. Effective Date
This Policy was approved by the Board of Directors on April 1, 2004 and will take effect on
October 1, 2004.
(1) Biometrics may be used when it is problematic to memorize and manage passwords. The
specific method should be chosen by considering the cost and the state of the art.
(2) Biometric data is sensitive personal information. Therefore, it must be handled with
strict care.
(3) Simple biometric information (e.g., fingerprints) can add flexibility to password use.
(4) Areas such as the server room must have a biometric system that provides the appropriate
high level of access security, e.g., an iris scan identification system.
5. Exception
If any part of this document cannot be followed for work-related reasons, users must request
approval of an exception from the Information Security Committee.
6. Penalties
Violators of this document may be penalized according to the circumstances of the violation. The
"Penalty Standards" will determine the penalty.
7. Disclosure
This document is disclosed only to the Persons Involved.
8. Revisions
This document was approved by the Information Security Committee on April 1, 2004 and will
take effect on Oct 1, 2004. Requests for changes to this document must be submitted to the
Information Security Committee. The Committee must deliberate on each request, and if it concludes
that the changes are necessary, the Committee must modify the document promptly and inform the
Persons Involved. The Information Security Committee must review this document annually. Any
revision must be performed immediately, and the Committee must inform the Persons Involved of the
changes.
4.0 Policy
4.1 General
All system-level passwords (e.g., root, enable, NT admin, application administration accounts,
etc.) must be changed on at least a quarterly basis.
All production system-level passwords must be part of the InfoSec administered global
password management database.
All user-level passwords (e.g., email, web, desktop computer, etc.) must be changed at least
every six months. The recommended change interval is every four months.
User accounts that have system-level privileges granted through group memberships or
programs such as "su" must have a unique password from all other accounts held by that user.
Passwords must not be inserted into email messages or other forms of electronic
communication.
Where SNMP is used, the community strings must be defined as something other than the
standard defaults of "public," "private" and "system" and must be different from the passwords
used to log in interactively. A keyed hash must be used where available (e.g., SNMPv2).
All user-level and system-level passwords must conform to the guidelines described below.
4.2 Guidelines
General Password Construction Guidelines
Passwords are used for various purposes at <Company Name>. Some of the more common uses
include: user level accounts, web accounts, email accounts, screen saver protection, voicemail
password, and local router logins.
Since very few systems support one-time tokens (i.e., dynamic passwords which are only
used once), everyone should be aware of how to select strong passwords.
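As a rough illustration of such guidelines, the sketch below checks a candidate password for length, character-class variety, and membership in a tiny dictionary. The thresholds and the word list are assumptions made for the example, not <Company Name>'s actual rules:

```python
# Illustrative password-strength check: minimum length, at least three
# character classes, and not a common dictionary word. All thresholds
# and the word list below are assumptions for this sketch.

import string

COMMON_WORDS = {"password", "welcome", "letmein", "qwerty"}  # illustrative only

def is_strong(password: str, min_length: int = 8) -> bool:
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_WORDS:
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3  # require at least three character classes

print(is_strong("password"))     # False: common dictionary word
print(is_strong("Tr4il-mix!9"))  # True: long, four character classes
```

A real deployment would enforce such rules centrally (e.g., in the authentication system) rather than trusting each user to apply them.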
Poor, weak passwords have the following characteristics:
5.1 Enforcement
Any employee found to have violated this policy may be subject to disciplinary action, up to and
including termination of employment.
DoD 5200.28-STD
Supersedes
CSC-STD-001-83, dtd 15 Aug 83
Library No. S225,711
DEPARTMENT OF DEFENSE STANDARD
DEPARTMENT OF DEFENSE
TRUSTED COMPUTER SYSTEM EVALUATION CRITERIA
DECEMBER 1985
FOREWORD
This publication, DoD 5200.28-STD, "Department of Defence Trusted Computer System Evaluation
Criteria," is issued under the authority of and in accordance with DoD Directive 5200.28, "Security
Requirements for Automatic Data Processing (ADP) Systems," and in furtherance of responsibilities
assigned by DoD Directive 5215.1, "Computer Security Evaluation Center." Its purpose is to provide
technical hardware/firmware/software security criteria and associated technical evaluation
methodologies in support of the overall ADP system security policy, evaluation and
approval/accreditation responsibilities promulgated by DoD Directive 5200.28.
The provisions of this document apply to the Office of the Secretary of Defence (OSD), the Military
Departments, the Organization of the Joint Chiefs of Staff, the Unified and Specified Commands, the
Defence Agencies and activities administratively supported by OSD (hereafter called "DoD Components").
This publication is effective immediately and is mandatory for use by all DoD Components in carrying
out ADP system technical security evaluation activities applicable to the processing and storage of
classified and other sensitive DoD information and applications as set forth herein.
Recommendations for revisions to this publication are encouraged and will be reviewed biannually by
the National Computer Security Center through a formal review process. Address all proposals for
revision through appropriate channels to: National Computer Security Center, Attention: Chief,
Computer Security Standards.
DoD Components may obtain copies of this publication through their own publications channels. Other
federal agencies and the public may obtain copies from: Office of Standards and Products, National
Computer Security Center, Fort Meade, MD 20755-6000, Attention: Chief, Computer Security
Standards.
_________________________________
Donald C. Latham
Assistant Secretary of Defence (Command, Control, Communications, and Intelligence)
ACKNOWLEDGEMENTS
Special recognition is extended to Sheila L. Brand, National Computer Security Center (NCSC), who
integrated theory, policy, and practice into and directed the production of this document.
Acknowledgment is also given for the contributions of: Grace Hammonds and Peter S. Tasker, the
MITRE Corp., Daniel J. Edwards, NCSC, Roger R. Schell, former Deputy Director of NCSC, Marvin
Schaefer, NCSC, and Theodore M. P. Lee, Sperry Corp., who as original architects formulated and
articulated the technical issues and solutions presented in this document; Jeff Makey, formerly NCSC,
Warren F. Shadle, NCSC, and Carole S. Jordan, NCSC, who assisted in the preparation of this
document; James P. Anderson, James P. Anderson & Co., Steven B. Lipner, Digital Equipment Corp.,
Clark Weissman, System Development Corp., LTC Lawrence A. Noble, formerly U.S. Air Force,
Stephen T. Walker, formerly DoD, Eugene V. Epperly, DoD, and James E. Studer, formerly Dept. of
the Army, who gave generously of their time and expertise in the review and critique of this document;
and finally, thanks are given to the computer industry and others interested in trusted computing for
their enthusiastic advice and assistance throughout this effort.
CONTENTS
FOREWORD
ACKNOWLEDGMENTS
PREFACE
INTRODUCTION
PART I: THE CRITERIA
1.0 DIVISION D: MINIMAL PROTECTION
2.0 DIVISION C: DISCRETIONARY PROTECTION
2.1 Class (C1): Discretionary Security Protection
2.2 Class (C2): Controlled Access Protection
3.0 DIVISION B: MANDATORY PROTECTION
3.1 Class (B1): Labeled Security Protection
3.2 Class (B2): Structured Protection
3.3 Class (B3): Security Domains
4.0 DIVISION A: VERIFIED PROTECTION
4.1 Class (A1): Verified Design
4.2 Beyond Class (A1)
PART II: RATIONALE AND GUIDELINES
5.0 CONTROL OBJECTIVES FOR TRUSTED COMPUTER SYSTEMS
5.1 A Need for Consensus
5.2 Definition and Usefulness
5.3 Criteria Control Objective
6.0 RATIONALE BEHIND THE EVALUATION CLASSES
6.1 The Reference Monitor Concept
6.2 A Formal Security Policy Model
PREFACE
The trusted computer system evaluation criteria defined in this document classify systems into four broad
hierarchical divisions of enhanced security protection. They provide a basis for the evaluation of
effectiveness of security controls built into automatic data processing system products. The criteria were
developed with three objectives in mind: (a) to provide users with a yardstick with which to assess the
degree of trust that can be placed in computer systems for the secure processing of classified
or other sensitive information; (b) to provide guidance to manufacturers as to what to build into their
new, widely-available trusted commercial products in order to satisfy trust requirements for sensitive
applications; and (c) to provide a basis for specifying security requirements in acquisition
specifications. Two types of requirements are delineated for secure processing: (a) specific security
feature requirements and (b) assurance requirements. Some of the latter requirements enable evaluation
personnel to determine if the required features are present and functioning as intended. The scope of
these criteria is to be applied to the set of components comprising a trusted system, and is not
necessarily to be applied to each system component individually. Hence, some components of a system
may be completely untrusted, while others may be individually evaluated to a lower or higher
evaluation class than the trusted product considered as a whole system. In trusted products at the high
end of the range, the strength of the reference monitor is such that most of the components can be
completely untrusted. Though the criteria are intended to be application-independent, the specific
security feature requirements may have to be interpreted when applying the criteria to specific systems
with their own functional requirements, applications or special environments (e.g., communications
processors, process control computers, and embedded systems in general). The underlying assurance
requirements can be applied across the entire spectrum of ADP system or application processing
environments without special interpretation.
INTRODUCTION
Historical Perspective
In October 1967, a task force was assembled under the auspices of the Defence Science Board to
address computer security safeguards that would protect classified information in remote-access,
resource-sharing computer systems. The Task Force report, "Security Controls for Computer Systems,"
published in February 1970, made a number of policy and technical recommendations on actions to be
taken to reduce the threat of compromise of classified information processed on remote-access
computer systems.[34] Department of Defence Directive 5200.28 and its accompanying manual DoD
5200.28-M, published in 1972 and 1973 respectively, responded to one of these recommendations by
establishing uniform DoD policy, security requirements, administrative controls, and technical
measures to protect classified information processed by DoD computer systems.[8;9] Research and
development work undertaken by the Air Force, Advanced Research Projects Agency, and other
defence agencies in the early and mid 70's developed and demonstrated solution approaches for the
technical problems associated with controlling the flow of information in resource and information
sharing computer systems.[1] The DoD Computer Security Initiative was started in 1977 under the
auspices of the Under Secretary of Defence for Research and Engineering to focus DoD efforts
addressing computer security issues.[33]
Concurrent with DoD efforts to address computer security issues, work was begun under the leadership
of the National Bureau of Standards (NBS) to define problems and solutions for building, evaluating,
and auditing secure computer systems.[17] As part of this work NBS held two invitational workshops
on the subject of audit and evaluation of computer security.[20;28] The first was held in March 1977,
and the second in November of 1978. One of the products of the second workshop was a definitive
paper on the problems related to providing criteria for the evaluation of technical computer security
effectiveness.[20] As an outgrowth of recommendations from this report, and in support of the DoD
Computer Security Initiative, the MITRE Corporation began work on a set of computer security
evaluation criteria that could be used to assess the degree of trust one could place in a computer system
to protect classified data.[24;25;31] The preliminary concepts for computer security evaluation were
defined and expanded upon at invitational workshops and symposia whose participants represented
computer security expertise drawn from industry and academia in addition to the government. Their
work has since been subjected to much peer review and constructive technical criticism from the DoD,
industrial research and development organizations, universities, and computer manufacturers.
The DoD Computer Security Center (the Center) was formed in January 1981 to staff and expand on
the work started by the DoD Computer Security Initiative.[15] A major goal of the Center as given in
its DoD Charter is to encourage the widespread availability of trusted computer systems for use by
those who process classified or other sensitive information.[10] The criteria presented in this document
have evolved from the earlier NBS and MITRE evaluation material.
Scope
The trusted computer system evaluation criteria defined in this document apply primarily to trusted
commercially available automatic data processing (ADP) systems. They are also applicable, as
amplified below, to the evaluation of existing systems and to the specification of security requirements
for ADP systems acquisition. Included are two distinct sets of requirements: 1) specific security feature
requirements; and 2) assurance requirements. The specific feature requirements encompass the
capabilities typically found in information processing systems employing general-purpose operating
systems that are distinct from the applications programs being supported. However, specific security
feature requirements may also apply to specific systems with their own functional requirements,
applications or special environments (e.g., communications processors, process control computers, and
embedded systems in general). The assurance requirements, on the other hand, apply to systems that
cover the full range of computing environments from dedicated controllers to full range multilevel
secure resource sharing systems.
Purpose
As outlined in the Preface, the criteria have been developed to serve a number of intended purposes:
To provide a standard to manufacturers as to what security features to build into their new
and planned, commercial products in order to provide widely available systems that satisfy
trust requirements (with particular emphasis on preventing the disclosure of data) for
sensitive applications.
To provide DoD components with a metric with which to evaluate the degree of trust that
can be placed in computer systems for the secure processing of classified and other
sensitive information.
To provide a basis for specifying security requirements in acquisition specifications.
With respect to the second purpose for development of the criteria, i.e., providing DoD
components with a security evaluation metric, evaluations can be delineated into two types:
(a) an evaluation can be performed on a computer product from a perspective that
excludes the application environment; or, (b) it can be done to assess whether appropriate
security measures have been taken to permit the system to be used operationally in a
specific environment. The former type of evaluation is done by the Computer Security
Center through the Commercial Product Evaluation Process. That process is described in
Appendix A.
The latter type of evaluation, i.e., those done for the purpose of assessing a system's security attributes
with respect to a specific operational mission, is known as a certification evaluation. It must be
understood that the completion of a formal product evaluation does not constitute certification or
accreditation for the system to be used in any specific application environment. On the contrary, the
evaluation report only provides a trusted computer system's evaluation rating along with supporting
data describing the product system's strengths and weaknesses from a computer security point of view.
The system security certification and the formal approval/accreditation procedure, done in accordance
with the applicable policies of the issuing agencies, must still be followed before a system can be
approved for use in processing or handling classified information.[8;9] Designated Approving
Authorities (DAAs) remain ultimately responsible for specifying security of systems they accredit.
The trusted computer system evaluation criteria will be used directly and indirectly in the certification
process. Along with applicable policy, it will be used directly as technical guidance for evaluation of
the total system and for specifying system security and certification requirements for new acquisitions.
Where a system being evaluated for certification employs a product that has undergone a Commercial
Product Evaluation, reports from that process will be used as input to the certification evaluation.
Technical data will be furnished to designers, evaluators and the Designated Approving Authorities to
support their needs for making decisions.
Policy
Requirement 1 - SECURITY POLICY - There must be an explicit and well-defined security policy
enforced by the system. Given identified subjects and objects, there must be a set of rules that are used
by the system to determine whether a given subject can be permitted to gain access to a specific object.
Computer systems of interest must enforce a mandatory security policy that can effectively implement
access rules for handling sensitive (e.g., classified) information.[7] These rules include requirements
such as: No person lacking proper personnel security clearance shall obtain access to classified
information. In addition, discretionary security controls are required to ensure that only selected users
or groups of users may obtain access to data (e.g., based on a need-to-know).
Requirement 2 - MARKING - Access control labels must be associated with objects. In order to
control access to information stored in a computer, according to the rules of a mandatory security
policy, it must be possible to mark every object with a label that reliably identifies the object's
sensitivity level (e.g., classification), and/or the modes of access accorded those subjects who may
potentially access the object.
Accountability
Requirement 3 - IDENTIFICATION - Individual subjects must be identified. Each access to
information must be mediated based on who is accessing the information and what classes of
information they are authorized to deal with. This identification and authorization information must be
securely maintained by the computer system and be associated with every active element that performs
some security-relevant action in the system.
Requirement 4 - ACCOUNTABILITY - Audit information must be selectively kept and protected so
that actions affecting security can be traced to the responsible party. A trusted system must be able to
record the occurrences of security-relevant events in an audit log. The capability to select the audit
events to be recorded is necessary to minimize the expense of auditing and to allow efficient analysis.
Audit data must be protected from modification and unauthorized destruction to permit detection and
after-the-fact investigations of security violations.
Assurance
Requirement 5 - ASSURANCE - The computer system must contain hardware/software mechanisms
that can be independently evaluated to provide sufficient assurance that the system enforces
requirements 1 through 4 above. In order to assure that the four requirements of Security Policy,
Marking, Identification, and Accountability are enforced by a computer system, there must be some
identified and unified collection of hardware and software controls that perform those functions. These
mechanisms are typically embedded in the operating system and are designed to carry out the assigned
tasks in a secure manner. The basis for trusting such system mechanisms in their operational setting
must be clearly documented such that it is possible to independently examine the evidence to evaluate
their sufficiency.
Requirement 6 - CONTINUOUS PROTECTION - The trusted mechanisms that enforce these basic
requirements must be continuously protected against tampering and/or unauthorized changes. No computer
system can be considered truly secure if the basic hardware and software mechanisms that enforce the
security policy are themselves subject to unauthorized modification or subversion. The continuous
protection requirement has direct implications throughout the computer system's life-cycle.
These fundamental requirements form the basis for the individual evaluation criteria applicable for
each evaluation division and class. The interested reader is referred to Section 5 of this document,
"Control Objectives for Trusted Computer Systems," for a more complete discussion and further
Universal Knowledge Solutions S.A.L.
- 35 -
the form of user guides, manuals, and the test and design documentation required for each class.
A reader using this publication for the first time may find it helpful to first read Part II, before
continuing on with Part I.
2.1.1 Security Policy
2.1.1.1 Discretionary Access Control
The TCB shall define and control access between named users and named objects (e.g., files and
programs) in the ADP system. The enforcement mechanism (e.g., self/group/public controls, access
control lists) shall allow users to specify and control sharing of those objects by named individuals or
defined groups or both.
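The self/group/public and access-control-list mechanism described above can be sketched as follows. This is a minimal illustration, not an implementation from the criteria; the class and method names are hypothetical.

```python
# Illustrative sketch of a discretionary access control check using an
# access control list of (principal -> modes) entries, where a principal
# is a named user ("alice") or a defined group ("@staff").

class ACLObject:
    def __init__(self, owner):
        self.owner = owner
        # The owner starts with full control, including the right to grant.
        self.entries = {owner: {"read", "write", "grant"}}

    def permits(self, user, groups, mode):
        """True if the named user, or any group they belong to, holds `mode`."""
        if mode in self.entries.get(user, set()):
            return True
        return any(mode in self.entries.get(g, set()) for g in groups)

    def grant(self, grantor, grantor_groups, principal, mode):
        """Access may only be assigned by users already authorized to grant it."""
        if not self.permits(grantor, grantor_groups, "grant"):
            raise PermissionError("grantor lacks authority over this object")
        self.entries.setdefault(principal, set()).add(mode)

obj = ACLObject(owner="alice")
obj.grant("alice", set(), "@staff", "read")  # share with a defined group
```

Note that `grant` itself enforces the last sentence of the requirement: access permission is assigned only by users who already possess it.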
2.1.2 Accountability
2.1.2.1 Identification and Authentication
The TCB shall require users to identify themselves to it before beginning to perform any other actions
that the TCB is expected to mediate. Furthermore, the TCB shall use a protected mechanism (e.g.,
passwords) to authenticate the user's identity. The TCB shall protect authentication data so that it
cannot be accessed by any unauthorized user.
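One way to read "protected mechanism" is that the TCB stores authentication data in a form that is useless even if read. A minimal sketch, assuming salted iterated hashing (parameter values are illustrative):

```python
import hashlib
import hmac
import os

# Sketch of a protected authentication mechanism: only a salted,
# iterated hash of each password is retained, so the stored
# authentication data does not reveal the passwords themselves.

_auth_db = {}  # user -> (salt, derived key); held by the TCB only

def enroll(user, password):
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _auth_db[user] = (salt, key)

def authenticate(user, password):
    """Return True only if the claimed identity is verified."""
    if user not in _auth_db:
        return False
    salt, key = _auth_db[user]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, key)

enroll("alice", "correct horse")
```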
2.1.3 Assurance
2.1.3.1 Operational Assurance
The security mechanisms of the ADP system shall be tested and found to work as claimed in the
system documentation. Testing shall be done to assure that there are no obvious ways for an
unauthorized user to bypass or otherwise defeat the security protection mechanisms of the TCB. (See
the Security Testing Guidelines.)
2.1.4 Documentation
2.1.4.1 Security Features User's Guide
A single summary, chapter, or manual in user documentation shall describe the protection mechanisms
provided by the TCB, guidelines on their use, and how they interact with one another.
2.1.4.2 Trusted Facility Manual
A manual addressed to the ADP System Administrator shall present cautions about functions and
privileges that should be controlled when running a secure facility.
2.1.4.3 Test Documentation
The system developer shall provide to the evaluators a document that describes the test plan, test
procedures that show how the security mechanisms were tested, and results of the security
mechanisms' functional testing.
2.1.4.4 Design Documentation
Documentation shall be available that provides a description of the manufacturer's philosophy of
protection and an explanation of how this philosophy is translated into the TCB. If the TCB is
composed of distinct modules, the interfaces between these modules shall be described.
2.2.2 Accountability
2.2.2.1 Identification and Authentication
The TCB shall require users to identify themselves to it before beginning to perform any other actions
that the TCB is expected to mediate. Furthermore, the TCB shall use a protected mechanism (e.g.,
passwords) to authenticate the user's identity. The TCB shall protect authentication data so that it
cannot be accessed by any unauthorized user. The TCB shall be able to enforce individual
accountability by providing the capability to uniquely identify each individual ADP system user. The
TCB shall also provide the capability of associating this identity with all auditable actions taken by that
individual.
2.2.2.2 Audit
The TCB shall be able to create, maintain, and protect from modification or unauthorized access or
destruction an audit trail of accesses to the objects it protects. The audit data shall be protected by the
TCB so that read access to it is limited to those who are authorized for audit data. The TCB shall be
able to record the following types of events: use of identification and authentication mechanisms,
introduction of objects into a user's address space (e.g., file open, program initiation), deletion of
objects, and actions taken by computer operators and system administrators and/or system security
officers, and other security relevant events. For each recorded event, the audit record shall identify:
date and time of the event, user, type of event, and success or failure of the event. For
identification/authentication events the origin of request (e.g., terminal ID) shall be included in the
audit record. For events that introduce an object into a user's address space and for object deletion
events the audit record shall include the name of the object. The ADP system administrator shall be
able to selectively audit the actions of any one or more users based on individual identity.
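The required audit record content can be pictured as a simple structure. This is an illustrative sketch; the field names are hypothetical, but the fields mirror those the criteria enumerate (date and time, user, type of event, success or failure, plus origin and object name where applicable).

```python
from dataclasses import dataclass
import datetime

@dataclass(frozen=True)
class AuditRecord:
    timestamp: datetime.datetime
    user: str
    event_type: str        # e.g. "login", "file_open", "object_delete"
    success: bool
    origin: str = ""       # e.g. terminal ID, for I&A events
    object_name: str = ""  # for object introduction/deletion events

def select_by_user(trail, users):
    """Selective audit: records for any one or more named users."""
    return [r for r in trail if r.user in users]

trail = [
    AuditRecord(datetime.datetime(2024, 1, 1, 9, 0), "alice", "login",
                True, origin="tty1"),
    AuditRecord(datetime.datetime(2024, 1, 1, 9, 5), "bob", "file_open",
                False, object_name="/payroll"),
]
```

`select_by_user` corresponds to the administrator's ability to selectively audit the actions of one or more users based on individual identity.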
2.2.3 Assurance
2.2.3.1 Operational Assurance
2.2.3.1.1 System Architecture
The TCB shall maintain a domain for its own execution that protects it from external interference or
tampering (e.g., by modification of its code or data structures). Resources controlled by the TCB may
be a defined subset of the subjects and objects in the ADP system. The TCB shall isolate the resources
to be protected so that they are subject to the access control and auditing requirements.
2.2.3.1.2 System Integrity
Hardware and/or software features shall be provided that can be used to periodically validate the
correct operation of the on-site hardware and firmware elements of the TCB.
2.2.3.2 Life-Cycle Assurance
2.2.3.2.1 Security Testing
The security mechanisms of the ADP system shall be tested and found to work as claimed in the
system documentation. Testing shall be done to assure that there are no obvious ways for an
unauthorized user to bypass or otherwise defeat the security protection mechanisms of the TCB.
Testing shall also include a search for obvious flaws that would allow violation of resource isolation,
or that would permit unauthorized access to the audit or authentication data. (See the Security Testing
guidelines.)
2.2.4 Documentation
2.2.4.1 Security Features User's Guide
A single summary, chapter, or manual in user documentation shall describe the protection mechanisms
provided by the TCB, guidelines on their use, and how they interact with one another.
3.1.1 Security Policy
3.1.1.1 Discretionary Access Control
The TCB shall define and control access between named users and named objects (e.g., files and
programs) in the ADP system. The enforcement mechanism (e.g., self/group/public controls, access
control lists) shall allow users to specify and control sharing of those objects by named individuals, or
defined groups of individuals, or by both, and shall provide controls to limit propagation of access
rights. The discretionary access control mechanism shall, either by explicit user action or by default,
provide that objects are protected from unauthorized access. These access controls shall be capable of
including or excluding access to the granularity of a single user. Access permission to an object by
users not already possessing access permission shall only be assigned by authorized users.
3.1.1.2 Object Reuse
All authorizations to the information contained within a storage object shall be revoked prior to initial
assignment, allocation or reallocation to a subject from the TCB's pool of unused storage objects. No
information, including encrypted representations of information, produced by a prior subject's actions
is to be available to any subject that obtains access to an object that has been released back to the
system.
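The object reuse rule can be sketched as a storage pool that scrubs each object before handing it to a new subject, so that no residue of the prior subject's data remains observable. The class and method names below are illustrative, not part of the criteria.

```python
# Sketch of the object-reuse requirement: an object drawn from the
# TCB's pool of unused storage objects carries no information produced
# by a prior subject's actions.

class StoragePool:
    def __init__(self, n_objects, size):
        self._free = [bytearray(size) for _ in range(n_objects)]

    def allocate(self):
        obj = self._free.pop()
        # Revoke all prior contents on (re)allocation.
        for i in range(len(obj)):
            obj[i] = 0
        return obj

    def release(self, obj):
        # Contents are scrubbed on the next allocation, before any new
        # subject can observe them.
        self._free.append(obj)

pool = StoragePool(1, 8)
buf = pool.allocate()
buf[:] = b"SECRETS!"        # a prior subject writes sensitive data
pool.release(buf)
reused = pool.allocate()    # a new subject receives the same storage
```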
3.1.1.3 Labels
Sensitivity labels associated with each subject and storage object under its control (e.g., process, file,
segment, device) shall be maintained by the TCB. These labels shall be used as the basis for mandatory
access control decisions. In order to import non-labelled data, the TCB shall request and receive from
an authorized user the security level of the data, and all such actions shall be auditable by the TCB.
3.1.1.3.1 Label Integrity
Sensitivity labels shall accurately represent security levels of the specific subjects or objects with
which they are associated. When exported by the TCB, sensitivity labels shall accurately and
unambiguously represent the internal labels and shall be associated with the information being
exported.
3.1.1.3.2 Exportation of Labeled Information
The TCB shall designate each communication channel and I/O device as either single-level or
multilevel. Any change in this designation shall be done manually and shall be auditable by the TCB.
The TCB shall maintain and be able to audit any change in the security level or levels associated with a
communication channel or I/O device.
3.1.1.3.2.1 Exportation to Multilevel Devices
When the TCB exports an object to a multilevel I/O device, the sensitivity label associated with that
object shall also be exported and shall reside on the same physical medium as the exported information
and shall be in the same form (i.e., machine-readable or human-readable form). When the TCB exports
or imports an object over a multilevel communication channel, the protocol used on that channel shall
provide for the unambiguous pairing between the sensitivity labels and the associated information that
is sent or received.
3.1.1.3.2.2 Exportation to Single-Level Devices
Single-level I/O devices and single-level communication channels are not required to maintain the
sensitivity labels of the information they process. However, the TCB shall include a mechanism by
which the TCB and an authorized user reliably communicate to designate the single security level of
information imported or exported via single-level communication channels or I/O devices.
3.1.1.3.2.3 Labelling Human-Readable Output
The ADP system administrator shall be able to specify the printable label names associated with
exported sensitivity labels. The TCB shall mark the beginning and end of all human-readable, paged,
hardcopy output (e.g., line printer output) with human-readable sensitivity labels that properly*
represent the sensitivity of the output. The TCB shall, by default, mark the top and bottom of each page
of human-readable, paged, hardcopy output (e.g., line printer output) with human-readable sensitivity
labels that properly* represent the overall sensitivity of the output or that properly* represent the
sensitivity of the information on the page. The TCB shall, by default and in an appropriate manner,
mark other forms of human-readable output (e.g., maps, graphics) with human-readable sensitivity
labels that properly* represent the sensitivity of the output. Any override of these marking defaults
shall be auditable by the TCB.
* The hierarchical classification component in human-readable sensitivity labels shall be equal to the
greatest hierarchical classification of any of the information in the output that the labels refer to; the
non-hierarchical category component shall include all of the non-hierarchical categories of the
information in the output the labels refer to, but no other non-hierarchical categories.
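The footnoted rule is a small computation: the printed label takes the greatest hierarchical classification present in the output, together with exactly the union of the non-hierarchical categories present. A minimal sketch, with an illustrative level ordering:

```python
# Sketch of the output-label rule: greatest classification, union of
# categories. The level names and their ordering are illustrative.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def output_label(item_labels):
    """item_labels: iterable of (classification, set_of_categories) pairs
    for the information appearing in the output."""
    items = list(item_labels)
    top = max(items, key=lambda lab: LEVELS[lab[0]])[0]
    cats = set().union(*(c for _, c in items))
    return top, cats
```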
3.1.1.4 Mandatory Access Control
The TCB shall enforce a mandatory access control policy over all subjects and storage objects under its
control (e.g., processes, files, segments, devices). These subjects and objects shall be assigned
sensitivity labels that are a combination of hierarchical classification levels and non-hierarchical
categories, and the labels shall be used as the basis for mandatory access control decisions. The TCB
shall be able to support two or more such security levels. (See the Mandatory Access Control
Guidelines.) The following requirements shall hold for all accesses between subjects and objects
controlled by the TCB: a subject can read an object only if the hierarchical classification in the
subject's security level is greater than or equal to the hierarchical classification in the object's security
level and the non-hierarchical categories in the subject's security level include all the non-hierarchical
categories in the object's security level. A subject can write an object only if the hierarchical
classification in the subject's security level is less than or equal to the hierarchical classification in the
object's security level and all the non-hierarchical categories in the subject's security level are included
in the non-hierarchical categories in the object's security level. Identification and authentication data
shall be used by the TCB to authenticate the user's identity and to ensure that the security level and
authorization of subjects external to the TCB that may be created to act on behalf of the individual user
are dominated by the clearance and authorization of that user.
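The two access rules just stated are the familiar "read down, write up" conditions over labels of the form (hierarchical level, set of non-hierarchical categories). A minimal sketch, with an illustrative level ordering:

```python
# Sketch of the mandatory access control rules stated above. A label is
# (level, categories); the level names and ordering are illustrative.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a, b):
    """Label a dominates label b: its level is greater than or equal,
    and its categories are a superset of b's."""
    (lvl_a, cats_a), (lvl_b, cats_b) = a, b
    return LEVELS[lvl_a] >= LEVELS[lvl_b] and cats_a >= cats_b

def may_read(subject, obj):
    # Read only if the subject's label dominates the object's.
    return dominates(subject, obj)

def may_write(subject, obj):
    # Write only if the object's label dominates the subject's.
    return dominates(obj, subject)
```

Note that the same `dominates` relation also expresses the final sentence of the requirement: subjects created on a user's behalf must be dominated by that user's clearance and authorization.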
3.1.2 Accountability
3.1.2.1 Identification and Authentication
The TCB shall require users to identify themselves to it before beginning to perform any other actions
that the TCB is expected to mediate. Furthermore, the TCB shall maintain authentication data that
includes information for verifying the identity of individual users (e.g., passwords) as well as
information for determining the clearance and authorizations of individual users. This data shall be
used by the TCB to authenticate the user's identity and to ensure that the security level and
authorizations of subjects external to the TCB that may be created to act on behalf of the individual
user are dominated by the clearance and authorization of that user. The TCB shall protect
authentication data so that it cannot be accessed by any unauthorized user. The TCB shall be able to
enforce individual accountability by providing the capability to uniquely identify each individual ADP
system user. The TCB shall also provide the capability of associating this identity with all auditable
actions taken by that individual.
3.1.2.2 Audit
The TCB shall be able to create, maintain, and protect from modification or unauthorized access or
destruction an audit trail of accesses to the objects it protects. The audit data shall be protected by the
TCB so that read access to it is limited to those who are authorized for audit data. The TCB shall be
able to record the following types of events: use of identification and authentication mechanisms,
introduction of objects into a user's address space (e.g., file open, program initiation), deletion of
objects, and actions taken by computer operators and system administrators and/or system security
officers and other security relevant events. The TCB shall also be able to audit any override of human-readable output markings. For each recorded event, the audit record shall identify: date and time of the
event, user, type of event, and success or failure of the event. For identification/authentication events
the origin of request (e.g., terminal ID) shall be included in the audit record. For events that introduce
an object into a user's address space and for object deletion events the audit record shall include the
name of the object and the object's security level. The ADP system administrator shall be able to
selectively audit the actions of any one or more users based on individual identity and/or object
security level.
3.1.3 Assurance
3.1.3.1 Operational Assurance
The security mechanisms of the ADP system shall be tested and found to work as claimed in the
system documentation. A team of individuals who thoroughly understand the specific implementation
of the TCB shall subject its design documentation, source code, and object code to thorough analysis
and testing. Their objectives shall be: to uncover all design and implementation flaws that would
permit a subject external to the TCB to read, change, or delete data normally denied under the
mandatory or discretionary security policy enforced by the TCB; as well as to assure that no subject
(without authorization to do so) is able to cause the TCB to enter a state such that it is unable to
respond to communications initiated by other users. All discovered flaws shall be removed or
neutralized and the TCB retested to demonstrate that they have been eliminated and that new flaws
have not been introduced. (See the Security Testing Guidelines.)
3.1.3.2.2 Design Specification and Verification
An informal or formal model of the security policy supported by the TCB shall be maintained over the
life cycle of the ADP system and demonstrated to be consistent with its axioms.
3.1.4 Documentation
3.1.4.1 Security Features User's Guide
A single summary, chapter, or manual in user documentation shall describe the protection mechanisms
provided by the TCB, guidelines on their use, and how they interact with one another.
3.1.4.2 Trusted Facility Manual
A manual addressed to the ADP system administrator shall present cautions about functions and
privileges that should be controlled when running a secure facility. The procedures for examining and
maintaining the audit files as well as the detailed audit record structure for each type of audit event
shall be given. The manual shall describe the operator and administrator functions related to security,
to include changing the security characteristics of a user. It shall provide guidelines on the consistent
and effective use of the protection features of the system, how they interact, how to securely generate a
new TCB, and facility procedures, warnings, and privileges that need to be controlled in order to
operate the facility in a secure manner.
3.1.4.3 Test Documentation
The system developer shall provide to the evaluators a document that describes the test plan, test
procedures that show how the security mechanisms were tested, and results of the security
mechanisms' functional testing.
3.1.4.4 Design Documentation
Documentation shall be available that provides a description of the manufacturer's philosophy of
protection and an explanation of how this philosophy is translated into the TCB. If the TCB is
composed of distinct modules, the interfaces between these modules shall be described. An informal or
formal description of the security policy model enforced by the TCB shall be available and an
explanation provided to show that it is sufficient to enforce the security policy. The specific TCB
protection mechanisms shall be identified and an explanation given to show that they satisfy the
model.
3.2.1.3.2.2 Exportation to Single-Level Devices
Single-level I/O devices and single-level communication channels are not required to maintain the
sensitivity labels of the information they process. However, the TCB shall include a mechanism by
which the TCB and an authorized user reliably communicate to designate the single security level of
information imported or exported via single-level communication channels or I/O devices.
3.2.1.3.2.3 Labelling Human-Readable Output
The ADP system administrator shall be able to specify the printable label names associated with
exported sensitivity labels. The TCB shall mark the beginning and end of all human-readable, paged,
hardcopy output (e.g., line printer output) with human-readable sensitivity labels that properly*
represent the sensitivity of the output. The TCB shall, by default, mark the top and bottom of each
page of human-readable, paged, hardcopy output (e.g., line printer output) with human-readable
sensitivity labels that properly* represent the overall sensitivity of the output or that properly*
represent the sensitivity of the information on the page. The TCB shall, by default and in an
appropriate manner, mark other forms of human-readable output (e.g., maps, graphics) with human-readable sensitivity labels that properly* represent the sensitivity of the output. Any override of these
marking defaults shall be auditable by the TCB.
3.2.1.3.3 Subject Sensitivity Labels
The TCB shall immediately notify a terminal user of each change in the security level associated with
that user during an interactive session. A terminal user shall be able to query the TCB as desired for a
display of the subject's complete sensitivity label.
3.2.1.3.4 Device Labels
The TCB shall support the assignment of minimum and maximum security levels to all attached
physical devices. These security levels shall be used by the TCB to enforce constraints imposed by the
physical environments in which the devices are located.
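The device-label requirement amounts to a range check: information may pass through a device only if its level falls between the device's assigned minimum and maximum. A minimal sketch, with an illustrative level ordering:

```python
# Sketch of device labels: each attached physical device carries a
# minimum and maximum security level, and the TCB refuses to use the
# device for information outside that range.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def device_permits(device_min, device_max, info_level):
    """True if `info_level` lies within the device's assigned range."""
    return LEVELS[device_min] <= LEVELS[info_level] <= LEVELS[device_max]
```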
3.2.1.4 Mandatory Access Control
The TCB shall enforce a mandatory access control policy over all resources (i.e., subjects, storage
objects, and I/O devices) that are directly or indirectly accessible by subjects external to the TCB. These
subjects and objects shall be assigned sensitivity labels that are a combination of hierarchical
classification levels and non-hierarchical categories, and the labels shall be used as the basis for
mandatory access control decisions. The TCB shall be able to support two or more such security levels.
(See the Mandatory Access Control guidelines.) The following requirements shall hold for all accesses
between all subjects external to the TCB and all objects directly or indirectly accessible by these
subjects: A subject can read an object only if the hierarchical classification in the subject's security
level is greater than or equal to the hierarchical classification in the object's security level and the non-hierarchical categories in the subject's security level include all the non-hierarchical categories in the
object's security level. A subject can write an object only if the hierarchical classification in the
subject's security level is less than or equal to the hierarchical classification in the object's security
level and all the non-hierarchical categories in the subject's security level are included in the non-hierarchical categories in the object's security level. Identification and authentication data shall be used
by the TCB to authenticate the user's identity and to ensure that the security level and authorization of
subjects external to the TCB that may be created to act on behalf of the individual user are dominated
by the clearance and authorization of that user.
3.2.2 Accountability
3.2.2.1 Identification and Authentication
The TCB shall require users to identify themselves to it before beginning to perform any other actions
that the TCB is expected to mediate. Furthermore, the TCB shall maintain authentication data that
includes information for verifying the identity of individual users (e.g., passwords) as well as
information for determining the clearance and authorizations of individual users. This data shall be
used by the TCB to authenticate the user's identity and to ensure that the security level and
authorizations of subjects external to the TCB that may be created to act on behalf of the individual
user are dominated by the clearance and authorization of that user. The TCB shall protect
authentication data so that it cannot be accessed by any unauthorized user. The TCB shall be able to
enforce individual accountability by providing the capability to uniquely identify each individual ADP
system user. The TCB shall also provide the capability of associating this identity with all auditable
actions taken by that individual.
3.2.2.1.1 Trusted Path
The TCB shall support a trusted communication path between itself and users for initial login and
authentication. Communications via this path shall be initiated exclusively by a user.
3.2.2.2 Audit
The TCB shall be able to create, maintain, and protect from modification or unauthorized access or
destruction an audit trail of accesses to the objects it protects. The audit data shall be protected by the
TCB so that read access to it is limited to those who are authorized for audit data. The TCB shall be
able to record the following types of events: use of identification and authentication mechanisms,
introduction of objects into a user's address space (e.g., file open, program initiation), deletion of
objects, and actions taken by computer operators and system administrators and/or system security
officers, and other security relevant events. The TCB shall also be able to audit any override of human-readable output markings. For each recorded event, the audit record shall identify: date and time of the
event, user, type of event, and success or failure of the event. For identification/ authentication events
the origin of request (e.g., terminal ID) shall be included in the audit record. For events that introduce
an object into a user's address space and for object deletion events the audit record shall include the
name of the object and the object's security level. The ADP system administrator shall be able to
selectively audit the actions of any one or more users based on individual identity and/or object
security level. The TCB shall be able to audit the identified events that may be used in the exploitation
of covert storage channels.
3.2.3 Assurance
3.2.3.1 Operational Assurance
The security mechanisms of the ADP system shall be tested and found to work as claimed in the
system documentation. A team of individuals who thoroughly understand the specific implementation
of the TCB shall subject its design documentation, source code, and object code to thorough analysis
and testing. Their objectives shall be: to uncover all design and implementation flaws that would
permit a subject external to the TCB to read, change, or delete data normally denied under the
mandatory or discretionary security policy enforced by the TCB; as well as to assure that no subject
(without authorization to do so) is able to cause the TCB to enter a state such that it is unable to
respond to communications initiated by other users. The TCB shall be found relatively resistant to
penetration. All discovered flaws shall be corrected and the TCB retested to demonstrate that they have
been eliminated and that new flaws have not been introduced. Testing shall demonstrate that the TCB
implementation is consistent with the descriptive top-level specification. (See the Security Testing
Guidelines.)
3.2.3.2.2 Design Specification and Verification
A formal model of the security policy supported by the TCB shall be maintained over the life cycle of
the ADP system that is proven consistent with its axioms. A descriptive top-level specification (DTLS)
of the TCB shall be maintained that completely and accurately describes the TCB in terms of
exceptions, error messages, and effects. It shall be shown to be an accurate description of the TCB
interface.
3.2.3.2.3 Configuration Management
During development and maintenance of the TCB, a configuration management system shall be in
place that maintains control of changes to the descriptive top-level specification, other design data,
implementation documentation, source code, the running version of the object code, and test fixtures
and documentation. The configuration management system shall assure a consistent mapping among
all documentation and code associated with the current version of the TCB. Tools shall be provided for
generation of a new version of the TCB from source code. Also available shall be tools for comparing
a newly generated version with the previous TCB version in order to ascertain that only the intended
changes have been made in the code that will actually be used as the new version of the TCB.
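The comparison tool required here must establish that only the intended changes distinguish a newly generated TCB from its predecessor. One common way to sketch this, assuming a hypothetical file-digest approach rather than anything the standard prescribes:

```python
import hashlib
from pathlib import Path

def digest_tree(root: Path) -> dict:
    """Map each file path (relative to root) to a SHA-256 digest of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def compare_versions(old: dict, new: dict) -> dict:
    """Report which files were added, removed, or changed between two builds."""
    return {
        "added":   sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(k for k in old.keys() & new.keys() if old[k] != new[k]),
    }

# Hypothetical digest maps standing in for two generated TCB versions.
old = {"tcb/audit.o": "aa", "tcb/login.o": "bb"}
new = {"tcb/audit.o": "aa", "tcb/login.o": "cc", "tcb/mac.o": "dd"}
report = compare_versions(old, new)
```

An evaluator would then check the `added`/`changed` lists against the approved change record; anything outside it indicates an unintended modification.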
3.2.4 Documentation
Universal Knowledge Solutions S.A.L.
- 49 -
3.2.4.1 Security Features User's Guide
A single summary, chapter, or manual in user documentation shall describe the protection mechanisms
provided by the TCB, guidelines on their use, and how they interact with one another.
3.2.4.2 Trusted Facility Manual
A manual addressed to the ADP system administrator shall present cautions about functions and
privileges that should be controlled when running a secure facility. The procedures for examining and
maintaining the audit files as well as the detailed audit record structure for each type of audit event
shall be given. The manual shall describe the operator and administrator functions related to security,
to include changing the security characteristics of a user. It shall provide guidelines on the consistent
and effective use of the protection features of the system, how they interact, how to securely generate a
new TCB, and facility procedures, warnings, and privileges that need to be controlled in order to
operate the facility in a secure manner. The TCB modules that contain the reference validation
mechanism shall be identified. The procedures for secure generation of a new TCB from source after
modification of any modules in the TCB shall be described.
3.2.4.3 Test Documentation
The system developer shall provide to the evaluators a document that
describes the test plan, test procedures that show how the security mechanisms were tested, and results
of the security mechanisms' functional testing. It shall include results of testing the effectiveness of the
methods used to reduce covert channel bandwidths.
3.2.4.4 Design Documentation
Documentation shall be available that provides a description of the manufacturer's philosophy of
protection and an explanation of how this philosophy is translated into the TCB. The interfaces
between the TCB modules shall be described. A formal description of the security policy model
enforced by the TCB shall be available and proven that it is sufficient to enforce the security policy.
The specific TCB protection mechanisms shall be identified and an explanation given to show that
they satisfy the model. The descriptive top-level specification (DTLS) shall be shown to be an accurate
description of the TCB interface. Documentation shall describe how the TCB implements the reference
monitor concept and give an explanation why it is tamper resistant, cannot be bypassed, and is
correctly implemented. Documentation shall describe how the TCB is structured to facilitate testing
and to enforce least privilege. This documentation shall also present the results of the covert channel
analysis and the tradeoffs involved in restricting the channels. All auditable events that may be used in
the exploitation of known covert storage channels shall be identified. The bandwidths of known covert
storage channels the use of which is not detectable by the auditing mechanisms, shall be provided. (See
the Covert Channel Guideline section.)
The class (B3) TCB must satisfy the reference monitor requirements that it mediate all accesses of
subjects to objects, be tamperproof, and be small enough to be subjected to analysis and tests. To this
end, the TCB is structured to exclude code not essential to security policy enforcement, with
significant system engineering during TCB design and implementation directed toward minimizing its
complexity. A security administrator is supported, audit mechanisms are expanded to signal
security-relevant events, and system recovery procedures are required. The system is highly resistant to
penetration. The following are minimal requirements for systems assigned a class (B3) rating:
3.3.1 Security Policy
3.3.1.1 Discretionary Access Control
The TCB shall define and control access between named users and named objects (e.g., files and
programs) in the ADP system. The enforcement mechanism (e.g., access control lists) shall allow users
to specify and control sharing of those objects, and shall provide controls to limit propagation of access
rights. The discretionary access control mechanism shall, either by explicit user action or by default,
provide that objects are protected from unauthorized access. These access controls shall be capable of
specifying, for each named object, a list of named individuals and a list of groups of named individuals
with their respective modes of access to that object. Furthermore, for each such named object, it shall
be possible to specify a list of named individuals and a list of groups of named individuals for which
no access to the object is to be given. Access permission to an object by users not already possessing
access permission shall only be assigned by authorized users.
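The discretionary controls above require, per named object, both grant lists (individuals and groups with modes) and explicit exclusion lists. A minimal sketch of such a check, with hypothetical names not drawn from the standard:

```python
from dataclasses import dataclass, field

@dataclass
class Acl:
    """Per-object ACL: allowed users/groups with modes, plus explicit deny lists."""
    user_modes: dict = field(default_factory=dict)    # user  -> set of modes
    group_modes: dict = field(default_factory=dict)   # group -> set of modes
    denied_users: set = field(default_factory=set)
    denied_groups: set = field(default_factory=set)

def dac_check(acl: Acl, user: str, groups: set, mode: str) -> bool:
    # An explicit denial overrides any grant, for the user or any of their groups.
    if user in acl.denied_users or groups & acl.denied_groups:
        return False
    if mode in acl.user_modes.get(user, set()):
        return True
    return any(mode in acl.group_modes.get(g, set()) for g in groups)

acl = Acl(user_modes={"alice": {"read", "write"}},
          group_modes={"staff": {"read"}},
          denied_users={"mallory"})
```

Note the ordering: the "no access" lists are consulted first, so listing a user there revokes access even if one of their groups holds a grant.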
3.3.1.2 Object Reuse
All authorizations to the information contained within a storage object shall be revoked prior to initial
assignment, allocation or reallocation to a subject from the TCB's pool of unused storage objects. No
information, including encrypted representations of information, produced by a prior subject's actions is
to be available to any subject that obtains access to an object that has been released back to the system.
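One way to satisfy the object reuse requirement is to scrub every storage object before it is handed to a new subject. A minimal sketch under that assumption (the pool and method names are illustrative, not from the standard):

```python
class StoragePool:
    """Fixed-size storage objects; each is scrubbed before reassignment."""
    def __init__(self, count: int, size: int):
        self._free = [bytearray(size) for _ in range(count)]

    def allocate(self) -> bytearray:
        obj = self._free.pop()
        obj[:] = bytes(len(obj))   # revoke prior contents: overwrite with zeros
        return obj

    def release(self, obj: bytearray) -> None:
        # Scrubbing at allocation time is enough: the next subject never sees
        # the releasing subject's data, because it is cleared first.
        self._free.append(obj)

pool = StoragePool(count=1, size=8)
buf = pool.allocate()
buf[:4] = b"SECR"        # a prior subject writes sensitive data
pool.release(buf)
again = pool.allocate()  # same underlying object, reassigned
```

After reallocation `again` contains only zeros, so no residue (including any encrypted form) of the prior subject's data is visible.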
3.3.1.3 Labels
Sensitivity labels associated with each ADP system resource (e.g., subject, storage object, ROM) that
is directly or indirectly accessible by subjects external to the TCB shall be maintained by the TCB.
These labels shall be used as the basis for mandatory access control decisions. In order to import non-labeled data, the TCB shall request and receive from an authorized user the security level of the data,
and all such actions shall be auditable by the TCB.
3.3.1.3.1 Label Integrity
Sensitivity labels shall accurately represent security levels of the specific subjects or objects with
which they are associated. When exported by the TCB, sensitivity labels shall accurately and
unambiguously represent the internal labels and shall be associated with the information being
exported.
3.3.1.3.2 Exportation of Labeled Information
The TCB shall designate each communication channel and I/O device as either single-level or
multilevel. Any change in this designation shall be done manually and shall be auditable by the TCB.
The TCB shall maintain and be able to audit any change in the security level or levels associated with a
communication channel or I/O device.
3.3.1.4 Mandatory Access Control
The TCB shall enforce a mandatory access control policy over all resources (i.e., subjects, storage
objects, and I/O devices) that are directly or indirectly accessible by subjects external to the TCB.
These subjects and objects shall be assigned sensitivity labels that are a combination of hierarchical
classification levels and non-hierarchical categories, and the labels shall be used as the basis for
mandatory access control decisions. The TCB shall be able to support two or more such security levels.
(See the Mandatory Access Control guidelines.) The following requirements shall hold for all accesses
between all subjects external to the TCB and all objects directly or indirectly accessible by these
subjects: A subject can read an object only if the hierarchical classification in the subject's security
level is greater than or equal to the hierarchical classification in the object's security level and the
non-hierarchical categories in the subject's security level include all the non-hierarchical categories in the
object's security level. A subject can write an object only if the hierarchical classification in the
subject's security level is less than or equal to the hierarchical classification in the object's security
level and all the non-hierarchical categories in the subject's security level are included in the
non-hierarchical categories in the object's security level. Identification and authentication data shall be used
by the TCB to authenticate the user's identity and to ensure that the security level and authorization of
subjects external to the TCB that may be created to act on behalf of the individual user are dominated
by the clearance and authorization of that user.
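The read and write rules above are the classic dominance conditions ("no read up, no write down"). A minimal sketch of the dominance check, with hypothetical classification names chosen for illustration:

```python
from dataclasses import dataclass

# Hypothetical hierarchical classifications, lowest to highest.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class Label:
    classification: str
    categories: frozenset  # non-hierarchical categories

def dominates(a: Label, b: Label) -> bool:
    """a dominates b: a's classification >= b's and a's categories include b's."""
    return (LEVELS[a.classification] >= LEVELS[b.classification]
            and a.categories >= b.categories)

def can_read(subject: Label, obj: Label) -> bool:
    return dominates(subject, obj)   # subject's level dominates the object's

def can_write(subject: Label, obj: Label) -> bool:
    return dominates(obj, subject)   # object's level dominates the subject's

s = Label("SECRET", frozenset({"NATO"}))
o_low = Label("CONFIDENTIAL", frozenset({"NATO"}))
o_high = Label("TOP SECRET", frozenset({"NATO", "CRYPTO"}))
```

Here the SECRET/{NATO} subject may read the CONFIDENTIAL object but not the TOP SECRET one, and may write "up" to the TOP SECRET object but not "down" to the CONFIDENTIAL one.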
3.3.2 Accountability
3.3.2.1 Identification and Authentication
The TCB shall require users to identify themselves to it before beginning to perform any other actions
that the TCB is expected to mediate. Furthermore, the TCB shall maintain authentication data that
includes information for verifying the identity of individual users (e.g., passwords) as well as
information for determining the clearance and authorizations of individual users. This data shall be
used by the TCB to authenticate the user's identity and to ensure that the security level and
authorizations of subjects external to the TCB that may be created to act on behalf of the individual
user are dominated by the clearance and authorization of that user. The TCB shall protect
authentication data so that it cannot be accessed by any unauthorized user. The TCB shall be able to
enforce individual accountability by providing the capability to uniquely identify each individual ADP
system user. The TCB shall also provide the capability of associating this identity with all auditable
actions taken by that individual.
3.3.2.1.1 Trusted Path
The TCB shall support a trusted communication path between itself and users for use when a positive
TCB-to- user connection is required (e.g., login, change subject security level). Communications via
this trusted path shall be activated exclusively by a user or the TCB and shall be logically isolated and
unmistakably distinguishable from other paths.
3.3.2.2 Audit
The TCB shall be able to create, maintain, and protect from modification or unauthorized access or
destruction an audit trail of accesses to the objects it protects. The audit data shall be protected by the
TCB so that read access to it is limited to those who are authorized for audit data. The TCB shall be
able to record the following types of events: use of identification and authentication mechanisms,
introduction of objects into a user's address space (e.g., file open, program initiation), deletion of
objects, and actions taken by computer operators and system administrators and/or system security
officers and other security relevant events. The TCB shall also be able to audit any override of human-readable output markings. For each recorded event, the audit record shall identify: date and time of the
event, user, type of event, and success or failure of the event. For identification/authentication events
the origin of request (e.g., terminal ID) shall be included in the audit record. For events that introduce
an object into a user's address space and for object deletion events the audit record shall include the
name of the object and the object's security level. The ADP system administrator shall be able to
selectively audit the actions of any one or more users based on individual identity and/or object
security level. The TCB shall be able to audit the identified events that may be used in the exploitation
of covert storage channels. The TCB shall contain a mechanism that is able to monitor the occurrence
or accumulation of security auditable events that may indicate an imminent violation of security
policy. This mechanism shall be able to immediately notify the security administrator when thresholds
are exceeded, and if the occurrence or accumulation of these security relevant events continues, the
system shall take the least disruptive action to terminate the event.
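The B3-specific monitoring requirement above can be pictured as a sliding-window event counter with an administrator notification hook. A minimal sketch, with hypothetical class and threshold choices that the standard does not specify:

```python
from collections import deque
import time

class AuditAlarm:
    """Count security-relevant events in a sliding window; alert past a threshold."""
    def __init__(self, threshold: int, window_seconds: float, notify):
        self.threshold = threshold
        self.window = window_seconds
        self.notify = notify          # callback to the security administrator
        self._events = deque()        # timestamps of recent events

    def record(self, event: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        self._events.append(now)
        # Drop events that have aged out of the window.
        while self._events and now - self._events[0] > self.window:
            self._events.popleft()
        if len(self._events) > self.threshold:
            self.notify(f"audit threshold exceeded after event: {event}")
            return True   # caller may take the least-disruptive terminating action
        return False

alerts = []
alarm = AuditAlarm(threshold=3, window_seconds=60.0, notify=alerts.append)
fired = [alarm.record("failed_login", now=t) for t in (0, 1, 2, 3)]
```

The fourth failed login inside the window exceeds the threshold, triggers the notification, and signals the caller that termination of the activity may be warranted.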
3.3.3 Assurance
3.3.3.2 Life-Cycle Assurance
3.3.3.2.1 Security Testing
The security mechanisms of the ADP system shall be tested and found to work as claimed in the
system documentation. A team of individuals who thoroughly understand the specific implementation
of the TCB shall subject its design documentation, source code, and object code to thorough analysis
and testing. Their objectives shall be: to uncover all design and implementation flaws that would
permit a subject external to the TCB to read, change, or delete data normally denied under the
mandatory or discretionary security policy enforced by the TCB; as well as to assure that no subject
(without authorization to do so) is able to cause the TCB to enter a state such that it is unable to
respond to communications initiated by other users. The TCB shall be found resistant to penetration.
All discovered flaws shall be corrected and the TCB retested to demonstrate that they have been
eliminated and that new flaws have not been introduced. Testing shall demonstrate that the TCB
implementation is consistent with the descriptive top-level specification. (See the Security Testing
Guidelines.) No design flaws and no more than a few correctable implementation flaws may be found
during testing and there shall be reasonable confidence that few remain.
3.3.3.2.2 Design Specification and Verification
A formal model of the security policy supported by the TCB shall be maintained over the life cycle of
the ADP system that is proven consistent with its axioms. A descriptive top-level specification (DTLS)
of the TCB shall be maintained that completely and accurately describes the TCB in terms of
exceptions, error messages, and effects. It shall be shown to be an accurate description of the TCB
interface. A convincing argument shall be given that the DTLS is consistent with the model.
3.3.3.2.3 Configuration Management
During development and maintenance of the TCB, a configuration management system shall be in
place that maintains control of changes to the descriptive top-level specification, other design data,
implementation documentation, source code, the running version of the object code, and test fixtures
and documentation. The configuration management system shall assure a consistent mapping among
all documentation and code associated with the current version of the TCB. Tools shall be provided for
generation of a new version of the TCB from source code. Also available shall be tools for comparing
a newly generated version with the previous TCB version in order to ascertain that only the intended
changes have been made in the code that will actually be used as the new version of the TCB.
3.3.4 Documentation
3.3.4.1 Security Features User's Guide
A single summary, chapter, or manual in user documentation shall describe the protection mechanisms
provided by the TCB, guidelines on their use, and how they interact with one another.
3.3.4.2 Trusted Facility Manual
A manual addressed to the ADP system administrator shall present cautions about functions and
privileges that should be controlled when running a secure facility. The procedures for examining and
maintaining the audit files as well as the detailed audit record structure for each type of audit event
shall be given. The manual shall describe the operator and administrator functions related to security,
to include changing the security characteristics of a user. It shall provide guidelines on the consistent
and effective use of the protection features of the system, how they interact, how to securely generate a
new TCB, and facility procedures, warnings, and privileges that need to be controlled in order to
operate the facility in a secure manner. The TCB modules that contain the reference validation
mechanism shall be identified. The procedures for secure generation of a new TCB from source after
modification of any modules in the TCB shall be described. It shall include the procedures to ensure
that the system is initially started in a secure manner. Procedures shall also be included to resume
secure system operation after any lapse in system operation.
3.3.4.3 Test Documentation
The system developer shall provide to the evaluators a document that describes the test plan, test
procedures that show how the security mechanisms were tested, and results of the security
mechanisms' functional testing. It shall include results of testing the effectiveness of the methods used
to reduce covert channel bandwidths.
3.3.4.4 Design Documentation
Documentation shall be available that provides a description of the manufacturer's philosophy of
protection and an explanation of how this philosophy is translated into the TCB. The interfaces
between the TCB modules shall be described. A formal description of the security policy model
enforced by the TCB shall be available and proven that it is sufficient to enforce the security policy.
The specific TCB protection mechanisms shall be identified and an explanation given to show that
they satisfy the model. The descriptive top-level specification (DTLS) shall be shown to be an accurate
description of the TCB interface. Documentation shall describe how the TCB implements the reference
monitor concept and give an explanation why it is tamper resistant, cannot be bypassed, and is
correctly implemented. The TCB implementation (i.e., in hardware, firmware, and software) shall be
informally shown to be consistent with the DTLS. The elements of the DTLS shall be shown, using
informal techniques, to correspond to the elements of the TCB. Documentation shall describe how the
TCB is structured to facilitate testing and to enforce least privilege. This documentation shall also
present the results of the covert channel analysis and the tradeoffs involved in restricting the channels.
All auditable events that may be used in the exploitation of known covert storage channels shall be
identified. The bandwidths of known covert storage channels, the use of which is not detectable by the
auditing mechanisms, shall be provided. (See the Covert Channel Guideline section.)
1. A formal model of the security policy must be clearly identified and documented, including
a mathematical proof that the model is consistent with its axioms and is sufficient to
support the security policy.
2. An FTLS must be produced that includes abstract definitions of the functions the TCB
performs and of the hardware and/or firmware mechanisms that are used to support
separate execution domains.
3. The FTLS of the TCB must be shown to be consistent with the model by formal techniques
where possible (i.e., where verification tools exist) and informal ones otherwise.
4. The TCB implementation (i.e., in hardware, firmware, and software) must be informally
shown to be consistent with the FTLS. The elements of the FTLS must be shown, using
informal techniques, to correspond to the elements of the TCB. The FTLS must express the
unified protection mechanism required to satisfy the security policy, and it is the elements
of this protection mechanism that are mapped to the elements of the TCB.
5. Formal analysis techniques must be used to identify and analyze covert channels. Informal
techniques may be used to identify covert timing channels. The continued existence of
identified covert channels in the system must be justified.
In keeping with the extensive design and development analysis of the TCB required of systems in class
(A1), more stringent configuration management is required and procedures are established for securely
distributing the system to sites. A system security administrator is supported.
The following are minimal requirements for systems assigned a class (A1) rating:
4.1.1 Security Policy
4.1.1.1 Discretionary Access Control
The TCB shall define and control access between named users and named objects (e.g., files and
programs) in the ADP system. The enforcement mechanism (e.g., access control lists) shall allow users
to specify and control sharing of those objects, and shall provide controls to limit propagation of access
rights. The discretionary access control mechanism shall, either by explicit user action or by default,
provide that objects are protected from unauthorized access. These access controls shall be capable of
specifying, for each named object, a list of named individuals and a list of groups of named individuals
with their respective modes of access to that object. Furthermore, for each such named object, it shall
be possible to specify a list of named individuals and a list of groups of named individuals for which
no access to the object is to be given. Access permission to an object by users not already possessing
access permission shall only be assigned by authorized users.
4.1.1.3.2 Labeling Human-Readable Output
The ADP system administrator shall be able to specify the printable label names associated with
exported sensitivity labels. The TCB shall mark the beginning and end of all human-readable, paged,
hardcopy output (e.g., line printer output) with human-readable sensitivity labels that properly*
represent the sensitivity of the output. The TCB shall, by default, mark the top and bottom of each
page of human-readable, paged, hardcopy output (e.g., line printer output) with human-readable
sensitivity labels that properly* represent the overall sensitivity of the output or that properly*
represent the sensitivity of the information on the page. The TCB shall, by default and in an
appropriate manner, mark other forms of human-readable output (e.g., maps, graphics) with
human-readable sensitivity labels that properly* represent the sensitivity of the output. Any override of these
marking defaults shall be auditable by the TCB.
* The hierarchical classification component in human-readable sensitivity labels shall be equal to the
greatest hierarchical classification of any of the information in the output that the labels refer to; the
non-hierarchical category component shall include all of the non-hierarchical categories of the
information in the output the labels refer to, but no other non-hierarchical categories.
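The footnote defines the banner label exactly: the greatest hierarchical classification of any information in the output, plus the union of its non-hierarchical categories and nothing more. A minimal sketch of that computation, with hypothetical classification names:

```python
# Hypothetical hierarchical classifications, lowest to highest.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def output_label(item_labels):
    """Banner label for hardcopy output covering several labeled items:
    the greatest hierarchical classification present, and exactly the union
    of the non-hierarchical categories of the information in the output."""
    classification = max((c for c, _ in item_labels), key=LEVELS.get)
    categories = frozenset().union(*(cats for _, cats in item_labels))
    return classification, categories

# A page containing one CONFIDENTIAL/{NATO} item and one SECRET/{CRYPTO} item.
page = [("CONFIDENTIAL", {"NATO"}), ("SECRET", {"CRYPTO"})]
banner = output_label(page)
```

For the example page the banner is SECRET with categories {NATO, CRYPTO}; no category outside those of the printed information may appear.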
4.1.1.3.3 Subject Sensitivity Labels
The TCB shall immediately notify a terminal user of each change in the security level associated with
that user during an interactive session. A terminal user shall be able to query the TCB as desired for a
display of the subject's complete sensitivity label.
4.1.1.3.4 Device Labels
The TCB shall support the assignment of minimum and maximum security levels to all attached
physical devices. These security levels shall be used by the TCB to enforce constraints imposed by the
physical environments in which the devices are located.
4.1.1.4 Mandatory Access Control
The TCB shall enforce a mandatory access control policy over all resources (i.e., subjects, storage
objects, and I/O devices) that are directly or indirectly accessible by subjects external to the TCB.
These subjects and objects shall be assigned sensitivity labels that are a combination of hierarchical
classification levels and non-hierarchical categories, and the labels shall be used as the basis for
mandatory access control decisions. The TCB shall be able to support two or more such security levels.
(See the Mandatory Access Control guidelines.) The following requirements shall hold for all accesses
between all subjects external to the TCB and all objects directly or indirectly accessible by these
subjects: A subject can read an object only if the hierarchical classification in the subject's security
level is greater than or equal to the hierarchical classification in the object's security level and the
non-hierarchical categories in the subject's security level include all the non-hierarchical categories in the
object's security level. A subject can write an object only if the hierarchical classification in the
subject's security level is less than or equal to the hierarchical classification in the object's security
level and all the non-hierarchical categories in the subject's security level are included in the
non-hierarchical categories in the object's security level. Identification and authentication data shall be used
by the TCB to authenticate the user's identity and to ensure that the security level and authorization of
subjects external to the TCB that may be created to act on behalf of the individual user are dominated
by the clearance and authorization of that user.
4.1.2 Accountability
4.1.2.1 Identification and Authentication
The TCB shall require users to identify themselves to it before beginning to perform any other actions
that the TCB is expected to mediate. Furthermore, the TCB shall maintain authentication data that
includes information for verifying the identity of individual users (e.g., passwords) as well as
information for determining the clearance and authorizations of individual users. This data shall be
used by the TCB to authenticate the user's identity and to ensure that the security level and
authorizations of subjects external to the TCB that may be created to act on behalf of the individual
user are dominated by the clearance and authorization of that user. The TCB shall protect
authentication data so that it cannot be accessed by any unauthorized user. The TCB shall be able to
enforce individual accountability by providing the capability to uniquely identify each individual ADP
system user. The TCB shall also provide the capability of associating this identity with all auditable
actions taken by that individual.
4.1.2.1.1 Trusted Path
The TCB shall support a trusted communication path between itself and users for use when a positive
TCB-to- user connection is required (e.g., login, change subject security level). Communications via
this trusted path shall be activated exclusively by a user or the TCB and shall be logically isolated and
unmistakably distinguishable from other paths.
4.1.2.2 Audit
The TCB shall be able to create, maintain, and protect from modification or unauthorized access or
destruction an audit trail of accesses to the objects it protects. The audit data shall be protected by the
TCB so that read access to it is limited to those who are authorized for audit data. The TCB shall be
able to record the following types of events: use of identification and authentication mechanisms,
introduction of objects into a user's address space (e.g., file open, program initiation), deletion of
objects, and actions taken by computer operators and system administrators and/or system security
officers, and other security relevant events. The TCB shall also be able to audit any override of human-readable output markings. For each recorded event, the audit record shall identify: date and time of the
event, user, type of event, and success or failure of the event. For identification/ authentication events
the origin of request (e.g., terminal ID) shall be included in the audit record. For events that introduce
an object into a user's address space and for object deletion events the audit record shall include the
name of the object and the object's security level. The ADP system administrator shall be able to
selectively audit the actions of any one or more users based on individual identity and/or object
security level. The TCB shall be able to audit the identified events that may be used in the exploitation
of covert storage channels. The TCB shall contain a mechanism that is able to monitor the occurrence
or accumulation of security auditable events that may indicate an imminent violation of security policy.
This mechanism shall be able to immediately notify the security administrator when thresholds are
exceeded, and, if the occurrence or accumulation of these security relevant events continues, the
system shall take the least disruptive action to terminate the event.
4.1.3 Assurance
4.1.3.1 Operational Assurance
4.1.3.1.1 System Architecture
The TCB shall maintain a domain for its own execution that protects it from external interference or
tampering (e.g., by modification of its code or data structures). The TCB shall maintain process
isolation through the provision of distinct address spaces under its control. The TCB shall be internally
structured into well-defined largely independent modules. It shall make effective use of available
hardware to separate those elements that are protection-critical from those that are not. The TCB
modules shall be designed such that the principle of least privilege is enforced. Features in hardware,
such as segmentation, shall be used to support logically distinct storage objects with separate attributes
(namely: readable, writeable). The user interface to the TCB shall be completely defined and all
elements of the TCB identified. The TCB shall be designed and structured to use a complete,
conceptually simple protection mechanism with precisely defined semantics. This mechanism shall
play a central role in enforcing the internal structuring of the TCB and the system. The TCB shall
incorporate significant use of layering, abstraction and data hiding. Significant system engineering
shall be directed toward minimizing the complexity of the TCB and excluding from the TCB modules
that are not protection-critical.
4.1.3.1.2 System Integrity
Hardware and/or software features shall be provided that can be used to periodically validate the
correct operation of the on-site hardware and firmware elements of the TCB.
4.1.3.1.3 Covert Channel Analysis
The system developer shall conduct a thorough search for covert channels and make a determination
(either by actual measurement or by engineering estimation) of the maximum bandwidth of each
identified channel. (See the Covert Channels Guideline section.) Formal methods shall be used in the
analysis.
4.1.3.1.4 Trusted Facility Management
The TCB shall support separate operator and administrator functions. The functions performed in the
role of a security administrator shall be identified. The ADP system administrative personnel shall only
be able to perform security administrator functions after taking a distinct auditable action to assume the
security administrator role on the ADP system. Non-security functions that can be performed in the
security administration role shall be limited strictly to those essential to performing the security role
effectively.
4.1.3.1.5 Trusted Recovery
Procedures and/or mechanisms shall be provided to assure that, after an ADP system failure or other
discontinuity, recovery without a protection compromise is obtained.
4.1.3.2 Life-Cycle Assurance
4.1.3.2.1 Security Testing
The security mechanisms of the ADP system shall be tested and found to work as claimed in the
system documentation. A team of individuals who thoroughly understand the specific implementation
of the TCB shall subject its design documentation, source code, and object code to thorough analysis
and testing. Their objectives shall be: to uncover all design and implementation flaws that would
permit a subject external to the TCB to read, change, or delete data normally denied under the
mandatory or discretionary security policy enforced by the TCB; as well as to assure that no subject
(without authorization to do so) is able to cause the TCB to enter a state such that it is unable to
respond to communications initiated by other users. The TCB shall be found resistant to penetration.
All discovered flaws shall be corrected and the TCB retested to demonstrate that they have been
eliminated and that new flaws have not been introduced. Testing shall demonstrate that the TCB
implementation is consistent with the formal top-level specification. (See the Security Testing
Guidelines.) No design flaws and no more than a few correctable implementation flaws may be found
during testing and there shall be reasonable confidence that few remain. Manual or other mapping of
the FTLS to the source code may form a basis for penetration testing.
4.1.3.2.2 Design Specification and Verification
A formal model of the security policy supported by the TCB shall be maintained over the life-cycle of
the ADP system that is proven consistent with its axioms. A descriptive top-level specification (DTLS)
of the TCB shall be maintained that completely and accurately describes the TCB in terms of
exceptions, error messages, and effects. A formal top-level specification (FTLS) of the TCB shall be
maintained that accurately describes the TCB in terms of exceptions, error messages, and effects. The
DTLS and FTLS shall include those components of the TCB that are implemented as hardware and/or
firmware if their properties are visible at the TCB interface. The FTLS shall be shown to be an
accurate description of the TCB interface. A convincing argument shall be given that the DTLS is
consistent with the model and a combination of formal and informal techniques shall be used to show
that the FTLS is consistent with the model. This verification evidence shall be consistent with that
provided within the state-of-the-art of the particular computer security center-endorsed formal
specification and verification system used. Manual or other mapping of the FTLS to the TCB source
code shall be performed to provide evidence of correct implementation.
4.1.3.2.3 Configuration Management
During the entire life-cycle, i.e., during the design, development, and maintenance of the TCB, a
configuration management system shall be in place for all security-relevant hardware, firmware, and
software that maintains control of changes to the formal model, the descriptive and formal top-level
specifications, other design data, implementation documentation, source code, the running version of
the object code, and test fixtures and documentation. The configuration management system shall
assure a consistent mapping among all documentation and code associated with the current version of
the TCB. Tools shall be provided for generation of a new version of the TCB from source code. Also
available shall be tools, maintained under strict configuration control, for comparing a newly generated
version with the previous TCB version in order to ascertain that only the intended changes have been
made in the code that will actually be used as the new version of the TCB. A combination of technical,
physical, and procedural safeguards shall be used to protect from unauthorized modification or
destruction the master copy or copies of all material used to generate the TCB.
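The version-comparison tooling described above can be sketched as follows. This is a minimal illustration, not a real configuration management system: the module names and build contents are invented, and SHA-256 digests stand in for whatever integrity mechanism an actual tool would use.

```python
import hashlib

def digest(content: bytes) -> str:
    """SHA-256 digest of a module's content."""
    return hashlib.sha256(content).hexdigest()

def unintended_changes(previous: dict, new: dict, intended: set) -> set:
    """Compare a newly generated TCB version with the previous one and
    return the modules that changed (or appeared/disappeared) without
    being on the approved change list."""
    changed = set()
    for name in previous.keys() | new.keys():
        if previous.get(name) != new.get(name):
            changed.add(name)
    return changed - intended

# Hypothetical module digests for two TCB builds.
old_build = {"sched.c": digest(b"v1"), "mm.c": digest(b"v1"), "audit.c": digest(b"v1")}
new_build = {"sched.c": digest(b"v2"), "mm.c": digest(b"v1"), "audit.c": digest(b"v1")}

print(unintended_changes(old_build, new_build, intended={"sched.c"}))  # set()
print(unintended_changes(old_build, new_build, intended=set()))        # {'sched.c'}
```

An empty result confirms that only the intended changes appear in the code that will become the new version of the TCB.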
4.1.3.2.4 Trusted Distribution
A trusted ADP system control and distribution facility shall be provided for maintaining the integrity
of the mapping between the master data describing the current version of the TCB and the on-site
master copy of the code for the current version. Procedures (e.g., site security acceptance testing) shall
exist for assuring that the TCB software, firmware, and hardware updates distributed to a customer are
exactly as specified by the master copies.
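The site security acceptance testing mentioned above can be sketched as a checksum comparison against the master manifest. The file names and contents below are illustrative only, and SHA-256 is one possible choice of integrity check, not one mandated by the text.

```python
import hashlib

def verify_update(master_manifest: dict, received_files: dict) -> bool:
    """Site acceptance check: every distributed file must hash to exactly
    the value recorded in the master manifest, with no extras or omissions."""
    if master_manifest.keys() != received_files.keys():
        return False
    return all(
        hashlib.sha256(received_files[name]).hexdigest() == expected
        for name, expected in master_manifest.items()
    )

kernel_image = b"trusted kernel build 4.2"
manifest = {"kernel.img": hashlib.sha256(kernel_image).hexdigest()}

print(verify_update(manifest, {"kernel.img": kernel_image}))          # True
print(verify_update(manifest, {"kernel.img": kernel_image + b"!"}))   # False
```

A real trusted distribution facility would also have to protect the manifest itself in transit, e.g. by signing it.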
4.1.4 Documentation
4.1.4.1 Security Features User's Guide
A single summary, chapter, or manual in user documentation shall describe the protection mechanisms
provided by the TCB, guidelines on their use, and how they interact with one another.
4.1.4.2 Trusted Facility Manual
A manual addressed to the ADP system administrator shall present cautions about functions and
privileges that should be controlled when running a secure facility. The procedures for examining and
maintaining the audit files as well as the detailed audit record structure for each type of audit event
shall be given. The manual shall describe the operator and administrator functions related to security,
to include changing the security characteristics of a user. It shall provide guidelines on the consistent
and effective use of the protection features of the system, how they interact, how to securely generate a
Universal Knowledge Solutions S.A.L.
new TCB, and facility procedures, warnings, and privileges that need to be controlled in order to
operate the facility in a secure manner. The TCB modules that contain the reference validation
mechanism shall be identified. The procedures for secure generation of a new TCB from source after
modification of any modules in the TCB shall be described. It shall include the procedures to ensure
that the system is initially started in a secure manner. Procedures shall also be included to resume
secure system operation after any lapse in system operation.
4.1.4.3 Test Documentation
The system developer shall provide to the evaluators a document that describes the test plan, test
procedures that show how the security mechanisms were tested, and results of the security
mechanisms' functional testing. It shall include results of testing the effectiveness of the methods used
to reduce covert channel bandwidths. The results of the mapping between the formal top-level
specification and the TCB source code shall be given.
4.1.4.4 Design Documentation
Documentation shall be available that provides a description of the manufacturer's philosophy of
protection and an explanation of how this philosophy is translated into the TCB. The interfaces
between the TCB modules shall be described. A formal description of the security policy model
enforced by the TCB shall be available and proven that it is sufficient to enforce the security policy.
The specific TCB protection mechanisms shall be identified and an explanation given to show that they
satisfy the model. The descriptive top-level specification (DTLS) shall be shown to be an accurate
description of the TCB interface. Documentation shall describe how the TCB implements the reference
monitor concept and give an explanation why it is tamper resistant, cannot be bypassed, and is
correctly implemented. The TCB implementation (i.e., in hardware, firmware, and software) shall be
informally shown to be consistent with the formal top-level specification (FTLS). The elements of the
FTLS shall be shown, using informal techniques, to correspond to the elements of the TCB.
Documentation shall describe how the TCB is structured to facilitate testing and to enforce least
privilege. This documentation shall also present the results of the covert channel analysis and the
tradeoffs involved in restricting the channels. All auditable events that may be used in the exploitation
of known covert storage channels shall be identified. The bandwidths of known covert storage
channels, the use of which is not detectable by the auditing mechanisms, shall be provided. (See the
Covert Channel Guideline section.) Hardware, firmware, and software mechanisms not dealt with in
the FTLS but strictly internal to the TCB (e.g., mapping registers, direct memory access I/O) shall be
clearly described.
4.2 Beyond Class (A1)
Beyond class (A1), assurance extends to the correctness of the tools used in TCB
development (e.g., compilers, assemblers, loaders) and to the correct functioning of the
hardware/firmware on which the TCB will run. Areas to be addressed by systems beyond class (A1)
include:
System Architecture
A demonstration (formal or otherwise) must be given showing that requirements of self-protection and
completeness for reference monitors have been implemented in the TCB.
Security Testing
Although beyond the current state-of-the-art, it is envisioned that some test-case generation will be
done automatically from the formal top-level specification or formal lower-level specifications.
Formal Specification and Verification
The TCB must be verified down to the source code level, using formal verification methods where
feasible. Formal verification of the source code of the security-relevant portions of an operating system
has proven to be a difficult task. Two important considerations are the choice of a high-level language
whose semantics can be fully and formally expressed, and a careful mapping, through successive
stages, of the abstract formal design to a formalization of the implementation in low-level
specifications. Experience has shown that only when the lowest level specifications closely correspond
to the actual code can code proofs be successfully accomplished.
Trusted Design Environment
The TCB must be designed in a trusted facility with only trusted (cleared) personnel.
PART II:
RATIONALE AND GUIDELINES
The purpose of this section is to describe in detail the fundamental control objectives. These objectives
lay the foundation for the requirements outlined in the criteria. The goal is to explain the foundations
so that those outside the National Security Establishment can assess their universality and, by
extension, the universal applicability of the criteria requirements to processing all types of sensitive
applications, whether they be for National Security or the private sector. The three basic control
objectives are:
Security Policy
Accountability
Assurance
This section provides a discussion of these general control objectives and their implication in
terms of designing trusted systems.
A common thread running through these definitions is the word "protection." Further declarations of protection requirements can
be found in DoD Directive 5200.28 which describes an acceptable level of protection for classified
data to be one that will "assure that systems which process, store, or use classified data and produce
classified information will, with reasonable dependability, prevent: a. Deliberate or inadvertent access
to classified material by unauthorized persons, and b. Unauthorized manipulation of the computer and
its associated peripheral devices."[8]
In summary, protection requirements must be defined in terms of the perceived threats, risks, and goals
of an organization. This is often stated in terms of a security policy. It has been pointed out in the
literature that it is external laws, rules, regulations, etc. that establish what access to information is to
be permitted, independent of the use of a computer. In particular, a given system can only be said to be
secure with respect to its enforcement of some specific policy.[30] Thus, the control objective for
security policy is:
SECURITY POLICY CONTROL OBJECTIVE
A statement of intent with regard to control over access to and dissemination of information, to be
known as the security policy must be precisely defined and implemented for each system that is used
to process sensitive information. The security policy must accurately reflect the laws, regulations, and
general policies from which it is derived.
5.3.1.1 Mandatory Security Policy
Where a security policy is developed that is to be applied to control of classified or other specifically
designated sensitive information, the policy must include detailed rules on how to handle that
information throughout its life-cycle. These rules are a function of the various sensitivity designations
that the information can assume and the various forms of access supported by the system. Mandatory
security refers to the enforcement of a set of access control rules that constrains a subject's access to
information on the basis of a comparison of that individual's clearance/authorization to the information,
the classification/sensitivity designation of the information, and the form of access being mediated.
Mandatory policies either require or can be satisfied by systems that can enforce a partial ordering of
designations, namely, the designations must form what is mathematically known as a "lattice."[5] A
clear implication of the above is that the system must assure that the designations associated with
sensitive data cannot be arbitrarily changed, since this could permit individuals who lack the
appropriate authorization to access sensitive information. Also implied is the requirement that the
system control the flow of information so that data cannot be stored with lower sensitivity designations
unless its "downgrading" has been authorized. The control objective is:
MANDATORY SECURITY CONTROL OBJECTIVE
Security policies defined for systems that are used to process classified or other specifically
categorized sensitive information must include provisions for the enforcement of mandatory access
control rules. That is, they must include a set of rules for controlling access based directly on a
comparison of the individual's clearance or authorization for the information and the classification or
sensitivity designation of the information being sought, and indirectly on considerations of physical
and other environmental factors of control. The mandatory access control rules must accurately reflect
the laws, regulations, and general policies from which they are derived.
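The "lattice" of sensitivity designations referred to above can be illustrated with a minimal Python sketch. Labels here are assumed to consist of a hierarchical level plus a set of non-hierarchical categories; the level numbers and category names are invented for the example.

```python
from typing import NamedTuple

class Label(NamedTuple):
    level: int            # hierarchical classification, e.g. 0=UNCLASSIFIED .. 3=TOP SECRET
    categories: frozenset # non-hierarchical compartments

def dominates(a: Label, b: Label) -> bool:
    """a dominates b iff a's level is at least b's and a's categories
    contain b's. This partial order makes the set of labels a lattice:
    some pairs of labels are incomparable."""
    return a.level >= b.level and a.categories >= b.categories

secret_nato = Label(2, frozenset({"NATO"}))
confidential = Label(1, frozenset())
secret_crypto = Label(2, frozenset({"CRYPTO"}))

print(dominates(secret_nato, confidential))   # True: higher level, superset of categories
print(dominates(secret_nato, secret_crypto))  # False: the two labels are incomparable
```

The incomparable pair shows why this is only a partial ordering, which is exactly the property the mandatory policy exploits.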
5.3.1.2 Discretionary Security Policy
Discretionary security is the principal type of access control available in computer systems today. The
basis of this kind of security is that an individual user, or program operating on his behalf, is allowed
to specify explicitly the types of access other users may have to information under his control.
Discretionary security differs from mandatory security in that it implements an access control policy
on the basis of an individual's need-to-know as opposed to mandatory controls which are driven by the
classification or sensitivity designation of the information.
Discretionary controls are not a replacement for mandatory controls. In an environment in which
information is classified (as in the DoD) discretionary security provides for a finer granularity of
control within the overall constraints of the mandatory policy. Access to classified information requires
effective implementation of both types of controls as precondition to granting that access. In general,
no person may have access to classified information unless: (a) that person has been determined to be
trustworthy, i.e., granted a personnel security clearance -- MANDATORY, and (b) access is necessary
for the performance of official duties, i.e., determined to have a need-to-know -- DISCRETIONARY.
In other words, discretionary controls give individuals discretion to decide on which of the permissible
accesses will actually be allowed to which users, consistent with overriding mandatory policy
restrictions. The control objective is:
DISCRETIONARY SECURITY CONTROL OBJECTIVE
Security policies defined for systems that are used to process classified or other sensitive information
must include provisions for the enforcement of discretionary access control rules. That is, they must
include a consistent set of rules for controlling and limiting access based on identified individuals who
have been determined to have a need-to-know for the information.
5.3.1.3 Marking
To implement a set of mechanisms that will put into effect a mandatory security policy, it is necessary
that the system mark information with appropriate classification or sensitivity labels and maintain
these markings as the information moves through the system. Once information is unalterably and
accurately marked, comparisons required by the mandatory access control rules can be accurately and
consistently made. An additional benefit of having the system maintain the classification or sensitivity
label internally is the ability to automatically generate properly "labelled" output. The labels, if
accurately and integrally maintained by the system, remain accurate when output from the system. The
control objective is:
MARKING CONTROL OBJECTIVE
Systems that are designed to enforce a mandatory security policy must store and preserve the integrity
of classification or other sensitivity labels for all information. Labels exported from the system must be
accurate representations of the corresponding internal sensitivity labels being exported.
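A minimal sketch of internal label maintenance, assuming a sensitivity label is simply bound immutably to the data it marks. The `Labeled` class and `export` function are illustrative inventions, not part of any standard interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Labeled:
    """A data object bound to an unalterable sensitivity label.
    frozen=True makes the binding immutable after creation."""
    label: str
    content: str

def export(obj: Labeled) -> str:
    """Output generated from a labeled object carries its internal label,
    so exported labels accurately represent the internal ones."""
    return f"[{obj.label}] {obj.content}"

doc = Labeled("SECRET", "troop movement schedule")
print(export(doc))   # [SECRET] troop movement schedule
# Attempting doc.label = "UNCLASSIFIED" raises FrozenInstanceError.
```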
5.3.2 Accountability
The second basic control objective addresses one of the fundamental principles of security, i.e.,
individual accountability. Individual accountability is the key to securing and controlling any system
that processes information on behalf of individuals or groups of individuals. A number of requirements
must be met in order to satisfy this objective.
The first requirement is for individual user identification. Second, there is a need for authentication of
the identification. Identification is functionally dependent on authentication. Without authentication,
user identification has no credibility. Without a credible identity, neither mandatory nor discretionary
security policies can be properly invoked because there is no assurance that proper authorizations can
be made.
The third requirement is for dependable audit capabilities. That is, a trusted computer system must
provide authorized personnel with the ability to audit any action that can potentially cause access to,
generation of, or affect the release of classified or sensitive information. The audit data will be
selectively acquired based on the auditing needs of a particular installation and/or application.
However, there must be sufficient granularity in the audit data to support tracing the auditable events
to a specific individual who has taken the actions or on whose behalf the actions were taken. The
control objective is:
ACCOUNTABILITY CONTROL OBJECTIVE
Systems that are used to process or handle classified or other sensitive information must assure
individual accountability whenever either a mandatory or discretionary security policy is invoked.
Furthermore, to assure accountability, the capability must exist for an authorized and competent agent
to access and evaluate accountability information by a secure means, within a reasonable amount of
time, and without undue difficulty.
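The granularity requirement above, tracing each auditable event to a specific individual, can be sketched as a toy audit trail. The field names and events are invented for illustration.

```python
from datetime import datetime, timezone

audit_log: list = []

def audited(user: str, action: str, obj: str, on_behalf_of: str = None):
    """Record who did what, to which object, and on whose behalf, with
    enough granularity to trace the event back to one individual."""
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "on_behalf_of": on_behalf_of or user,
        "action": action,
        "object": obj,
    })

audited("smith", "read", "/classified/plan.doc")
audited("backup-daemon", "copy", "/classified/plan.doc", on_behalf_of="jones")

# Trace every access to the sensitive object back to an individual.
print([e["on_behalf_of"] for e in audit_log
       if e["object"] == "/classified/plan.doc"])   # ['smith', 'jones']
```

Note that even the action taken by a system process is attributed to the individual on whose behalf it ran, which is the point of the requirement.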
5.3.3 Assurance
The third basic control objective is concerned with guaranteeing or providing confidence that the
security policy has been implemented correctly and that the protection-relevant elements of the system
do, indeed, accurately mediate and enforce the intent of that policy. By extension, assurance must
include a guarantee that the trusted portion of the system works only as intended. To accomplish these
objectives, two types of assurance are needed. They are life-cycle assurance and operational
assurance.
Life-cycle assurance refers to steps taken by an organization to ensure that the system is designed,
developed, and maintained using formalized and rigorous controls and standards.[17] Computer
systems that process and store sensitive or classified information depend on the hardware and software
to protect that information. It follows that the hardware and software themselves must be protected
against unauthorized changes that could cause protection mechanisms to malfunction or be bypassed
completely. For this reason trusted computer systems must be carefully evaluated and tested during the
design and development phases and reevaluated whenever changes are made that could affect the
integrity of the protection mechanisms. Only in this way can confidence be provided that the hardware
and software interpretation of the security policy is maintained accurately and without distortion.
While life-cycle assurance is concerned with procedures for managing system design, development,
and maintenance, operational assurance focuses on features and system architecture used to ensure that
the security policy is uncircumventably enforced during system operation. That is, the security policy
must be integrated into the hardware and software protection features of the system. Examples of steps
taken to provide this kind of confidence include: methods for testing the operational hardware and
software for correct operation, isolation of protection-critical code, and the use of hardware and
software to provide distinct domains. The control objective is:
ASSURANCE CONTROL OBJECTIVE
Systems that are used to process or handle classified or other sensitive information must be designed to
guarantee correct and accurate interpretation of the security policy and must not distort the intent of
that policy. Assurance must be provided that correct implementation and operation of the policy exists
throughout the system's life-cycle.
models of security policy requirements and of the mechanisms that would implement and enforce those
policy models as a security kernel. Prominent among these efforts was the ESD-sponsored
development of the Bell and LaPadula model, an abstract formal treatment of DoD security policy.[2]
Using mathematics and set theory, the model precisely defines the notion of secure state, fundamental
modes of access, and the rules for granting subjects specific modes of access to objects. Finally, a
theorem is proven to demonstrate that the rules are security-preserving operations, so that the
application of any sequence of the rules to a system that is in a secure state will result in the system
entering a new state that is also secure. This theorem is known as the Basic Security Theorem.
A subject can act on behalf of a user or another subject. The subject is created as a surrogate for the
cleared user and is assigned a formal security level based on that user's clearance. The state transitions and
invariants of the formal policy model define the invariant relationships that must hold between the
clearance of the user, the formal security level of any process that can act on the user's behalf, and the
formal security level of the devices and other objects to which any process can obtain specific modes of
access. The Bell and LaPadula model, for example, defines a relationship between formal security levels
of subjects and objects, now referenced as the "dominance relation." From this definition, accesses
permitted between subjects and objects are explicitly defined for the fundamental modes of
access, including read-only access, read/write access, and write-only access. The model defines the
Simple Security Condition to control granting a subject read access to a specific object, and the
*-Property (read "Star Property") to control granting a subject write access to a specific object. Both the
Simple Security Condition and the *-Property include mandatory security provisions based on the
dominance relation between the formal security levels of subjects and objects (e.g., the clearance of the
subject and the classification of the object). The Discretionary Security Property is also defined, and requires
that a specific subject be authorized for the particular mode of access required for the state transition.
In its treatment of subjects (processes acting on behalf of a user), the model distinguishes between
trusted subjects (i.e., not constrained within the model by the *-Property) and untrusted subjects (those
that are constrained by the *-Property).
From the Bell and LaPadula model there evolved a model of the method of proof required to formally
demonstrate that all arbitrary sequences of state transitions are security-preserving. It was also shown
that the *-Property is sufficient to prevent the compromise of information by Trojan Horse attacks.
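The Simple Security Condition and the *-Property can be sketched directly in terms of the dominance relation. This is a simplification of the full model: labels are assumed to be (level, category-set) pairs, and only the mandatory checks are shown.

```python
def dominates(a, b):
    """(level, categories) dominance: the lattice ordering used by the model."""
    return a[0] >= b[0] and a[1] >= b[1]

def can_read(subject, obj):
    """Simple Security Condition ("no read up"): the subject's level
    must dominate the object's level."""
    return dominates(subject, obj)

def can_write(subject, obj, trusted=False):
    """*-Property ("no write down"): the object's level must dominate the
    subject's level. Trusted subjects are not constrained by the *-Property."""
    return trusted or dominates(obj, subject)

secret = (2, frozenset())
unclass = (0, frozenset())

print(can_read(secret, unclass))                 # True: reading down is allowed
print(can_write(secret, unclass))                # False: writing down could leak information
print(can_write(secret, unclass, trusted=True))  # True: trusted subjects are exempt
```

The write-down refusal is exactly what blocks the Trojan Horse scenario: a program running at SECRET cannot copy what it has read into an UNCLASSIFIED object.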
6.3 THE TRUSTED COMPUTING BASE
In order to encourage the widespread commercial availability of trusted computer systems, these
evaluation criteria have been designed to address those systems in which a security kernel is
specifically implemented as well as those in which a security kernel has not been implemented. The
latter case includes those systems in which objective (c) is not fully supported because of the size or
complexity of the reference validation mechanism. For convenience, these evaluation criteria use the
term Trusted Computing Base to refer to the reference validation mechanism, be it a security kernel,
front-end security filter, or the entire trusted computer system.
The heart of a trusted computer system is the Trusted Computing Base (TCB) which contains all of the
elements of the system responsible for supporting the security policy and supporting the isolation of
objects (code and data) on which the protection is based. The bounds of the TCB equate to the
"security perimeter" referenced in some computer security literature. In the interest of understandable
and maintainable protection, a TCB should be as simple as possible consistent with the functions it has
to perform. Thus, the TCB includes hardware, firmware, and software critical to protection and must
be designed and implemented such that system elements excluded from it need not be trusted to
maintain protection. Identification of the interface and elements of the TCB along with their correct
functionality therefore forms the basis for evaluation.
For general-purpose systems, the TCB will include key elements of the operating system and may
include all of the operating system. For embedded systems, the security policy may deal with objects in
a way that is meaningful at the application level rather than at the operating system level. Thus, the
protection policy may be enforced in the application software rather than in the underlying operating
system. The TCB will necessarily include all those portions of the operating system and application
software essential to the support of the policy. Note that, as the amount of code in the TCB increases, it
becomes harder to be confident that the TCB enforces the reference monitor requirements under all
circumstances.
6.4 ASSURANCE
The third reference monitor design objective is currently interpreted as meaning that the TCB "must be
of sufficiently simple organization and complexity to be subjected to analysis and tests, the
completeness of which can be assured."
Clearly, as the perceived degree of risk increases (e.g., the range of sensitivity of the system's protected
data, along with the range of clearances held by the system's user population) for a particular system's
operational application and environment, so also must the assurances be increased to substantiate the
degree of trust that will be placed in the system. The hierarchy of requirements that are presented for
the evaluation classes in the trusted computer system evaluation criteria reflect the need for these
assurances.
As discussed in Section 5.3, the evaluation criteria uniformly require a statement of the security policy
that is enforced by each trusted computer system. In addition, it is required that a convincing argument
be presented that explains why the TCB satisfies the first two design requirements for a reference
monitor. It is not expected that this argument will be entirely formal. This argument is required for
each candidate system in order to satisfy the assurance control objective.
Systems to which security enforcement mechanisms have been added, rather than built in as
fundamental design objectives, are not readily amenable to extensive analysis since they lack the
requisite conceptual simplicity of a security kernel. This is because their TCB extends to cover much
of the entire system. Hence, their degree of trustworthiness can best be ascertained only by obtaining
test results. Since no test procedure for something as complex as a computer system can be truly
exhaustive, there is always the possibility that a subsequent penetration attempt could succeed. It is for
this reason that such systems must fall into the lower evaluation classes.
On the other hand, those systems that are designed and engineered to support the TCB concepts are
more amenable to analysis and structured testing. Formal methods can be used to analyze the
correctness of their reference validation mechanisms in enforcing the system's security policy. Other
methods, including less-formal arguments, can be used in order to substantiate claims for the
completeness of their access mediation and their degree of tamper-resistance. More confidence can be
placed in the results of this analysis and in the thoroughness of the structured testing than can be placed
in the results for less methodically structured systems. For these reasons, it appears reasonable to
conclude that these systems could be used in higher-risk environments. Successful implementations of
such systems would be placed in the higher evaluation classes.
6.5 THE CLASSES
It is highly desirable that there be only a small number of overall evaluation classes. Three major
divisions have been identified in the evaluation criteria with a fourth division reserved for those
systems that have been evaluated and found to offer unacceptable security protection. Within each
major evaluation division, it was found that "intermediate" classes of trusted system design and
development could meaningfully be defined. These intermediate classes have been designated in the
criteria because they identify systems that:
o are viewed to offer significantly better protection and assurance than would systems that
satisfy the basic requirements for their evaluation class; and
o could, it is reasonably believed, eventually be evolved to satisfy the requirements for the
next higher evaluation class.
Except within division A it is not anticipated that additional "intermediate" evaluation classes
satisfying the two characteristics described above will be identified.
Distinctions in terms of system architecture, security policy enforcement, and evidence of credibility
between evaluation classes have been defined such that the "jump" between evaluation classes would
require a considerable investment of effort on the part of implementors. Correspondingly, there are
expected to be significant differentials of risk to which systems from the higher evaluation classes will
be exposed.
for the establishment of physical, administrative and technical safeguards required to adequately
protect personal, proprietary or other sensitive data not subject to national security regulations, as well
as national security data."[26, para. 4 p. 2]
OMB Circular No. A-123, "Internal Control Systems,"[27] issued to help eliminate fraud, waste, and
abuse in government programs requires: (a) agency heads to issue internal control directives and assign
responsibility, (b) managers to review programs for vulnerability, and (c) managers to perform periodic
reviews to evaluate strengths and update controls. Soon after promulgation of OMB Circular A-123,
the relationship of its internal control requirements to building secure computer systems was
recognized.[4] While not stipulating computer controls specifically, the definition of Internal Controls
in A-123 makes it clear that computer systems are to be included:
"Internal Controls - The plan of organization and all of the methods and measures adopted within an
agency to safeguard its resources, assure the accuracy and reliability of its information, assure
adherence to applicable laws, regulations and policies, and promote operational economy and
efficiency."[27, sec. 4.C]
The matter of classified national security information processed by ADP systems was one of the first
areas given serious and extensive concern in computer security. The computer security policy
documents promulgated as a result contain generally more specific and structured requirements than
most, keyed in turn to an authoritative basis that itself provides a rather clearly articulated and
structured information security policy. This basis, Executive Order 12356, "National Security
Information," sets forth requirements for the classification, declassification and safeguarding of
"national security information" per se.[14]
* i.e., NASA, Commerce Department, GSA, State Department, Small Business Administration, National
Science Foundation, Treasury Department, Transportation Department, Interior Department,
Agriculture Department, U.S. Information Agency, Labor Department, Environmental Protection
Agency, Justice Department, U.S. Arms Control and Disarmament Agency, Federal Emergency
Management Agency, Federal Reserve System, and U.S. General Accounting Office.
For ADP systems, these information security requirements are further amplified and specified in: 1)
DoD Directive 5200.28 [8] and DoD Manual 5200.28-M [9], for DoD components; and 2) Section XIII
of DoD 5220.22-M [11] for contractors. DoD Directive 5200.28, "Security Requirements for
Automatic Data Processing (ADP) Systems," stipulates: "Classified material contained in an ADP
system shall be safeguarded by the continuous employment of protective features in the system's
Universal Knowledge Solutions S.A.L.
- 73 -
hardware and software design and configuration . . . ."[8, sec. IV] Furthermore, it is required that ADP
systems that "process, store, or use classified data and produce classified information will, with
reasonable dependability, prevent:
a. Deliberate or inadvertent access to classified material by unauthorized persons, and
b. Unauthorized manipulation of the computer and its associated peripheral devices."[8, sec. I B.3]
Requirements equivalent to these appear within DoD 5200.28-M [9] and in DoD 5220.22-M [11].
DoD Directive 5200.28 provides the security requirements for ADP systems. For some types of
information, such as Sensitive Compartmented Information (SCI), DoD Directive 5200.28 states that
other minimum security requirements also apply. These minima are found in DCID 1/16 (new reference
number 5), which is implemented in DIAM 50-4 (new reference number 6) for DoD and DoD
contractor ADP systems.
From requirements imposed by these regulations, directives and circulars, the three components of the
Security Policy Control Objective, i.e., Mandatory and Discretionary Security and Marking, as well as
the Accountability and Assurance Control Objectives, can be functionally defined for DoD
applications. The following discussion provides further specificity in Policy for these Control
Objectives.
employing such media shall provide for internal classification marking to assure that classified
information contained therein that is reproduced or generated, will bear applicable classification and
associated markings." (This regulation provides for the exemption of certain existing systems where
"internal classification and applicable associated markings cannot be implemented without extensive
system modifications."[7] However, it is clear that future DoD ADP systems must be able to provide
applicable and accurate labels for classified and other sensitive information.)
DoD Manual 5200.28-M (Section IV, 4-305d) requires the following: "Security Labels - All classified
material accessible by or within the ADP system shall be identified as to its security classification and
access or dissemination limitations, and all output of the ADP system shall be appropriately
marked."[9]
7.3.2 Mandatory Security
The control objective for mandatory security is: "Security policies defined for systems that are used to
process classified or other specifically categorized sensitive information must include provisions for
the enforcement of mandatory access control rules. That is, they must include a set of rules for
controlling access based directly on a comparison of the individual's clearance or authorization for the
information and the classification or sensitivity designation of the information being sought, and
indirectly on considerations of physical and other environmental factors of control. The mandatory
access control rules must accurately reflect the laws, regulations, and general policies from which they
are derived."
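The comparison the objective describes — an individual's clearance or authorization against the classification or sensitivity designation of the information — amounts to a dominance check. The sketch below is purely illustrative; the level names and the Python representation are assumptions, not drawn from the directives:

```python
# Illustrative sketch of the mandatory access check: access is granted
# only if the subject's clearance dominates the object's label, i.e. the
# hierarchical level is at least as high and the clearance carries every
# non-hierarchical category attached to the object.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(clearance, label):
    """Both arguments are (level_name, set_of_categories) pairs."""
    c_level, c_cats = clearance
    o_level, o_cats = label
    return LEVELS[c_level] >= LEVELS[o_level] and o_cats <= c_cats

# A SECRET/NATO clearance may read SECRET/NATO material ...
assert dominates(("SECRET", {"NATO"}), ("SECRET", {"NATO"}))
# ... but not material in a category the clearance does not hold.
assert not dominates(("SECRET", {"NATO"}), ("CONFIDENTIAL", {"CRYPTO"}))
```

The check is deliberately conservative: both conditions must hold, mirroring the rule that clearance and need-to-know are jointly necessary.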
There are a number of policy statements that are related to mandatory security.
Executive Order 12356 (Section 4.1.a) states that "a person is eligible for access to classified
information provided that a determination of trustworthiness has been made by agency heads or
designated officials and provided that such access is essential to the accomplishment of lawful and
authorized Government purposes."[14]
DoD Regulation 5200.1-R (Chapter I, Section 3) defines a Special Access Program as "any program
imposing 'need-to-know' or access controls beyond those normally provided for access to Confidential,
Secret, or Top Secret information. Such a program includes, but is not limited to, special clearance,
adjudication, or investigative requirements, special designation of officials authorized to determine
'need-to-know', or special lists of persons determined to have a 'need-to-know.'"[7, para. 1-328] This
passage distinguishes between a 'discretionary' determination of need-to-know and formal need-to-know, which is implemented through Special Access Programs. DoD Regulation 5200.1-R, paragraph
7-100 describes general requirements for trustworthiness (clearance) and need-to-know, and states that
the individual with possession, knowledge or control of classified information has final responsibility
for determining if conditions for access have been met. This regulation further stipulates that "no one
has a right to have access to classified information solely by virtue of rank or position."[7, para. 7-100]
DoD Manual 5200.28-M (Section II 2-100) states that, "Personnel who develop, test (debug), maintain,
or use programs which are classified or which will be used to access or develop classified material
shall have a personnel security clearance and an access authorization (need-to-know), as appropriate
for the highest classified and most restrictive category of classified material which they will access
under system constraints."[9]
DoD Manual 5220.22-M (Paragraph 3.a) defines access as "the ability and opportunity to obtain
knowledge of classified information. An individual, in fact, may have access to classified information
by being in a place where such information is kept, if the security measures which are in force do not
security review of system activity. (e.g., The log should record security related transactions, including
each access to a classified file and the nature of the access, e.g., logins, production of accountable
classified outputs, and creation of new classified files. Each classified file successfully accessed
[regardless of the number of individual references] during each 'job' or 'interactive session' should also
be recorded in the audit log. Much of the material in this log may also be required to assure that the
system preserves information entrusted to it.)"[9]
DoD Manual 5200.28-M (Section IV 4-305f) states: "Where needed to assure control of access and
individual accountability, each user or specific group of users shall be identified to the ADP System by
appropriate administrative or hardware/software measures. Such identification measures must be in
sufficient detail to enable the ADP System to provide the user only that material which he is
authorized."[9]
DoD Manual 5200.28-M (Section I 1-102b) states:
"Component's Designated Approving Authorities, or their designees for this purpose . . . will assure:
.................
(4) Maintenance of documentation on operating systems (O/S) and all modifications thereto, and its
retention for a sufficient period of time to enable tracing of security-related defects to their point of
origin or inclusion in the system.
.................
(6) Establishment of procedures to discover, recover, handle, and dispose of classified material
improperly disclosed through system malfunction or personnel action.
(7) Proper disposition and correction of security deficiencies in all approved ADP Systems, and the
effective use and disposition of system housekeeping or audit records, records of security violations or
security-related system malfunctions, and records of tests of the security features of an ADP
System."[9]
DoD Manual 5220.22-M (Section XIII 111) states: "Audit Trails
a. The general security requirement for any ADP system audit trail is that it provide a documented
history of the use of the system. An approved audit trail will permit review of classified system activity
and will provide a detailed activity record to facilitate reconstruction of events to determine the
magnitude of compromise (if any) should a security malfunction occur. To fulfil this basic requirement,
audit trail systems, manual, automated or a combination of both must document significant events
occurring in the following areas of concern: (i) preparation of input data and dissemination of output
data (i.e., reportable interactivity between users and system support personnel), (ii) activity involved
within an ADP environment (e.g., ADP support personnel modification of security and related
controls), and (iii) internal machine activity.
b. The audit trail for an ADP system approved to process classified information must be based on the
above three areas and may be stylized to the particular system. All systems approved for classified
processing should contain most if not all of the audit trail records listed below. The contractor's SPP
documentation must identify and describe those applicable:
1. Personnel access;
2. Unauthorized and surreptitious entry into the central computer facility or remote terminal areas;
3. Start/stop time of classified processing indicating pertinent systems security initiation and
termination events (e.g., upgrading/downgrading actions pursuant to paragraph 107);
4. All functions initiated by ADP system console operators;
5. Disconnects of remote terminals and peripheral devices (paragraph 107c);
6. Log-on and log-off user activity;
7. Unauthorized attempts to access files or programs, as well as all open, close, create, and file destroy
actions;
8. Program aborts and anomalies including identification information (i.e., user/program name, time
and location of incident, etc.);
9. System hardware additions, deletions and maintenance actions;
10. Generations and modifications affecting the security features of the system software.
c. The ADP system security supervisor or designee shall review the audit trail logs at least weekly to
assure that all pertinent activity is properly recorded and that appropriate action has been taken to
correct any anomaly. The majority of ADP systems in use today can develop audit trail systems in
accord with the above; however, special systems such as weapons, communications, communications
security, and tactical data exchange and display systems, may not be able to comply with all aspects of
the above and may require individualized consideration by the cognizant security office.
d. Audit trail records shall be retained for a period of one inspection cycle."[11]
system and the unauthorized manipulation of the system and its components. Particular attention shall
be given to the continuous protection of automated system security measures, techniques and
procedures when the personnel security clearance level of users having access to the system
changes."[8]
DoD Directive 5200.28 (VI.A.2) states: "Environmental Control. The ADP System shall be externally
protected to minimize the likelihood of unauthorized access to system entry points, access to classified
information in the system, or damage to the system."[8]
DoD Manual 5200.28-M (Section I 1-102b) states:
"Component's Designated Approving Authorities, or their designees for this purpose . . . will assure:
.................
(5) Supervision, monitoring, and testing, as appropriate, of changes in an approved ADP System which
could affect the security features of the system, so that a secure system is maintained.
.................
(7) Proper disposition and correction of security deficiencies in all approved ADP Systems, and the
effective use and disposition of system housekeeping or audit records, records of security violations or
security-related system malfunctions, and records of tests of the security features of an ADP System.
(8) Conduct of competent system ST&E, timely review of system ST&E reports, and correction of
deficiencies needed to support conditional or final approval or disapproval of an ADP System for the
processing of classified information.
(9) Establishment, where appropriate, of a central ST&E coordination point for the maintenance of
records of selected techniques, procedures, standards, and tests used in the testing and evaluation of
security features of ADP Systems which may be suitable for validation and use by other Department of
Defence Components."[9]
DoD Manual 5220.22-M (Section XIII 103a) requires: "the initial approval, in writing, of the cognizant
security office prior to processing any classified information in an ADP system. This section requires
reapproval by the cognizant security office for major system modifications made subsequent to initial
approval. Reapprovals will be required because of (i) major changes in personnel access requirements,
(ii) relocation or structural modification of the central computer facility, (iii) additions, deletions or
changes to main frame, storage or input/output devices, (iv) system software changes impacting
security protection features, (v) any change in clearance, declassification, audit trail or
hardware/software maintenance procedures, and (vi) other system changes as determined by the
cognizant security office."[11]
A major component of assurance, life-cycle assurance, as described in DoD Directive 7920.1, is
concerned with testing ADP systems both in the development phase as well as during operation.[17]
DoD Directive 5215.1 (Section F.2.C.(2)) requires "evaluations of selected industry and government-developed trusted computer systems against these criteria."[10]
hierarchical level. To encourage consistency and portability in the design and development of the
National Security Establishment trusted computer systems, it is desirable for all such systems to be
able to support a minimum number of levels and categories. The following suggestions are provided
for this purpose:
* The number of hierarchical classifications should be greater than or equal to sixteen (16).
* The number of non-hierarchical categories should be greater than or equal to sixty-four (64).
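These two minima fit comfortably in a compact representation. The sketch below is illustrative only, not part of the criteria: it encodes the sixteen hierarchical levels as a small integer and the sixty-four non-hierarchical categories as bits in a 64-bit mask.

```python
# Illustrative encoding of a sensitivity label meeting the suggested
# minima: 16 hierarchical levels (0-15) and 64 non-hierarchical
# categories (bits 0-63 of an integer mask).

def make_label(level, categories):
    assert 0 <= level < 16
    mask = 0
    for c in categories:
        assert 0 <= c < 64
        mask |= 1 << c
    return (level, mask)

def dominates(a, b):
    """True if label a dominates label b: level at least as high, and
    every category bit set in b is also set in a."""
    return a[0] >= b[0] and (b[1] & ~a[1]) == 0

assert dominates(make_label(3, {0, 5}), make_label(2, {5}))
assert not dominates(make_label(3, {0}), make_label(3, {1}))
```

A bitmask keeps the category-containment test to a single machine operation, one reason a fixed upper bound on categories is convenient for implementors.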
10.2 TESTING FOR DIVISION B
10.2.1 Personnel
The security testing team shall consist of at least two individuals with bachelor's degrees in Computer
Science or the equivalent and at least one individual with a master's degree in Computer Science or
equivalent. Team members shall be able to follow test plans prepared by the system developer and
suggest additions, shall be conversant with the "flaw hypothesis" or equivalent security testing
methodology, shall be fluent in the TCB implementation language(s), and shall have assembly level
programming experience. Before testing begins, the team members shall have functional knowledge
of, and shall have completed the system developer's internals course for, the system being evaluated. At
least one team member shall have previously completed a security test on another system.
10.2.2 Testing
The team shall have "hands-on" involvement in an independent run of the test package used by the
system developer to test security-relevant hardware and software. The team shall independently design
and implement at least fifteen system-specific tests in an attempt to circumvent the security
mechanisms of the system. The elapsed time devoted to testing shall be at least two months and need
not exceed four months. There shall be no fewer than thirty hands-on hours per team member spent
carrying out system developer-defined tests and test team-defined tests.
10.3 TESTING FOR DIVISION A
10.3.1 Personnel
The security testing team shall consist of at least one individual with a bachelor's degree in Computer
Science or the equivalent and at least two individuals with master's degrees in Computer Science or
equivalent. Team members shall be able to follow test plans prepared by the system developer and
suggest additions, shall be conversant with the "flaw hypothesis" or equivalent security testing
methodology, shall be fluent in the TCB implementation language(s), and shall have assembly level
programming experience. Before testing begins, the team members shall have functional knowledge
of, and shall have completed the system developer's internals course for, the system being evaluated.
At least one team member shall be familiar enough with the system hardware to understand the
maintenance diagnostic programs and supporting hardware documentation. At least two team members
shall have previously completed a security test on another system. At least one team member shall
have demonstrated system level programming competence on the system under test to a level of
complexity equivalent to adding a device driver to the system.
10.3.2 Testing
The team shall have "hands-on" involvement in an independent run of the test package used by the
system developer to test security-relevant hardware and software. The team shall independently design
and implement at least twenty-five system-specific tests in an attempt to circumvent the security
mechanisms of the system. The elapsed time devoted to testing shall be at least three months and need
not exceed six months. There shall be no fewer than fifty hands-on hours per team member spent
carrying out system developer-defined tests and test team-defined tests.
APPENDIX A
COMMERCIAL PRODUCT EVALUATION PROCESS
"Department of Defence Trusted Computer System Evaluation Criteria" forms the basis upon which
the Computer Security Center will carry out the commercial computer security evaluation process.
This process is focused on commercially produced and supported general-purpose operating system
products that meet the needs of government departments and agencies. The formal evaluation is aimed
at "off-the-shelf" commercially supported products and is completely divorced from any consideration
of overall system performance, potential applications, or particular processing environments. The
evaluation provides a key input to a computer system security approval/accreditation. However, it does
not constitute a complete computer system security evaluation. A complete study (e.g., as in reference
[18]) must consider additional factors dealing with the system in its unique environment, such as its
proposed security mode of operation, specific users, applications, data sensitivity, physical and
personnel security, administrative and procedural security, TEMPEST, and communications security.
The product evaluation process carried out by the Computer Security Center has three distinct
elements:
Preliminary Product Evaluation - An informal dialogue between a vendor and the Center in
which technical information is exchanged to create a common understanding of the
vendor's product, the criteria, and the rating that may be expected to result from a formal
product evaluation.
Formal Product Evaluation - A formal evaluation, by the Center, in which the product is
measured against the criteria and assigned a rating.
Evaluated Products List - A list of products that have been subjected to formal product
evaluation and their assigned ratings.
APPENDIX B
SUMMARY OF EVALUATION CRITERIA DIVISIONS
The divisions of systems recognized under the trusted computer system evaluation criteria are as
follows. Each division represents a major improvement in the overall confidence one can place in the
system to protect classified and other sensitive information.
Division (D): Minimal Protection
This division contains only one class. It is reserved for those systems that have been evaluated but that
fail to meet the requirements for a higher evaluation class.
Division (C): Discretionary Protection
Classes in this division provide for discretionary (need-to-know) protection and, through the inclusion
of audit capabilities, for accountability of subjects and the actions they initiate.
Division (B): Mandatory Protection
The notion of a TCB that preserves the integrity of sensitivity labels and uses them to enforce a set of
mandatory access control rules is a major requirement in this division. Systems in this division must
carry the sensitivity labels with major data structures in the system. The system developer also
provides the security policy model on which the TCB is based and furnishes a specification of the
TCB. Evidence must be provided to demonstrate that the reference monitor concept has been
implemented.
Division (A): Verified Protection
This division is characterized by the use of formal security verification methods to assure that the
mandatory and discretionary security controls employed in the system can effectively protect classified
or other sensitive information stored or processed by the system. Extensive documentation is required
to demonstrate that the TCB meets the security requirements in all aspects of design, development and
implementation.
APPENDIX C
SUMMARY OF EVALUATION CRITERIA CLASSES
The classes of systems recognized under the trusted computer system evaluation criteria are as follows.
They are presented in the order of increasing desirability from a computer security point of view.
Class (D): Minimal Protection
This class is reserved for those systems that have been evaluated but that fail to meet the requirements
for a higher evaluation class.
Class (C1): Discretionary Security Protection
The Trusted Computing Base (TCB) of a class (C1) system nominally satisfies the discretionary
security requirements by providing separation of users and data. It incorporates some form of credible
controls capable of enforcing access limitations on an individual basis, i.e., ostensibly suitable for
allowing users to be able to protect project or private information and to keep other users from
accidentally reading or destroying their data. The class (C1) environment is expected to be one of
cooperating users processing data at the same level(s) of sensitivity.
Class (C2): Controlled Access Protection
Systems in this class enforce a more finely grained discretionary access control than (C1) systems,
making users individually accountable for their actions through login procedures, auditing of security-relevant events, and resource isolation.
Class (B1): Labeled Security Protection
Class (B1) systems require all the features required for class (C2). In addition, an informal statement of
the security policy model, data labelling, and mandatory access control over named subjects and
objects must be present. The capability must exist for accurately labelling exported information. Any
flaws identified by testing must be removed.
Class (B2): Structured Protection
In class (B2) systems, the TCB is based on a clearly defined and documented formal security policy
model that requires the discretionary and mandatory access control enforcement found in class (B1)
systems be extended to all subjects and objects in the ADP system. In addition, covert channels are
addressed. The TCB must be carefully structured into protection-critical and non-protection-critical
elements. The TCB interface is well-defined and the TCB design and implementation enable it to be
subjected to more thorough testing and more complete review. Authentication mechanisms are
strengthened, trusted facility management is provided in the form of support for system administrator
and operator functions, and stringent configuration management controls are imposed. The system is
relatively resistant to penetration.
Class (B3): Security Domains
The class (B3) TCB must satisfy the reference monitor requirements that it mediate all accesses of
subjects to objects, be tamperproof, and be small enough to be subjected to analysis and tests. To this
end, the TCB is structured to exclude code not essential to security policy enforcement, with
significant system engineering during TCB design and implementation directed toward minimizing its
complexity. A security administrator is supported, audit mechanisms are expanded to signal security-relevant events, and system recovery procedures are required. The system is highly resistant to
penetration.
Class (A1): Verified Design
Systems in class (A1) are functionally equivalent to those in class (B3) in that no additional
architectural features or policy requirements are added. The distinguishing feature of systems in this
class is the analysis derived from formal design specification and verification techniques and the
resulting high degree of assurance that the TCB is correctly implemented. This assurance is
developmental in nature, starting with a formal model of the security policy and a formal top-level
specification (FTLS) of the design. In keeping with the extensive design and development analysis of
the TCB required of systems in class (A1), more stringent configuration management is required and
procedures are established for securely distributing the system to sites. A system security administrator
is supported.
APPENDIX D
REQUIREMENT DIRECTORY
This appendix lists requirements defined in "Department of Defence Trusted Computer System
Evaluation Criteria" alphabetically rather than by class. It is provided to assist in following the
evolution of a requirement through the classes. For each requirement, three types of criteria may be
present. Each will be preceded by the word: NEW, CHANGE, or ADD to indicate the following:
NEW: Any criteria appearing in a lower class are superseded by the criteria that follow.
CHANGE: The criteria that follow have appeared in a lower class but are changed for this class.
Highlighting is used to indicate the specific changes to previously stated criteria.
ADD: The criteria that follow have not been required for any lower class, and are added in this class to
the previously stated criteria for this requirement.
Abbreviations are used as follows:
NR: (No Requirement) This requirement is not included in this class.
NAR: (No Additional Requirements) This requirement does not change from the previous class.
The reader is referred to Part I of this document when placing new criteria for a requirement into the
complete context for that class.
Figure 1 provides a pictorial summary of the evolution of requirements through the classes. [see
chart elsewhere]
Audit
C1: NR.
C2: NEW: The TCB shall be able to create, maintain, and protect from modification or unauthorized
access or destruction an audit trail of accesses to the objects it protects. The audit data shall be
protected by the TCB so that read access to it is limited to those who are authorized for audit data. The
TCB shall be able to record the following types of events: use of identification and authentication
mechanisms, introduction of objects into a user's address space (e.g., file open, program initiation),
deletion of objects, and actions taken by computer operators and system administrators and/or system
security officers and other security relevant events. For each recorded event, the audit record shall
identify: date and time of the event, user, type of event, and success or failure of the event. For
identification/authentication events the origin of request (e.g., terminal ID) shall be included in the
audit record. For events that introduce an object into a user's address space and for object deletion
events the audit record shall include the name of the object. The ADP system administrator shall be
able to selectively audit the actions of any one or more users based on individual identity.
B1: CHANGE: For events that introduce an object into a user's address space and for object deletion
events the audit record shall include the name of the object and the object's security level. The ADP
system administrator shall be able to selectively audit the actions of any one or more users based on
individual identity and/or object security level.
ADD: The TCB shall also be able to audit any override of human-readable output markings.
B2: ADD: The TCB shall be able to audit the identified events that may be used in the exploitation of
covert storage channels.
B3: ADD: The TCB shall contain a mechanism that is able to monitor the occurrence or accumulation
of security auditable events that may indicate an imminent violation of security policy. This
mechanism shall be able to immediately notify the security administrator when thresholds are
exceeded, and, if the occurrence or accumulation of these security relevant events continues, the
system shall take the least disruptive action to terminate the event.
A1: NAR.
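The record contents required above — date and time, user, event type, success or failure, plus the origin for identification/authentication events, the object name, and (at B1) the object's security level — can be sketched as a simple structure. The field names below are illustrative assumptions, not mandated by the criteria:

```python
import datetime

# Illustrative audit record carrying the fields required at C2, plus the
# object security level added at class B1.

def audit_record(user, event_type, success, origin=None,
                 object_name=None, object_level=None):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,          # individual identity (C2 accountability)
        "event": event_type,   # e.g. "login", "file_open", "file_delete"
        "success": success,    # success or failure of the event
    }
    if origin is not None:         # required for I&A events (e.g. terminal ID)
        record["origin"] = origin
    if object_name is not None:    # events introducing or deleting objects
        record["object"] = object_name
    if object_level is not None:   # object's security level (B1 CHANGE)
        record["object_level"] = object_level
    return record

r = audit_record("jones", "file_open", True,
                 object_name="/payroll/q3", object_level="SECRET")
assert r["user"] == "jones" and r["object_level"] == "SECRET"
```

Recording the object's level alongside the user identity is what makes the B1 requirement — selective audit by identity and/or object security level — implementable as a simple filter over stored records.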
Configuration Management
C1: NR.
C2: NR.
B1: NR.
B2: NEW: During development and maintenance of the TCB, a configuration management system
shall be in place that maintains control of changes to the descriptive top-level specification, other
design data, implementation documentation, source code, the running version of the object code, and
test fixtures and documentation. The configuration management system shall assure a consistent
mapping among all documentation and code associated with the current version of the TCB. Tools
shall be provided for generation of a new version of the TCB from source code. Also available shall be
tools for comparing a newly generated version with the previous TCB version in order to ascertain that
only the intended changes have been made in the code that will actually be used as the new version of
the TCB.
B3: NAR.
A1: CHANGE: During the entire life-cycle, i.e., during the design, development, and maintenance of
the TCB, a configuration management system shall be in place for all security-relevant hardware,
firmware, and software that maintains control of changes to the formal model, the descriptive and
formal top-level specifications, other design data, implementation documentation, source code, the
running version of the object code, and test fixtures and documentation. Also available shall be tools,
maintained under strict configuration control, for comparing a newly generated version with the
previous TCB version in order to ascertain that only the intended changes have been made in the code
that will actually be used as the new version of the TCB.
ADD: A combination of technical, physical, and procedural safeguards shall be used to protect from
unauthorized modification or destruction the master copy or copies of all material used to generate the
TCB.
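The comparison tool the criteria call for — ascertaining that only the intended changes appear in a newly generated TCB version — can be approximated by hashing every file in the old and new source trees and reporting the differences. This is only a sketch of the idea; the on-disk layout is an assumption:

```python
import hashlib
import os

# Minimal sketch of a version-comparison tool: hash every file under two
# directory trees and report files added, removed, or changed, so a
# reviewer can confirm that only the intended changes are present.

def tree_hashes(root):
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                hashes[rel] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def compare_versions(old_root, new_root):
    old, new = tree_hashes(old_root), tree_hashes(new_root)
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(p for p in old if p in new and old[p] != new[p]),
    }
```

A cryptographic digest per file gives the reviewer a tamper-evident basis for the "only the intended changes" judgment, which plain timestamps or sizes would not.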
Design Documentation
C1: NEW: Documentation shall be available that provides a description of the manufacturer's
philosophy of protection and an explanation of how this philosophy is translated into the TCB. If the
TCB is composed of distinct modules, the interfaces between these modules shall be described.
C2: NAR.
B1: ADD: An informal or formal description of the security policy model enforced by the TCB shall
be available and an explanation provided to show that it is sufficient to enforce the security policy. The
specific TCB protection mechanisms shall be identified and an explanation given to show that they
satisfy the model.
B2: CHANGE: The interfaces between the TCB modules shall be described. A formal description of
the security policy model enforced by the TCB shall be available and proven that it is sufficient to
enforce the security policy.
ADD: The descriptive top-level specification (DTLS) shall be shown to be an accurate description of
the TCB interface. Documentation shall describe how the TCB implements the reference monitor
concept and give an explanation why it is tamper resistant, cannot be bypassed, and is correctly
implemented. Documentation shall describe how the TCB is structured to facilitate testing and to
enforce least privilege. This documentation shall also present the results of the covert channel analysis
and the tradeoffs involved in restricting the channels. All auditable events that may be used in the
exploitation of known covert storage channels shall be identified. The bandwidths of known covert
storage channels, the use of which is not detectable by the auditing mechanisms, shall be provided.
(See the Covert Channel Guideline section.)
B3: ADD: The TCB implementation (i.e., in hardware, firmware, and software) shall be informally
shown to be consistent with the DTLS. The elements of the DTLS shall be shown, using informal
techniques, to correspond to the elements of the TCB.
A1: CHANGE: The TCB implementation (i.e., in hardware, firmware, and software) shall be
informally shown to be consistent with the formal top-level specification (FTLS). The elements of the
FTLS shall be shown, using informal techniques, to correspond to the elements of the TCB.
ADD: Hardware, firmware, and software mechanisms not dealt with in the FTLS but strictly internal to
the TCB (e.g., mapping registers, direct memory access I/O) shall be clearly described.
Design Specification and Verification
C1: NR.
C2: NR.
B1: NEW: An informal or formal model of the security policy supported by the TCB shall be
maintained over the life cycle of the ADP system and demonstrated to be consistent with its axioms.
B2: CHANGE: A formal model of the security policy supported by the TCB shall be maintained over
the life cycle of the ADP system that is proven consistent with its axioms.
ADD: A descriptive top-level specification (DTLS) of the TCB shall be maintained that completely
and accurately describes the TCB in terms of exceptions, error messages, and effects. It shall be shown
to be an accurate description of the TCB interface.
B3: ADD: A convincing argument shall be given that the DTLS is consistent with the model.
A1: CHANGE: The FTLS shall be shown to be an accurate description of the TCB interface. A
convincing argument shall be given that the DTLS is consistent with the model and a combination of
formal and informal techniques shall be used to show that the FTLS is consistent with the model.
ADD: A formal top-level specification (FTLS) of the TCB shall be maintained that accurately
describes the TCB in terms of exceptions, error messages, and effects. The DTLS and FTLS shall
include those components of the TCB that are implemented as hardware and/or firmware if their
properties are visible at the TCB interface. This verification evidence shall be consistent with that
provided within the state-of-the-art of the particular Computer Security Center-endorsed formal
specification and verification system used. Manual or other mapping of the FTLS to the TCB source
code shall be performed to provide evidence of correct implementation.
Device Labels
C1: NR.
C2: NR.
B1: NR.
B2: NEW: The TCB shall support the assignment of minimum and maximum security levels to all
attached physical devices. These security levels shall be used by the TCB to enforce constraints
imposed by the physical environments in which the devices are located.
B3: NAR.
A1: NAR.
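The range check that the Device Labels requirement describes can be sketched in a few lines. The following Python fragment is illustrative only: the level encoding, the Device class, and check_device_io are invented names for this sketch, not part of the standard.

```python
# Illustrative sketch of the Device Labels requirement: each attached
# physical device carries a minimum and maximum security level, and the
# TCB refuses any I/O whose sensitivity falls outside that range.
# LEVELS, Device, and check_device_io are hypothetical names.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

class Device:
    def __init__(self, name, min_level, max_level):
        self.name = name
        self.min_level = LEVELS[min_level]
        self.max_level = LEVELS[max_level]

def check_device_io(device, data_level):
    """Allow I/O only if the data's level lies within the device's range."""
    return device.min_level <= LEVELS[data_level] <= device.max_level

printer = Device("hallway-printer", "UNCLASSIFIED", "CONFIDENTIAL")
print(check_device_io(printer, "CONFIDENTIAL"))  # True
print(check_device_io(printer, "SECRET"))        # False
```

A device placed in a less protected physical environment would simply be configured with a lower maximum level, so the same check enforces the environmental constraint.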
Discretionary Access Control
C1: NEW: The TCB shall define and control access between named users and named objects (e.g.,
files and programs) in the ADP system. The enforcement mechanism (e.g., self/group/public controls,
access control lists) shall allow users to specify and control sharing of those objects by named
individuals or defined groups or both.
C2: CHANGE: The enforcement mechanism (e.g., self/group/public controls, access control lists) shall
allow users to specify and control sharing of those objects by named individuals, or defined groups of
individuals, or by both, and shall provide controls to limit propagation of access rights.
ADD: The discretionary access control mechanism shall, either by explicit user action or by default,
provide that objects are protected from unauthorized access. These access controls shall be capable of
including or excluding access to the granularity of a single user. Access permission to an object by
users not already possessing access permission shall only be assigned by authorized users.
B1: NAR.
B2: NAR.
B3: CHANGE: The enforcement mechanism (e.g., access control lists) shall allow users to specify and
control sharing of those objects, and shall provide controls to limit propagation of access rights. These
access controls shall be capable of specifying, for each named object, a list of named individuals and a
list of groups of named individuals with their respective modes of access to that object.
ADD: Furthermore, for each such named object, it shall be possible to specify a list of named
individuals and a list of groups of named individuals for which no access to the object is to be given.
A1: NAR.
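The B3 requirement above (per-object lists of permitted users and groups with their access modes, plus explicit exclusion lists) can be sketched as follows. The ACL class and access_allowed function are hypothetical illustrations, not interfaces mandated by the standard.

```python
# Sketch of B3-style discretionary access control: named objects carry
# per-user and per-group access modes plus explicit deny lists, and an
# explicit exclusion always overrides a granted mode. Names are invented.

class ACL:
    def __init__(self):
        self.user_modes = {}     # user  -> set of modes, e.g. {"read", "write"}
        self.group_modes = {}    # group -> set of modes
        self.denied_users = set()
        self.denied_groups = set()

def access_allowed(acl, user, groups, mode):
    # Explicit exclusions take precedence (the B3 ADD requirement).
    if user in acl.denied_users or acl.denied_groups & set(groups):
        return False
    if mode in acl.user_modes.get(user, set()):
        return True
    return any(mode in acl.group_modes.get(g, set()) for g in groups)

payroll = ACL()
payroll.group_modes["accounting"] = {"read"}
payroll.user_modes["alice"] = {"read", "write"}
payroll.denied_users.add("mallory")

print(access_allowed(payroll, "alice", ["accounting"], "write"))   # True
print(access_allowed(payroll, "bob", ["accounting"], "read"))      # True
print(access_allowed(payroll, "mallory", ["accounting"], "read"))  # False
```

Note how granularity reaches a single user in both directions: alice is individually granted write, and mallory is individually excluded even though his group has read.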
Identification and Authentication
C1: NEW: The TCB shall require users to identify themselves to it before beginning to perform any
other actions that the TCB is expected to mediate. Furthermore, the TCB shall use a protected
mechanism (e.g., passwords) to authenticate the user's identity. The TCB shall protect authentication
data so that it cannot be accessed by any unauthorized user.
C2: ADD: The TCB shall be able to enforce individual accountability by providing the capability to
uniquely identify each individual ADP system user. The TCB shall also provide the capability of
associating this identity with all auditable actions taken by that individual.
B1: CHANGE: Furthermore, the TCB shall maintain authentication data that includes information for
verifying the identity of individual users (e.g., passwords) as well as information for determining the
clearance and authorizations of individual users. This data shall be used by the TCB to authenticate the
user's identity and to ensure that the security level and authorizations of subjects external to the TCB
that may be created to act on behalf of the individual user are dominated by the clearance and
authorization of that user.
B2: NAR.
B3: NAR.
A1: NAR.
Label Integrity
C1: NR.
C2: NR.
B1: NEW: Sensitivity labels shall accurately represent security levels of the specific subjects or objects
with which they are associated. When exported by the TCB, sensitivity labels shall accurately and
unambiguously represent the internal labels and shall be associated with the information being
exported.
B2: NAR.
B3: NAR.
A1: NAR.
Labeling Human-Readable Output
C1: NR.
C2: NR.
B1: NEW: The ADP system administrator shall be able to specify the printable names associated with
exported security levels. The TCB shall mark the beginning and end of all human-readable,
paged, hardcopy output (e.g., line printer output) with human-readable sensitivity labels that properly*
represent the sensitivity of the output. The TCB shall, by default, mark the top and bottom of each
page of human-readable, paged, hardcopy output (e.g., line printer output) with human-readable
sensitivity labels that properly* represent the overall sensitivity of the output or that properly*
represent the sensitivity of the information on the page. The TCB shall, by default and in an
appropriate manner, mark other forms of human-readable output (e.g., maps, graphics) with human-readable sensitivity labels that properly* represent the sensitivity of the output. Any override of these
marking defaults shall be auditable by the TCB.
B2: NAR.
B3: NAR.
A1: NAR.
* The hierarchical classification component in human-readable sensitivity labels shall be equal to the
greatest hierarchical classification of any of the information in the output that the labels refer to; the
non-hierarchical category component shall include all of the non-hierarchical categories of the
information in the output the labels refer to, but no other non-hierarchical categories.
Labels
C1: NR.
C2: NR.
B1: NEW: Sensitivity labels associated with each subject and storage object under its control (e.g.,
process, file, segment, device) shall be maintained by the TCB. These labels shall be used as the basis
for mandatory access control decisions. In order to import non-labelled data, the TCB shall request
and receive from an authorized user the security level of the data, and all such actions shall be
auditable by the TCB.
B2: CHANGE: Sensitivity labels associated with each ADP system resource (e.g., subject, storage
object, ROM) that is directly or indirectly accessible by subjects external to the TCB shall be
maintained by the TCB.
B3: NAR.
A1: NAR.
Mandatory Access Control
C1: NR.
C2: NR.
B1: NEW: The TCB shall enforce a mandatory access control policy over all subjects and storage
objects under its control (e.g., processes, files, segments, devices). These subjects and objects shall be
assigned sensitivity labels that are a combination of hierarchical classification levels and non-hierarchical categories, and the labels shall be used as the basis for mandatory access control decisions.
The TCB shall be able to support two or more such security levels. (See the Mandatory Access Control
guidelines.) The following requirements shall hold for all accesses between subjects and objects
controlled by the TCB: A subject can read an object only if the hierarchical classification in the
subject's security level is greater than or equal to the hierarchical classification in the object's security
level and the non-hierarchical categories in the subject's security level include all the non-hierarchical
categories in the object's security level. A subject can write an object only if the hierarchical
classification in the subject's security level is less than or equal to the hierarchical classification in the
object's security level and all the non-hierarchical categories in the subject's security level are included
in the non-hierarchical categories in the object's security level. Identification and authentication data
shall be used by the TCB to authenticate the user's identity and to ensure that the security level and
authorization of subjects external to the TCB that may be created to act on behalf of the individual
user are dominated by the clearance and authorization of that user.
B2: CHANGE: The TCB shall enforce a mandatory access control policy over all resources (i.e.,
subjects, storage objects, and I/O devices) that are directly or indirectly accessible by subjects external
to the TCB. The following requirements shall hold for all accesses between all subjects external to the
TCB and all objects directly or indirectly accessible by these subjects:
B3: NAR.
A1: NAR.
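The B1 read and write rules above are the simple security condition ("no read up") and the *-property ("no write down"). A hedged sketch follows, encoding a security level as a (classification index, category set) pair; this encoding and the function names are illustrative, not something the standard prescribes.

```python
# Sketch of the B1 mandatory access control rules: a subject may read an
# object only if the subject's level dominates the object's, and may write
# an object only if the object's level dominates the subject's.
# Labels are illustrative (classification index, category frozenset) pairs.

def dominates(a, b):
    """Label a dominates b: higher-or-equal level and superset categories."""
    return a[0] >= b[0] and a[1] >= b[1]   # >= on sets tests superset

def may_read(subject, obj):
    return dominates(subject, obj)   # simple security condition

def may_write(subject, obj):
    return dominates(obj, subject)   # *-property (confinement)

secret_nato = (2, frozenset({"NATO"}))
confidential = (1, frozenset())

print(may_read(secret_nato, confidential))   # True: reading down is allowed
print(may_write(secret_nato, confidential))  # False: writing down is blocked
```

Blocking writes to lower levels is what prevents a high-level subject (or a Trojan Horse acting on its behalf) from leaking information downward through ordinary storage objects.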
Object Reuse
C1: NR.
C2: NEW: All authorizations to the information contained within a storage object shall be revoked
prior to initial assignment, allocation or reallocation to a subject from the TCB's pool of unused storage
objects. No information, including encrypted representations of information, produced by a prior
subject's actions is to be available to any subject that obtains access to an object that has been released
back to the system.
B1: NAR.
B2: NAR.
B3: NAR.
A1: NAR.
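The object reuse requirement can be illustrated by an allocator that scrubs each storage object before reassigning it, so nothing produced by a prior subject survives reallocation. The StoragePool class below is a hypothetical sketch, not a real TCB mechanism.

```python
# Sketch of the object reuse requirement: storage objects are zero-filled
# at allocation time, so a new subject never sees residue left by the
# previous owner. StoragePool and its methods are invented for this sketch.

class StoragePool:
    def __init__(self, count, size):
        self.free = [bytearray(size) for _ in range(count)]

    def allocate(self):
        buf = self.free.pop()
        buf[:] = bytes(len(buf))   # revoke prior contents: zero-fill
        return buf

    def release(self, buf):
        self.free.append(buf)      # scrubbed again on the next allocate

pool = StoragePool(count=2, size=8)
a = pool.allocate()
a[:] = b"secret!!"                 # a prior subject writes sensitive data
pool.release(a)
b = pool.allocate()                # may be the very same underlying object
print(bytes(b))                    # b'\x00\x00\x00\x00\x00\x00\x00\x00'
```

Scrubbing on allocation (rather than only on release) is a common design choice because it also covers objects whose release path was bypassed, e.g. after a crash.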
mechanisms provided by the TCB, guidelines on their use, and how they interact with one another.
C2: NAR.
B1: NAR.
B2: NAR.
B3: NAR.
A1: NAR.
Security Testing
C1: NEW: The security mechanisms of the ADP system shall be tested and found to work as claimed
in the system documentation. Testing shall be done to assure that there are no obvious ways for an
unauthorized user to bypass or otherwise defeat the security protection mechanisms of the TCB. (See
the Security Testing guidelines.)
C2: ADD: Testing shall also include a search for obvious flaws that would allow violation of resource
isolation, or that would permit unauthorized access to the audit or authentication data.
B1: NEW: The security mechanisms of the ADP system shall be tested and found to work as claimed
in the system documentation. A team of individuals who thoroughly understand the specific
implementation of the TCB shall subject its design documentation, source code, and object code to
thorough analysis and testing. Their objectives shall be: to uncover all design and implementation
flaws that would permit a subject external to the TCB to read, change, or delete data normally denied
under the mandatory or discretionary security policy enforced by the TCB; as well as to assure that no
subject (without authorization to do so) is able to cause the TCB to enter a state such that it is unable to
respond to communications initiated by other users. All discovered flaws shall be removed or
neutralized and the TCB retested to demonstrate that they have been eliminated and that new flaws
have not been introduced. (See the Security Testing Guidelines.)
B2: CHANGE: All discovered flaws shall be corrected and the TCB retested to demonstrate that they
have been eliminated and that new flaws have not been introduced.
ADD: The TCB shall be found relatively resistant to penetration. Testing shall demonstrate that the
TCB implementation is consistent with the descriptive top-level specification.
B3: CHANGE: The TCB shall be found resistant to penetration.
ADD: No design flaws and no more than a few correctable implementation flaws may be found during
testing and there shall be reasonable confidence that few remain.
A1: CHANGE: Testing shall demonstrate that the TCB implementation is consistent with the formal
top-level specification.
ADD: Manual or other mapping of the FTLS to the source code may form a basis for penetration
testing.
Universal Knowledge Solutions S.A.L.
- 97 -
System Architecture
C1: NEW: The TCB shall maintain a domain for its own execution that protects it from external
interference or tampering (e.g., by modification of its code or data structures). Resources controlled by
the TCB may be a defined subset of the subjects and objects in the ADP system.
C2: ADD: The TCB shall isolate the resources to be protected so that they are subject to the access
control and auditing requirements.
B1: ADD: The TCB shall maintain process isolation through the provision of distinct address spaces
under its control.
B2: NEW: The TCB shall maintain a domain for its own execution that protects it from external
interference or tampering (e.g., by modification of its code or data structures). The TCB shall maintain
process isolation through the provision of distinct address spaces under its control. The TCB shall be
internally structured into well-defined, largely independent modules. It shall make effective use of
available hardware to separate those elements that are protection- critical from those that are not. The
TCB modules shall be designed such that the principle of least privilege is enforced. Features in
hardware, such as segmentation, shall be used to support logically distinct storage objects with separate
attributes (namely: readable, writeable). The user interface to the TCB shall be completely defined and
all elements of the TCB identified.
B3: ADD: The TCB shall be designed and structured to use a complete, conceptually simple protection
mechanism with precisely defined semantics. This mechanism shall play a central role in enforcing the
internal structuring of the TCB and the system. The TCB shall incorporate significant use of layering,
abstraction and data hiding. Significant system engineering shall be directed toward minimizing the
complexity of the TCB and excluding from the TCB modules that are not protection-critical.
A1: NAR.
System Integrity
C1: NEW: Hardware and/or software features shall be provided that can be used to periodically
validate the correct operation of the on-site hardware and firmware elements of the TCB.
C2: NAR.
B1: NAR.
B2: NAR.
B3: NAR.
A1: NAR.
Test Documentation
C1: NEW: The system developer shall provide to the evaluators a document that describes the test
plan, test procedures that show how the security mechanisms were tested and results of the security
mechanisms' functional testing.
C2: NAR.
B1: NAR.
B2: ADD: It shall include results of testing the effectiveness of the methods used to reduce covert
channel bandwidths.
B3: NAR.
A1: ADD: The results of the mapping between the formal top-level specification and the TCB source
code shall be given.
Trusted Distribution
C1: NR.
C2: NR.
B1: NR.
B2: NR.
B3: NR.
A1: NEW: A trusted ADP system control and distribution facility shall be provided for maintaining the
integrity of the mapping between the master data describing the current version of the TCB and the
on-site master copy of the code for the current version. Procedures (e.g., site security acceptance testing)
shall exist for assuring that the TCB software, firmware, and hardware updates distributed to a
customer are exactly as specified by the master copies.
Trusted Path
C1: NR.
C2: NR.
B1: NR.
B2: NEW: The TCB shall support a trusted communication path between itself and user for initial
login and authentication. Communications via this path shall be initiated exclusively by a user.
B3: CHANGE: The TCB shall support a trusted communication path between itself and users for use
when a positive TCB-to-user connection is required (e.g., login, change subject security level).
Communications via this trusted path shall be activated exclusively by a user or the TCB and shall be
logically isolated and unmistakably distinguishable from other paths.
A1: NAR.
Trusted Recovery
C1: NR.
C2: NR.
B1: NR.
B2: NR.
B3: NEW: Procedures and/or mechanisms shall be provided to assure that, after an ADP system failure
or other discontinuity, recovery without a protection compromise is obtained.
A1: NAR.
GLOSSARY
Access - A specific type of interaction between a subject and an object that results in the flow of
information from one to the other.
Approval/Accreditation - The official authorization that is granted to an ADP system to process
sensitive information in its operational environment, based upon comprehensive security evaluation of
the system's hardware, firmware, and software security design, configuration, and implementation and
of the other system procedural, administrative, physical, TEMPEST, personnel, and communications
security controls.
Audit Trail - A set of records that collectively provide documentary evidence of processing used to aid
in tracing from original transactions forward to related records and reports, and/or backwards from
records and reports to their component source transactions.
Authenticate - To establish the validity of a claimed identity.
Automatic Data Processing (ADP) System - An assembly of computer hardware, firmware, and
software configured for the purpose of classifying, sorting, calculating, computing, summarizing,
transmitting and receiving, storing, and retrieving data with a minimum of human intervention.
Bandwidth - A characteristic of a communication channel that is the amount of information that can be
passed through it in a given amount of time, usually expressed in bits per second.
Bell-LaPadula Model - A formal state transition model of computer security policy that describes a set
of access control rules. In this formal model, the entities in a computer system are divided into abstract
sets of subjects and objects. The notion of a secure state is defined and it is proven that each state
transition preserves security by moving from secure state to secure state; thus, inductively proving that
the system is secure. A system state is defined to be "secure" if the only permitted access modes of
subjects to objects are in accordance with a specific security policy. In order to determine whether or
not a specific access mode is allowed, the clearance of a subject is compared to the classification of the
object and a determination is made as to whether the subject is authorized for the specific access mode.
The clearance/classification scheme is expressed in terms of a lattice. See also: Lattice, Simple
Security Property, *-Property.
Certification - The technical evaluation of a system's security features, made as part of and in support
of the approval/accreditation process, that establishes the extent to which a particular computer
system's design and implementation meet a set of specified security requirements.
Channel - An information transfer path within a system. May also refer to the mechanism by which the
path is effected.
Covert Channel - A communication channel that allows a process to transfer information in a manner that
violates the system's security policy. See also: Covert Storage Channel, Covert Timing Channel.
Covert Storage Channel - A covert channel that involves the direct or indirect writing of a storage
location by one process and the direct or indirect reading of the storage location by another process.
Covert storage channels typically involve a finite resource (e.g., sectors on a disk) that is shared by two
subjects at different security levels.
Covert Timing Channel - A covert channel in which one process signals information to another by
modulating its own use of system resources (e.g., CPU time) in such a way that this manipulation
affects the real response time observed by the second process.
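As a deliberately simplified illustration of a covert storage channel as just defined, the sketch below signals one bit at a time through the mere existence of a shared file: the "high" side writes metadata, and the "low" side reads it without ever touching the data itself. The filename and single-process framing are invented for the demonstration; a real channel would involve two processes at different security levels synchronised by a clock.

```python
# Simplified covert storage channel demo: the presence or absence of a
# shared file encodes one bit per interval. File name and framing are
# invented; this is a teaching sketch, not an attack tool.

import os
import tempfile

flag = os.path.join(tempfile.gettempdir(), "covert-demo.flag")

def send_bit(bit):
    # "High" side: create the file for a 1, remove it for a 0.
    if bit:
        open(flag, "w").close()
    elif os.path.exists(flag):
        os.remove(flag)

def receive_bit():
    # "Low" side: only metadata (existence) is observed, never contents.
    return 1 if os.path.exists(flag) else 0

message = [1, 0, 1, 1]
received = []
for b in message:          # in a real channel the sender and receiver are
    send_bit(b)            # separate processes at different security levels
    received.append(receive_bit())
send_bit(0)                # clean up the shared state

print(received == message)  # True
```

The bandwidth of such a channel is bounded by how fast the two sides can modulate and observe the shared resource, which is why the B2 and B3 requirements above call for covert channel analysis and bandwidth estimates.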
Data - Information with a specific physical representation.
Data Integrity - The state that exists when computerized data is the same as that in the source
documents and has not been exposed to accidental or malicious alteration or destruction.
Descriptive Top-Level Specification (DTLS) - A top-level specification that is written in a natural
language (e.g., English), an informal program design notation, or a combination of the two.
Discretionary Access Control - A means of restricting access to objects based on the identity of
subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject
with a certain access permission is capable of passing that permission (perhaps indirectly) on to any
other subject (unless restrained by mandatory access control).
Domain - The set of objects that a subject has the ability to access.
Dominate - Security level S1 is said to dominate security level S2 if the hierarchical classification of
S1 is greater than or equal to that of S2 and the non-hierarchical categories of S1 include all those of
S2 as a subset.
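The dominance relation defined above is a partial order, not a total one: two levels with the same classification but disjoint category sets are incomparable, which is why the clearance/classification scheme forms a lattice. A brief Python sketch, using an illustrative (level, category set) encoding:

```python
# Sketch of the "dominates" relation: S1 dominates S2 when S1's level is
# greater than or equal to S2's and S1's categories include S2's as a
# subset. The numeric levels and category names below are illustrative.

def dominates(s1, s2):
    level1, cats1 = s1
    level2, cats2 = s2
    return level1 >= level2 and cats1 >= cats2   # >= on sets is superset

secret_nuclear  = (2, {"NUCLEAR"})
secret_nato     = (2, {"NATO"})
top_secret_both = (3, {"NUCLEAR", "NATO"})

print(dominates(top_secret_both, secret_nato))   # True
print(dominates(secret_nuclear, secret_nato))    # False
print(dominates(secret_nato, secret_nuclear))    # False: incomparable pair
```

The last two results show an incomparable pair: neither SECRET/NUCLEAR nor SECRET/NATO dominates the other, yet TOP SECRET with both categories dominates each of them.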
Exploitable Channel - Any channel that is useable or detectable by subjects external to the Trusted
Computing Base.
Flaw Hypothesis Methodology - A system analysis and penetration technique where specifications and
documentation for the system are analyzed and then flaws in the system are hypothesized. The list of
hypothesized flaws is then prioritized on the basis of the estimated probability that a flaw actually
exists and, assuming a flaw does exist, on the ease of exploiting it and on the extent of control or
compromise it would provide. The prioritized list is used to direct the actual testing of the system.
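The prioritisation step of the Flaw Hypothesis Methodology can be sketched as a simple scoring exercise over the hypothesised flaws; the multiplicative score and the example flaws below are invented for illustration, not part of the methodology's definition.

```python
# Sketch of the Flaw Hypothesis Methodology's prioritisation step:
# hypothesised flaws are ranked by estimated probability of existence
# and, given existence, ease of exploitation and extent of compromise.
# The flaw list and multiplicative score are invented illustrations.

hypothesized_flaws = [
    # (description, p_exists, ease 0-1, extent 0-1)
    ("unchecked length in login banner", 0.6, 0.8, 0.9),
    ("race in temp-file creation",       0.4, 0.5, 0.6),
    ("debug interface left enabled",     0.2, 0.9, 1.0),
]

def priority(flaw):
    _, p_exists, ease, extent = flaw
    return p_exists * ease * extent    # one simple way to combine factors

ranked = sorted(hypothesized_flaws, key=priority, reverse=True)
for desc, *_ in ranked:
    print(desc)
# The top-ranked hypotheses direct the actual penetration testing.
```

In practice the estimates come from studying the system's specifications and documentation, and the list is revisited as testing confirms or refutes each hypothesis.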
Flaw - An error of commission, omission, or oversight in a system that allows protection mechanisms
to be bypassed.
Formal Proof - A complete and convincing mathematical argument, presenting the full logical
justification for each proof step, for the truth of a theorem or set of theorems. The formal verification
process uses formal proofs to show the truth of certain properties of formal specification and for
showing that computer programs satisfy their specifications.
Formal Security Policy Model - A mathematically precise statement of a security policy. To be
adequately precise, such a model must represent the initial state of a system, the way in which the
system progresses from one state to another, and a definition of a "secure" state of the system. To be
acceptable as a basis for a TCB, the model must be supported by a formal proof that if the initial state
of the system satisfies the definition of a "secure" state and if all assumptions required by the model
hold, then all future states of the system will be secure. Some formal modelling techniques include:
state transition models, temporal logic models, denotational semantics models, algebraic specification
models. An example is the model described by Bell and LaPadula in reference [2]. See also: Bell-LaPadula Model, Security Policy Model.
Formal Top-Level Specification (FTLS) - A Top-Level Specification that is written in a formal
mathematical language to allow theorems showing the correspondence of the system specification to
its formal requirements to be hypothesized and formally proven.
Security Testing - A process used to determine that the security features of a system are implemented
as designed and that they are adequate for a proposed application environment. This process includes
hands-on functional testing, penetration testing, and verification. See also: Functional Testing,
Penetration Testing, Verification.
Sensitive Information - Information that, as determined by a competent authority, must be protected
because its unauthorized disclosure, alteration, loss, or destruction will at least cause perceivable
damage to someone or something.
Sensitivity Label - A piece of information that represents the security level of an object and that
describes the sensitivity (e.g., classification) of the data in the object. Sensitivity labels are used by the
TCB as the basis for mandatory access control decisions.
Simple Security Condition - A Bell-LaPadula security model rule allowing a subject read access to an
object only if the security level of the subject dominates the security level of the object.
Single-Level Device - A device that is used to process data of a single security level at any one time.
Since the device need not be trusted to separate data of different security levels, sensitivity labels do
not have to be stored with the data being processed.
*-Property (Star Property) - A Bell-LaPadula security model rule allowing a subject write access to an
object only if the security level of the subject is dominated by the security level of the object. Also
known as the Confinement Property.
Storage Object - An object that supports both read and write accesses.
Subject - An active entity, generally in the form of a person, process, or device that causes information
to flow among objects or changes the system state. Technically, a process/domain pair.
Subject Security Level - A subject's security level is equal to the security level of the objects to which
it has both read and write access. A subject's security level must always be dominated by the clearance
of the user the subject is associated with.
TEMPEST - The study and control of spurious electronic signals emitted from ADP equipment.
Top-Level Specification (TLS) - A non-procedural description of system behaviour at the most
abstract level. Typically a functional specification that omits all implementation details.
Trap Door - A hidden software or hardware mechanism that permits system protection mechanisms to
be circumvented. It is activated in some non-apparent manner (e.g., special "random" key sequence at a
terminal).
Trojan Horse - A computer program with an apparently or actually useful function that contains
additional (hidden) functions that surreptitiously exploit the legitimate authorizations of the invoking
process to the detriment of security. For example, making a "blind copy" of a sensitive file for the
creator of the Trojan Horse.
Trusted Computer System - A system that employs sufficient hardware and software integrity
measures to allow its use for processing simultaneously a range of sensitive or classified information.
Trusted Computing Base (TCB) - The totality of protection mechanisms within a computer system --
including hardware, firmware, and software -- the combination of which is responsible for enforcing a
security policy. A TCB consists of one or more components that together enforce a unified security
policy over a product or system. The ability of a trusted computing base to correctly enforce a security
policy depends solely on the mechanisms within the TCB and on the correct input by system
administrative personnel of parameters (e.g., a user's clearance) related to the security policy.
Trusted Path - A mechanism by which a person at a terminal can communicate directly with the
Trusted Computing Base. This mechanism can only be activated by the person or the Trusted
Computing Base and cannot be imitated by untrusted software.
Trusted Software - The software portion of a Trusted Computing Base.
User - Any person who interacts directly with a computer system.
Verification - The process of comparing two levels of system specification for proper correspondence
(e.g., security policy model with top-level specification, TLS with source code, or source code with
object code). This process may or may not be automated.
Write - A fundamental operation that results only in the flow of information from a subject to an
object.
Write Access - Permission to write an object.
REFERENCES
1. Anderson, J. P. Computer Security Technology Planning Study, ESD-TR-73-51, vol. I, ESD/AFSC,
Hanscom AFB, Bedford, Mass., October 1972 (NTIS AD-758 206).
2. Bell, D. E. and LaPadula, L. J. Secure Computer Systems: Unified Exposition and Multics
Interpretation, MTR-2997 Rev. 1, MITRE Corp., Bedford, Mass., March 1976.
3. Brand, S. L. "An Approach to Identification and Audit of Vulnerabilities and Control in Application
Systems," in Audit and Evaluation of Computer Security II: System Vulnerabilities and Controls, Z.
Ruthberg, ed., NBS Special Publication #500-57, MD78733, April 1980.
4. Brand, S. L. "Data Processing and A-123," in Proceedings of the Computer Performance Evaluation
User's Group 18th Meeting, C. B. Wilson, ed., NBS Special Publication #500-95, October 1982.
5. DCID 1/16, Security of Foreign Intelligence in Automated Data Processing Systems and Networks
(U), 4 January 1983.
6. DIAM 50-4, Security of Compartmented Computer Operations (U), 24 June 1980.
7. Denning, D. E. "A Lattice Model of Secure Information Flow," in Communications of the ACM,
vol. 19, no. 5 (May 1976), pp. 236-243.
8. Denning, D. E. Secure Information Flow in Computer Systems, Ph.D. dissertation, Purdue Univ.,
West Lafayette, Ind., May 1975.
9. DoD Directive 5000.29, Management of Computer Resources in Major Defense Systems, 26 April
1976.
10. DoD 5200.1-R, Information Security Program Regulation, August 1982.
11. DoD Directive 5200.28, Security Requirements for Automatic Data Processing (ADP) Systems,
revised April 1978.
12. DoD 5200.28-M, ADP Security Manual -- Techniques and Procedures for Implementing,
Deactivating, Testing, and Evaluating Secure Resource-Sharing ADP Systems, revised June 1979.
13. DoD Directive 5215.1, Computer Security Evaluation Center, 25 October 1982.
14. DoD 5220.22-M, Industrial Security Manual for Safeguarding Classified Information, March
1984.
15. DoD 5220.22-R, Industrial Security Regulation, February 1984.
16. DoD Directive 5400.11, Department of Defense Privacy Program, 9 June 1982.
17. DoD Directive 7920.1, Life Cycle Management of Automated Information Systems (AIS), 17
October 1978.
18. Executive Order 12356, National Security Information, 6 April 1982.
19. Faurer, L. D. "Keeping the Secrets Secret," in Government Data Systems, November - December
1981, pp. 14-17.
20. Federal Information Processing Standards Publication (FIPS PUB) 39, Glossary for Computer
Systems Security, 15 February 1976.
21. Federal Information Processing Standards Publication (FIPS PUB) 73, Guidelines for Security of
Computer Applications, 30 June 1980.
22. Federal Information Processing Standards Publication (FIPS PUB) 102, Guideline for Computer
Security Certification and Accreditation.
23. Lampson, B. W. "A Note on the Confinement Problem," in Communications of the ACM, vol. 16,
no. 10 (October 1973), pp. 613-615.
24. Lee, T. M. P., et al. "Processors, Operating Systems and Nearby Peripherals: A Consensus Report,"
in Audit and Evaluation of Computer Security II: System Vulnerabilities and Controls, Z. Ruthberg,
ed., NBS Special Publication #500-57, MD78733, April 1980.
25. Lipner, S. B. A Comment on the Confinement Problem, MITRE Corp., Bedford, Mass.
26. Millen, J. K. "An Example of a Formal Flow Violation," in Proceedings of the IEEE Computer
Society 2nd International Computer Software and Applications Conference, November 1978, pp. 204-
208.
27. Millen, J. K. "Security Kernel Validation in Practice," in Communications of the ACM, vol. 19, no.
5 (May 1976), pp. 243-250.
28. Nibaldi, G. H. Proposed Technical Evaluation Criteria for Trusted Computer Systems, MITRE
Corp., Bedford, Mass., M79-225, AD-A108-832, 25 October 1979.
29. Nibaldi, G. H. Specification of A Trusted Computing Base, (TCB), MITRE Corp., Bedford, Mass.,
M79-228, AD-A108-831, 30 November 1979.
30. OMB Circular A-71, Transmittal Memorandum No. 1, Security of Federal Automated Information
Systems, 27 July 1978.
31. OMB Circular A-123, Internal Control Systems, 5 November 1981.
32. Ruthberg, Z. and McKenzie, R., eds. Audit and Evaluation of Computer Security, in NBS Special
Publication #500-19, October 1977.
33. Schaefer, M., Linde, R. R., et al. "Program Confinement in KVM/370," in Proceedings of the ACM
National Conference, October 1977, Seattle.
34. Schell, R. R. "Security Kernels: A Methodical Design of System Security," in Technical Papers,
USE Inc. Spring Conference, 5-9 March 1979, pp. 245-250.
35. Trotter, E. T. and Tasker, P. S. Industry Trusted Computer Systems Evaluation Process, MITRE
Corp., Bedford, Mass., MTR-3931, 1 May 1980.
36. Turn, R. Trusted Computer Systems: Needs and Incentives for Use in Government and Private
Sector, (AD # A103399), Rand Corporation (R-28811-DR&E), June 1981.
37. Walker, S. T. "The Advent of Trusted Computer Operating Systems," in National Computer
Conference Proceedings, May 1980, pp. 655-665.
38. Ware, W. H., ed., Security Controls for Computer Systems: Report of Defense Science Board Task
Force on Computer Security, AD # A076617/0, Rand Corporation, Santa Monica, Calif., February
1970, reissued October 1979.
Part 2
Summary:
In recent years all sectors of the economy have focused on management of risk as the key to making
organisations successful in delivering their objectives whilst protecting the interests of their
stakeholders.
Risk is uncertainty of outcome, and good risk management allows an organisation to:
Have increased confidence in achieving its desired outcomes;
Effectively constrain threats to acceptable levels;
Take informed decisions about exploiting opportunities.
Good risk management also allows stakeholders to have increased confidence in the organisation's
corporate governance and in its ability to deliver to the wider environment in which it functions.
Objectives:
Upon completion of this part, the student will be able to understand:
What is risk?
What is risk analysis?
Why risk analysis is necessary and relationship between threat, vulnerability, and loss
Risk Analysis Approaches
Comparison of Risk Analysis Approaches
Detailed Risk Analysis Approach
Introduction
It is a matter of definition that organizations exist for a purpose: perhaps to deliver a service, or to
achieve particular outcomes.
In the private sector the primary purpose of an organization is generally concerned with the
enhancement of shareholder value.
In the central government sector the purpose is generally concerned with the delivery of service or with
the delivery of a beneficial outcome in the public interest.
Whatever the purpose of the organization may be, the delivery of its objectives is surrounded by
uncertainty which both poses threats to success and offers opportunity for increasing success.
The management of risk is not a linear process; rather it is the balancing of a number of interwoven
elements which interact with each other and which have to be in balance with each other if risk
management is to be effective.
Furthermore, specific risks cannot be addressed in isolation from each other; the management of one
risk may have an impact on another, or management actions which are effective in controlling more
than one risk simultaneously may be achievable.
The whole model has to function in an environment in which risk appetite has been defined. The
concept of risk appetite (how much risk is tolerable and justifiable) can be regarded as an overlay
across the whole of this model.
The model presented here, by necessity, dissects the core risk management process into elements for
illustrative purposes but in reality they blend together.
In addition, the particular stage in the process which one may be at for any particular risk will not
necessarily be the same for all risks.
The model illustrates how the core risk management process is not isolated, but takes place in a
context; and, how certain key inputs have to be given to the overall process in order to generate the
outputs which will be desired from risk management.
Threats, Vulnerabilities, and Losses
Examples of how a threat can exploit a vulnerability to cause a loss:
o Keyboard operation error / inadequate manual => interruption of service due to system shutdown.
o Hardware failure / inadequate maintenance => interruption of service due to system shutdown.
o Illegal procedure by an employee / inadequate training and inadequate log setting => loss of credit and compensation for damage caused by leakage of personal information.
o Software bug => data alteration: loss of credit, recovery cost (monetary and time).
o Inadequate security check at the entrance => damage caused by stolen hardware.
Classification of a loss
An asset is something that the organization values and therefore has to protect. Examples:
o Hardware: servers, PCs, routers, firewalls, printers
o Software: programs, OS, utilities
o Data: database, e-mails, backups, logs, data in transit over transmission line
o People: users, administrators, clients
o Printed documents: contracts, financial documents
Identifying Risk
In order to manage risk, an organisation needs to know what risks it faces, and to evaluate them.
Identifying risks is the first step in building the organisation's risk profile. There is no single right way
to document an organisation's risk profile, but documentation is critical to effective management of
risk.
The identification of risk can be separated into two distinct phases.
Universal Knowledge Solutions S.A.L.
- 115 -
There is:
Initial risk identification (for an organisation which has not previously identified its risks in a
structured way, or for a new organisation, or perhaps for a new project or activity within an
organisation);
Continuous risk identification which is necessary to identify new risks which did not previously
arise, changes in existing risks, or risks which did exist ceasing to be relevant to the organisation
(this should be a routine element of the conduct of business).
It is also necessary to adopt an appropriate approach or tool for the identification of risk. Two of the
most commonly used approaches are:
Commissioning a risk review: A designated team is established (either in-house or contracted in)
to consider all the operations and activities of the organisation in relation to its objectives and to
identify the associated risks. The team should work by conducting a series of interviews with key
staff at all levels of the organisation to build a risk profile for the whole range of activities (but it is
important that the use of this approach should not undermine line management's understanding of
their responsibility for managing the risks which are relevant to their objectives);
Risk self-assessment: An approach by which each level and part of the organisation is invited to
review its activities and to contribute its diagnosis of the risks it faces. This may be done through a
documentation approach (with a framework for diagnosis set out through questionnaires), but is
often more effectively conducted through a facilitated workshop approach (with facilitators with
appropriate skills helping groups of staff to work out the risks affecting their objectives). A
particular strength of this approach is that better ownership of risk tends to be established when the
owners themselves identify the risks.
Risk Appetite
The concept of risk appetite is key to achieving effective risk management, and it is essential
to consider it before moving on to consideration of how risks can be addressed.
The concept may be looked at in different ways depending on whether the risk (the uncertainty) being
considered is a threat or an opportunity:
When considering threats the concept of risk appetite embraces the level of exposure which is
considered tolerable and justifiable should it be realised. In this sense it is about comparing the cost
(financial or otherwise) of constraining the risk with the cost of the exposure should the exposure
become a reality and finding an acceptable balance;
When considering opportunities the concept embraces consideration of how much one is prepared
to actively put at risk in order to obtain the benefits of the opportunity. In this sense it is about
comparing the value (financial or otherwise) of potential benefits with the losses which might be
incurred (some losses may be incurred with or without realising the benefits).
It should be noted that some risk is unavoidable, and it is not within the ability of the organisation to
completely manage it to a tolerable level; for example, many organisations have to accept that there is
a risk arising from terrorist activity which they cannot control. In these cases the organisation needs to
make contingency plans.
Addressing Risk
[Figure: the four responses TOLERATE, TREAT, TRANSFER, and TERMINATE, positioned according to the possibility of facing the risk and the cost of the loss]
The purpose of addressing risks is to turn uncertainty to the organisation's benefit by constraining
threats and taking advantage of opportunities.
Any action that is taken by the organisation to address a risk forms part of what is known as internal
control. There are five key aspects of addressing risk:
TOLERATE
The exposure may be tolerable without any further action being taken. Even if it is not tolerable, the
ability to do anything about some risks may be limited, or the cost of taking any action may be
disproportionate to the potential benefit gained.
In these cases the response may be to tolerate the existing level of risk.
This option, of course, may be supplemented by contingency planning for handling the impacts that
will arise if the risk is realised.
TREAT
By far the greater number of risks will be addressed in this way. The purpose of treatment is that, whilst
continuing within the organisation with the activity giving rise to the risk, action (control) is taken to
constrain the risk to an acceptable level.
Such controls can be further sub-divided according to their particular purpose.
TRANSFER
For some risks the best response may be to transfer them. This might be done by conventional
insurance, or it might be done by paying a third party to take the risk in another way.
This option is particularly good for mitigating financial risks or risks to assets. The transfer of risks
may be considered either to reduce the exposure of the organisation or because another organisation
(which may be another government organisation) is more capable of effectively managing the risk. It is
important to note that some risks are not (fully) transferable; in particular, it is generally not possible to
transfer reputational risk, even if the delivery of a service is contracted out.
The relationship with the third party to which the risk is transferred needs to be carefully managed to
ensure successful transfer of risk.
TERMINATE
Some risks will only be treatable, or containable to acceptable levels, by terminating the activity. It
should be noted that the option of termination of activities may be severely limited in government
when compared to the private sector; a number of activities are conducted in the government sector
because the associated risks are so great that there is no other way in which the output or outcome,
which is required for the public benefit, can be achieved.
This option can be particularly important in project management if it becomes clear that the projected
cost/benefit relationship is in jeopardy.
TAKE THE OPPORTUNITY
This option is not an alternative to those above; rather it is an option which should be considered
whenever tolerating, transferring or treating a risk.
There are two aspects to this:
The first is whether or not, at the same time as mitigating threats, an opportunity arises to exploit
positive impact. For example, if a large sum of capital funding is to be put at risk in a major
project, are the relevant controls judged to be good enough to justify increasing the sum of money
at stake to gain even greater advantages?
The second is whether or not circumstances arise which, whilst not generating threats, offer
positive opportunities. For example, a drop in the cost of goods or services frees up resources
which can be re-deployed.
Baseline Approach
o Apply a set of safeguards to achieve a baseline of protection of each system
o Using safeguard baseline materials: ISO 17799, GMITS
Detailed Approach
o In-depth identification and evaluation of assets
o In-depth Assessment of the levels of threats and associated vulnerabilities
Informal Approach
o Not based on a structured analysis
o Exploit the knowledge and experience of individuals
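The detailed approach above boils down to combining ratings for asset value, threat level, and vulnerability level into a single score. The following sketch is purely illustrative: the 1-to-5 scales, the additive formula, and the "score above 9 is high risk" threshold are assumptions chosen for demonstration, not values prescribed by ISO 17799 or GMITS.

```python
# Illustrative risk scoring for a detailed risk analysis.
# Scales and threshold are assumptions for demonstration only.

def risk_score(asset_value, threat_level, vulnerability_level):
    """Combine the three ratings into a single risk score."""
    for level in (asset_value, threat_level, vulnerability_level):
        if not 1 <= level <= 5:
            raise ValueError("each rating must be on a 1..5 scale")
    return asset_value + threat_level + vulnerability_level

def classify(score, high_threshold=9):
    """Label a score; risks above the threshold need treatment first."""
    return "high" if score > high_threshold else "acceptable"

# A customer database: valuable asset, likely threat, weak safeguards.
score = risk_score(asset_value=5, threat_level=4, vulnerability_level=3)
print(score, classify(score))  # 12 high
```

In practice an organisation would calibrate the scales and threshold against its own risk appetite rather than use these example values.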
Part 3
Summary:
A typical attack is not a simple, one-step procedure. It's rare that an attacker can get online or dial up a
remote computer and use only one method to gain full access. It's more likely that the attacker will
need several techniques in combination to bypass the many layers of protection standing between the
attacker and root administrative access. Therefore, as a security consultant or network administrator,
you should be well-versed in these techniques in order to thwart them. This section introduces the main
types of attacks as well as system vulnerabilities. Later sections discuss some of the more popular
defences and countermeasures.
Objectives:
Upon completion of this part, the student will be able to understand:
Threats to computer systems
Vulnerabilities of computer systems
Examples of threats and vulnerabilities
General procedure of illegal access
External
o Illegal access from the Internet.
Internal
o Steal, alter, or delete confidential files.
o Steal hardware devices.
o Virus infection.
o Operation mistake.
o Illegal access to the Internet.
[Figure: percentage of companies attacked by each kind of attacker: foreign governments, foreign corporations, independent hackers, US competitors, and disgruntled employees]
Vulnerability:
o A weakness in the organization, computer system, or network.
Example:
o Security policy is not set.
o Roles and responsibilities are vague.
o Security training of employees is inadequate.
o Building entrances are not checked completely.
o There is no protection against computer viruses.
o Software bugs exist.
o No password rules are set.
o Confidential data are sent without encryption over the network.
Security Holes
A security hole is a security problem of the system, or a place where a security problem may occur.
Threats
o Break-ins: by delivery man for example.
o Theft of keys, ID cards.
o Access of an unauthorized person.
Vulnerabilities
o Loss of keys or ID cards.
Malicious Code
Trojan Horse
Virus
Mobile Code
Trojan horse attacks pose one of the most serious threats to computer security.
According to legend, the Greeks won the Trojan war by hiding in a huge, hollow wooden horse to
sneak into the fortified city of Troy.
For example, we download what appears to be a movie or music file, but when we click on it, we
unleash a dangerous program that erases our disk, sends our credit card numbers and passwords to
a stranger, or lets that stranger hijack our computer to commit illegal denial of service attacks like
those that have virtually crippled the DALnet IRC network for months on end.
The following general information applies to all operating systems, but by far most of the damage
is done to/with Windows users due to its vast popularity and many weaknesses.
Trojans are executable programs, which means that when we open the file, it will perform some
action(s).
In Windows, executable programs have file extensions like "exe", "vbs", "com", "bat", etc.
Trojans can be spread in the guise of literally ANYTHING people find desirable, such as a free
game, movie, song, etc.
Victims typically downloaded the trojan from a WWW or FTP archive, got it via peer-to-peer file
exchange using IRC/instant messaging/Kazaa etc., or just carelessly opened some email
attachment.
Trojans usually do their damage silently. The first sign of trouble is often when others tell us that
we are attacking them or trying to infect them!
There are many products to choose from, but the following are generally effective: AVP, PC-cillin,
and McAfee VirusScan. All are available for immediate download, typically with a 30-day free trial.
3. Anti-Trojan Programs:
These programs are the most effective against trojan horse attacks, because they specialize in
trojans instead of general viruses.
A popular choice is The Cleaner, a $30 commercial program with a 30-day free trial.
To use it effectively, we must follow hackfix.org's configuration suggestions.
When we are done, we must make sure we've updated Windows with all security patches, then
we must change all our passwords, because they may have been seen by every "hacker" in the
world.
Created by a group called The Cult of the Dead Cow: www.CultDeadCow.com and presented
as a remote administration tool.
It is composed of 3 parts:
o Server Side Program (112 KB)
o Configuration tool (Automatic Installation)
o Client Side Tool (User Friendly)
It can only spread from one computer to another when its host is taken to the uninfected computer, for
instance by a user sending it over a network or carrying it on a removable medium.
It can spread to other computers by infecting files on a network file system or a file system that is
accessed by another computer. In fact, many personal computers are now connected to the Internet and
to local-area networks, facilitating their spread.
Some viruses are programmed to damage the computer by damaging programs, deleting files,
or reformatting the hard disk. Others are not designed to do any damage, but simply replicate
themselves and perhaps make their presence known by presenting text, video, or audio messages. Even
these benign viruses can create problems for the computer user. They typically take up computer
memory used by legitimate programs. As a result, they often cause erratic behavior and can result in
system crashes.
Viruses - A virus is a small piece of software that piggybacks on real programs. For example,
a virus might attach itself to a program such as a spreadsheet program. Each time the
spreadsheet program runs, the virus runs, too, and it has the chance to reproduce (by attaching
to other programs) or wreak havoc.
E-mail viruses - e-mail virus moves around in e-mail messages, and usually replicates itself by
automatically mailing itself to dozens of people in the victim's e-mail address book.
Trojan horses - A Trojan horse is simply a computer program. The program claims to do one
thing (it may claim to be a game) but instead does damage when you run it (it may erase your
hard disk). Trojan horses have no way to replicate automatically.
Worms - A worm is a small piece of software that uses computer networks and security holes
to replicate itself. A copy of the worm scans the network for another machine that has a
specific security hole. It copies itself to the new machine using the security hole, and then
starts replicating from there, as well. We'll take a closer look at how a worm works in the next
section.
Today's viruses may also take advantage of network services such as the World Wide Web, e-mail, and
file sharing systems to spread, blurring the line between viruses and worms.
Computer viruses can spread very fast. For example, it is estimated that the Mydoom worm infected a
quarter-million computers in a single day in January 2004. Another example is the ILOVEYOU worm.
The Code Red worm (2001) was programmed to:
o Replace Web pages on infected servers with a page that declares "Hacked by Chinese".
o Launch a concerted attack on the White House Web server in an attempt to overwhelm it.
A typical worm consists of four components:
o Replicator
o Protector
o Trigger
o Payload
Malicious Code:
Mobile Code related to Web Applications
In the early days, when Web pages were just static HTML files, they did not contain executable code.
Now they often contain small programs, including Java applets, ActiveX controls, and JavaScripts.
Downloading and executing such mobile code is obviously a massive security risk, so various methods
have been devised to minimize it.
We will take a quick look at some of the issues raised by mobile code and some approaches to dealing
with it. We will focus on 3 mobile code types:
Java Applet
ActiveX
JavaScript
Java applets are small Java programs compiled to a stack-oriented machine language called JVM
(Java Virtual Machine).
They can be placed on a Web page for downloading along with the page. After the page is loaded, the
applets are inserted into a JVM interpreter inside the browser, as illustrated in the slide.
The advantage of running interpreted code over compiled code is that every instruction is examined by
the interpreter before being executed. This gives the interpreter the opportunity to check whether the
addresses the instruction references are valid.
In addition, system calls are also caught and interpreted. How these calls are handled is a matter of the
security policy. For example, if an applet is trusted (e.g., it came from the local disk), its system calls
could be carried out without question.
However, if an applet is not trusted (e.g., it came in over the Internet), it could be encapsulated in what
is called a sandbox to restrict its behaviour and trap its attempts to use system resources.
When an applet tries to use a system resource, its call is passed to a security monitor for approval. The
monitor examines the call in light of the local security policy and then makes a decision to allow or
reject it. In this way, it is possible to give applets access to some resources but not all. Unfortunately,
the reality is that the security model works badly and that bugs in it crop up all the time.
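The security-monitor idea described above can be sketched as a tiny policy check: every call from untrusted code is intercepted and approved or rejected against a local policy. All class, method, and call names below are hypothetical illustrations; a real JVM sandbox is far more elaborate.

```python
# A toy "security monitor" in the spirit of the applet sandbox
# described above. Policy rules and call names are invented.

class SecurityMonitor:
    def __init__(self, trusted):
        # Trusted applets (e.g. loaded from local disk) get everything;
        # untrusted ones only the calls the policy explicitly allows.
        self.trusted = trusted
        self.allowed_for_untrusted = {"read_applet_dir", "draw_pixel"}

    def check(self, syscall):
        """Approve or reject a system call per the local policy."""
        if self.trusted:
            return True
        return syscall in self.allowed_for_untrusted

monitor = SecurityMonitor(trusted=False)
print(monitor.check("draw_pixel"))   # True: permitted resource
print(monitor.check("delete_file"))  # False: trapped by the sandbox
```

The point of the design is that access is selective: an applet can be given some resources but not all, exactly as the text describes.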
Network Attacks
Introduction
A typical attack is not a simple, one-step procedure. It's rare that an attacker can get online or dial up a
remote computer and use only one method to gain full access. It's more likely that the attacker will
need several techniques in combination to bypass the many layers of protection standing between the
attacker and root administrative access.
Therefore, as a security consultant or network administrator, we should be well-versed in these
techniques in order to thwart them. Thus, we will introduce the main types of network attacks. Later
parts discuss some of the more popular defenses and measures.
The stereotypical image conjured by most people when they hear the term hacker is that of a pallid,
atrophied recluse cloistered in a dank bedroom, whose spotted complexion is revealed only by the
unearthly glare of a Linux box used for remote exploit scanning in Perl.
However, while computer skill is central to a hacker's profession, there are many additional facets that
he (or she) must master.
A real hacker must also rely on physical and interpersonal skills such as social engineering and other
"wet work" that involves human interaction. However, because most people have a false stereotype of
hackers, they fail to realize that the person they're chatting with in the office or talking to on the phone
may in fact be a hacker in disguise. In fact, this common misunderstanding is one of the hacker's
greatest assets.
Network Attacks
Collection of Information
Foot Printing
Scanning: Ping Sweeps, Port Sweeps
Enumeration
Network Attacks
Collection of Information: Foot Printing
Functional Information:
o Names of employees in order to deduce user names and passwords.
o Email addresses.
o Technical levels of the employees.
o Business type and information transmitted over the network.
o etc.
Financial Information
Network Attacks
Collection of Information: Scanning
IP Scanning:
Is performed using:
o the ping tool,
o or ICMP-echo request (type 8) and ICMP-echo reply packets.
Port Scanning:
Based on the three-way handshake procedure:
Client -> Server: SYN
Server -> Client: SYN-ACK
Client -> Server: ACK
The procedure, repeated for all ports:
Client sends SYN
If the server answers SYN-ACK, then the port is open
Else, if the server answers RST-ACK, then the port is closed
Client sends ACK
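As a sketch of the handshake-based procedure above, the following uses the operating system's connect() call, which performs the SYN / SYN-ACK / ACK exchange internally: a successful connection means the port answered with SYN-ACK (open), while a refused connection corresponds to the RST response (closed). Host and port values are examples only.

```python
# A minimal connect()-based port scan following the handshake
# procedure above; the OS carries out SYN / SYN-ACK / ACK for us.
import socket

def scan_port(host, port, timeout=1.0):
    """Return True if `port` on `host` completes the handshake."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means connected

def scan(host, ports):
    """Return the subset of `ports` that accepted a connection."""
    return [p for p in ports if scan_port(host, p)]

# Example: check a few well-known ports on the local machine.
# print(scan("127.0.0.1", [22, 80, 443]))
```

Tools such as Nmap (listed below) use faster half-open variants that send the final RST instead of completing the handshake, but the open/closed logic is the same.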
Tools:
UNIX
o fping, gping: ftp://tamu.edu/pub/Unix/src
o Nmap: http://www.InSecure.Org
Windows
o Pinger : http://207.98.195.250/Software
o INetTools: http://www.wildpackets.com/Products/inettools
Network Attacks
Collection of Information: Enumeration
Network Attacks
Sniffing
A sniffer is a program and/or device that monitors all information passing through a computer network.
It "sniffs" the data passing through the network off the wire and determines where the data is going,
where it originated, and what it is.
In addition to these basic functions, sniffers can have extra features that allow them to filter one type of
data, capture passwords, and more.
There are even some sniffers (for example, the FBI's controversial mass-monitoring tool formerly
known as Carnivore) that can rebuild files sent across a network, such as an email or web page.
Network Attacks
Sniffing: How Does a Sniffer Work?
A network card normally accepts only information that has been sent to its specific network address.
This network address is properly known as the Media Access Control (MAC) address. The MAC
address is also called the physical address.
For a computer to have the ability to sniff a network, it must have a network card running in
promiscuous mode, which means that it can receive all the traffic that's sent across the network,
regardless of whether the data is destined for the machine running the sniffer.
An exception to this rule is monitor mode, which stops all interaction. This type of network card status
applies only to wireless network interface cards.
Due to the unique properties of a wireless network, any data traveling through the airwaves is open to
any device that's configured to listen. While a card in promiscuous mode will work in wireless
environments, there's no need for it to be part of the network. Instead, a wireless NIC can simply enter
a listening status in which it's restricted from sending data out to the network.
The destination address is the MAC address of the computer; there's a unique MAC address for every
network card in the world. Although you can change the address, the MAC address ensures that the
data is delivered to the right computer. If a computer's address doesn't match the address in the packet,
the data is normally ignored.
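The acceptance rule just described can be modelled in a few lines: a card in normal mode keeps only frames addressed to its own MAC (or the broadcast address), while a card in promiscuous mode keeps everything. The MAC addresses here are invented for illustration.

```python
# Models the NIC acceptance rule described above.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def accepts_frame(nic_mac, dest_mac, promiscuous=False):
    """Would this network card pass the frame up to the OS?"""
    if promiscuous:
        return True  # sniffer mode: take all traffic on the wire
    return dest_mac.lower() in (nic_mac.lower(), BROADCAST)

mine = "00:1a:2b:3c:4d:5e"
print(accepts_frame(mine, mine))                       # True
print(accepts_frame(mine, "aa:bb:cc:dd:ee:ff"))        # False: ignored
print(accepts_frame(mine, "aa:bb:cc:dd:ee:ff", True))  # True: sniffed
```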
Network cards have the option to run in promiscuous mode for the sake of troubleshooting. Normally,
a computer doesn't need to see information sent to other computers on the network. However, in the event
that something goes wrong with the network wiring or hardware, it's important for a network
technician to look inside the data traveling on the network to see what's causing the problem. For
example, one common indication of a bad network card is when computers start to have a difficult time
transferring data. This could be the result of an information overload on the network wires.
The flood of data would jam the network and would stop any productive communication. Once a
technician plugs in a computer with the ability to examine the network, he can quickly pinpoint the
origin of the corrupt data and thus the location of the broken network card. He can then simply replace
the bad card and everything is back to normal.
Although sniffers most commonly show up within closed business networks, they can also be used
throughout the Internet. As mentioned previously, the FBI has a program that captures all the
information both coming from and going to computers online. This tool, previously known as
Carnivore, simply has to be plugged in and turned on. (The FBI backed away from the aggressive
name Carnivore because of negative public reaction.) Although it's purported to filter out any
information that's not intended for the target, this tool actually captures everything traveling through
wires to which it's connected and then filters it according to the rules set up in the program. Thus,
Carnivore can potentially capture all of the passwords, emails, and chats passing through its
connection.
Network Attacks
IP Spoofing
Spoofing is the term hackers use to describe the act of faking the source address of information sent to
a computer. This is a broad definition, but there are many subtle variations of this attack. However, the
purpose of each variation is generally the same: to disguise the location from which the attack
originates.
Session hijacking takes the act of spoofing one step further. It involves faking someone's identity in
order to take over a connection that's already established; spoofing is therefore required in order to
successfully hijack a connection.
The most common spoofing attack is called an IP spoof. This type of attack takes advantage of the
Internet Protocol (IP), which is part of the Transmission Control Protocol/Internet Protocol (TCP/IP)
suite. In this case, the return address of a packet sent to a computer is faked. This trick protects the
identity of the attacker.
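To make concrete how little protects that "return address", here is a sketch that builds a bare IPv4 header with a forged source field using Python's standard struct module: the source address is just four bytes the sender fills in, and nothing in the header proves it is genuine. Actually transmitting such a packet would require a raw socket and administrator privileges, which this sketch deliberately avoids; the addresses are documentation examples.

```python
# Build a bare IPv4 header with a forged source address.
import socket
import struct

def checksum(data):
    """Standard Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def ipv4_header(src_ip, dst_ip, payload_len=0, proto=socket.IPPROTO_ICMP):
    version_ihl = (4 << 4) | 5            # IPv4, 5 x 32-bit words
    total_len = 20 + payload_len
    header = struct.pack("!BBHHHBBH4s4s",
                         version_ihl, 0, total_len,
                         0, 0,             # identification, flags/fragment
                         64, proto, 0,     # TTL, protocol, checksum placeholder
                         socket.inet_aton(src_ip),
                         socket.inet_aton(dst_ip))
    csum = checksum(header)
    return header[:10] + struct.pack("!H", csum) + header[12:]

# The attacker's real address never appears in the packet:
hdr = ipv4_header(src_ip="10.0.0.99", dst_ip="192.0.2.1")
print(socket.inet_ntoa(hdr[12:16]))  # 10.0.0.99 -- the forged source
```

Note that a valid header checksums to zero when the checksum field is included, which is exactly how receivers verify integrity; the check says nothing about whether the source address is truthful.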
Network Attacks
Denial of Service: ICMP-echo Flooding
[Figure: host A sends ICMP ECHO packets to B with C's address as the source; B's ECHO REPLY packets are therefore sent to C over the Internet]
Host A tries to attack Host B and Host C:
A::Attack(B, C)
{
P1, P2: ICMP-packet;
Repeat
{
P1 := A.Create-ICMP-echo-packet();
P1.SourceAddress := C.IPAddress;  // spoofed: C's address, not A's
P1.DestinationAddress := B.IPAddress;
A.Send(P1);
// B answers the echo request:
P2 := B.Create-ICMP-echo-reply-packet();
P2.SourceAddress := B.IPAddress;
P2.DestinationAddress := P1.SourceAddress;  // i.e., C
B.Send(P2);
// Consequently, the ICMP reply is sent to C
}
}
Hackers can wreak havoc without ever penetrating a system. For example, a hacker can effectively
shut down a computer by flooding this computer with obnoxious signals or malicious code.
This technique is known as a denial-of-service (DoS) attack. In addition, there are numerous other
kinds of DoS attacks that can be launched and that may deserve coverage, chief among these being
the Distributed DoS (DDoS):
Hackers execute a denial-of-service attack using one of two methods:
The flooding method drowns the target computer or hardware device with so much traffic that
it becomes overwhelmed and cannot process it all, let alone legitimate traffic. A common type
of flooding is SYN flooding.
The alternative method is to send a well-crafted command or piece of erroneous data that
crashes the target computer device.
Network Attacks
Smurf Attack
[Figure: attacker A sends one ICMP ECHO to a network's broadcast address with B's address as the source; N ECHO REPLY packets converge on victim B over the Internet]
One variation of the flooding DoS is the smurf attack.
Imagine a company with 50 employees available to respond to customer questions by email. Each
employee has an auto-responder that automatically sends a courtesy reply when a question is received.
What would happen if an angry customer mailed 100 email messages, copied to each of the 50
employees, using a fake return email address? The 100 incoming messages would suddenly become
5,000 outgoing messages, all going to one mailbox. Whoever owned the fake return address would be
overwhelmed with all that mail! And he would have to search through all of it to make sure that he
didn't miss an important message from his boss or a friend.
In a smurf attack, the attacker sends a request signal into a network of computers, each of which reply
to a faked return address. Special programs and other techniques can amplify this attack until a flood of
information is headed toward one unfortunate computer.
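The amplification in both the mail example and the smurf attack is simple multiplication: one spoofed request to a broadcast address triggers one reply per responding host, all aimed at the victim. The host counts in this toy model are arbitrary examples.

```python
# Back-of-the-envelope smurf amplification model.

def smurf_reply_count(requests_sent, hosts_on_network):
    """Replies the victim receives for spoofed broadcast requests."""
    return requests_sent * hosts_on_network

# One packet in, 254 packets at the victim on a fully populated /24:
print(smurf_reply_count(1, 254))    # 254
# The mail example above: 100 messages copied to 50 auto-responders.
print(smurf_reply_count(100, 50))   # 5000
```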
The rules of TCP/IP specify that a computer generally ignores all packets that are not expressly
addressed to it. One exception is if a computer has a network card running in promiscuous mode.
Another exception occurs when a system uses broadcast packets.
Because of the way IP addresses are set up within a network, there's always one address to which every
computer will answer. This broadcast address is used to update name lists and other necessary items
that computers need to keep the network up and running. Although the broadcast address is necessary
in some cases, it can lead to what's known as a broadcast storm.
A broadcast storm is like an echo that never dies. More specifically, it's like an echo that crescendos
until you can't hear anything over the pure noise. If a computer sends a request to a network using the
broadcast address and the return address of the broadcast address, every computer will respond to
every other computer's response; this continues in a snowball effect until the network is so full of
echoes that nothing else can get through.
These types of attacks not only quickly and effectively shut down a server, but keep the hacker
invisible. The original packets sent by the hacker are untraceable. In the smurf attack, the hacker
doesn't directly attack the target, but instead uses the side-effect of sending broadcast signals into a
network to do the job indirectly. Therefore, the attack appears to have come from another computer or
network.
Network Attacks
SYN Flooding
A SYN (short for synchronize) flooding attack ties up the target computer's resources by making it
respond to a flood of connection requests that are never completed.
Imagine that you're a secretary whose job is to answer and redirect phone calls. What if 200 people
called you at the same time, and then simply kept the line open while you sat there saying "Hello?
Hello?" You would be so busy picking up dead lines that you would never get any work done.
Eventually, you might suffer a mental breakdown and quit your job. This is the same technique that
hackers use when they employ a denial-of-service attack.
A SYN DoS attack takes advantage of the required TCP/IP handshake that takes place when two
computers set up a communications session. The client computer sends a SYN packet to the server
computer to start the communication. When the server receives this data, it processes the return
address and sends back the SYN ACK (acknowledge) packet. The server then waits for the client to
respond with a final ACK packet, which completes the connection initiation.
A server has a limited number of resources designated for client connections. When the server receives
the initial SYN packet from a client, the server allocates some of these resources. This limitation is
meant to cap the number of simultaneous client connections. If too many clients connect at once, the
server will become overloaded and crash under the excess processing load. (Note that both server
defences and attacks have continued to evolve since the discovery of this weakness.)
The weakness in this system occurs when the hacker inserts a fake return address in the initial SYN
packet. Thus, when the server sends back the SYN ACK to the fake client, it never receives the final
ACK. This means that for every fake SYN packet, further resources are tied up until the server refuses
any more connections.
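The resource-exhaustion mechanism described above can be sketched as a toy simulation; the class name, backlog size, and messages below are invented for illustration and do not model any real TCP stack:

```python
# Toy simulation of SYN flooding: a server with a fixed-size table of
# half-open connections. Sizes and names are illustrative only.

class ToyServer:
    def __init__(self, backlog=5):
        self.backlog = backlog      # max simultaneous half-open connections
        self.half_open = []         # SYNs received, final ACK not yet seen

    def on_syn(self, return_addr):
        """Handle an initial SYN: allocate a slot and send SYN-ACK."""
        if len(self.half_open) >= self.backlog:
            return "connection refused"     # table full: legitimate clients locked out
        self.half_open.append(return_addr)
        return f"SYN-ACK -> {return_addr}"  # now waiting for a final ACK that may never come

    def on_ack(self, return_addr):
        """Final ACK completes the handshake and frees the slot."""
        if return_addr in self.half_open:
            self.half_open.remove(return_addr)
            return "established"
        return "ignored"

server = ToyServer(backlog=5)

# Attacker sends SYNs with fake return addresses; no final ACK ever arrives.
for i in range(5):
    server.on_syn(f"fake-{i}")

# A legitimate client now cannot connect.
print(server.on_syn("real-client"))   # -> connection refused
```

Because each fake SYN permanently occupies a slot, the attacker needs only as many packets as the server has connection resources.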
A successful attack requires myriad fake packets, but if a hacker has several slave computers sending
packets he can overload a server quickly. A well-known example of this type of attack occurred in
February 2000. Several high-profile web sites were brought to their knees by a flood of signals coming
from hundreds of different computers simultaneously. The web sites would have had no problem
handling an attack from one source; however, through the use of remote-control programs, one or more
hackers launched a concerted attack using hundreds of computers, thus quickly overloading their
targets.
To perform a DoS attack, the hacker must first determine the IP address of the target. Using this IP
address, the hacker connects to it using a client computer. To amplify the force of the attack, hackers
often set up several client computers programmed to attack the target at the same time. This is usually
accomplished by doing some preliminary hacking in order to gain ownership of several computers with
high bandwidth connections. The most popular sources of these "slave" or "zombie" computers are
university systems and broadband customers. Once the hacker has his slave computers set up, he
launches the attack from a central point (the master).
Network Attacks
Hijacking
To capture the information exchanged between the two sides, the hacker starts by sniffing the client:
1. When Client → Server: SYN
   a. Hacker → Server: false RST-SYN
   b. Hacker → Client: false ACK
2. Thus, Server → Client: new SYN-ACK (responding to 1.a)
   c. Hacker → Server: ACK
   d. Server → Hacker: ACK (established with the hacker)
3. Client → Hacker: ACK (established with the hacker) (responding to 1.b)
Ping of Death
A TCP/IP packet with a length greater than the theoretical maximum of 65,535 bytes is sent to the machine.
This attack was popular around July of 1997, but since then most systems have been patched to prevent
this bug.
TCP/IP supports a feature called "fragmentation", whereby a single IP packet can be broken down into
smaller segments. This is needed because the typical Internet link (dial-up, Ethernet, cable modem,
etc.) only supports frames of around a couple of thousand bytes, whereas IP supports packets of up to
64 KB. Thus, when a single packet is too large for a link, it is broken up into smaller packet fragments.
A quirk of IP is that while a single packet cannot exceed 65,535 bytes, the fragments themselves can
add up to more than that. The "Ping of Death" technique does just that. Since this is a condition
thought impossible, operating systems crash when they receive this data.
Ping of death can actually be run from older versions of Windows. At a command line, simply type:
ping -l 65550 VICTIM
A further bug in Windows is that it not only crashes when it receives the invalid data, but it can
accidentally generate such data as well. Newer versions of Windows prevent you from sending these
packets.
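The fragment arithmetic behind the attack takes only a few lines to show; the offsets and lengths below are illustrative, not taken from any real exploit:

```python
# Sketch of why the Ping of Death works: IP fragments carry an offset
# (in 8-byte units) and a length, and nothing stops the final fragment
# from extending past the 65,535-byte maximum packet size.

MAX_IP_PACKET = 65535

def reassembled_size(fragments):
    """fragments: list of (offset_in_8_byte_units, payload_len)."""
    return max(off * 8 + length for off, length in fragments)

# A normal fragmented packet stays within bounds.
normal = [(0, 1480), (185, 1480), (370, 1480)]
assert reassembled_size(normal) <= MAX_IP_PACKET

# A malicious last fragment: offset 8189 * 8 = 65512 plus 100 bytes of
# payload reassembles to 65612 bytes -- more than the maximum, which
# overflows fixed-size reassembly buffers on unpatched systems.
evil = [(0, 1480), (8189, 100)]
print(reassembled_size(evil))   # 65612, greater than 65535
```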
DNS Spoofing
When visiting a website such as http://www.example.com, the system must first resolve the name into
an IP address using DNS. This is similar to how you must look up someone's name in the phone book
in order to dial their telephone number.
There exists a hacker technique whereby they can sometimes force a duplicate reply to the DNS
lookup. Using the phone book analogy, it is similar to calling 411/information for somebody's number
and getting back two replies. Imagine a hacker breaking into the phone system such that the first
number you heard was the hacker's. The hacker who broke into the telephone system might use this
technique to redirect people buying with credit cards to his own phone number, then pretend to be
the real vendor and steal the credit card numbers.
In much the same way, hackers use this DNS spoof in order to redirect people to their own website.
However, we are finding that home users are seeing such behaviour from ISPs. Some ISPs attempt to
re-direct users through their own caching servers. Therefore, this "spoof" symptom doesn't actually
indicate a hostile attack.
In fact, DNS spoofing works by forcing a DNS "client" to generate a request to a "server", then by
spoofing the response from the "server".
One way this works is through the scheme that most DNS servers support "recursive" queries. You can
therefore send a request to any DNS server asking for it to resolve a name-to-address. That DNS server
will then send the proper queries to the proper servers in order to discover the appropriate information.
However, an intruder can predict what request that victim server will send out to satisfy the request,
and can spoof the response, which will arrive before the real response arrives.
This is useful because DNS servers will "cache" information for a certain amount of time. If an
intruder can successfully spoof a response for "www.microsoft.com", any legitimate users of that DNS
server will then be redirected to the intruder's site.
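The first-answer-wins behaviour that makes this spoof effective can be sketched with a toy resolver; the names, addresses, and cache structure below are invented for illustration:

```python
# Toy resolver: it caches the FIRST answer that matches its outstanding
# query, so a spoofed response that arrives before the real one wins.

cache = {}

def resolve(name, responses):
    """responses: answers in arrival order, as (name, ip) pairs."""
    if name in cache:
        return cache[name]          # cached entries are reused until they expire
    for r_name, r_ip in responses:
        if r_name == name:          # the first matching reply is accepted
            cache[name] = r_ip
            return r_ip
    return None

arrivals = [
    ("www.example.com", "6.6.6.6"),        # attacker's spoofed reply, arrives first
    ("www.example.com", "93.184.216.34"),  # legitimate reply, arrives too late
]
print(resolve("www.example.com", arrivals))   # 6.6.6.6

# Every later client of this resolver now receives the attacker's address,
# even with no further spoofing:
print(resolve("www.example.com", []))         # 6.6.6.6 (served from cache)
```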
Social Engineering
Social engineering, or interpersonal manipulation, is not unique to hacking. In fact, many people use
this type of trickery every day, both criminally and professionally.
Most of us have probably used social engineering techniques to get something we wanted, whether
it's haggling for a lower price on a lawnmower at a garage sale or similar everyday bargaining.
One example of social engineering that information technology managers face on a weekly basis is
solicitation from vendors. This inimical form of sales takes the form of thinly disguised telemarketing.
Straying far from ethical standards of sales technique, such vendors attempt to trick us into giving
them information so they can put our company's name on a mailing list.
Here's one such attempt:
"Hi, this is the copier repair company. We need to get the model of your copier for our service
records. Can you get that for us?"
This request sounds innocent enough, and many people fall for this tactic. The attacker is simply trying
to trick us into providing "sensitive" information: details that he or she really has no right to know.
However, we might try to reverse the social engineering ourselves: play along, and even tempt the
caller with statements indicating that we're dissatisfied with our current copier and wish we could
purchase another brand. Once scammers sense money, they often foolishly make their true identity
known.
Like the scam artist's trick, a common hacker method is to pretend to be conducting a survey, asking
all kinds of questions about the network operating systems, intrusion-detection systems, firewalls, and
more, in the guise of a researcher. A really malicious hacker might even offer a cash reward to pay for
the network administrator's time in answering the questions. The most popular social engineering
method involves pretending to be a user who has trouble getting the VPN to work, has lost the
password to the mail server, etc. Unfortunately, most people fall for the bait and reveal sensitive
network information.
Social Spying
Social spying is the process of using observation to acquire information. While social engineering can
provide an attacker with crucial information, small businesses are more resistant to social engineering
because people in small companies all know each other. For example, if one of the IT staff received a
call from an attacker pretending to be a distressed CEO, she would probably recognize the voice as not
belonging to the real CEO. In this case, social spying becomes more important.
To illustrate one of the non-technical ways in which social spying can be used, consider how many
people handle their ATM cards.
Do we hide our PIN when we get cash at the ATM?
Most people don't; they whip out the card and punch the numbers without noticing who might be
watching.
If someone else memorizes the PIN, he'll have all the information needed to access the funds in
the account, provided that he can get his hands on the ATM card.
Snooping on people as they actively type their user information isn't the only technique. Most offices
have at least a few people who post passwords on or near their computer monitors. This type of blatant
disregard for security is every network administrator's worst nightmare.
Regardless of repeated memos, personal visits, and warnings, some people seem to find an excuse to
post network passwords in plain view. Even if some people are at least security-conscious enough to
hide the Post-it note with the password in a discreet place, it still takes only a few seconds to lift a
keyboard or pull open a desk drawer.
Part 4
Summary:
In this section, we will present the main security measures related to network, services and application
security.
Objectives:
Upon completion of this part, the student will be able to understand:
Networks related security measures.
OS and DB and services related security measures
Techniques of prevention
Techniques of detection
Techniques of recovery
[Figure: example of a local network connected to the Internet, including an FTP server.]
A firewall is simply a program or hardware device that filters the information coming through the
Internet connection into a private network or computer system. If an incoming packet of information is
flagged by the filters, it is not allowed through.
Let's say that we work at a company with 500 employees. The company will therefore have hundreds
of computers that all have network cards connecting them together. In addition, the company will have
one or more connections to the Internet through something like T1 or T3 lines.
Without a firewall in place, all of those hundreds of computers are directly accessible to anyone on the
Internet. A person who knows what he or she is doing can probe those computers, try to make FTP
connections to them, try to make telnet connections to them and so on. If one employee makes a
mistake and leaves a security hole, hackers can get to the machine and exploit the hole.
With a firewall in place, the landscape is much different. A company will place a firewall at every
connection to the Internet (for example, at every T1 line coming into the company). The firewall can
implement security rules. For example, one of the security rules inside the company might be:
Out of the 500 computers inside this company, only one of them is permitted to receive public FTP
traffic. Allow FTP connections only to that one computer and prevent them on all others.
A company can set up rules like this for FTP servers, Web servers, Telnet servers and so on. In
addition, the company can control how employees connect to Web sites, whether files are allowed to
leave the company over the network and so on. A firewall gives a company tremendous control over
how people use the network.
Firewalls use one or more of three methods to control traffic flowing in and out of the network:
Packet filtering - Packets (small chunks of data) are analyzed against a set of filters. Packets
that make it through the filters are sent to the requesting system and all others are discarded.
Proxy service - Information from the Internet is retrieved by the firewall and then sent to the
requesting system and vice versa.
Stateful inspection - A newer method that doesn't examine the contents of each packet but
instead compares certain key parts of the packet to a database of trusted information.
Information travelling from inside the firewall to the outside is monitored for specific defining
characteristics, then incoming information is compared to these characteristics. If the
comparison yields a reasonable match, the information is allowed through. Otherwise it is
discarded.
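The packet-filtering method above can be sketched in a few lines; the rule format, addresses, and port numbers are invented for illustration, and real firewalls use far richer rule languages:

```python
# Minimal sketch of firewall packet filtering: each packet is checked
# against an ordered rule list; the first matching rule decides, and
# anything unmatched falls through to a default-deny policy.

rules = [
    # (action, protocol, destination ip, destination port)
    ("allow", "tcp", "10.0.0.7", 21),   # the one machine allowed to receive public FTP
    ("deny",  "tcp", "*",        21),   # FTP to every other machine is blocked
    ("allow", "tcp", "*",        80),   # web traffic allowed anywhere
]

def filter_packet(proto, dst_ip, dst_port):
    for action, r_proto, r_ip, r_port in rules:
        if proto == r_proto and r_ip in ("*", dst_ip) and r_port == dst_port:
            return action
    return "deny"                       # default-deny: drop whatever no rule permits

print(filter_packet("tcp", "10.0.0.7", 21))   # allow (the designated FTP host)
print(filter_packet("tcp", "10.0.0.9", 21))   # deny
```

This mirrors the example rule in the text: out of all the machines inside the company, only one is permitted to receive public FTP traffic.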
Firewall Actions
There are many creative ways that unscrupulous people use to access or abuse unprotected computers:
Remote login - When someone is able to connect to your computer and control it in some form.
This can range from being able to view or access your files to actually running programs on your
computer.
Application backdoors - Some programs have special features that allow for remote access. Others
contain bugs that provide a backdoor or hidden access that provides some level of control of the
program.
SMTP session hijacking - SMTP is the most common method of sending e-mail over the Internet.
By gaining access to a list of e-mail addresses, a person can send unsolicited junk e-mail (spam) to
thousands of users. This is done quite often by redirecting the e-mail through the SMTP server of an
unsuspecting host, making the actual sender of the spam difficult to trace.
Operating system bugs - Like applications, some operating systems have backdoors. Others
provide remote access with insufficient security controls or have bugs that an experienced hacker
can take advantage of.
Denial of service - You have probably heard this phrase used in news reports on the attacks on
major Web sites. This type of attack is nearly impossible to counter. What happens is that the
hacker sends a request to the server to connect to it. When the server responds with an
acknowledgement and tries to establish a session, it cannot find the system that made the request.
By inundating a server with these unanswerable session requests, a hacker causes the server to slow
to a crawl or eventually crash.
E-mail bombs - An e-mail bomb is usually a personal attack. Someone sends you the same e-mail
hundreds or thousands of times until your e-mail system cannot accept any more messages.
Macros - To simplify complicated procedures, many applications allow you to create a script of
commands that the application can run. This script is known as a macro. Hackers have taken
advantage of this to create their own macros that, depending on the application, can destroy your
data or crash your computer.
Viruses - Probably the most well-known threat is the computer virus. A virus is a small program that
can copy itself to other computers. This way it can spread quickly from one system to the next. The
effects of viruses range from displaying harmless messages to erasing all of your data.
Spam - Typically harmless but always annoying, spam is the electronic equivalent of junk mail.
Spam can be dangerous though. Quite often it contains links to Web sites. Be careful of clicking on
these because you may accidentally accept a cookie that provides a backdoor to your computer.
Redirect bombs - Hackers can use ICMP to change (redirect) the path information takes by
sending it to a different router. This is one of the ways that a denial of service attack is set up.
Source routing - In most cases, the path a packet travels over the Internet (or any other network) is
determined by the routers along that path. But the source providing the packet can arbitrarily
specify the route that the packet should travel. Hackers sometimes take advantage of this to make
information appear to come from a trusted source or even from inside the network! Most firewall
products disable source routing by default.
Some of the items in the list above are hard, if not impossible, to filter using a firewall. While some
firewalls offer virus protection, it is worth the investment to install anti-virus software on each
computer. And, even though it is annoying, some spam is going to get through your firewall as long as
you accept e-mail.
Firewall Placement
Screened host: the internal network sits behind a router and a firewall.
[Figure: Internet – Router – Firewall – Internal Network]
Screened subnet: the firewall is placed between two routers, isolating a buffer subnet from both the
Internet and the internal network.
[Figure: Internet – Router – Firewall – Router – Internal Network]
Firewall Products
Raptor: www.Raptor.com
Misuse Detection vs. Anomaly Detection: in misuse detection, the IDS analyzes the information it
gathers and compares it to large databases of attack signatures. Essentially, the IDS looks for a
specific attack that has already been documented. Like a virus detection system, misuse detection
software is only as good as the database of attack signatures that it uses to compare packets against.
In anomaly detection, the system administrator defines the baseline, or normal, state of the
network's traffic load, breakdown, protocol, and typical packet size. The anomaly detector
monitors network segments to compare their state to the normal baseline and look for anomalies.
Passive System vs. Reactive System: in a passive system, the IDS detects a potential security
breach, logs the information and signals an alert. In a reactive system, the IDS responds to the
suspicious activity by logging off a user or by reprogramming the firewall to block network traffic
from the suspected malicious source.
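The two detection approaches can be contrasted in a small sketch; the signatures, baseline numbers, and payloads below are all invented for illustration:

```python
# Misuse detection matches traffic against known-bad patterns (signatures);
# anomaly detection flags traffic that deviates from a defined baseline.

signatures = [b"/etc/passwd", b"cmd.exe"]       # misuse: known attack patterns
baseline_mean, baseline_tolerance = 540, 300    # anomaly: "normal" packet size (bytes)

def misuse_alert(payload: bytes) -> bool:
    """Alert if any documented attack signature appears in the payload."""
    return any(sig in payload for sig in signatures)

def anomaly_alert(packet_size: int) -> bool:
    """Alert if the packet size falls outside the administrator's baseline."""
    return abs(packet_size - baseline_mean) > baseline_tolerance

print(misuse_alert(b"GET /etc/passwd HTTP/1.0"))  # True: matches a signature
print(anomaly_alert(60000))                       # True: far outside the baseline
print(anomaly_alert(600))                         # False: normal-looking traffic
```

Note the trade-off visible even here: misuse detection misses anything absent from its signature database, while anomaly detection depends entirely on how well the baseline was chosen.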
We may need to open up port 80 to host web sites or port 21 to host an FTP file server. Each of these
holes may be necessary from one standpoint, but they also represent possible vectors for malicious
traffic to enter the network rather than being blocked by the firewall.
That is where the IDS would come in. Whether we implement a NIDS across the entire network or a
HIDS on a specific device, the IDS will monitor the inbound and outbound traffic and identify
suspicious or malicious traffic which may have somehow bypassed the firewall or it could possibly be
originating from inside the network as well.
An IDS can be a great tool for proactively monitoring and protecting the network from malicious
activity, however they are also prone to false alarms.
With just about any IDS solution we implement, we will need to tune it once it is first installed. We
need the IDS to be properly configured to recognize what is normal traffic on our network vs. what
might be malicious traffic, and we, or the administrators responsible for responding to IDS alerts, need
to understand what the alerts mean and how to effectively respond.
Using Antivirus
Virus Prevention
Standard Conformity
Example of Windows 2000 C2-Conformity (Orange Book Standard)
Part 5
Summary:
There are many aspects to security and many applications, ranging from secure commerce and
payments to private communications and protecting passwords. One essential aspect for secure
communications is that of cryptography. But it is important to note that while cryptography is
necessary for secure communications, it is not by itself sufficient.
Objectives:
Upon completion of this part, the student will be able to understand:
Types of Cryptography
Secret Key Cryptography
Public Key Cryptography
Hash Functions
Cryptography Schemes
There are, in general, three types of cryptographic schemes typically used to accomplish these goals:
secret key (symmetric) cryptography, public key (asymmetric) cryptography, and hash functions.
In all cases, the initial unencrypted data is referred to as plaintext. It is encrypted into ciphertext, which
will in turn (usually) be decrypted into usable plaintext.
With this form of cryptography, it is obvious that the key must be known to both the sender and the
receiver; that, in fact, is the secret.
The biggest difficulty with this approach, of course, is the distribution of the key.
Synchronous stream ciphers generate the keystream in a fashion independent of the message
stream but by using the same keystream generation function at sender and receiver. While stream
ciphers do not propagate transmission errors, they are, by their nature, periodic so that the
keystream will eventually repeat.
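A synchronous stream cipher can be sketched with a toy keystream generator: sender and receiver seed the same generator with the shared key, so the keystream is independent of the message. This sketch uses Python's random module purely for illustration and is in no way a secure cipher:

```python
# Toy synchronous stream cipher: identical keystream generators at both
# ends, XORed with the data. NOT cryptographically secure.
import random

def keystream(key: int, n: int) -> bytes:
    rng = random.Random(key)          # same seed -> same keystream on both ends
    return bytes(rng.randrange(256) for _ in range(n))

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = 0x2F
plaintext = b"attack at dawn"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))   # sender encrypts
recovered  = xor(ciphertext, keystream(key, len(plaintext)))  # receiver decrypts
assert recovered == plaintext
```

Notice that a flipped bit in the ciphertext corrupts only the corresponding plaintext bit, illustrating why stream ciphers do not propagate transmission errors.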
Triple-DES (3DES): A variant of DES that employs up to three 56-bit keys and makes three
encryption/decryption passes over the block; 3DES is also described in FIPS 46-3 and is the
recommended replacement to DES.
Advanced Encryption Standard (AES):
In 1997, NIST initiated a very public, 4.5 year process to develop a new secure cryptosystem for
U.S. government applications. The result, the Advanced Encryption Standard, became the official
successor to DES in December 2001.
AES uses an SKC scheme called Rijndael, a block cipher designed by Belgian cryptographers Joan
Daemen and Vincent Rijmen.
The algorithm can use a variable block length and key length; the latest specification allowed any
combination of key lengths of 128, 192, or 256 bits and blocks of length 128, 192, or 256 bits.
NIST initially selected Rijndael in October 2000 and formal adoption as the AES standard came in
December 2001. FIPS PUB 197 describes a 128-bit block cipher employing a 128-, 192-, or 256-bit
key.
International Data Encryption Algorithm (IDEA):
Secret-key cryptosystem written by Xuejia Lai and James Massey, in 1992 and patented by Ascom;
a 64-bit SKC block cipher using a 128-bit key.
Also available internationally.
Example:
[Table: a worked example in which a key bit string is combined with the bit representation of an
input character string to produce the output bit string and the corresponding output character string.]
DES uses a 56-bit key. The 56-bit key is divided into eight 7-bit blocks and an 8th odd parity bit is
added to each block (i.e., a "0" or "1" is added to the block so that there are an odd number of 1 bits in
each 8-bit block).
By using the 8 parity bits for rudimentary error detection, a DES key is actually 64 bits in length for
computational purposes (although it only has 56 bits worth of randomness, or entropy).
DES acts on 64-bit blocks of the plaintext, invoking 16 rounds of permutations, swaps, and substitutes.
The standard includes tables describing all of the selection, permutation, and expansion operations
mentioned below; these aspects of the algorithm are not secrets. The basic DES steps are:
1. The 64-bit block to be encrypted undergoes an initial permutation (IP), where each bit is moved to
a new bit position; e.g., the 1st, 2nd, and 3rd bits are moved to the 58th, 50th, and 42nd position,
respectively.
2. The 64-bit permuted input is divided into two 32-bit blocks, called left and right, respectively. The
initial values of the left and right blocks are denoted L0 and R0.
3. There are then 16 rounds of operation on the L and R blocks. During each iteration (where n ranges
from 1 to 16), the following formulae apply:
Ln = Rn-1
Rn = Ln-1 XOR f(Rn-1,Kn)
4. At any given step in the process, then, the new L block value is merely taken from the prior R
block value. The new R block is calculated by taking the bit-by-bit exclusive-OR (XOR) of the
prior L block with the results of applying the DES cipher function, f, to the prior R block and
Kn. (Kn is a 48-bit value derived from the 64-bit DES key. Each round uses a different 48 bits
according to the standard's Key Schedule algorithm.)
5. The cipher function, f, combines the 32-bit R block value and the 48-bit subkey in the following
way. First, the 32 bits in the R block are expanded to 48 bits by an expansion function (E); the
extra 16 bits are found by repeating the bits in 16 predefined positions. The 48-bit expanded R
block is then XORed with the 48-bit subkey. The result is a 48-bit value that is then divided into
eight 6-bit blocks. These are fed as input into 8 selection (S) boxes, denoted S1,...,S8. Each 6-bit
input yields a 4-bit output using a table lookup based on the 64 possible inputs; this results in a
32-bit output from the S-boxes. The 32 bits are then rearranged by a permutation function (P),
producing the results from the cipher function.
6. The results from the final DES round, i.e., L16 and R16, are recombined into a 64-bit value and
fed into an inverse initial permutation (IP^-1). At this step, the bits are rearranged into their
original positions, so that the 58th, 50th, and 42nd bits, for example, are moved back into the 1st,
2nd, and 3rd positions, respectively. The output from IP^-1 is the 64-bit ciphertext block.
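The round structure in step 3 (Ln = Rn-1, Rn = Ln-1 XOR f(Rn-1, Kn)) is the generic Feistel construction, which can be sketched with a toy round function and subkeys; the real DES f is the E/S-box/P construction described above, and the values here are stand-ins:

```python
# Generic Feistel network: the structure DES is built on. Decryption runs
# the same network with the subkeys in reverse order, which is why the
# round function f never needs to be invertible.

def f(r, k):
    return (r * 31 + k) & 0xFFFFFFFF     # toy 32-bit round function (NOT DES's f)

def feistel(l, r, subkeys):
    for k in subkeys:
        l, r = r, l ^ f(r, k)            # Ln = Rn-1 ; Rn = Ln-1 XOR f(Rn-1, Kn)
    return l, r

def feistel_decrypt(l, r, subkeys):
    # Swap the halves, run the rounds with reversed subkeys, swap back.
    r, l = feistel(r, l, list(reversed(subkeys)))
    return l, r

subkeys = [0x1234, 0xBEEF, 0xC0FFEE, 0x42]           # 4 toy rounds (DES uses 16)
l, r = feistel(0xDEADBEEF, 0x01234567, subkeys)
assert feistel_decrypt(l, r, subkeys) == (0xDEADBEEF, 0x01234567)
```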
They then applied all 2^56 possible key values to the 64-bit block (I don't mean to make this sound
simple!). The system checked to see if the decrypted value of the block was "interesting," which they
defined as bytes containing one of the alphanumeric characters, space, or some punctuation. Since the
likelihood of a single byte being "interesting" is about 1/4, the likelihood of the entire 8-byte stream
being "interesting" is about (1/4)^8, or 1/65536 (1/2^16). This dropped the number of possible keys
that might yield positive results to about 2^56/2^16 = 2^40, or about a trillion.
They then made the assumption that an "interesting" 8-byte block would be followed by another
"interesting" block. So, if the first block of ciphertext decrypted to something interesting, they decrypted
the next block; otherwise, they abandoned this key. Only if the second block was also "interesting" did
they examine the key closer. Looking for 16 consecutive bytes that were "interesting" meant that only
2^40/2^16 = 2^24, or about 16 million, keys needed to be examined further. This further examination
was primarily to see if the text made any sense. Note that possible "interesting" blocks might be
1hJ5&aB7 or DEPOSITS; the latter is more likely to produce a better result. And even a slow laptop
today can search through lists of only a few million items in a relatively short period of time.
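The key-search arithmetic above can be checked directly; the particular set of "interesting" characters below is an assumption chosen to give roughly the 1/4 per-byte probability the text cites:

```python
# Checking the DES key-search filtering arithmetic.
import string

# Assumed "interesting" bytes: letters, digits, space, some punctuation.
interesting = set((string.ascii_letters + string.digits + " .,!?").encode())
per_byte = len(interesting) / 256        # roughly 1/4 of the 256 byte values
print(round(per_byte, 2))

# Probability that all 8 bytes of a block look interesting: (1/4)^8 = 1/65536.
block_prob = (1 / 4) ** 8
assert block_prob == 1 / 65536

# Keys surviving one interesting block: 2^56 / 2^16 = 2^40 (~ a trillion);
# surviving two consecutive interesting blocks: 2^40 / 2^16 = 2^24 (~ 16 million).
print(2 ** 56 // 2 ** 16)    # 1099511627776
print(2 ** 40 // 2 ** 16)    # 16777216
```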
It is well beyond the scope of this paper to discuss other forms of breaking DES and other codes.
Nevertheless, it is worth mentioning a couple of forms of cryptanalysis that have been shown to be
effective against DES. Differential cryptanalysis, invented in 1990 by E. Biham and A. Shamir (of
RSA fame), is a chosen-plaintext attack. By selecting pairs of plaintext with particular differences, the
cryptanalyst examines the differences in the resultant ciphertext pairs. Linear cryptanalysis, invented
by M. Matsui, uses a linear approximation to analyze the actions of a block cipher (including DES).
Both of these attacks can be more efficient than brute force.
If DES were a group, then we could show that for two DES keys, X1 and X2, applied to some plaintext
(P), we could find a single equivalent key, X3, that would provide the same result; i.e.:
EX2(EX1(P)) = EX3(P)
Where EX(P) represents DES encryption of some plaintext P using DES key X.
If DES were a group, it wouldn't matter how many keys and passes we applied to some plaintext; we
could always find a single 56-bit key that would provide the same result. As it happens, DES was
proven to not be a group so that as we apply additional keys and passes, the effective key length
increases.
C = EK3(DK2(EK1(P)))
Where EK(P) and DK(P) represent DES encryption and decryption, respectively, of some plaintext
P using DES key K. (For obvious reasons, this is sometimes referred to as an encrypt-decrypt-encrypt
mode operation.)
Decryption of the ciphertext into plaintext is accomplished by:
P = DK1(EK2(DK3(C)))
The use of three, independent 56-bit keys provides 3DES with an effective key length of 168 bits. The
specification also defines use of two keys where, in the operations above, K3 = K1; this provides an
effective key length of 112 bits. Finally, a third keying option is to use a single key, so that K3 = K2 =
K1 (in this case, the effective key length is 56 bits and 3DES applied to some plaintext, P, will yield
the same ciphertext, C, as normal DES would with that same key). Given the relatively low cost of key
storage and the modest increase in processing due to the use of longer keys, the best recommended
practices are that 3DES be employed with three keys.
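The encrypt-decrypt-encrypt composition and its keying options can be sketched with a toy cipher standing in for DES; the XOR-and-rotate "cipher" below only illustrates the structure and has no cryptographic strength:

```python
# Sketch of the 3DES EDE composition with a toy 64-bit "cipher" E/D.

MASK = 0xFFFFFFFFFFFFFFFF  # 64-bit block, as in DES

def E(k, p):  # toy "encryption": XOR with the key, rotate left 8 bits
    x = (p ^ k) & MASK
    return ((x << 8) | (x >> 56)) & MASK

def D(k, c):  # exact inverse: rotate right 8 bits, XOR with the key
    x = ((c >> 8) | (c << 56)) & MASK
    return (x ^ k) & MASK

def triple_encrypt(k1, k2, k3, p):
    return E(k3, D(k2, E(k1, p)))     # C = EK3(DK2(EK1(P)))

def triple_decrypt(k1, k2, k3, c):
    return D(k1, E(k2, D(k3, c)))     # P = DK1(EK2(DK3(C)))

p = 0x0123456789ABCDEF
c = triple_encrypt(0xA, 0xB, 0xC, p)
assert triple_decrypt(0xA, 0xB, 0xC, c) == p

# Single-key option: K1 = K2 = K3 degenerates to one pass of the cipher,
# which is why that choice yields ordinary DES-compatible output.
assert triple_encrypt(0xA, 0xA, 0xA, p) == E(0xA, p)
```

The last assertion shows why the single-key option is backward compatible: the inner D cancels the first E, leaving a single encryption.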
[Figure: the 128-bit key is divided into a first set of eight 16-bit subkeys, then the key is cyclically
shifted left by 25 bits.]
3. From the previous left shift, we obtain a second set of 8 subkeys (128 bits). Thus a 256-bit key is
obtained (composed of 16 subkeys).
4. We repeat the left-shift operation on each last set of 8 subkeys obtained. The loop stops after the
composition of 52 subkeys.
5. We now have 52 subkeys of 16 bits each, and a plaintext composed of 64-bit blocks, each divided
into four 16-bit sub-blocks (P1, P2, P3, P4).
[Figure: the subkeys S1..S52 alongside the sub-blocks P1..P4.]
6. The functions used to compute the ciphertext are: XOR, + (addition modulo 2^16), and *
(multiplication modulo 2^16 + 1).
7. For each block composed of 4 sub-blocks (P1, P2, P3, P4) of 16 bits each, we repeat the following
computations over 8 rounds. The 1st round uses subkeys s1..s6:
(1) p1 * s1 -> d1;  p2 + s2 -> d2;  p3 + s3 -> d3;  p4 * s4 -> d4
(2) d1 XOR d3 -> d5;  d2 XOR d4 -> d6
(3) d5 * s5 -> d7;  d6 + d7 -> d8;  d8 * s6 -> d9;  d7 + d9 -> d10
(4) d1 XOR d9 -> d11;  d3 XOR d9 -> d12
(5) d2 XOR d10 -> d13;  d4 XOR d10 -> d14
The outputs d11, d13, d12, d14 (note the swap of the two middle sub-blocks) become the inputs of
the 2nd round, which uses subkeys s7..s12:
d11 * s7 -> d15;  d13 + s8 -> d16;  d12 + s9 -> d17;  d14 * s10 -> d18;  and so on as in the 1st
round.
The 3rd through 8th rounds proceed in the same way, each consuming the next six subkeys (the 3rd
round, for example, uses s13..s18 and produces outputs d29..d32).
After the 8th round, a final output transformation combines the last four sub-blocks (e1, e2, e3, e4)
with the remaining four subkeys to produce the 64-bit ciphertext block:
e1 * s49 -> c1;  e2 + s50 -> c2;  e3 + s51 -> c3;  e4 * s52 -> c4
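One round of these computations can be written directly from the d1..d14 formulas; the subkey values below are arbitrary, and the handling of 0 as 2^16 in multiplication follows the standard IDEA convention:

```python
# One IDEA round from the d1..d14 formulas: + is addition modulo 2^16,
# * is multiplication modulo 2^16 + 1 (with 0 standing for 2^16).

def add(a, b):
    return (a + b) % 2 ** 16

def mul(a, b):
    a = a or 2 ** 16                  # 0 represents 2^16
    b = b or 2 ** 16
    return (a * b) % (2 ** 16 + 1) % 2 ** 16   # result 2^16 maps back to 0

def idea_round(p1, p2, p3, p4, s1, s2, s3, s4, s5, s6):
    d1, d2 = mul(p1, s1), add(p2, s2)
    d3, d4 = add(p3, s3), mul(p4, s4)
    d5, d6 = d1 ^ d3, d2 ^ d4
    d7 = mul(d5, s5)
    d8 = add(d6, d7)
    d9 = mul(d8, s6)
    d10 = add(d7, d9)
    return d1 ^ d9, d3 ^ d9, d2 ^ d10, d4 ^ d10   # d11, d12, d13, d14

out = idea_round(0x0123, 0x4567, 0x89AB, 0xCDEF,
                 0x0001, 0x0002, 0x0003, 0x0004, 0x0005, 0x0006)
assert all(0 <= x < 2 ** 16 for x in out)   # every output is a 16-bit sub-block
```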
Multiplication vs. factorization: Suppose we have two numbers, 9 and 16, and that we want to
calculate the product; it should take almost no time to calculate the product, 144. Suppose instead
we have the number 144 and we need to guess which pair of integers was multiplied together to
obtain that number. We will eventually come up with the solution, but whereas calculating the
product took milliseconds, factoring will take longer because we first need to find the 8 pairs of
integer factors and then determine which one is the correct pair.
Exponentiation vs. logarithms: Suppose we want to take the number 3 to the 6th power; again, it is
easy to calculate 3^6 = 729. But if we have the number 729 and we want to guess the two integers
used, x and y, so that log_x 729 = y, it will be more difficult to enumerate all possible solutions and
select the pair that was used.
While the examples above are trivial, they do represent two of the functional pairs that are used with
PKC; namely, the ease of multiplication and exponentiation versus the relative difficulty of factoring
and calculating logarithms, respectively.
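The asymmetry can be demonstrated with tiny numbers; the primes below are toy examples, whereas real systems use primes hundreds of digits long:

```python
# One-way asymmetry behind PKC: multiplying two primes is one operation,
# but recovering them by trial division costs work that grows quickly
# with the size of the number.

def factor(n):
    """Recover a prime factor of n by trial division (the slow direction)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return 1, n

p, q = 10007, 10009              # two small primes
n = p * q                        # the easy direction: a single multiplication
print(n)                         # 100160063
print(factor(n))                 # the hard direction: ~10,000 trial divisions
```

Doubling the number of digits in p and q roughly squares the trial-division work, while the multiplication stays essentially free; that widening gap is what the key sizes in RSA rely on.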
The mathematical "trick" in PKC is to find a trap door in the one-way function so that the inverse
calculation becomes easy given knowledge of some item of information.
o The key-pair is derived from a very large number, n, that is the product of two prime numbers
chosen according to special rules; these primes may be 100 or more digits in length each,
yielding an n with roughly twice as many digits as the prime factors.
o The public key information includes n and a derivative of one of the factors of n; an attacker
cannot determine the prime factors of n (and, therefore, the private key) from this information
alone and that is what makes the RSA algorithm so secure.
o Some descriptions of PKC erroneously state that RSA's safety is due to the difficulty in
factoring large prime numbers. In fact, large prime numbers, like small prime numbers, only
have two factors!
The ability for computers to factor large numbers, and therefore attack schemes such as RSA, is
rapidly improving and systems today can find the prime factors of numbers with more than 140
digits.
The presumed protection of RSA, however, is that users can easily increase the key size to always
stay ahead of the computer processing curve. As an aside, the patent for RSA expired in September
2000 which does not appear to have affected RSA's popularity one way or the other.
Diffie-Hellman:
Although Diffie and Hellman published their algorithm before RSA appeared, it is more limited in
scope: D-H is used for secret-key key exchange only, and not for authentication or digital signatures.
Digital Signature Algorithm (DSA):
The algorithm specified in NIST's Digital Signature Standard (DSS), provides digital signature
capability for the authentication of messages.
ElGamal:
Designed by Taher Elgamal, a PKC system similar to Diffie-Hellman and used for key exchange.
Elliptic Curve Cryptography (ECC):
A PKC algorithm based upon elliptic curves.
ECC can offer levels of security with small keys comparable to RSA and other PKC methods. It
was designed for devices with limited compute power and/or memory, such as smartcards and
PDAs.
Although we have categorized PKC as a two-key system, that has been merely for convenience.
The real criterion for a PKC scheme is that it allows two parties to exchange a secret even though
the communication with the shared secret might be overheard.
o There seems to be no question that Diffie and Hellman were first to publish; their method is
described in the classic paper, "New Directions in Cryptography," published in the November
1976 issue of IEEE Transactions on Information Theory.
o Diffie-Hellman uses the idea that finding logarithms is relatively harder than exponentiation.
And, indeed, it is the precursor to modern PKC which does employ two keys.
Rivest, Shamir, and Adleman described an implementation that extended this idea in their paper "A
Method for Obtaining Digital Signatures and Public-Key Cryptosystems," published in the
February 1978 issue of the Communications of the ACM (CACM).
Their method is based upon the relative ease of finding the product of two large prime numbers
compared to finding the prime factors of a large number.
Diffie-Hellman (example with n=7, g=3):
Alice: chooses x=2; sends to Bob X = 3^2 mod 7 = 2
Bob: chooses y=3; sends to Alice Y = 3^3 mod 7 = 6
Alice: KA = Y^x mod 7 = 6^2 mod 7 = 1
Bob: KB = X^y mod 7 = 2^3 mod 7 = 1
The first published public-key crypto algorithm was Diffie-Hellman. The mathematical "trick" of this
scheme is that it is relatively easy to compute exponents compared to computing discrete logarithms.
Diffie-Hellman allows two parties (the ubiquitous Alice and Bob) to generate a secret key; they
need to exchange some information over an unsecure communications channel to perform the
calculation, but an eavesdropper cannot determine the shared key based upon this information.
Diffie-Hellman works like this:
Alice and Bob start by agreeing on a large prime number, n. They also have to choose some
number g so that g<n.
There is actually another constraint on g, specifically that it must be primitive with respect to n.
Primitive is a definition that is a little beyond the scope of our discussion, but basically g is
primitive to n if, for every value of j from 1 to n-1, we can find an integer i so that g^i = j mod n.
For example, 3 is primitive to 7 because the powers 3^1, 3^2, ..., 3^6 mod 7 = {3,2,6,4,5,1}.
(The definition of primitive introduced a new term to some readers, namely mod. The phrase
x mod y (and read as written!) means "take the remainder after dividing x by y." Thus, 1
mod 7 = 1, 9 mod 6 = 3, and 8 mod 8 = 0.)
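For small n the primitivity condition can be checked by brute force. A minimal sketch (the function name is mine):

```python
def is_primitive(g, n):
    """True if the powers g^1 ... g^(n-1) mod n cover every value 1 .. n-1."""
    return {pow(g, i, n) for i in range(1, n)} == set(range(1, n))

# 3 is primitive with respect to 7: its powers generate {3, 2, 6, 4, 5, 1}
print([pow(3, i, 7) for i in range(1, 7)])
print(is_primitive(3, 7))
# 2 is not: its powers only cycle through {2, 4, 1}
print(is_primitive(2, 7))
```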
Anyway, either Alice or Bob selects n and g; they then tell the other party what the values are.
Alice and Bob then work independently: Alice chooses a secret number x and sends Bob
X = g^x mod n, while Bob chooses a secret number y and sends Alice Y = g^y mod n.
Note that x and y are kept secret while X and Y are openly shared; these are the private and public
keys, respectively. Based on their own private key and the public key learned from the other party,
Alice and Bob compute their secret keys, KA = Y^x mod n and KB = X^y mod n, respectively,
which are both equal to g^xy mod n.
Perhaps a small example will help here. Although Alice and Bob will really choose large values for
n and g, I will use small values for example only; let's use n=7 and g=3.
In this example, then, Alice and Bob will both find the secret key 1, which is, indeed, 3^6 mod 7. If an
eavesdropper (Mallory) was listening in on the information exchange between Alice and Bob,
he would learn g, n, X, and Y, which is a lot of information but insufficient to compromise
the key;
as long as x and y remain unknown, K is safe. As said above, calculating X as g^x mod n is a lot
easier than finding x as log_g X!
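The whole exchange with the toy values n=7 and g=3 can be replayed in a few lines (a sketch, not a secure implementation; real parameters are hundreds of digits long):

```python
n, g = 7, 3          # public values, agreed upon in the clear

x = 2                # Alice's private key
y = 3                # Bob's private key

X = pow(g, x, n)     # Alice sends X = 2 to Bob
Y = pow(g, y, n)     # Bob sends Y = 6 to Alice

KA = pow(Y, x, n)    # Alice computes the shared secret
KB = pow(X, y, n)    # Bob computes the same value
print(X, Y, KA, KB)  # 2 6 1 1 -- both arrive at g^(xy) mod n = 1
```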
RSA
Unlike Diffie-Hellman, RSA can be used for key exchange as well as digital signatures and the
encryption of small blocks of data.
Today, RSA is primarily used to encrypt the session key used for secret key encryption (message
integrity) or the message's hash value (digital signature). RSA's mathematical hardness comes from the
ease of multiplying large numbers and the difficulty of finding the prime factors of those large numbers.
Although employed with numbers using hundreds of digits, the math behind RSA is relatively straightforward.
To create an RSA public/private key pair, here are the basic steps:
1. Choose two prime numbers, p and q.
2. From these numbers you can calculate the modulus, n = pq.
3. Select a third number, e, that is relatively prime to (i.e., shares no common factor with) the product
(p-1)(q-1).
4. The number e is the public exponent.
5. Calculate an integer d such that ed = 1 mod (p-1)(q-1); that is, (ed-1) is evenly divisible by (p-1)(q-1).
6. The number d is the private exponent.
7. The public key is the number pair (n,e); the private key is the pair (n,d).
8. Although n and e are publicly known, it is computationally infeasible to determine d from them
if p and q are large enough.
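These steps can be traced with deliberately tiny primes (a sketch only: p and q here are laughably small, and Python's built-in modular inverse pow(e, -1, phi) stands in for the extended Euclidean algorithm):

```python
p, q = 11, 13              # step 1: two primes (toy-sized)
n = p * q                  # step 2: the modulus, n = 143
phi = (p - 1) * (q - 1)    # (p-1)(q-1) = 120
e = 7                      # step 3: relatively prime to 120
d = pow(e, -1, phi)        # step 5: ed = 1 mod 120, giving d = 103

m = 9                      # a message block, m < n
c = pow(m, e, n)           # encrypt with the public key (n, e)
m2 = pow(c, d, n)          # decrypt with the private key (n, d)
print(n, d, c, m2)         # m2 equals the original m
```

With real key sizes the same modular arithmetic applies; only the sizes of p, q, and the exponents change.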
Hash Functions
Hash functions, also called message digests and one-way encryption, are algorithms that, in some
sense, use no key. Instead, a fixed-length hash value is computed from the plaintext in such a way
that neither the contents nor the length of the plaintext can be recovered.
Hash algorithms are typically used to provide a digital fingerprint of a file's contents often used to
ensure that the file has not been altered by an intruder or virus.
Hash functions are also commonly employed by many operating systems to encrypt passwords. Hash
functions, then, provide a measure of the integrity of a file.
Hash functions are sometimes misunderstood, and some sources claim that no two files can have the
same hash value. This is, in fact, not correct. Consider a hash function that provides a 128-bit hash
value. There are, obviously, 2^128 possible hash values. But there are a lot more than 2^128 possible
files. Therefore, there have to be multiple files (in fact, there have to be an infinite number of files!)
that can have the same 128-bit hash value.
The difficulty lies in finding two files with the same hash. What is, indeed, very hard to do is to
create a file that has a given hash value so as to force a hash value collision, which is the reason that
hash functions are used extensively for information security and computer forensics applications.
Alas, researchers in 2004 found that practical collision attacks could be launched on MD5, SHA-1,
and other hash algorithms. At this time, there is no obvious successor to MD5 and SHA-1 that could be
put into use quickly; there are so many products using these hash functions that it could take many
years to flush out all use of 128- and 160-bit hashes.
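The fingerprint property is easy to see with Python's standard hashlib (the input strings are arbitrary examples):

```python
import hashlib

d1 = hashlib.sha256(b"The quick brown fox").hexdigest()
d2 = hashlib.sha256(b"The quick brown fox!").hexdigest()  # one byte appended

print(d1)
print(d2)
# the digests differ completely, yet both are exactly 256 bits (64 hex chars),
# no matter how long the input was
print(d1 != d2, len(d1) == len(d2) == 64)
```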
Variables
SHA-1 uses five 32-bit variables, presented in hexadecimal as follows:
A = 67 45 23 01
B = EF CD AB 89
C = 98 BA DC FE
D = 10 32 54 76
E = C3 D2 E1 F0
Padding
If the size of the message is not a multiple of 512 bits, the algorithm must pad the message by
appending a single 1 bit followed by as many 0 bits as needed to reach an accepted size (a multiple
of 512 bits, with the final 64 bits of the last block reserved for the original message length).
Functions
SHA-1 uses 4 loops; in each loop, 20 operations are performed. The algorithm uses 80 functions
based on three 32-bit variables B, C, and D, and these functions produce 32-bit words.
The 80 functions are defined as follows:
ft(B,C,D) = (B AND C) OR ((NOT B) AND D) (t between 0 and 19)
ft (B,C,D) = B XOR C XOR D (t between 20 and 39)
ft (B,C,D) = (B AND C) OR (B AND D) OR (C AND D) (t between 40 and 59)
ft (B,C,D) = B XOR C XOR D (t between 60 and 79)
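The four round functions translate directly into code. A sketch (variable names follow the text; results are masked to 32 bits because Python integers are unbounded):

```python
MASK = 0xFFFFFFFF  # keep results to 32 bits

def f(t, B, C, D):
    """SHA-1 round function f_t for round t (0 <= t <= 79)."""
    if t <= 19:
        return ((B & C) | (~B & D)) & MASK           # "choose": B selects C or D per bit
    if t <= 39:
        return (B ^ C ^ D) & MASK                    # parity
    if t <= 59:
        return ((B & C) | (B & D) | (C & D)) & MASK  # majority vote per bit
    return (B ^ C ^ D) & MASK                        # parity again

# when B is all ones, the t <= 19 function simply returns C
print(hex(f(0, MASK, 0x12345678, 0x9ABCDEF0)))  # 0x12345678
```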
Constants
SHA-1 uses four constants, one per group of 20 rounds. These constants are:
Kt = 5A827999 (t between 0 and 19)
Kt = 6ED9EBA1 (t between 20 and 39)
Kt = 8F1BBCDC (t between 40 and 59)
Kt = CA62C1D6 (t between 60 and 79)
The Algorithm
The algorithm uses 2 buffers of 5 words each (a word is composed of 32 bits), a sequence of 80 words,
and a temporary buffer called TEMP.
The first buffer is denoted {A, B, C, D, E}.
The second buffer is denoted {H0, H1, H2, H3, H4}.
The 80 words are denoted W0 to W79.
The message is divided into blocks of 512 bits, denoted M1 to Mn.
H0 = 67452301
H1 = EFCDAB89
H2 = 98BADCFE
H3 = 10325476
H4 = C3D2E1F0.
For each block of 512 bits:
Begin
Create 16 words of 32 bits each and assign them to W0, W1, ..., W15;
For t from 16 to 79, Wt = S^1(Wt-3 XOR Wt-8 XOR Wt-14 XOR Wt-16), where S^1 denotes a left rotation by one bit;
A) /etc/passwd file
root:Jbw6BwE4XoUHo:0:0:root:/root:/bin/bash
carol:FM5ikbQt1K052:502:100:Carol Monaghan:/home/carol:/bin/bash
alex:LqAi7Mdyg/HcQ:503:100:Alex Insley:/home/alex:/bin/bash
gary:FkJXupRyFqY4s:501:100:Gary Kessler:/home/gary:/bin/bash
todd:edGqQUAaGv7g6:506:101:Todd Pritsky:/home/todd:/bin/bash
josh:FiH0ONcjPut1g:505:101:Joshua Kessler:/home/webroot:/bin/bash
B) /etc/shadow file
gary:9ajlknknKJHjhnu7298ypnAIJKL$Jh.hnk:11449:0:99999:7:::
todd:798POJ90uab6.k$klPqMt%alMlprWqu6$.:11492:0:99999:7:::
josh:Awmqpsui*787pjnsnJJK%aappaMpQo07.8:11492:0:99999:7:::
Nearly all modern multiuser computer and network operating systems employ passwords at the very
least to protect and authenticate users accessing computer and/or network resources.
Passwords, however, are not typically kept on a host or server in plaintext; they are generally
protected using some sort of hash scheme.
Unix/Linux, for example, uses a well-known hash via its crypt() function. Passwords are stored in the
/etc/passwd file; each record in the file contains the username, hashed password, user's individual and
group numbers, user's name, home directory, and shell program; these fields are separated by colons
(:).
Note that each password is stored as a 13-byte string. The first two characters are actually a salt,
randomness added to each password so that if two users have the same password, they will still be
encrypted differently; the salt, in fact, provides a means so that a single password might have 4096
different encryptions. The remaining 11 bytes are the password hash, calculated using DES.
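The effect of the salt is easy to demonstrate. This sketch uses SHA-256 from hashlib rather than the DES-based crypt() described in the text, but the principle is the same: the salt is stored alongside the hash, and identical passwords yield different stored values:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Store salt + hash; identical passwords get different stored values."""
    if salt is None:
        salt = os.urandom(2)  # crypt()'s classic 12-bit salt allowed 4096 values
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt.hex() + ":" + digest

print(hash_password("password", b"ab"))
print(hash_password("password", b"cd"))  # same password, different stored value
```

To verify a login attempt, the system re-reads the stored salt, hashes the candidate password with it, and compares the result to the stored digest.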
As it happens, the /etc/passwd file is world-readable on Unix systems. This fact, coupled with the weak
encryption of the passwords, resulted in the development of the shadow password system where
passwords are kept in a separate, non-world-readable file used in conjunction with the normal
password file.
When shadow passwords are used, the password entry in /etc/passwd is replaced with a "*" or "x" and
the MD5 hash of the passwords are stored in /etc/shadow along with some other account information
(Figure 5B.2).
Windows NT uses a similar scheme to store passwords in the Security Access Manager (SAM) file. In
the NT case, all passwords are hashed using the MD4 algorithm, resulting in a 128-bit (16-byte) hash
value.
The password password, for example, might be stored as the hash value (in hexadecimal)
60771b22d73c34bd4a290a79c8b09f18.
Hash functions are well-suited for ensuring data integrity because any change made to the contents
of a message will result in the receiver calculating a different hash value than the one placed in the
transmission by the sender. Since it is highly unlikely that two different messages will yield the
same hash value, data integrity is ensured to a high degree of confidence.
Secret key cryptography, on the other hand, is ideally suited to encrypting messages. The sender
can generate a session key on a per-message basis to encrypt the message; the receiver, of course,
needs the same session key to decrypt the message.
Key exchange, of course, is a key application of public-key cryptography (no pun intended).
Asymmetric schemes can also be used for non-repudiation; if the receiver can obtain the session
key encrypted with the sender's private key, then only this sender could have sent the message.
Public-key cryptography could, theoretically, also be used to encrypt messages although this is
rarely done because secret-key cryptography operates about 1000 times faster than public-key
cryptography.
A hybrid cryptographic scheme combines all of these functions to form a secure transmission
comprising digital signature and digital envelope. Thus, a digital envelope comprises an encrypted
message and an encrypted session key.
For example:
Alice uses secret key cryptography to encrypt her message using the session key, which she
generates at random for this session; she then encrypts the session key using Bob's public key.
The encrypted message and the encrypted session key together form the digital envelope.
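The digital-envelope flow can be sketched end to end. Everything here is a toy: the "secret key" cipher is an XOR keystream derived with SHA-256, and the session key is wrapped byte by byte with tiny RSA numbers rather than as one large integer as real schemes do:

```python
import hashlib
import os

def stream_xor(key, data):
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Bob's toy RSA key pair (insecure, illustration-sized numbers)
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

# Alice: random session key, message encrypted with it, key wrapped with (n, e)
session_key = os.urandom(16)
ciphertext = stream_xor(session_key, b"Attack at dawn")
wrapped_key = [pow(b, e, n) for b in session_key]  # toy: byte-at-a-time wrapping

# Bob: unwrap the session key with (n, d), then decrypt the message
recovered_key = bytes(pow(c, d, n) for c in wrapped_key)
print(stream_xor(recovered_key, ciphertext))
```

Only Bob's private exponent d can unwrap the session key, so only Bob can open the envelope, while the bulk of the message is handled by the much faster symmetric cipher.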
But this does bring up the issue, what is the precise significance of key length as it
affects the level of protection?
In cryptography, size does matter. The larger the key, the harder it is to crack a block of encrypted data. The
reason that large keys offer more protection is almost obvious; computers have made it easier to attack
ciphertext by using brute force methods rather than by attacking the mathematics (which are generally
well-known anyway). With a brute force attack, the attacker merely generates every possible
key and applies it to the ciphertext. Any resulting plaintext that makes sense offers a candidate for a
legitimate key.
Until the mid-1990s or so, brute force attacks were beyond the capabilities of computers that were
within the budget of the attacker community. Today, however, significant compute power is commonly
available and accessible. General purpose computers such as PCs are already being used for brute force
attacks.
The 1975 DES proposal suggested 56-bit keys; by 1995, a 70-bit key would have been required to
offer equal protection and an 85-bit key will be necessary by 2015.
While a large key is good, a huge key may not always be better. That is, many public-key
cryptosystems use 1024- or 2048-bit keys; expanding the key to 4096 bits probably doesn't add any
protection at this time but it does add significantly to processing time.
The most effective large-number factoring methods today use a mathematical Number Field Sieve to
find a certain number of relationships and then use a matrix operation to solve a linear equation to
produce the two prime factors.
The sieve step actually involves a large number of operations that can be performed in
parallel; solving the linear equation, however, requires a supercomputer.
Indeed, finding the solution to the RSA-140 challenge in February 1999 (factoring a 140-digit,
465-bit number) required 200 computers across the Internet for about 4 weeks for the first step, and a
Cray computer 100 hours and 810 MB of memory to do the second step.
In early 1999, Shamir (of RSA fame) described a new machine that could increase factorization speed
by 2-3 orders of magnitude.
There still appear to be many engineering details that have to be worked out before such a machine could
be built. Furthermore, the hardware improves the sieve step only; the matrix operation is not optimized at
all by this design and the complexity of this step grows rapidly with key length, both in terms of
processing time and memory requirements. Nevertheless, this plan conceptually puts 512-bit keys within
reach of being factored. Although most PKC schemes allow keys that are 1024 bits and longer, Shamir
claims that 512-bit RSA keys "protect 95% of today's E-commerce on the Internet."
Part 6
Summary:
This section describes different trust models and several secure protocols used for one of the biggest
and fastest-growing applications of cryptography today: electronic commerce (e-commerce), a
term that itself begs for a formal definition.
The section also describes an enhanced configuration of a corporate network using a VPN, which is
based on many theoretical and practical elements of cryptography.
Objectives:
Upon completion of this part, the student will be able to understand:
Secure Protocols
Trust Models
VPN
Trust Models
Secure use of cryptography requires trust. While secret key cryptography can ensure message
confidentiality and hash codes can ensure integrity, none of this works without trust.
In SKC, Alice and Bob had to share a secret key. PKC solved the secret distribution problem, but how
does Alice really know that Bob is who he says he is? Just because Bob has a public and private key,
and purports to be "Bob," how does Alice know that a malicious person (Mallory) is not pretending to
be Bob?
There are a number of trust models employed by various cryptographic schemes. We will explore three
of them:
The web of trust employed by Pretty Good Privacy (PGP) users, who hold their own set of trusted
public keys.
Kerberos, a secret key distribution scheme using a trusted third party.
Certificates, which allow a set of trusted third parties to authenticate each other and, by
implication, each other's users.
Each of these trust models differs in complexity, general applicability, scope, and scalability.
In a web of trust, trust is not necessarily transitive: it does not necessarily follow that Alice trusts
Dave even if she does trust Carol.
The point here is that whom Alice trusts, and how she makes that determination, is strictly up to Alice.
Kerberos
Definition
Kerberos is a commonly used authentication scheme on the Internet. Developed by MIT's Project
Athena, Kerberos is named for the three-headed dog that, according to Greek mythology, guards the
entrance of Hades (rather than the exit, for some reason!).
Kerberos employs a client/server architecture and provides user-to-server authentication rather than
host-to-host authentication. In this model, security and authentication are based on secret key
technology, where every host on the network has its own secret key.
It would clearly be unmanageable if every host had to know the keys of all other hosts so a secure,
trusted host somewhere on the network, known as a Key Distribution Center (KDC), knows the keys
for all of the hosts (or at least some of the hosts within a portion of the network, called a realm). In this
way, when a new node is brought online, only the KDC and the new node need to be configured with
the node's key; keys can be distributed physically or by some other secure means.
The current shipping version of this protocol is Kerberos V5 (described in RFC 1510), although
Kerberos V4 still exists and is seeing some use. While the details of their operation, functional
capabilities, and message formats are different, the conceptual overview above pretty much holds for
both. One primary difference is that Kerberos V4 uses only DES to generate keys and encrypt
messages, while V5 allows other schemes to be employed (although DES is still the most widely
used algorithm).
How does it Work?
The Kerberos Server/KDC has two main functions, known as the Authentication Server (AS) and
Ticket-Granting Server (TGS).
The steps in establishing an authenticated session between an application client and the application
server are:
1. The Kerberos client software establishes a connection with the Kerberos server's AS function. The
AS first authenticates that the client is who it purports to be. The AS then provides the client with a
secret key for this login session (the TGS session key) and a ticket-granting ticket (TGT), which
gives the client permission to talk to the TGS. The ticket has a finite lifetime so that the
authentication process is repeated periodically.
2. The client now communicates with the TGS to obtain the Application Server's key so that it (the
client) can establish a connection to the service it wants. The client supplies the TGS with the TGS
session key and TGT; the TGS responds with an application session key (ASK) and an encrypted
form of the Application Server's secret key; this secret key is never sent on the network in any
other form.
3. The client has now authenticated itself and can prove its identity to the Application Server by
supplying the Kerberos ticket, application session key, and encrypted Application Server secret
key. The Application Server responds with similarly encrypted information to authenticate itself to
the client. At this point, the client can initiate the intended service requests (e.g., Telnet, FTP,
HTTP, or e-commerce transaction session establishment).
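The three exchanges above can be simulated with a toy cipher. Everything here is an illustration only: the XOR keystream stands in for DES, and the principal names and ticket layout are invented for this sketch:

```python
import hashlib
import os

def xcrypt(key, data):
    """Toy symmetric cipher (XOR keystream); encrypting and decrypting are the same."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# the KDC knows every principal's secret key
keys = {"client": os.urandom(16), "tgs": os.urandom(16), "server": os.urandom(16)}

# 1. AS exchange: the client receives a TGS session key (sealed under its own
#    key) and a TGT (sealed under the TGS key, so the client cannot read it)
tgs_session = os.urandom(16)
tgt = xcrypt(keys["tgs"], b"client|" + tgs_session)
reply_for_client = xcrypt(keys["client"], tgs_session)

# 2. TGS exchange: the TGS opens the TGT, then issues an application session
#    key plus a service ticket sealed under the application server's key
who, _ = xcrypt(keys["tgs"], tgt).split(b"|", 1)
app_session = os.urandom(16)
ticket = xcrypt(keys["server"], who + b"|" + app_session)

# 3. AP exchange: the server opens the ticket with its own key and recovers
#    the same application session key the client was given
name, key_in_ticket = xcrypt(keys["server"], ticket).split(b"|", 1)
print(name, key_in_ticket == app_session)
```

Note how the server's secret key never crosses the network in the clear: it is only ever used locally, to seal and open tickets.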
How, for example, does one site obtain another party's public key?
How does a recipient determine if a public key really belongs to the sender?
How does the recipient know that the sender is using their public key for a legitimate purpose for
which they are authorized?
When does a public key expire? How can a key be revoked in case of compromise or loss?
Example:
The basic concept of a certificate is one that is familiar to all of us. A driver's license, credit card, or
SCUBA certification, for example, identify us to others, indicate something that we are authorized to
do, have an expiration date, and identify the authority that granted the certificate.
Let us consider driver's licenses. Suppose a driver has one issued by a particular US state. The license
establishes his identity and indicates the types of vehicles that he can drive.
When he drives outside of his home state, the other jurisdictions throughout the U.S. recognize the
authority of that state to issue this "certificate" and they trust the information it contains.
Now, when the driver leaves the U.S., everything changes: when he drives in Canada and many other
countries, they will accept not his state license per se, but any driver's license issued in the U.S.
How does it Work?
Contents of an X.509 V3 Certificate.
version number
certificate serial number
signature algorithm identifier
issuer's name and unique identifier
validity (or operational) period
subject's name and unique identifier
subject public key information
standard extensions
certificate appropriate use definition
key usage limitation definition
certificate policy information
other extensions
Application-specific
CA-specific
For purposes of electronic transactions, certificates are digital documents. The specific functions of the
certificate include:
Establish identity: Associate, or bind, a public key to an individual, organization, corporate
position, or other entity.
Assign authority: Establish what actions the holder may or may not take based upon this certificate.
Secure confidential information (e.g., encrypting the session's symmetric key for data
confidentiality).
Typically, a certificate contains a public key, a name, an expiration date, the name of the authority that
issued the certificate (and, therefore, is vouching for the identity of the user), a serial number, any
pertinent policies describing how the certificate was issued and/or how the certificate may be used, the
digital signature of the certificate issuer, and perhaps other information.
A sample abbreviated certificate is shown in the following. This is a typical certificate found in a
browser. When the browser makes a connection to a secure Web site, the Web server sends its public
key certificate to the browser. The browser then checks the certificate's signature against the public key
that it has stored; if there is a match, the certificate is taken as valid and the Web site verified by this
certificate is considered to be "trusted."
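The browser's signature check can be sketched with toy RSA numbers. This is illustrative only: the certificate body is a made-up string, and the hash is signed byte by byte instead of as one large integer as real schemes do:

```python
import hashlib

# toy CA key pair (insecure, illustration-sized numbers)
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

cert_body = b"subject=www.example.com;serial=42;expires=2025-12-31"

# the CA signs: hash the certificate body, then apply the private exponent
digest = hashlib.sha256(cert_body).digest()
signature = [pow(b, d, n) for b in digest]

# the browser verifies with the CA's public key (n, e): recover the hash
# from the signature and compare it to a freshly computed hash
recovered = bytes(pow(s, e, n) for s in signature)
print(recovered == hashlib.sha256(cert_body).digest())  # True: certificate accepted
```

Any change to the certificate body changes its hash, so a tampered certificate fails this comparison even though the attacker can read everything in it.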
Standards
The most widely accepted certificate format is the one defined in International Telecommunication
Union Telecommunication Standardization Sector (ITU-T) Recommendation X.509.
Rec. X.509 is a specification used around the world and any applications complying with X.509 can
share certificates.
Most certificates today comply with X.509 Version 3 and contain the information listed in Table 2.
Certificate Authority
Certificate authorities are the repositories for public-keys and can be any agency that issues
certificates.
A company, for example, may issue certificates to its employees, a college/university to its students, a
store to its customers, an Internet service provider to its users, or a government to its constituents.
When a sender needs an intended receiver's public key, the sender must get that key from the receiver's
CA. That scheme is straight-forward if the sender and receiver have certificates issued by the same
CA.
Some CAs will be trusted because they are known to be reputable, such as the CAs operated by AT&T,
BBN, Canada Post Corp., and CommerceNet. CAs, in turn, form trust relationships with other CAs. Thus, if
a user queries a foreign CA for information, the user may ask to see a list of CAs that establishes a
"chain of trust" back to the user.
A PGP signed message. The sender uses their private key; at the destination, the
sender's e-mail address yields the public key from the receiver's keyring.
The slide shows a PGP-signed message. This message will not be kept secret from an eavesdropper, but
a recipient can be assured that the message has not been altered from what the sender transmitted.
In this instance, the sender signs the message using their own private key. The receiver uses the
sender's public key to verify the signature; the public key is taken from the receiver's keyring based on
the sender's e-mail address. Note that the signature process does not work unless the sender's public
key is on the receiver's keyring.
Encrypted Messages
-----BEGIN PGP MESSAGE-----
Version: PGP for Personal Privacy 5.0
MessageID: DAdVB3wzpBr3YRunZwYvhK5gBKBXOb/m
qANQR1DBwU4D/TlT68XXuiUQCADfj2o4b4aFYBcWumA7hR1Wvz9rbv2BR6WbEUsy
ZBIEFtjyqCd96qF38sp9IQiJIKlNaZfx2GLRWikPZwchUXxB+AA5+lqsG/ELBvRa
c9XefaYpbbAZ6z6LkOQ+eE0XASe7aEEPfdxvZZT37dVyiyxuBBRYNLN8Bphdr2zv
z/9Ak4/OLnLiJRk05/2UNE5Z0a+3lcvITMmfGajvRhkXqocavPOKiin3hv7+Vx88
uLLem2/fQHZhGcQvkqZVqXx8SmNw5gzuvwjV1WHj9muDGBY0MkjiZIRI7azWnoU9
3KCnmpR60VO4rDRAS5uGl9fioSvze+q8XqxubaNsgdKkoD+tB/4u4c4tznLfw1L2
YBS+dzFDw5desMFSo7JkecAS4NB9jAu9K+f7PTAsesCBNETDd49BTOFFTWWavAfE
gLYcPrcn4s3EriUgvL3OzPR4P1chNu6sa3ZJkTBbriDoA3VpnqG3hxqfNyOlqAka
mJJuQ53Ob9ThaFH8YcE/VqUFdw+bQtrAJ6NpjIxi/x0FfOInhC/bBw7pDLXBFNaX
HdlLQRPQdrmnWskKznOSarxq4GjpRTQo4hpCRJJ5aU7tZO9HPTZXFG6iRIT0wa47
AR5nvkEKoIAjW5HaDKiJriuWLdtN4OXecWvxFsjR32ebz76U8aLpAK87GZEyTzBx
dV+lH0hwyT/y1cZQ/E5USePP4oKWF4uqquPee1OPeFMBo4CvuGyhZXD/18Ft/53Y
WIebvdiCqsOoabK3jEfdGExce63zDI0=
=MpRf
-----END PGP MESSAGE-----
A PGP encrypted message. The receiver's e-mail address is the pointer to the public
key in the sender's keyring. At the destination side, the receiver uses their own private
key.
Hi Gary,
"Outside of a dog, a book is man's best friend.
Inside of a dog, it's too dark to read."
Carol
1. URLs specifying the protocol https:// are directed to HTTP servers secured using SSL/TLS. The
client will automatically try to make a TCP connection to the server at port 443. The client initiates
the secure connection by sending a ClientHello message containing a Session identifier,
highest SSL version number supported by the client, and lists of supported crypto and compression
schemes (in preference order).
2. The server examines the Session ID and if it is still in the server's cache, it will attempt to reestablish a previous session with this client. If the Session ID is not recognized, the server will
continue with the handshake to establish a secure session by responding with a ServerHello
message. The ServerHello repeats the Session ID, indicates the SSL version to use for this
connection (which will be the highest SSL version supported by the server and client), and specifies
which encryption method and compression method to be used for this connection.
3. There are a number of other optional messages that the server might send, including:
a. Certificate, which carries the server's X.509 public key certificate (and, generally, the
server's public key). This message will always be sent unless the client and server have already
agreed upon some form of anonymous key exchange. (This message is normally sent.)
b. ServerKeyExchange, which will carry a premaster secret when the server's
Certificate message does not contain enough data for this purpose; used in some
key exchange schemes.
c. CertificateRequest, used to request the client's certificate in those scenarios where
client authentication is performed.
d. ServerHelloDone, indicating that the server has completed its portion of the key exchange
handshake.
4. The client now responds with a series of mandatory and optional messages:
a. Certificate, contains the client's public key certificate when it has been requested by the
server.
b. ClientKeyExchange, which usually carries the secret key to be used with the secret key
crypto scheme.
c. CertificateVerify, used to provide explicit verification of a client's certificate if the
server is authenticating the client.
5. TLS includes the change cipher spec protocol to indicate changes in the encryption method. This
protocol contains a single message, ChangeCipherSpec, which is encrypted and compressed
using the current (rather than the new) encryption and compression schemes. The
ChangeCipherSpec message is sent by both client and server to notify the other station that
all following information will employ the newly negotiated cipher spec and keys.
6. The Finished message is sent after a ChangeCipherSpec message to confirm that the key
exchange and authentication processes were successful.
7. At this point, both client and server can exchange application data using the session encryption and
compression schemes.
a. Side Note: It would probably be helpful to make some mention of SSL as it is used today.
Most of us have used SSL to engage in a secure, private transaction with some vendor. The
steps are something like this. During the SSL exchange with the vendor's secure server, the
server sends its certificate to our client software. The certificate includes the vendor's public
key and a signature from the CA that issued the vendor's certificate. Our browser software is
shipped with the major CAs' certificates which contains their public key; in that way we
authenticate the server. Note that the server does not use a certificate to authenticate us!
Instead, we are generally authenticated when we provide our credit card number; the server
checks to see if the card purchase will be authorized by the credit card company and, if so,
considers us valid and authenticated! While bidirectional authentication is certainly supported
by SSL, this form of asymmetric authentication is more commonly employed today since most
users don't have certificates.
b. Microsoft's Server Gated Cryptography (SGC) protocol is another extension to SSL/TLS. For
several decades, it has been illegal to generally export products from the U.S. that employed
secret-key cryptography with keys longer than 40 bits. For that reason, SSL/TLS has an
exportable version with weak (40-bit) keys and a domestic (North American) version with
strong (128-bit) keys. Within the last several years, however, use of strong SKC has been
approved for the worldwide financial community. SGC is an extension to SSL that allows
financial institutions using Windows NT servers to employ strong cryptography. Both the client
and server must implement SGC and the bank must have a valid SGC certificate. During the
initial handshake, the server will indicate support of SGC and supply its SGC certificate; if the
client wishes to use SGC and validates the server's SGC certificate, the session can employ
128-bit RC2, 128-bit RC4, 56-bit DES, or 168-bit 3DES. Microsoft supports SGC in the
Windows 95/98/NT versions of Internet Explorer 4.0, Internet Information Server (IIS) 4.0, and
Money 98.
Other Fields of Application
As mentioned above, SSL was designed to provide application-independent transaction security for
the Internet. Although the discussion above has focused on HTTP over SSL (https/TCP port 443),
SSL is also applicable to:
Universal Knowledge Solutions S.A.L.
- 199 -
File Transfer Protocol (FTP)
Internet Message Access Protocol v4 (IMAP4)
Lightweight Directory Access Protocol (LDAP)
Network News Transport Protocol (NNTP)
Post Office Protocol v3 (POP3)
Telnet
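Each of these protocols has a conventional TCP port assigned to its SSL-secured variant, distinct from the plain-text port. The following table-as-code lists the standard (IANA-assigned) port pairs:

```python
# Conventional TCP ports for these protocols when run over SSL/TLS
# (IANA assignments), with the plain-text port shown for comparison.
SSL_PORTS = {
    "https":   (443, 80),    # HTTP over SSL
    "ftps":    (990, 21),    # FTP control channel over SSL
    "imaps":   (993, 143),   # IMAP4 over SSL
    "ldaps":   (636, 389),   # LDAP over SSL
    "nntps":   (563, 119),   # NNTP over SSL
    "pop3s":   (995, 110),   # POP3 over SSL
    "telnets": (992, 23),    # Telnet over SSL
}

for name, (secure, plain) in SSL_PORTS.items():
    print(f"{name:8} port {secure:4} (plain-text port {plain})")
```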
VPN
1- Principle
The world has changed a lot in the last couple of decades. Instead of simply dealing with local or
regional concerns, many businesses now have to think about global markets and logistics. Many
companies have facilities spread out across the country or around the world, and there is one thing that
all of them need: a way to maintain fast, secure and reliable communications wherever their offices
are.
Until fairly recently, this has meant the use of leased lines to maintain a wide area network (WAN).
Leased lines, ranging from ISDN (integrated services digital network, 128 Kbps) to OC3 (Optical
Carrier-3, 155 Mbps) fiber, provided a company with a way to expand its private network beyond its
immediate geographic area.
A WAN had obvious advantages over a public network like the Internet when it came to reliability,
performance and security. But maintaining a WAN, particularly when using leased lines, can become
quite expensive and often rises in cost as the distance between the offices increases.
Remote-Access VPN
There are two common types of VPN. Remote-access, also called a virtual private dial-up network
(VPDN), is a user-to-LAN connection used by a company whose employees need to connect to the
private network from various remote locations. Typically, a corporation that wishes to set up a
large remote-access VPN will outsource to an enterprise service provider (ESP). The ESP sets up a
network access server (NAS) and provides the remote users with desktop client software for their
computers. The telecommuters can then dial a toll-free number to reach the NAS and use their VPN
client software to access the corporate network.
A good example of a company that needs a remote-access VPN would be a large firm with hundreds of
sales people in the field. Remote-access VPNs permit secure, encrypted connections between a
company's private network and remote users through a third-party service provider.
Site-to-Site VPN
Through the use of dedicated equipment and large-scale encryption, a company can connect multiple
fixed sites over a public network such as the Internet. Site-to-site VPNs can be one of two types:
Intranet-based - If a company has one or more remote locations that they wish to join in a
single private network, they can create an intranet VPN to connect LAN to LAN.
Extranet-based - When a company has a close relationship with another company (for
example, a partner, supplier or customer), they can build an extranet VPN that connects LAN to
LAN, and that allows all of the various companies to work in a shared environment.
5- VPN Security
A well-designed VPN uses several methods for keeping your connection and data secure:
Firewalls
Encryption
IPSec
AAA Server
VPN Security: Firewalls
A firewall provides a strong barrier between your private network and the Internet. You can set
firewalls to restrict the number of open ports, what type of packets are passed through and which
protocols are allowed through. Some VPN products, such as Cisco's 1700 routers, can be upgraded to
include firewall capabilities by running the appropriate Cisco IOS on them. You should already have a
good firewall in place before you implement a VPN, but a firewall can also be used to terminate the
VPN sessions.
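The port- and protocol-restriction behavior described above can be sketched as a stateless packet filter. This is only a toy illustration of the policy idea, not a real firewall; the allowed-set entries are example values:

```python
# Toy stateless packet filter illustrating the firewall policy
# described above: only explicitly listed (protocol, port) pairs
# pass.  The policy entries below are illustrative examples.
ALLOWED = {
    ("tcp", 443),   # HTTPS
    ("tcp", 22),    # SSH
    ("udp", 500),   # IKE key exchange, used by IPSec VPNs
}

def permit(protocol: str, dst_port: int) -> bool:
    # Default-deny: anything not explicitly allowed is dropped.
    return (protocol, dst_port) in ALLOWED

print(permit("tcp", 443))   # True: HTTPS is allowed through
print(permit("tcp", 23))    # False: Telnet is blocked
```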
VPN Security: Encryption
Encryption is the process of taking all the data that one computer is sending to another and encoding it
into a form that only the other computer will be able to decode. Most computer encryption systems
belong in one of two categories:
Symmetric-key encryption
Public-key encryption
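The difference between the two categories can be shown with deliberately toy ciphers: a shared-key XOR scheme for the symmetric case, and textbook RSA with tiny primes for the public-key case. Neither is remotely secure; they only illustrate who holds which key:

```python
from itertools import cycle

# --- Symmetric-key: both sides share the same secret key ----------
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher: applying it twice with the same key restores
    # the plaintext.  Do NOT use this for real security.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

msg = b"tunnel payload"
key = b"shared-secret"
assert xor_cipher(xor_cipher(msg, key), key) == msg

# --- Public-key: encrypt with the public key, decrypt with the ----
# --- private key.  Textbook RSA with tiny primes p=61, q=53.   ----
n, e, d = 3233, 17, 2753      # public key (n, e); private exponent d
m = 65                        # a small numeric "message", m < n
c = pow(m, e, n)              # anyone can encrypt with the public key
assert pow(c, d, n) == m      # only the private key recovers m
```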
VPN Security: IPSec
IPSec has two encryption modes: tunnel and transport. Tunnel encrypts the header and the payload of
each packet while transport only encrypts the payload. Only systems that are IPSec compliant can take
advantage of this protocol. Also, all devices must use a common key and the firewalls of each network
must have very similar security policies set up. IPSec can encrypt data between various devices, such
as:
Router to router
Firewall to router
PC to router
PC to server
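The difference between the two IPSec modes comes down to what gets encrypted. The sketch below uses a placeholder cipher and byte-string "packets" purely to show the structural difference; it is not real ESP processing:

```python
# Toy illustration of IPSec's two modes.  "encrypt" is a stand-in
# placeholder cipher, not real ESP processing.
def encrypt(data: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in data)   # placeholder, NOT secure

header, payload = b"[IP:A->B]", b"hello"

# Transport mode: the original IP header stays in the clear; only
# the payload is encrypted.
transport = header + encrypt(payload)

# Tunnel mode: the entire original packet (header AND payload) is
# encrypted, then wrapped in a new outer header, e.g. between two
# gateways.
tunnel = b"[IP:GW1->GW2]" + encrypt(header + payload)

print(transport.startswith(header))   # True: addresses still visible
print(header in tunnel)               # False: original header hidden
```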
VPN Security: AAA Servers
AAA (authentication, authorization and accounting) servers are used for more secure access in a
remote-access VPN environment. When a request to establish a session comes in from a dial-up client,
the request is proxied to the AAA server. AAA then checks the following:
Who you are (authentication)
What you are allowed to do (authorization)
What you actually do (accounting)
The accounting information is especially useful for tracking client use for security auditing, billing or
reporting purposes.
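The three checks map naturally onto code. The sketch below uses an in-memory dictionary where a real deployment would query a RADIUS or TACACS+ server; all user names, passwords, and service names are made up for illustration:

```python
# Minimal sketch of the three AAA checks.  Real deployments proxy
# these decisions to a RADIUS or TACACS+ server, not a local dict.
USERS = {"alice": {"password": "s3cret", "allowed": {"vpn", "mail"}}}
AUDIT_LOG = []   # accounting records: (user, service)

def aaa_check(user: str, password: str, service: str) -> bool:
    account = USERS.get(user)
    if account is None or account["password"] != password:
        return False                    # authentication: who you are
    if service not in account["allowed"]:
        return False                    # authorization: what you may do
    AUDIT_LOG.append((user, service))   # accounting: what you actually do
    return True

print(aaa_check("alice", "s3cret", "vpn"))    # True, and logged
print(aaa_check("alice", "s3cret", "admin"))  # False: not authorized
print(aaa_check("alice", "wrong", "vpn"))     # False: bad credentials
```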
6- VPN Technologies
Depending on the type of VPN (remote-access or site-to-site), you will need to put in place certain
components to build your VPN. These might include:
Desktop software client for each remote user
Dedicated hardware such as a VPN concentrator or secure PIX firewall
Dedicated VPN server for dial-up services
NAS (network access server) used by service provider for remote-user VPN access
VPN network and policy-management center
Because there is no widely accepted standard for implementing a VPN, many companies have
developed turn-key solutions on their own. In the next few sections, we'll discuss some of the solutions
offered by Cisco, one of the most prevalent networking technology companies.
VPN Concentrator
Incorporating the most advanced encryption and authentication techniques available, Cisco VPN
concentrators are built specifically for creating a remote-access VPN. They provide high availability,
high performance and scalability and include components, called scalable encryption processing
(SEP) modules, that enable users to easily increase capacity and throughput. The concentrators are
offered in models suitable for everything from small businesses with up to 100 remote-access users to
large organizations with up to 10,000 simultaneous remote users.
VPN-Optimized Router
Cisco's VPN-optimized routers provide scalability, routing, security and QoS (quality of service).
Based on the Cisco IOS (Internet Operating System) software, there is a router suitable for every
situation, from small-office/home-office (SOHO) access through central-site VPN aggregation, to
large-scale enterprise needs.
Cisco Secure PIX Firewall
An amazing piece of technology, the PIX (private Internet exchange) firewall combines dynamic
network address translation, proxy server, packet filtration, firewall and VPN capabilities in a single
piece of hardware.
Instead of using Cisco IOS, this device has a highly streamlined OS that trades the ability to handle a
variety of protocols for extreme robustness and performance by focusing on IP.
7- Tunneling
Most VPNs rely on tunneling to create a private network that reaches across the Internet. Essentially,
tunneling is the process of placing an entire packet within another packet and sending it over a
network. The protocol of the outer packet is understood by the network and both points, called tunnel
interfaces, where the packet enters and exits the network.
Tunneling requires three different protocols:
Carrier protocol - The protocol used by the network that the information is travelling over
Encapsulating protocol - The protocol (GRE, IPSec, L2F, PPTP, L2TP) that is wrapped
around the original data
Passenger protocol - The original data (IPX, NetBEUI, IP) being carried
Tunneling has amazing implications for VPNs. For example, you can place a packet that uses a
protocol not supported on the Internet (such as NetBEUI) inside an IP packet and send it safely over the
Internet. Or you could put a packet that uses a private (non-routable) IP address inside a packet that
uses a globally unique IP address to extend a private network over the Internet.
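The packet-in-packet idea can be sketched directly: the passenger packet travels whole inside the carrier, and the tunnel endpoint strips the outer wrapper. The framing below is a made-up simplification of what GRE/IPSec/L2TP actually do (the value 0x2F is GRE's real IP protocol number):

```python
# Sketch of tunneling: the passenger packet is wrapped whole inside
# a carrier packet.  The 1-byte tag naming the encapsulating
# protocol is a simplification of real GRE/IPSec/L2TP framing.
ENCAP_GRE = 0x2F   # 47, the IP protocol number assigned to GRE

def encapsulate(passenger: bytes, outer_src: bytes, outer_dst: bytes) -> bytes:
    outer_header = outer_src + b">" + outer_dst + bytes([ENCAP_GRE])
    return outer_header + passenger

def decapsulate(carrier: bytes, header_len: int) -> bytes:
    # The tunnel endpoint strips the outer wrapper, recovering the
    # passenger packet unchanged.
    return carrier[header_len:]

# A packet with private (non-routable) addresses travels safely
# inside a packet with globally unique addresses.
inner = b"[10.0.0.5 -> 10.0.0.9] data"
outer = encapsulate(inner, b"203.0.113.1", b"198.51.100.7")
assert decapsulate(outer, len(outer) - len(inner)) == inner
```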
Further Reading
Managing Information Systems Security and Privacy, D. Trcek, Springer, December 2005,
ISBN 3540281037.
Principles and Practices of Information Security, L. Volonino and S. Robinson, Pearson
Prentice Hall, New Jersey, 2004.
Threat Modeling, F. Swiderski and W. Snyder, Microsoft Press, Redmond, WA, 2004.
Digital Defense, T. Parenty, Harvard Business School Press, Boston, MA, 2003.