The United States Department of Defense (DoD) was created in 1947 by unifying the military branches under one department. Since then, the DoD has protected the nation from physical threats and, more recently, cyber threats. At the onset of the personal computer wave in the 1980s, the DoD began publishing computer security recommendations. Those recommendations have since developed into the DoD cybersecurity certifications that companies are required to hold today.
Technology increases rapidly in sophistication and capability, and cybersecurity risks grow with it. The DoD is at the heart of national security, so it is only natural that its security standards are among the highest in the world. Understanding the DoD's cybersecurity history can familiarize companies with federal compliance, prepare them for future DoD contracts, and help them establish strong protocols.
Rainbow Books
The Rainbow Series was an assortment of free documents released from the 1980s through the 1990s that provided security recommendations for U.S. government agencies. Each document's category is identifiable by its cover color, and those colors gave the series its "Rainbow" nickname. The following are examples of some of the DoD cybersecurity documents:
- Orange Book (CSC-STD-001-83) - DoD Trusted Computer System Evaluation Criteria (TCSEC), later reissued as DoD 5200.28-STD.
- Green Book (CSC-STD-002-85) - DoD Password Management Guideline.
- Light Yellow Book (CSC-STD-003-85) - Guidance for Applying the DoD Trusted Computer System Evaluation Criteria in Specific Environments.
- Yellow Book II (CSC-STD-004-85) - Technical Rationale Behind CSC-STD-003-85: Computer Security Requirements.
DITSCAP/DIACAP
The DoD Information Technology Security Certification and Accreditation Process (DITSCAP) was the DoD's first formal certification and accreditation process. It was created to show that contractor systems were safe to operate in the manner agreed upon in the contract. DIACAP later replaced DITSCAP.
The DoD Information Assurance Certification and Accreditation Process (DIACAP) established a formal standard for risk management. DIACAP sought to ensure that organizations applied risk management to their information systems. To fulfill that goal, DIACAP defined processes to identify, implement, validate, and manage information assurance measures and services. Interim DIACAP guidance appeared in 2006, and the process remained in force until it was superseded in 2014.
NIST SP 800-53, RMF, CSF
NIST first published SP 800-53 in 2005 and has revised it several times since; Revision 4 arrived in April 2013 and Revision 5 in September 2020. The publication provides a catalog of security and privacy controls for federal computer systems. Those controls are applied through the NIST Risk Management Framework (RMF), which the DoD adopted in 2014 to replace DIACAP.
The RMF was designed to aid in the discovery and mitigation of risk in federal systems. It utilizes a process to integrate security, privacy and cyber supply chain risk management activities into the system development lifecycle. The RMF can apply to legacy technology and new technology systems.
In 2014, NIST worked with the private sector and other federal agencies to create the Cybersecurity Framework (CSF).
- The CSF integrates industry standards and best practices to help organizations set up and manage their cybersecurity programs.
- The primary objective of CSF is to address cyber threats and support business goals.
- CSF generates a common language to simplify the understanding of threats to staff at all levels within a business.
DFARS/NIST SP 800-171
The Defense Federal Acquisition Regulation Supplement (DFARS) established rules on the handling of covered defense information, including the reporting of cyber incidents. DFARS' main objective is to protect the DoD's unclassified information on a defense contractor's internal information systems.
Furthermore, NIST released NIST SP 800-171 to guide standards and best practices in the handling of controlled unclassified information (CUI) within non-federal systems and organizations. Data classified as CUI does not require clearance to view but isn't meant for public distribution. The requirements apply to all non-federal systems that handle (process, store or transmit) CUI or that provide protection for such components.
Notably, the CMMC 2.0 model uses NIST SP 800-171 as its base.
CMMC
For a considerable time, the NIST SP 800-171 framework was the standard to guide DoD contractors and subcontractors in managing CUI. With the rapid increase in cyber threats across the globe, the Defense Industrial Base (DIB) sector especially needed an enhanced model for protection. The answer to this problem is the Cybersecurity Maturity Model Certification (CMMC).
The CMMC launched on January 31, 2020, as a unified standard for DoD cybersecurity practices; CMMC 2.0, announced in November 2021, streamlined the original five levels to three. The model builds assessment requirements on top of NIST SP 800-171 compliance as the federal government's mechanism for protecting CUI. The CMMC has three levels at which a defense contractor can become certified in order to bid on DoD contracts. Starting with Level 1, each subsequent level requires more security controls and practices. The CMMC ensures security compliance and safety within the supply chain for DoD work.
- One key difference between CMMC and NIST SP 800-171 is the need for third-party assessments. While NIST SP 800-171 required only self-assessments, CMMC Levels 2 and 3 generally require an independent assessor to audit remediation and certify compliance.
- As a precursor to CMMC, the DFARS Interim Rule (252.204-7019) establishes requirements for NIST SP 800-171 compliance scoring (SPRS score) and remediation; a toy sketch of the scoring arithmetic follows this list.
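To make the scoring mechanics concrete, here is a minimal sketch of how a basic NIST SP 800-171 self-assessment score is computed under the DoD Assessment Methodology: the score starts at the maximum of 110 (one point per requirement), and a weight is deducted for each unimplemented requirement, so a score can even go negative. The control IDs and weights below are illustrative placeholders, not the official weighting table.

```python
# Illustrative SPRS-style scoring sketch for NIST SP 800-171 under the DoD
# Assessment Methodology: start at 110 and subtract a weight for each
# unimplemented requirement. IDs and weights here are hypothetical examples.

MAX_SCORE = 110

# Hypothetical unimplemented requirements mapped to assumed deduction weights.
unimplemented = {
    "3.1.1": 5,   # e.g., limit system access to authorized users
    "3.5.3": 5,   # e.g., multifactor authentication
    "3.12.4": 1,  # e.g., system security plan documentation
}

def sprs_score(unimplemented_weights: dict[str, int]) -> int:
    """Return a basic self-assessment score: 110 minus total deductions."""
    return MAX_SCORE - sum(unimplemented_weights.values())

if __name__ == "__main__":
    print(f"Basic assessment score: {sprs_score(unimplemented)}")  # 99
```

Under the interim rule, a score like this (with a date for reaching 110) is what contractors post to SPRS before award.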
Learn More About the DoD Cybersecurity Requirements
The DoD has a long history of setting standards to protect national security. As technology has progressed, so have the responses to threats and the standards for achieving a strong security posture. Accordingly, DoD requirements have moved from reasonable, self-assessed cybersecurity measures to strong, independently verified cybersecurity controls.
If a company relies on DoD contracts, it must become CMMC certified. The certification level required depends on the contract and the CUI involved. Regardless of the required level, however, a contractor often needs a professional third party to get it where it needs to be.
InfoDefense has helped many clients achieve compliance standards to make them successful. If you are a defense contractor and need help becoming CMMC certified, contact us today for more information.
A History of DoD Cybersecurity
The Trusted Computer System Evaluation Criteria (TCSEC), also known as the Orange Book, was a standard developed by the United States Department of Defense (DoD) in 1983 to evaluate the security of computer systems. TCSEC was part of the DoD's Rainbow Series, which consisted of various color-coded books offering guidelines and standards for computer security.
The primary goal of TCSEC was to provide a standardized approach to assessing the security features and assurance levels of computer systems, particularly those used in sensitive or classified environments. The criteria were divided into four hierarchical classes - D, C, B, and A - with each class representing a higher level of security and assurance (a short sketch of this ordering follows the list):
- Class D: Minimal protection, not considered secure by the DoD.
- Class C: Discretionary protection, providing basic security features, such as user-level access controls. Divided into two subclasses: C1 and C2, with C2 offering more rigorous controls.
- Class B: Mandatory protection, implementing mandatory access controls and stronger security measures. Subdivided into three subclasses: B1, B2, and B3, each offering increased security and assurance.
- Class A: Verified protection, offering the highest level of security with formal methods for system design, implementation, and verification.
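Since the classes and their subclasses form a strict hierarchy (D below C1, C2, B1, B2, B3, and A1 at the top), a short sketch can make the ordering concrete. This model is an illustration for this article, not part of TCSEC itself.

```python
# A minimal sketch modeling the TCSEC classes as an ordered hierarchy, so a
# system's evaluated class can be compared against a required minimum.
from enum import IntEnum

class TCSECClass(IntEnum):
    D = 0    # minimal protection
    C1 = 1   # discretionary protection
    C2 = 2   # controlled access protection
    B1 = 3   # labeled security protection
    B2 = 4   # structured protection
    B3 = 5   # security domains
    A1 = 6   # verified design

def meets_requirement(evaluated: TCSECClass, required: TCSECClass) -> bool:
    """Higher classes subsume the requirements of lower ones."""
    return evaluated >= required

print(meets_requirement(TCSECClass.B2, TCSECClass.C2))  # True
print(meets_requirement(TCSECClass.C1, TCSECClass.B1))  # False
```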
Although TCSEC was a pioneering effort in computer security evaluation, it eventually became outdated due to the rapid advancement of technology and the evolving cybersecurity landscape. It was replaced by newer evaluation frameworks like the Common Criteria for Information Technology Security Evaluation, which is now widely used internationally for assessing the security of IT products and systems.
How did the Orange Book influence Zero Trust? The Trusted Computer System Evaluation Criteria (TCSEC) and the Zero Trust security model are two distinct approaches to computer security, each originating from different time periods and addressing different aspects of security.
The TCSEC, or Orange Book, focused on evaluating the security of computer systems based on hierarchical classes. It provided a framework for assessing the security features and assurance levels of computer systems, particularly in sensitive or classified environments. However, TCSEC was a product of its time, and its relevance has waned with the rapid advancement of technology and the evolving cybersecurity landscape.
On the other hand, the Zero Trust model is a modern security concept that assumes no trust by default for any user, device, or system, regardless of whether internal or external to the network. This model emphasizes continuous authentication, authorization, and monitoring of all access requests and interactions within a network. It operates on the principle of "never trust, always verify," which significantly differs from the traditional perimeter-based security approaches.
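A minimal sketch can make the "never trust, always verify" principle concrete. Everything below (the attribute names, the entitlement table, the checks) is an illustrative assumption rather than any particular Zero Trust product's API; the point is that network location confers no access.

```python
# A minimal sketch of a Zero Trust policy decision: every request is verified
# on its own merits (identity, device posture, entitlement), regardless of
# network location. All names and checks here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool
    device_compliant: bool       # e.g., patched, disk-encrypted, managed
    resource: str
    from_internal_network: bool  # deliberately ignored by the policy

# Hypothetical entitlements: which users may reach which resources.
ENTITLEMENTS = {("alice", "payroll-db"), ("bob", "build-server")}

def authorize(req: AccessRequest) -> bool:
    """'Never trust, always verify': network location grants nothing."""
    return (
        req.mfa_verified
        and req.device_compliant
        and (req.user, req.resource) in ENTITLEMENTS
    )

# An internal request still fails without MFA and a matching entitlement.
print(authorize(AccessRequest("eve", False, True, "payroll-db", True)))    # False
print(authorize(AccessRequest("alice", True, True, "payroll-db", False)))  # True
```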
While the Orange Book was an important milestone in computer security evaluation, it is not a direct precursor to the Zero Trust model. TCSEC focused on the assessment of system security features. In contrast, Zero Trust is a comprehensive strategy for implementing security controls and practices that account for modern IT environments' complexity and dynamic nature.
The Orange Book's impact on computer security is undeniable, and its contributions helped pave the way for modern models like Zero Trust. Though only indirectly related, TCSEC was a vital early step in the evolution of security.
The Orange Book (1983): A Precursor to Zero Trust?
Abstract
This paper discusses advantages and disadvantages of security policies for databases. While database security will be defined from a broader perspective, the main attention is given to access control models that prevent unauthorized disclosure or modification of database information. In addition to Discretionary and Mandatory Security, which are identified as requirements for the higher security classes of evaluation criteria, Adapted Mandatory Security, the Clark and Wilson Model, and the Personal Knowledge Approach will be discussed.
INTRODUCTION
It is widely recognized that information stored in databases is often of important economic value to an enterprise or to the public. Indeed, many databases or portions of databases are so crucial that their corruption or destruction (malicious or accidental) could endanger the economic functioning of the enterprise or even human life. Database security is concerned with ensuring the secrecy, integrity, and availability of data stored in a database, while safety refers to all actions that protect against faults or malicious system behavior that could endanger human lives, property, the environment, or a nation. Denning (1989) has shown that security generally supports safety and that the two topics are closely related. This paper takes more of a security perspective by discussing advantages and disadvantages of different database security policies.
The security features of the database management system (DBMS) enforce the security requirements and can be classified into the following categories:
(1) Identification, Authentication, Audit. Usually, before getting access to the database, each user has to identify themselves to the computer system. Authentication is a way to verify the identity of a user at log-on time. Auditing is the examination of all security-relevant events by a process or person that was not responsible for causing the event.
(2) Authorization. Authorization is the specification of a set of rules about who has what type of access to what information. Authorization policies govern the disclosure and modification of information (a toy sketch of such rules follows these categories).
(3) Integrity, Consistency. Integrity constraints are rules that define the correct state of a database during database operations and can therefore protect against malicious modification of information.
In this paper we do not take such a broad perspective of database security; we focus mainly on authorization policies. This is legitimate because identification, authentication, and auditing normally fall within the scope of the underlying operating system, while database integrity and consistency are the subject of the closely related topic of semantic data modeling or depend on the physical design of the DBMS software.

Computer security is currently the subject of much national and international standardization work. The best known result is the Orange Book (1985) of the US National Computer Security Center, along with several national proposals that have extended it; for example, see the German Criteria (1989), the Canadian Criteria (1991), or the proposal of the EC, the EC Criteria (1991). Relevant for database security is the interpretation of the Orange Book for databases, the so-called Purple Book (1990). Based on the Orange Book, the Purple Book has developed a metric with which DBMSs can be evaluated for security. It consists of a number of levels ranging from A1 down to D, and, for each level, a list of requirements that are necessary for systems trying to achieve that level of security. The Purple Book requirements tend to focus on secrecy of information as well as unauthorized and improper modification of information. As identified above, unauthorized disclosure and modification fall into the scope of access control policies. The remainder of this paper contains a comparative study of the most relevant security models for databases covering access control strategies, taken from a critical point of view.
DATABASE SECURITY MODELS
Most security models implemented in commercial DBMS products are based on Discretionary Access Controls (DAC). As will be shown, DAC is of only limited use in systems that are used for security-critical applications. More restrictive is data protection in multilevel secure DBMSs that, in addition to DAC, support Mandatory Access Controls (MAC)...
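As a hedged illustration of the multilevel (MAC) approach contrasted with DAC above, the following sketch encodes the classic Bell-LaPadula rules (no read up, no write down), one common formalization of mandatory access control. The levels and checks are illustrative and are not drawn from the paper's truncated text.

```python
# A minimal sketch of mandatory access control in the multilevel style:
# each subject and object carries a classification level, and the
# Bell-LaPadula rules forbid reading up and writing down.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def may_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level: str, object_level: str) -> bool:
    """*-property: no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(may_read("secret", "top_secret"))    # False: reading up is denied
print(may_write("secret", "confidential")) # False: writing down is denied
```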
Security Policies for Databases
Abstract
The recent emergence of cloud computing has drastically altered everyone's perception of infrastructure architectures, software delivery, and development models. Projected as an evolutionary step following the transition from mainframe computers to client/server deployment models, cloud computing encompasses elements from grid computing, utility computing, and autonomic computing into an innovative deployment architecture. This rapid transition towards the clouds has fuelled concerns on a critical issue for the success of information systems: communication and information security. From a security perspective, a number of uncharted risks and challenges have been introduced by this relocation to the clouds, deteriorating much of the effectiveness of traditional protection mechanisms. As a result, the aim of this paper is twofold: firstly, to evaluate cloud security by identifying unique security requirements, and secondly, to attempt to present a viable solution that eliminates these potential threats. This paper proposes introducing a Trusted Third Party, tasked with assuring specific security characteristics within a cloud environment. The proposed solution calls upon cryptography, specifically Public Key Infrastructure operating in concert with SSO and LDAP, to ensure the authentication, integrity, and confidentiality of involved data and communications. The solution presents a horizontal level of service, available to all implicated entities, that realizes a security mesh within which essential trust is maintained.
1. Introduction
Throughout computer science history, numerous attempts have been made to disengage users from computer hardware needs, from the time-sharing utilities envisioned in the 1960s and the network computers of the 1990s to the commercial grid systems of more recent years. This abstraction is steadily becoming a reality as a number of academic and business leaders in this field are spiralling towards cloud computing. Cloud computing is an innovative Information System (IS) architecture, visualized as what may be the future of computing, a driving force demanding that its audience rethink their understanding of operating systems, client-server architectures, and browsers. Cloud computing has relieved users of hardware requirements while reducing overall client-side requirements and complexity.
As cloud computing achieves increased popularity, concerns are being voiced about the security issues introduced through the adoption of this new model. The effectiveness and efficiency of traditional protection mechanisms are being reconsidered, as the characteristics of this innovative deployment model differ widely from those of traditional architectures. In this paper we attempt to demystify the unique security challenges introduced in a cloud environment and clarify issues from a security perspective. The notion of trust and security is investigated, and specific security requirements are documented. This paper proposes a security solution which relieves clients of the security burden by trusting a Third Party. The Third Party is tasked with assuring specific security characteristics within a distributed information system, while realizing a trust mesh between involved entities, forming federations of clouds. The research methodology adopted towards achieving this goal is based on software engineering and information systems design approaches. The basic steps for designing the system architecture include the collection of requirements and the analysis of abstract functional specifications.
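As a rough illustration of the PKI primitives such a Trusted Third Party solution relies on, the sketch below uses the third-party Python `cryptography` package to sign a message and verify the signature, providing origin authentication and integrity. In a full PKI the verifier would obtain the public key from a certificate issued by the Trusted Third Party acting as a certification authority; that step is omitted here, and the message content is invented.

```python
# A sketch of RSA signing for origin authentication and integrity, using the
# 'cryptography' package. The key pair stands in for one certified identity;
# certificate issuance/distribution by a Trusted Third Party is omitted.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In a real deployment the private key belongs to a cloud entity and the
# public key is conveyed in a certificate signed by the Trusted Third Party.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"service response: resource listing for an example tenant"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: message is authentic and unmodified")
except InvalidSignature:
    print("signature invalid: message was tampered with or forged")
```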
2. Grid and cloud computing
Grid Computing emerged in the early 1990s, as high performance computers were inter-connected via fast data communication links with the aim of supporting complex calculations and data-intensive scientific applications. Grid computing is defined as "a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities". Cloud Computing has resulted from the convergence of Grid Computing, Utility Computing and SaaS, and essentially represents the increasing trend towards the external deployment of IT resources, such as computational power, storage or business applications, and obtaining them as services [1]. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [2].
The name cloud computing was inspired by the cloud symbol that is often used to represent the Internet in flow charts and diagrams. A distinct migration to the clouds has been taking place over recent years, with end users, "bit by bit", maintaining a growing number of personal data, including bookmarks, photographs, music files and much more, on remote servers accessible via a network. Cloud computing is empowered by virtualization technology; a technology that actually dates back to 1967 but for decades was available only on mainframe systems. In its quintessence, a host computer runs an application known as a hypervisor; this creates one or more virtual machines, which simulate physical computers so faithfully that the simulations can run any software, from operating systems to end-user applications [3]. At a hardware level, a number of physical devices, including processors, hard drives and network devices, are located in datacenters, independent of geographical location, which are responsible for storage and processing needs. Above this, the combination of software layers, the virtualization layer and the management layer, allows for the effective management of servers. Virtualization is a critical element of cloud implementations and is used to provide the essential cloud characteristics of location independence, resource pooling and rapid elasticity. Differing from traditional network topologies, such as client-server, cloud computing is able to offer robustness and alleviate traffic congestion issues. The management layer is able to monitor traffic and respond to peaks or drops with the creation of new servers or the destruction of unnecessary ones. The management layer also has the ability to implement security monitoring and rules throughout the cloud. According to Merrill Lynch, what makes cloud computing new and differentiates it from Grid Computing is virtualization: "Cloud computing, unlike grid computing, leverages virtualization to maximize computing power. Virtualization, by separating the logical from the physical, resolves some of the challenges faced by grid computing" [4]. While Grid Computing achieves high utilization through the allocation of multiple servers onto a single task or job, the virtualization of servers in cloud computing achieves high utilization by allowing one server to compute several tasks concurrently [5]. While most authors acknowledge similarities between these two paradigms, opinions cluster around the statement that cloud computing has evolved from Grid Computing and that Grid Computing is the foundation of cloud computing.
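As a toy illustration of the management-layer behavior just described (creating servers on traffic peaks and destroying unnecessary ones on drops), consider the following sketch; the thresholds and load metric are invented for the example and do not reflect any specific cloud platform.

```python
# A toy sketch of a management-layer control loop: monitor average load and
# create or destroy (virtual) servers in response to peaks and drops.
def scale(server_count: int, avg_load: float,
          scale_up_at: float = 0.8, scale_down_at: float = 0.3) -> int:
    """Return the new server count for the observed average load (0..1)."""
    if avg_load > scale_up_at:
        return server_count + 1     # provision a new virtual server
    if avg_load < scale_down_at and server_count > 1:
        return server_count - 1     # destroy an unnecessary server
    return server_count

servers = 2
for load in (0.9, 0.95, 0.5, 0.2, 0.1):  # simulated traffic samples
    servers = scale(servers, load)
    print(f"load={load:.2f} -> servers={servers}")
```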
In cloud computing, the available service models are:...
Addressing Cloud Computing Security Issues
Abstract
Internet usage has grown exponentially, with individuals and companies performing multiple daily transactions in cyberspace rather than in the real world. The coronavirus (COVID-19) pandemic has accelerated this process. As a result of the widespread usage of the digital environment, traditional crimes have also shifted to the digital space. Emerging technologies such as cloud computing, the Internet of Things (IoT), social media, wireless communication, and cryptocurrencies are raising security concerns in cyberspace. Recently, cyber criminals have started to use cyber attacks as a service to automate attacks and leverage their impact. Attackers exploit vulnerabilities that exist in hardware, software, and communication layers. Various types of cyber attacks include distributed denial of service (DDoS), phishing, man-in-the-middle, password, remote, privilege escalation, and malware. Due to new-generation attacks and evasion techniques, traditional protection systems such as firewalls, intrusion detection systems, antivirus software, access control lists, etc., are no longer effective in detecting these sophisticated attacks. Therefore, there is an urgent need to find innovative and more feasible solutions to prevent cyber attacks. The paper first extensively explains the main reasons for cyber attacks. Then, it reviews the most recent attacks, attack patterns, and detection techniques. Thirdly, the article discusses contemporary technical and nontechnical solutions for recognizing attacks in advance. Using trending technologies such as machine learning, deep learning, cloud platforms, big data, and blockchain can be a promising solution for current and future cyber attacks. These technological solutions may assist in detecting malware, intrusion detection, spam identification, DNS attack classification, fraud detection, recognizing hidden channels, and distinguishing advanced persistent threats. However, some promising solutions, especially machine learning and deep learning, are not resistant to evasion techniques, which must be considered when proposing solutions against intelligent cyber attacks.
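As a hedged illustration of the machine-learning detection approach the abstract highlights, the sketch below trains a scikit-learn classifier to separate benign from attack-like network flows. The data is synthetic and the features invented; a real detector would be trained on labeled traffic, and, as the abstract cautions, such models remain susceptible to evasion techniques.

```python
# A toy sketch of ML-based attack detection: train a classifier to separate
# 'benign' from 'attack' network flows. Data is synthetic and features
# (packets/sec, mean packet size, distinct ports touched) are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
benign = rng.normal([50, 800, 3], [20, 200, 2], size=(500, 3))
attack = rng.normal([900, 100, 40], [300, 50, 10], size=(500, 3))  # flood-like
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = attack

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```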
1. Introduction
The Internet, which emerged as a communication and sharing environment, quickly reached the geography of the whole world. In particular, the 21st century has been, and continues to be, the century in which world geography is intertwined with the Internet. People worldwide can communicate with each other at high speeds through the Internet, and strong bonds have formed between states in critical commercial, political, economic, and sociocultural fields. The Internet has three basic entities: computers, users, and networks [1]. Network technologies have developed thanks to constantly changing computer technologies and increasingly capable user communities. However, the developing and widespread use of network technologies has also brought about critical security problems. Therefore, attempts have been made to provide a cyber security environment that protects the assets of institutions, organizations, and individuals [2].
The word "cyber" is used to describe networks with infrastructure information systems [3], also referred to as "virtual reality". Cyber security protects the security, integrity, and confidentiality of communication, life, integration, tangible or intangible assets, and data in an electronic environment established by institutions, organizations, and individuals in information systems. In summary, cyber security ensures the security of virtual life on cyber networks. The infrastructure of information systems data integrity protection, and confidentiality are protected under the name of cyber security [4]. The primary purpose of cyber security is to secure the data of individuals and institutions on the Internet. Ignorance of this vital issue can cause serious threats. For instance, someone with malicious intentions can infiltrate devices over the network and hijack the data [5] or steal user credentials such as credit card details or user ID passwords. Such attacks may cause financial damage to individuals, institutions, big companies, and even state governments. According to recent studies, cyberattacks cost billions of dollars to the world economy. Nowadays, cyber attacks are not just simple computer attacks but big businesses that large companies and state governments support. Each example is a cyber attack that can only be prevented with a good cyber security policy [6].
Considering all of these points together, in this study we aim to conduct a comprehensive analysis explaining the basics and importance of cyber security. In this context, all aspects of cyber security are discussed, shared risks and threats are presented, and solutions to prevent them are examined. The result is a guided study for researchers in this field, from the fundamentals of cyber and network security to common attacks.
To facilitate a more transparent and precise understanding of the paper's language, frequently used phrases and their acronyms are listed in Table 1. This review paper is, to our knowledge, the most comprehensive article on cyber security from all perspectives. It differs from previous survey papers in many aspects. Previous studies mainly focused on one or two subjects, such as cyber security threats, attacks, using blockchain technology, using machine learning techniques in cyber security, challenges, or historical development [7,8,9,10,11,12,13]. In contrast, this study discusses various aspects of cyber security in detail. To understand the cyber security problem altogether, we break down the problem into smaller pieces, with each piece extensively covered in different sections. Each section lists the attack vector, possible remedies for each attack class, and challenges. Although some sections mention the same or similar information (threats, vulnerabilities, attacks, etc.), the aspects of the depicted information are different. Moreover, in some sections, more detailed information is provided on similar subjects. This paper contributes to researchers and anyone who wants to learn more about cyber security, from essential to advanced levels. The main topics covered in the article are summarized as follows:...
A Comprehensive Review of Cyber Security Vulnerabilities, Threats, Attacks, and Solutions