Presentation Transcript

  1. Summary of Cyber Security Modules February 4, 2011 Lecture #7 Dr. Bhavani Thuraisingham

  2. Cyber Security • Security traditionally has been about CIA (Confidentiality, Integrity, Availability) • Security now also includes areas like Trustworthiness, Quality, and Privacy • Dependability includes Security, Reliability and Fault Tolerance • Initially the term used was Computer Security (Compusec); it then evolved into Infosec (Information Security) to include data and networks; now, with the web, it is called Cyber Security

  3. C.I.A. • Confidentiality: Preventing unauthorized disclosure • Integrity: Preventing unauthorized modification • Availability: Preventing denial of service

  4. Ten Major Modules of Cyber Security • Information Security and Risk Management • Access Control • Security Architecture and Design • Cryptography • Network Security • Applications Security (aka Data and Applications Security) • Legal Regulations, Compliance and Investigations (aka Digital Forensics) • Physical and Environmental Security • Business Continuity Planning • Operations Security • Not included: Hardware Security, Performance Analysis, Ethical Hacking and Penetration Testing, among others

  5. Information Governance and Risk Management • Security Management, Administration and Governance • Policies, Standards, Guidelines, Procedures • Information Classification • Roles and Responsibilities • Risk Management and Analysis • Best Practices

  6. Security Management, Administration and Governance • Information security (ISec) describes activities that relate to the protection of information and information infrastructure assets against the risks of loss, misuse, disclosure or damage. Information security management (ISM) describes controls that an organization needs to implement to ensure that it is sensibly managing these risks. • The risks to these assets can be calculated by analysis of the following issues: • Threats to your assets. These are unwanted events that could cause the deliberate or accidental loss, damage or misuse of the assets • Vulnerabilities. How susceptible your assets are to attack • Impact. The magnitude of the potential loss or the seriousness of the event.

  7. Policies, Standards, Guidelines and Procedures • Policies are the top tier of formalized security documents. These high-level documents offer a general statement about the organization’s assets and what level of protection they should have. • Well-written policies should spell out who’s responsible for security, what needs to be protected, and what is an acceptable level of risk. • Standards are much more specific than policies. Standards are tactical documents because they lay out specific steps or processes required to meet a certain requirement. As an example, a standard might set a mandatory requirement that all email communication be encrypted. Although the standard mandates encryption, it doesn’t spell out how the encryption is to be done; that is left for the procedure.

  8. Policies, Standards, Guidelines and Procedures • A baseline is a minimum level of security that a system, network, or device must adhere to. Baselines are usually mapped to industry standards. As an example, an organization might specify that all computer systems comply with a minimum Trusted Computer System Evaluation Criteria (TCSEC) C2 standard. • A guideline points to a statement in a policy or procedure by which to determine a course of action. It’s a recommendation or suggestion of how things should be done. It is meant to be flexible so it can be customized for individual situations. • A procedure is the most specific of security documents. A procedure is a detailed, in-depth, step-by-step document that details exactly what is to be done. • A security model is a scheme for specifying and enforcing security policies. Examples include: Bell and LaPadula, Biba, Access control lists

  9. Information Classification • It is essential to classify information according to its actual value and level of sensitivity in order to deploy the appropriate level of security. • A system of classification should ideally be: • simple to understand and to administer • effective in order to determine the level of protection the information is given. • applied uniformly throughout the whole organization (note: when in any doubt, the higher, more secure classification should be employed).

  10. Roles and Responsibilities • Internal Roles • Executive Management; Information System Security Professionals; Owners: Data and System Owners; Custodians • Operational Staff; Users; Legal, Compliance and Privacy Officers; Internal Auditors; Physical Security Officers • External Roles • Vendors and Suppliers; Contractors; Temporary Employees; Customers; Business Partners; Outsourced Relationships; Outsourced Security • Human Resources • Employee development and management; Hiring and termination; Signed employee agreements; Education

  11. Risk Management and Analysis • Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or an act of nature) that has the potential to cause harm. • The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It should be pointed out that it is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called residual risk.

  12. Risk Management and Analysis • A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. • The assessment may use a subjective qualitative analysis based on informed opinion (scenarios), or where reliable dollar figures and historical information are available, the analysis may use quantitative analysis • For any given risk, Executive Management can choose to accept the risk based upon the relative low value of the asset, the relative low frequency of occurrence, and the relative low impact on the business. Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or out-sourcing to another business.

  13. Risk Management and Analysis • Identification of assets and estimating their value. Include: people, buildings, hardware, software, data, supplies. • Conduct a threat assessment. Include: Acts of nature, accidents, malicious acts originating from inside or outside the organization. • Conduct a vulnerability assessment, and for each vulnerability, calculate the probability that it will be exploited. Evaluate policies, procedures, standards, training, physical security, and so on. • Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis. • Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost effectiveness, and value of the asset. • Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost effective protection without discernible loss of productivity.

  14. Risk Management and Analysis • Step 1: Estimate Potential Loss • SLE = AV ($) x EF (%) • SLE: Single Loss Expectancy, AV: Asset Value, EF: Exposure Factor (percentage of asset value) • Step 2: Conduct Threat Likelihood Analysis • ARO: Annualized Rate of Occurrence • Number of times per year that an incident is likely to occur • Step 3: Calculate ALE • ALE: Annual Loss Expectancy • ALE = SLE x ARO
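The three steps above can be sketched directly in code. The dollar figures here are hypothetical, chosen only to illustrate the formulas: a $100,000 asset, 25% of its value lost per incident, one incident expected every two years.

```python
# Sketch of the slide's SLE/ALE formulas with hypothetical example figures.
def single_loss_expectancy(asset_value, exposure_factor):
    # SLE = AV x EF, where EF is the fraction of asset value lost per incident
    return asset_value * exposure_factor

def annual_loss_expectancy(sle, aro):
    # ALE = SLE x ARO, where ARO is the expected number of incidents per year
    return sle * aro

sle = single_loss_expectancy(100_000, 0.25)   # 25000.0
ale = annual_loss_expectancy(sle, 0.5)        # 12500.0 — one incident every 2 years
print(sle, ale)
```

The ALE gives a yearly dollar figure that can be compared against the annual cost of a proposed control when deciding whether to accept, mitigate, or transfer the risk.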

  15. Security Best Practices • Job Rotation • Separation of Duty • Security Awareness training • Ethics Education

  16. Security Architecture • Security critical components of the system • Trusted Computing Base • Reference Monitor and Security Kernel • Security Perimeter • Security Policy • Least Privilege

  17. Trusted Computing Base • The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance with the security policy. • The careful design and implementation of a system's trusted computing base is paramount to its overall security. Modern operating systems strive to reduce the size of the TCB so that an exhaustive examination of its code base (by means of manual or computer-assisted software audit or program verification) becomes feasible.

  18. Reference Monitor and Security Kernel • In operating systems architecture, a reference monitor is a tamperproof, always-invoked, and small-enough-to-be-fully-tested-and-analyzed module that controls all software access to data objects or devices (verifiable). • The reference monitor verifies that the request is allowed by the access control policy. • For example, Windows 3.x and 9x operating systems were not built with a reference monitor, whereas the Windows NT line, which also includes Windows 2000 and Windows XP, was designed to contain a reference monitor, although it is not clear that its properties (tamperproof, etc.) have ever been independently verified, or what level of computer security it was intended to provide.

  19. Security Models • Bell and LaPadula (BLP) Confidentiality Model • Biba Integrity Model (opposite to BLP) • Clark Wilson Integrity Model • Other Models • Information Flow Model • Non-Interference Model • Graham-Denning Model • Harrison-Ruzzo-Ullman Model • Lattice Model

  20. Bell and LaPadula • A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments, making up the security level) to determine if the subject is authorized for the specific access mode. The clearance/classification scheme is expressed in terms of a lattice. The model defines two mandatory access control (MAC) rules and one discretionary access control (DAC) rule with three security properties: • The Simple Security Property - a subject at a given security level may not read an object at a higher security level (no read-up). • The *-property (read "star"-property) - a subject at a given security level must not write to any object at a lower security level (no write-down). The *-property is also known as the Confinement property. • The Discretionary Security Property - use of an access matrix to specify the discretionary access control.
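The two mandatory BLP rules can be sketched as predicate checks over an ordering of levels. This is a minimal sketch: the level names are illustrative, and a real BLP lattice also carries compartment sets, not just a linear clearance ordering.

```python
# Minimal BLP mandatory-access sketch over a linear ordering of levels.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level):
    # Simple Security Property: no read-up
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property (confinement): no write-down
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_read("secret", "confidential")        # read-down is allowed
assert not can_read("confidential", "secret")    # read-up is denied
assert can_write("confidential", "secret")       # write-up is allowed
assert not can_write("secret", "confidential")   # write-down is denied
```

The discretionary property would be layered on top of this as an access matrix check; the mandatory checks above apply regardless of what the matrix says.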

  21. Biba • In general, preservation of data integrity has three goals: • Prevent data modification by unauthorized parties • Prevent unauthorized data modification by authorized parties • Maintain internal and external consistency (i.e. data reflects the real world) • The Biba security model is directed toward data integrity (rather than confidentiality) and is characterized by the phrase: "no read down, no write up". This is in contrast to the Bell-LaPadula model, which is characterized by the phrase "no write down, no read up". • The Biba model defines a set of security rules similar to the Bell-LaPadula model. These rules are the reverse of the Bell-LaPadula rules: • The Simple Integrity Axiom states that a subject at a given level of integrity must not read an object at a lower integrity level (no read down). • The * (star) Integrity Axiom states that a subject at a given level of integrity must not write to any object at a higher level of integrity (no write up).
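Since Biba is the dual of BLP, its two axioms can be sketched the same way with the comparisons reversed. The integrity labels here are illustrative.

```python
# Minimal Biba integrity sketch: the dual of the BLP comparisons.
INTEGRITY = {"low": 0, "medium": 1, "high": 2}

def can_read(subject_level, object_level):
    # Simple Integrity Axiom: no read-down
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def can_write(subject_level, object_level):
    # * Integrity Axiom: no write-up
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

assert can_read("low", "high")        # reading higher-integrity data is safe
assert not can_read("high", "low")    # no read-down: don't ingest dirty data
assert can_write("high", "low")       # writing down cannot corrupt anything above
assert not can_write("low", "high")   # no write-up: don't contaminate clean data
```

The intuition: information may only flow *down* the integrity ordering, so low-integrity input can never contaminate high-integrity data.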

  22. Clark Wilson Model • The Clark-Wilson integrity model provides a foundation for specifying and analyzing an integrity policy for a computing system. • The model is primarily concerned with formalizing the notion of information integrity. • Information integrity is maintained by preventing corruption of data items in a system due to either error or malicious intent. • An integrity policy describes how the data items in the system should be kept valid from one state of the system to the next and specifies the capabilities of various principals in the system. • The model defines enforcement rules and certification rules. • The model’s enforcement and certification rules define data items and processes that provide the basis for an integrity policy. The core of the model is based on the notion of a transaction.

  23. Clark Wilson Model • A well-formed transaction is a series of operations that transition a system from one consistent state to another consistent state. • In this model the integrity policy addresses the integrity of the transactions. • The principle of separation of duty requires that the certifier of a transaction and the implementer be different entities. • The model contains a number of basic constructs that represent both data items and processes that operate on those data items. The key data type in the Clark-Wilson model is a Constrained Data Item (CDI). An Integrity Verification Procedure (IVP) ensures that all CDIs in the system are valid at a certain state. Transactions that enforce the integrity policy are represented by Transformation Procedures (TPs). A TP takes as input a CDI or Unconstrained Data Item (UDI) and produces a CDI. A TP must transition the system from one valid state to another valid state. UDIs represent system input (such as that provided by a user or adversary). A TP must guarantee (via certification) that it transforms all possible values of a UDI to a “safe” CDI.

  24. Clark Wilson Model • At the heart of the model is the notion of a relationship between an authenticated principal (i.e., user) and a set of programs (i.e., TPs) that operate on a set of data items (e.g., UDIs and CDIs). The components of such a relation, taken together, are referred to as a Clark-Wilson triple. The model must also ensure that different entities are responsible for manipulating the relationships between principals, transactions, and data items. As a short example, a user capable of certifying or creating a relation should not be able to execute the programs specified in that relation. • The model consists of two sets of rules: Certification Rules (C) and Enforcement Rules (E). The nine rules ensure the external and internal integrity of the data items. To paraphrase these: • C1—When an IVP is executed, it must ensure the CDIs are valid. C2—For some associated set of CDIs, a TP must transform those CDIs from one valid state to another. Since we must make sure that these TPs are certified to operate on a particular CDI, we must have E1 and E2.

  25. Clark Wilson Model • E1—System must maintain a list of certified relations and ensure only TPs certified to run on a CDI change that CDI. E2—System must associate a user with each TP and set of CDIs. The TP may access the CDI on behalf of the user if it is “legal.” This requires keeping track of triples (user, TP, {CDIs}) called “allowed relations.” • C3—Allowed relations must meet the requirements of “separation of duty.” We need authentication to keep track of this. • E3—System must authenticate every user attempting a TP. Note that this is per TP request, not per login. For security purposes, a log should be kept. • C4—All TPs must append to a log enough information to reconstruct the operation. When information enters the system it need not be trusted or constrained (i.e. can be a UDI). We must deal with this appropriately. • C5—Any TP that takes a UDI as input may only perform valid transactions for all possible values of the UDI. The TP will either accept (convert to CDI) or reject the UDI. Finally, to prevent people from gaining access by changing qualifications of a TP: • E4—Only the certifier of a TP may change the list of entities associated with that TP
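The enforcement rules above can be sketched as a table of allowed (user, TP, {CDIs}) triples plus a per-request check. This is an illustrative sketch only: the user, TP, and CDI names are invented, and certification (the C rules) is assumed to have happened offline when the table was built.

```python
# Sketch of Clark-Wilson "allowed relations" per E1/E2, with per-request
# authentication per E3 and an audit log per C4. All names are hypothetical.
allowed_relations = {
    ("alice", "post_payment"): {"ledger", "accounts"},
    ("bob", "audit_ledger"): {"ledger"},
}
log = []

def run_tp(user, tp, cdis, authenticated):
    if not authenticated:                      # E3: authenticate every TP request
        raise PermissionError("user not authenticated")
    permitted = allowed_relations.get((user, tp), set())
    if not set(cdis) <= permitted:             # E1/E2: only certified triples may run
        raise PermissionError(f"{user} may not run {tp} on {cdis}")
    log.append((user, tp, tuple(cdis)))        # C4: log enough to reconstruct the op

run_tp("alice", "post_payment", ["ledger"], authenticated=True)
print(log)
```

Separation of duty (C3) would additionally require that whoever edits `allowed_relations` (the certifier, per E4) never appears as a user in the triples they certify.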

  35. Secure System Evaluation: TCSEC • Trusted Computer System Evaluation Criteria (TCSEC) is a United States Government Department of Defense (DoD) standard that sets basic requirements for assessing the effectiveness of computer security controls built into a computer system. The TCSEC was used to evaluate, classify and select computer systems being considered for the processing, storage and retrieval of sensitive or classified information. • The TCSEC, frequently referred to as the Orange Book, is the centerpiece of the DoD Rainbow Series publications. It was initially issued in 1983 by the National Computer Security Center (NCSC), an arm of the National Security Agency, and then updated in 1985. • TCSEC was replaced by the Common Criteria international standard in 2005.

  36. Secure System Evaluation: TCSEC • Policy: The security policy must be explicit, well-defined and enforced by the computer system. There are two basic security policies: • Mandatory Security Policy - Enforces access control rules based directly on an individual's clearance, authorization for the information and the confidentiality level of the information being sought. Other indirect factors are physical and environmental. This policy must also accurately reflect the laws, general policies and other relevant guidance from which the rules are derived. • Marking - Systems designed to enforce a mandatory security policy must store and preserve the integrity of access control labels and retain the labels if the object is exported. • Discretionary Security Policy - Enforces a consistent set of rules for controlling and limiting access based on identified individuals who have been determined to have a need-to-know for the information.

  37. Secure System Evaluation: TCSEC • Accountability: Individual accountability regardless of policy must be enforced. A secure means must exist to ensure the access of an authorized and competent agent which can then evaluate the accountability information within a reasonable amount of time and without undue difficulty. There are three requirements under the accountability objective: • Identification - The process used to recognize an individual user. • Authentication - The verification of an individual user's authorization to specific categories of information. • Auditing - Audit information must be selectively kept and protected so that actions affecting security can be traced to the authenticated individual. • The TCSEC defines four divisions: D, C, B and A, where division A has the highest security. Each division represents a significant difference in the trust an individual or organization can place on the evaluated system. Additionally, divisions C, B and A are broken into a series of hierarchical subdivisions called classes: C1, C2, B1, B2, B3 and A1.

  38. Secure System Evaluation: TCSEC • Assurance: The computer system must contain hardware/software mechanisms that can be independently evaluated to provide sufficient assurance that the system enforces the above requirements. By extension, assurance must include a guarantee that the trusted portion of the system works only as intended. To accomplish these objectives, two types of assurance are needed with their respective elements: • Assurance Mechanisms : Operational Assurance: System Architecture, System Integrity, Covert Channel Analysis, Trusted Facility Management and Trusted Recovery • Life-cycle Assurance : Security Testing, Design Specification and Verification, Configuration Management and Trusted System Distribution

  39. Secure System Evaluation: ITSEC • The Information Technology Security Evaluation Criteria (ITSEC) is a structured set of criteria for evaluating computer security within products and systems. The ITSEC was first published in May 1990 in France, Germany, the Netherlands, and the United Kingdom based on existing work in their respective countries. Following extensive international review, Version 1.2 was subsequently published in June 1991 by the Commission of the European Communities for operational use within evaluation and certification schemes. • Levels E1 – E6

  40. Secure System Evaluation: Common Criteria • The Common Criteria for Information Technology Security Evaluation (abbreviated as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification. • Common Criteria is a framework in which computer system users can specify their security functional and assurance requirements, vendors can then implement and/or make claims about the security attributes of their products, and testing laboratories can evaluate the products to determine if they actually meet the claims. In other words, Common Criteria provides assurance that the process of specification, implementation and evaluation of a computer security product has been conducted in a rigorous and standard manner. • Levels: EAL 1 – EAL 7 (Evaluation Assurance Levels)

  41. Some Security Threats • Buffer Overflow • Maintenance Hooks • Time-of-check/time-of-use (TOCTOU) attacks

  42. Access Control • Access Control Overview • Identification, Authentication, Authorization, Accountability • Single Sign-on and Kerberos • Access Control Models • Access Control Techniques and Technologies • Access Control Administration • Access Control Monitoring: Intrusion Detection • Threats to Access Control

  43. Access Control Overview • Access control is a system which enables an authority to control access to areas and resources in a given physical facility or computer-based information system. • In computer security, access control includes authentication, authorization and audit. It also includes measures such as physical devices, including biometric scans and metal locks, hidden paths, digital signatures, encryption, social barriers, and monitoring by humans and automated systems. • In any access control model, the entities that can perform actions in the system are called subjects, and the entities representing resources to which access may need to be controlled are called objects (see also Access Control Matrix). Subjects and objects should both be considered software entities rather than human users; a human user affects the system only through the software entities that they control.

  44. Access Control • Access control models used by current systems tend to fall into one of two classes: those based on capabilities and those based on access control lists (ACLs). • In a capability-based model, holding an unforgeable reference or capability to an object provides access to the object • Access is conveyed to another party by transmitting such a capability over a secure channel. • In an ACL-based model, a subject's access to an object depends on whether its identity is on a list associated with the object
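The contrast between the two model classes can be made concrete. In the ACL sketch below the access list hangs off the object; in the capability sketch the subject holds a token, and a plain Python object reference stands in for an unforgeable capability. All names are illustrative.

```python
# ACL model: the object carries a list mapping subjects to rights.
acl = {"report.txt": {"alice": {"read", "write"}, "bob": {"read"}}}

def acl_check(subject, obj, right):
    return right in acl.get(obj, {}).get(subject, set())

# Capability model: the subject holds a token naming the object and rights.
class Capability:
    def __init__(self, obj, rights):
        self.obj, self.rights = obj, frozenset(rights)

def cap_check(cap, right):
    # No identity lookup: possession of the capability itself conveys access.
    return right in cap.rights

cap = Capability("report.txt", {"read"})   # would be granted over a secure channel

assert acl_check("alice", "report.txt", "write")
assert not acl_check("bob", "report.txt", "write")
assert cap_check(cap, "read") and not cap_check(cap, "write")
```

Note the structural difference: the ACL check needs the subject's identity, while the capability check does not, which is why capabilities can be delegated simply by transmitting the token.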

  45. Identification, Authentication, Authorization • Access control systems provide the essential services of identification and authentication (I&A), authorization, and accountability where: • identification and authentication determine who can log on to a system, and the association of users with the software subjects that they are able to control as a result of logging in; • authorization determines what a subject can do; • accountability identifies what a subject (or all subjects associated with a user) did.

  46. Single Sign-On • Single sign-on (SSO) is a property of access control of multiple, related, but independent software systems. With this property a user logs in once and gains access to all systems without being prompted to log in again at each of them. Single sign-off is the reverse property whereby a single action of signing out terminates access to multiple software systems. • As different applications and resources support different authentication mechanisms, single sign-on has to internally translate to and store different credentials compared to what is used for initial authentication.

  47. Single Sign-on: Kerberos • Kerberos is a computer network authentication protocol, which allows nodes communicating over a non-secure network to prove their identity to one another in a secure manner. It is also a suite of free software published by MIT that implements this protocol. Its designers aimed primarily at a client–server model, and it provides mutual authentication — both the user and the server verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks. • Kerberos builds on symmetric key cryptography and requires a trusted third party, and optionally may use public-key cryptography by utilizing asymmetric key cryptography during certain phases of authentication

  48. Symmetric Key Cryptography • Symmetric-key algorithms are a class of algorithms for cryptography that use trivially related, often identical, cryptographic keys for both decryption and encryption. • The encryption key is trivially related to the decryption key, in that they may be identical or there is a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. • The disadvantage of symmetric cryptography is that it presumes two parties have agreed on a key and been able to exchange that key in a secure manner prior to communication. This is a significant challenge. Symmetric algorithms are usually mixed with public key algorithms to obtain a blend of security and speed.
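The "same key both ways" property can be shown with a toy XOR cipher (a one-time pad when the key is random and used once). This is an illustration only, not a usable cipher; real systems should use a vetted algorithm such as AES through a maintained library.

```python
# Toy symmetric cipher: XOR with a random key of equal length.
# Illustrates that the identical shared secret both encrypts and decrypts.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # the shared secret both parties must hold
ciphertext = xor_bytes(message, key)      # sender encrypts...
assert xor_bytes(ciphertext, key) == message   # ...receiver decrypts with the same key
```

The sketch also makes the stated disadvantage visible: both parties must somehow already share `key`, and that prior secure exchange is exactly the hard part.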

  49. Public Key Cryptography • Public-key cryptography is a cryptographic approach which involves the use of asymmetric key algorithms instead of or in addition to symmetric key algorithms. • Unlike symmetric key algorithms, it does not require a secure initial exchange of one or more secret keys to both sender and receiver. • The asymmetric key algorithms are used to create a mathematically related key pair: a secret private key and a published public key. Use of these keys allows protection of the authenticity of a message by creating a digital signature of a message using the private key, which can be verified using the public key. • It also allows protection of the confidentiality and integrity of a message, by public key encryption, encrypting the message using the public key, which can only be decrypted using the private key.
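The mathematically related key pair can be demonstrated with textbook RSA over tiny primes. This is a toy for illustration only: real RSA keys are thousands of bits and require padding schemes; the numbers below are the classic small-prime example.

```python
# Toy textbook RSA: one key pair supports both encryption and signatures.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent = modular inverse (Python 3.8+)

m = 65                     # message encoded as an integer < n

# Confidentiality: encrypt with the PUBLIC key, decrypt with the PRIVATE key.
c = pow(m, e, n)
assert pow(c, d, n) == m

# Authenticity: sign with the PRIVATE key, verify with the PUBLIC key.
sig = pow(m, d, n)
assert pow(sig, e, n) == m
```

The two assertions correspond to the two bullet points above: public-key encryption protects confidentiality, while a private-key signature protects authenticity, and no secret ever had to be exchanged in advance.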

  50. What is Network Security • Network security consists of the provisions made in an underlying computer network infrastructure, policies adopted by the network administrator to protect the network and the network-accessible resources from unauthorized access, and consistent and continuous monitoring and measurement of its effectiveness • The terms network security and information security are often used interchangeably. Network security is generally taken as providing protection at the boundaries of an organization by keeping out intruders (hackers). • Information security, however, explicitly focuses on protecting data resources from malware attack or simple mistakes by people within an organization by use of data loss prevention (DLP) techniques.