CIST 1601 Information Security Fundamentals
Chapter 8 Security Policies and Procedures
Collected and Compiled by JD Willard
MCSE, MCSA, Network+, Microsoft IT Academy Administrator
Computer Information Systems Instructor, Albany Technical College
Business continuity planning is a more comprehensive approach than disaster recovery planning; it provides guidance so the organization can continue making sales and collecting revenue.
Business continuity is primarily concerned with the processes, policies, and methods that an organization follows to minimize the impact of a system failure, network failure, or the failure of any key component needed for operation—essentially, whatever it takes to ensure that the business continues.
As with disaster recovery planning, it covers natural and man-made disasters.
Utilities, high availability, backups, and fault tolerance are key components of business continuity.
Utilities consist of services such as electricity, water, mail, and natural gas that are essential aspects of business continuity. Where possible, you should include fallback measures that allow for interruptions in these services.
In the vast majority of cases, electricity and water are restored—at least on an emergency basis—fairly rapidly.
Disasters, such as a major earthquake or hurricane, can overwhelm utility companies and government agencies, and services may be interrupted for quite a while. Critical infrastructure may be unavailable for days, weeks, or even months.
If possible, build infrastructures that don’t have single points of failure or connection.
As an administrator, it’s impossible to prepare for every emergency, but you can plan for those that could conceivably happen.
High availability refers to the process of keeping services and systems operational during an outage (e.g., a power or telephone failure). With high availability, the goal is to have key services available 99.999 percent of the time (also known as five nines availability).
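As a quick sketch of what those availability figures mean in practice, the allowed annual downtime can be computed directly. The percentages below are common benchmark tiers, not requirements from any particular SLA:

```python
# Downtime allowed per year at a given availability level.
# "Five nines" (99.999%) permits just over five minutes of downtime annually.

def annual_downtime_minutes(availability_pct):
    """Minutes of allowed downtime per year at the given availability (%)."""
    minutes_per_year = 365 * 24 * 60
    return minutes_per_year * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability -> {annual_downtime_minutes(pct):.2f} min/year")
```

Each extra nine cuts the downtime budget by a factor of ten, which is why five nines usually requires redundant hardware rather than faster repairs.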
High availability and fault tolerance rely on mechanisms such as a redundant array of independent disks (RAID), fault-tolerant servers, and clustered servers, which ensure that the business can continue to operate when a system failure occurs.
Implementing fault-tolerant systems and redundant technologies, and performing regular backups of your servers are all solutions for ensuring high availability systems.
Server clustering in a networked environment
Redundancy refers to systems that are either duplicated or that fail over to other systems in the event of a malfunction.
Fail-over refers to when a system that is developing a malfunction automatically switches processes to another system to continue operations.
Clustering is the process of providing failover capabilities for servers by using multiple servers together. A cluster consists of several servers providing the same services. If one server in the cluster fails, the other servers will continue to operate. Clustering is a form of server redundancy.
It might be necessary to set up redundant servers so that the business can still function in the event of hardware or software failure. A simple equipment failure might result in days of downtime as the problem is repaired.
A single point of failure is any piece of equipment that can bring your operation down if it stops working. Neglecting single points of failure can prove disastrous.
In disaster recovery planning, you might need to consider redundant connections between branches or sites. If records must be available between offices, the connection between them is a single point of failure that requires redundancy.
If all your business is web based, to provide continued customer access it is a good idea to have some ISP redundancy in the event the Internet connection goes down.
If the majority of your business is telephone based, you might look for redundancy in the phone system as opposed to the ISP.
In this cluster, each system has its own data storage and data-processing capabilities. The system that is connected to the network has the additional task of managing communication between the cluster and its users. Many clustering systems allow all the systems in the cluster to share a single disk system.
Fault tolerance is primarily the ability of a system to sustain operations in the event of a component failure. It can be built into a server by adding a second power supply, a second CPU, and other key components.
Vendors such as Tandem, Stratus, and HP offer fault-tolerant implementations in which everything is N+1 and multiple computers are used to provide 100 percent availability of a single server. The N+1 redundancy strategy means that you have the number of components you need, plus one extra to plug into any system should it be needed.
It is imperative that fault tolerance be built into your electrical infrastructure as well. At a bare minimum, an uninterruptible power supply (UPS)—with surge protection—should accompany every server and workstation.
A UPS protects computers from power loss due to power outages. It contains a battery that keeps a computer running during a power sag or outage, giving the user time to save any unsaved data.
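As a rough illustration, a UPS's holdover time can be estimated from its battery capacity and the attached load. The formula and the 0.9 inverter-efficiency figure below are typical rules of thumb, not specifications for any particular unit:

```python
# Rough UPS runtime estimate: runtime = usable battery energy / load.
# The 0.9 inverter-efficiency figure is a typical assumption, not a spec.

def ups_runtime_minutes(battery_wh, load_watts, efficiency=0.9):
    """Approximate minutes a battery of battery_wh watt-hours can carry load_watts."""
    return battery_wh * efficiency / load_watts * 60

# A 240 Wh battery feeding a 300 W server lasts roughly 43 minutes:
print(f"{ups_runtime_minutes(240, 300):.1f} min")
```

A calculation like this helps size a UPS so that it covers an orderly shutdown, or the time a backup generator needs to come online.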
In an online UPS, the computer always runs off battery power, and the battery is continuously recharged. There is no switchover time, and these supplies generally provide the best isolation from power line problems.
An “offline” or “standby” UPS derives power directly from the power line until power fails, then switches over to battery.
Ferro-resonant units operate in the same way as a standby UPS, except that a ferro-resonant transformer is used to filter the output. This transformer is designed to hold energy long enough to cover the time between switching from line power to battery power, effectively eliminating the transfer time.
Backup power can be provided by a generator, which can be used during rolling blackouts, emergency blackouts, or other electrical problems. A backup generator runs on gasoline or diesel and provides power for a limited time.
Brownouts are short-term decreases in voltage levels triggered by faults on the utility provider’s systems. To protect your environment from such damaging fluctuations in power, always connect your sensitive electronic equipment to power conditioners, surge protectors, and a UPS, which provides the best protection of all.
Redundant Array of Independent Disks (RAID) 0 is disk striping. RAID enables a group, or array, of hard disks to act as a single hard disk. RAID 0 stores files in stripes, which are small blocks of data that are written across the disks in an array. Parts of a large file might be stored on every disk in a RAID 0 array.
RAID 0 provides no fault tolerance. If any drive fails, the entire disk space is unavailable and the data on the striped volume is lost. This RAID implementation is used primarily for performance, not for providing data availability during hard disk failures.
RAID 1 includes both disk mirroring and disk duplexing. With disk mirroring, two hard disks are connected to a single hard disk controller, and a complete copy of a file is stored on each hard disk in a mirror set. Disk duplexing, which is similar to disk mirroring, uses a separate hard disk controller for each hard disk.
RAID 1 provides full redundancy. If either drive fails, the data can be retrieved from the remaining drive. All data is stored on both disks, which means that when one disk fails, the other continues to operate. This allows you to replace the failed disk without interrupting business operation.
This solution requires a minimum of two disks and offers 100% redundancy. RAID 1 disk usage is 50% as the other 50% is for redundancy.
RAID 3, disk striping with a parity disk, uses RAID 0 with a separate disk that stores parity information. When a disk in the array fails, the system can continue to operate while the failed disk is replaced. Parity information is a value based on the value of the specific data stored on each disk.
RAID 5 is referred to as disk striping with parity across multiple disks. RAID 5 also stores files in disk stripes, but one stripe is a parity stripe, which provides fault tolerance.
For any given stripe, the parity information is stored on a different drive than the data it protects, so in the event of a single drive failure, the information on the functioning disks can be used to reconstruct the data from the failed disk. RAID 5 requires at least three hard disks but typically uses five to seven; most implementations support a maximum of 32 disks.
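The role parity plays in reconstruction can be sketched with XOR, the operation most RAID parity schemes use. This is an illustration only; real RAID 5 works on fixed-size stripes and rotates the parity block across all member disks:

```python
# Parity is the XOR of the data blocks; XORing the surviving blocks with
# the parity block regenerates a lost block.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

disk1 = b"HELLO"
disk2 = b"WORLD"
parity = xor_blocks([disk1, disk2])      # stored on a different disk

# Disk 2 fails; rebuild its contents from disk 1 and the parity block:
rebuilt = xor_blocks([disk1, parity])
assert rebuilt == disk2
```

Because XOR is its own inverse, any single missing block, data or parity, can be regenerated from the others; two simultaneous failures cannot.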
Depending on Backups
Backups are duplicate copies of key information.
One important method of ensuring business continuity is to back up mission-critical servers and data.
Computer records are usually backed up using a backup program, backup systems, and backup procedures.
Data should be backed up regularly, and you should store a copy of your backup offsite.
Several types of storage mechanisms are available for data storage:
Working copies: Also referred to as shadow copies, these are partial or full backups that are stored at the computer center for immediate use in recovering a system or lost file, if necessary.
Onsite storage: Refers to a location on the site of the computer center, which the company uses to store data locally. Onsite storage containers are used to store backup media and are classed according to fire, moisture, and pressure resistance.
Offsite storage: Refers to a location away from the computer center where backup media are kept. It can be as simple as keeping a copy of backup media at a remote office, or as complicated as a nuclear-hardened, high-security storage facility. The storage facility should be bonded, insured, and inspected on a regular basis to ensure that all storage procedures are being followed.
Most offsite storage facilities charge based on the amount of space you require and the frequency of access you need to the stored information.
Crafting a Disaster-Recovery Plan
A disaster recovery plan is a written document that defines how the organization will recover from a disaster and how to restore business with minimum delay.
The disaster-recovery plan deals with site relocation in the event of an emergency, such as a natural or man-made disaster.
As part of the business continuity plan, the disaster-recovery plan mainly focuses on alternate procedures for processing transactions in the short term. It is carried out when the emergency occurs and immediately following the emergency.
A contingency plan would be part of a disaster-recovery plan.
Understanding Backup Plan Issues
When selecting backup devices and media, you should consider the physical characteristics and type of the drive, the frequency of backups, and the tape retention time, along with the following factors:
The backup time is the amount of time a tape takes to back up the data. It is based on the speed of the device and the amount of data being backed up.
The restoration time is the amount of time a tape takes to restore the data. It is based on the speed of the device, the amount of data being restored, and the type of backups used.
The retention time is the amount of time a tape is stored before its data is overwritten. The longer the retention time, the more media sets will be needed for backup purposes. A longer retention time will give you more flexibility for restoration.
The life of a tape is the amount of time a tape is used before being destroyed. The life of a tape is based on the amount of time it is used. Most vendors provide an estimate on backup media life.
Most modern database systems provide the ability to globally back up data or database and also provide transaction auditing and data-recovery capabilities.
Transaction, or audit files, can be stored directly on archival media. In the event of a system outage or data loss, the audit file can be used to roll back the database and update it to the last transactions made.
Word-processing documents, spreadsheets, and other user files are extremely valuable to an organization.
By doing a regular backup on user systems, you can protect these documents and ensure that they’re recoverable in the event of a loss.
With the cost of media being relatively cheap, including the user files in a backup every so often is highly recommended.
If backups that store only the changed files are created, keeping user files safe becomes a relatively painless process.
Although you can back up applications, it is usually considered a waste of backup space as these items don’t change often and can usually be re-installed from original media.
You should keep a single up-to-date version that is available for download and reinstallation.
Knowing the Backup Types
A full backup provides a complete backup of all files on a server or disk, with the end result being a complete archive of the system at the specific time when the backup was performed. The archive attribute is cleared.
Because of the amount of data that is backed up, full backups can take a long time to complete.
A full backup is used as the baseline for any backup strategy and most appropriate when using offsite archiving.
While the backup is being run, the system should not be used.
In the event of a total loss of data, restoration from a full backup will be faster than other methods.
An incremental backup backs up files that have been created or changed since the immediately preceding backup, regardless of whether the preceding backup was a full backup, a differential backup, or an incremental backup, and resets the archive bit.
Incremental backups build on each other; for example, the second incremental backup contains all of the changes made since the first incremental backup.
Incremental backups are smaller than full backups, and are also the fastest backup type to perform.
When restoring the data, the full backup must be restored first, followed by each incremental backup in order.
A differential backup includes all files created or modified since the last full backup without resetting the archive bit.
Differential backups are not dependent on each other. Each differential backup contains all of the changes made since the last full backup. Therefore, a differential backup can take significantly longer than an incremental backup.
Differential backups tend to grow as the week progresses and no new full backups have been performed.
When restoring the data, the full backup must be restored first, followed by the most recent differential backup.
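The restore rules for the two strategies can be summarized in a short sketch. The schedule (a Sunday full backup plus weekday dailies) and the media labels are hypothetical, chosen only to illustrate the restore order:

```python
# Which media must be restored, in order, to recover after a failure?
# Assumes a weekly full backup on Sunday plus daily incrementals or
# differentials; labels are illustrative, not from any backup product.

def restore_sequence(strategy, failure_day):
    """Return the ordered list of backup sets needed to recover."""
    days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
    needed = ["Sun full"]
    dailies = days[: days.index(failure_day) + 1]
    if strategy == "incremental":
        # every incremental since the full backup, oldest first
        needed += [f"{d} incremental" for d in dailies]
    elif strategy == "differential":
        # only the most recent differential
        needed.append(f"{dailies[-1]} differential")
    return needed

print(restore_sequence("incremental", "Wed"))
# ['Sun full', 'Mon incremental', 'Tue incremental', 'Wed incremental']
print(restore_sequence("differential", "Wed"))
# ['Sun full', 'Wed differential']
```

The trade-off is visible in the output: incrementals back up faster but restore from a longer chain, while differentials back up more slowly but restore from just two sets.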
Developing a Backup Plan
Grandfather-father-son backup refers to the most common rotation scheme for rotating backup media. Originally designed for tape backup, it works well for any hierarchical backup strategy. It allows for a minimum usage of backup media.
The basic method is to define three sets of backups:
For short term archival the monthly backup is referred to as the grandfather, the weekly backup is the father, and the daily backup is the son.
The last backup of the month becomes the archived backup for that month.
For long term archival the annual backup is referred to as the grandfather, the monthly backup is the father, and the weekly backup is the son.
The last backup of the month becomes the archived backup for that month.
The last backup of the year becomes the annual backup for the year.
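The labeling logic of the short-term scheme can be sketched as follows. The choice of Saturday as the weekly backup day is an arbitrary assumption; real rotation schedules vary by organization:

```python
# Short-term grandfather-father-son labeling: the daily backup is the son,
# the weekly backup is the father, and the last backup of the month is the
# grandfather. Assumes a backup runs every day.

import calendar
from datetime import date

def gfs_label(d, weekly_day=calendar.SATURDAY):
    """Classify the backup taken on date d under the GFS rotation."""
    last_day_of_month = calendar.monthrange(d.year, d.month)[1]
    if d.day == last_day_of_month:
        return "grandfather"        # month-end backup is archived
    if d.weekday() == weekly_day:
        return "father"             # weekly backup
    return "son"                    # ordinary daily backup

print(gfs_label(date(2024, 6, 30)))  # grandfather (last day of June)
print(gfs_label(date(2024, 6, 15)))  # father (a Saturday)
print(gfs_label(date(2024, 6, 12)))  # son
```

Because sons are overwritten weekly and fathers monthly, only a handful of media sets are in rotation at any time, which is how the scheme minimizes media usage.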
The Full Archival method keeps all data that has ever been on the system during a backup and stores it either onsite or offsite for later retrieval.
In short, all full backups, all incremental backups, and any other backups are permanently kept somewhere.
One major problem involves keeping records of what information has been archived, and the volume of media to be stored grows without bound. For these reasons, many larger companies don't find this to be an acceptable method of keeping backups.
The Backup Server method establishes a server with large amounts of disk space whose sole purpose is to back up data.
All files on all servers are copied to the backup server on a regular basis; over time, this server’s storage requirements can become enormous.
The advantage is that all backed-up data is available online for immediate access. If a system or server malfunctions, the backup server can be accessed to restore information from the last backups performed on that system.
Backup software from several manufacturers creates hierarchies of files: over time, if a file isn't accessed, it's moved to slower media and may eventually be stored offline. This helps reduce the disk storage requirements, yet it still keeps the files that are most likely to be needed for recovery readily available.
In this instance, the files on the backup server contain copies of all the information and data on the APPS, ACCTG, and DB servers.
Notice that the installation CDs are being used for the base OS and applications.
Recovering a System
Workstation and server failures, accidental deletion, virus infection, and natural disasters are all reasons why information might need to be restored from backup copies.
When a system fails, you’ll be unable to reestablish operation without regenerating all of the system’s components. This process includes making sure hardware is functioning, restoring or installing the operating systems, restoring or installing applications, and restoring data files.
When you install a new system, make a full backup of it before any data files are created.
Operating systems such as Windows Server 2008 allow you to create a model user system as a disk image on a server; the disk image is downloaded and installed when a failure occurs.
Planning for Alternate Sites
Hot, warm, and cold sites are typically maintained in facilities owned by another company.
Hot sites generally contain everything you need to bring your IT facilities up.
Warm sites provide some capabilities, including computer systems and media capabilities, in the event of a disaster.
Cold sites do not provide any infrastructure to support a company’s operations and require the most setup time.
A hot site is up and available 24 hours a day, seven days a week. It has the advantage of a very quick return to business, as well as the ability to test a DRP without affecting current operations.
It is similar to the original site in that it is equipped with all necessary hardware, software, network, and Internet connectivity fully installed, configured, and operational. It usually “mirrors” the configuration of the corporate facility.
Usually, testing is as simple as switching over after ensuring it contains the latest versions of your data.
When setting up a hot site, ensure that this site is sufficiently far from the corporate facility being mirrored so that it does not get affected by the same damages.
Hot sites are traditionally more expensive, but they can be used for operations and recovery testing before an actual catastrophic event occurs. They require a lot of administration time to ensure that the site is ready within the maximum tolerable downtime (MTD).
Expense, administration time, and the need for extensive security controls are disadvantages to using a hot site. Recovery time and testing availability are two advantages to using a hot site.
A warm site represents a compromise between a hot site, which is very expensive, and a cold site, which is not preconfigured. A warm site usually contains only the power, phone, network ports, and other base services required. When a disaster occurs at the corporate facility, additional effort is needed to bring the computers, data, and resources to the warm site.
A warm site is harder to test than a hot site, but easier to test than a cold site. It only contains telecommunications equipment. Therefore, to properly test disaster recovery procedures at the warm site, alternate computer equipment such as servers would need to be set up and configured.
Warm sites are less expensive than hot sites, but more expensive than cold sites.
The recovery time of a warm site is slower than for a hot site, but faster than for a cold site.
Warm sites usually require less administration time because only the telecommunications equipment is maintained, not the computer equipment.
A cold site does not provide any equipment. These sites are merely a prearranged request to use facilities if needed.
A cold site is usually made up of only empty office space, electricity, raised flooring, air conditioning, telecommunications lines, and bathrooms. A cold site still needs networking equipment and complete configuration before it can operate when a disaster strikes the corporate facilities. This DRP option is the cheapest.
To properly test disaster recovery procedures at the cold site, alternate telecommunications and computer equipment would need to be set up and configured.
Recovery time and testing availability are two disadvantages to using a cold site.
Expense and administration time are two advantages to using a cold site.
A service-level agreement (SLA) is an agreement between a company and a vendor in which the vendor agrees to provide certain functions for a specified period.
SLAs establish the contracted requirements for service with software and hardware vendors, utilities, facility management firms, and ISPs.
The following are key measures in SLAs:
Mean Time Between Failures (MTBF) is the average length of time a component will last, given average use. Usually, this number is given in hours or days. MTBF is helpful in evaluating a system’s reliability and life expectancy.
Mean Time to Repair (MTTR) is the measurement of how long it takes to repair a system or component once a failure occurs. In the case of a computer system, if the MTTR is 24 hours, this tells you it will typically take 24 hours to repair it when it breaks.
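The two measures combine into a simple steady-state availability estimate: availability = MTBF / (MTBF + MTTR). The figures below are illustrative, not vendor data:

```python
# Steady-state availability from the two SLA measures:
#   availability = MTBF / (MTBF + MTTR)

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the component is expected to be operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A component that runs 2,000 hours between failures and takes 24 hours
# to repair is available about 98.8% of the time:
print(f"{availability(2000, 24):.4%}")
```

The formula shows why SLAs track both numbers: availability improves either by making failures rarer (higher MTBF) or repairs faster (lower MTTR).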
Code Escrow Agreements
Code escrow refers to the storage and conditions of release of source code provided by a vendor. Code escrow allows customers to access the source code of installed systems under specific conditions, such as the bankruptcy of a vendor.
Make sure your agreements provide you with either the source code for projects you’ve had done or a code escrow clause to acquire the software if the company goes out of business.
Human Resource Policies
Human resource policies deal with specifying standards and enforcing behaviors.
Hiring policies specify the process of how new personnel are hired, and detail processes for screening employees for new positions including:
Investigating references, college degrees, and certifications
Termination policies involve more than simply firing a person. Have a clear process for informing affected departments about voluntary and involuntary terminations.
When an employee leaves a company, their computer access should be discontinued immediately.
An ethics policy is the written policy governing accepted organizational ethics. Ethics are the personal or organizational rules about how interactions, relationships, and dealings occur.
An acceptable-use policy (AUP) deals primarily with computers and information provided by the company.
It dictates how computers can be used within an organization and should also outline the consequences of misuse.
Employees are commonly asked to sign such a document, which is a binding agreement to adhere to the policy.
Privacy policies must clearly define:
Which information can be disclosed
What information cannot be disclosed
What types of information employees are provided
The policy must clearly state that employees should have no expectations of privacy. Employers are allowed to search desks, computers, files, and any other items brought into the building.
By explicitly stating your policies, you can avoid misunderstandings and potentially prevent employees from embarrassing themselves.
Need-to-know policies specify that access to information should be limited to only those individuals who require it, so as to minimize unauthorized access to information.
Background investigations should include credit history and criminal-record checks as well as information about work experience and education.
A background check should weed out individuals who have misrepresented their background and experiences.
These checks must be done with the permission of the prospective employee.
Business policies address organizational and departmental business issues and have an impact on the security of an organization.
Separation of duties policies describe rules that reduce the risk of fraud and other losses.
These policies should require more than one person to complete business-critical tasks. Multiple people conspiring to corrupt a system is less likely than a single person corrupting it.
Separation of duties may involve both the separation of logons, such as assigning separate day-to-day and admin accounts to the same network administrator, and the separation of roles, such as security assignment and compliance audit procedures.
Due care is the knowledge and actions that a reasonable and prudent person would possess or act upon. The objectives of due care policies are to protect and safeguard customer and/or client records.
Due care is determined based on legislative requirements.
The company exercises the practice of due care in the following manner:
The company implements physical and logical access controls.
The company ensures telecommunication security by using authentication and encryption.
Information, application, and hardware backups are performed at regular intervals.
Disaster recovery and business continuity plans are in place within the company.
Periodic reviews, drills, and tests are performed by the company to test and improve the disaster recovery and business continuity plans.
The company’s employees are informed regarding the anticipated behavior and implications of not following the expected standards.
The company has security policies, standards, procedures, and guidelines for effective security management.
The company performs security awareness training for its employees.
The company network runs updated antivirus definitions at all times.
The administrator periodically performs penetration tests from outside and inside the network.
The company implements either a call-back or a preset dialing feature on remote access applications.
The company abides by and updates external service level agreements (SLAs).
The company ensures that downstream security responsibilities are being met.
The company implements counter measures that ensure that software piracy is not taking place within the company.
The company ensures that proper auditing and reviewing of the audit logs is taking place.
The company conducts background checks on potential employees.
If a company does not exercise due care, the company’s senior management can be held legally accountable for negligence and might have to pay damages, under the principle of culpable negligence, for losses suffered because of insufficient security controls.
Physical Access Control Policies refer to the authorization of individuals to access facilities or systems that contain information.
They limit issues such as unauthorized disclosure of information, unauthorized access to the company facilities, and data theft.
Document Disposal and Destruction Policies detail how information that is no longer needed is disposed of. Data in all forms must be properly disposed of.
Some data and data sources must be destroyed or thoroughly erased. Because many sophisticated recovery techniques exist, destroying all data and data sources may be more appropriate. Discarded hard drives might need to be physically destroyed.
Certificate Policies dictate how an organization uses, manages, and validates certificates.
A certificate policy needs to identify:
Which certificate authorities (CAs) are acceptable
How certificates are used
How certificates are issued
An organization must also determine whether to use third-party CAs, such as VeriSign, or create its own CA systems.
Incident-Response Policies define how an organization will respond to an incident.
An incident is:
Any attempt to violate a security policy
A successful penetration
A compromise of a system
Unauthorized access to information
Disruption of services
It’s important that an incident-response policy establish at least the following items:
Outside agencies that should be contacted or notified in case of an incident
Resources used to deal with an incident
Procedures to gather and secure evidence
List of information that should be collected about an incident
Outside experts who can be used to address issues if needed
Policies and guidelines regarding how to handle an incident
The primary objective of privilege management is to define the entitlement rights of users to access the organization’s information. Privilege management:
Determines the security requirements of users
Provides access authorization
Monitors the resources accessed by users
Ensures that the privileges assigned to users, in the form of permissions and access rights to information resources, correspond to their job requirements
The standard practices for effective privilege management are the use of the “need to know” and “least privilege” principles.
The need-to-know principle is based on the premise that users should be provided access only to the information they absolutely require to fulfill their job responsibilities. Under the least privilege principle, users are denied access to any additional information.
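A need-to-know check can be sketched as a deny-by-default lookup; the role names and information categories here are invented for illustration:

```python
# Minimal sketch of a need-to-know check: a user may access a document
# only if their role explicitly requires that information category.
# Role and category names are hypothetical.

ROLE_NEEDS = {
    "payroll_clerk": {"payroll"},
    "hr_manager": {"payroll", "personnel"},
    "developer": {"source_code"},
}

def may_access(role, category):
    """Deny by default; grant only what the role's duties require."""
    return category in ROLE_NEEDS.get(role, set())

assert may_access("payroll_clerk", "payroll")
assert not may_access("payroll_clerk", "source_code")   # no need to know
assert not may_access("contractor", "payroll")          # unknown role: denied
```

The key design choice is the default: an unknown role or category yields no access, so omissions fail safe rather than open.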
A security group can have predefined access capabilities associated with it. It is used to manage user access to a network or system.
In this way, you can develop a comprehensive security model that addresses the accessibility needs of everyone in an organization. Departmental groups access information based on established needs and predefined access.
Each department may have different access capabilities. In some cases, different roles within a department have different needs. It comes down to an issue of trust, experience, and need.
This figure illustrates the group process. In this example, most individuals are placed into one of two departmental groups. The top user in the picture only has access to accounting applications on the ACCTG server, the middle user has access to both, and the bottom user only has access to the APPS server.
Privilege escalation is a vulnerability in which a user gains, accidentally or intentionally, access to resources not intended for that user. Application flaws can allow a normal user access to administrative functions reserved for privileged accounts, or to features of an application reserved for other users.
Privilege escalation takes advantage of a program’s flawed code, which then crashes the system and leaves it in a state where arbitrary code can be executed or an intruder can function as an administrator.
An example of privilege escalation is logging in to a system using your valid user account and then finding a way to access files that you do not have permission to access. This usually involves invoking a program that can change your account permissions, or invoking a program that runs in an administrative context.
There are several methods of dealing with privilege escalation, including using least privilege accounts and privilege separation. Privilege escalation can lead to denial of service attacks.
AD validating a user
The principle of single sign-on (SSO) is based on granting users access to all the systems, applications and resources they need, when they start a computer session. The SSO capability is usually provided by a directory service, with a digital certificate being used to authenticate the user. Once authenticated, the user is granted access to the appropriate systems and resources.
To reduce user support and authentication complexity, an SSO capability that can grant access to all services is desirable. SSO solutions may employ a central directory service such as Microsoft’s Active Directory or Novell’s eDirectory, or may sequester services behind a series of proxy applications, as in the Service-Oriented Architecture approach.
Single sign-on provides many advantages:
It is an efficient logon method because users only have to remember one password and only need to log on once.
Resources are accessed faster because you do not need to log in for each resource access. It lowers security administration costs because only one account exists for each user.
It lowers setup costs because only one account needs to be created for each user.
It allows the use of stronger passwords.
Kerberos uses single sign-on to grant users access to resources. Other technologies that provide single sign-on authentication are security domains, directory services, and thin clients.
In this instance, the database application, e‑mail client, and printers all authenticate with the same logon. Like Kerberos, this process requires all the applications that want to take advantage of AD to accept AD controls and directives. Access can be established through groups, and it can be enforced through group memberships.
In the case of a highly centralized environment, a single department or person is responsible for making decisions about access that affect the entire organization.
In a decentralized environment, decision making is spread throughout the organization.
It’s important that personnel receive access only to the information they really need.
Establishing a standardized policy or set of policies is important; these policies, and their effects, must be well documented and enforced.
Sometimes individuals may need special access to information that they wouldn’t normally be given. Specialized access should be granted only for the period of time during which they need the access. A separate account with these special privileges is usually the best way to manage these types of situations. When finished, the account can be disabled or deleted.
If an organization has multiple servers, it may not want administrators to have access to all the servers for administrative purposes. As a rule, you should grant administrative access only to specific systems and possibly grant it only at specific times.
Auditing is the process of tracking users and their actions on the network, and ensuring that corporate security policies are carried out consistently.
An audit is used to inspect and test procedures within an organization to verify that those procedures are working and up-to-date. The result of an audit is a report to management.
A periodic security audit of user access and rights review can help determine whether privilege-granting processes are appropriate and whether computer usage and escalation processes are in place and working.
Auditing user privileges is generally a two-step process that involves enabling auditing within the operating system and then specifying the resources to be audited.
Without proper planning and policies, you will quickly fill your log files and hard drives with useless or unused information. The faster the log files fill, the more frequently you need to check them; otherwise, important security events may be overwritten and go unnoticed.
A privilege audit is used to determine that all groups, users, and other accounts have the appropriate privileges assigned according to the policies of an organization.
Privilege audits assist in ensuring that all user accounts, groups, and roles are correctly defined and assigned.
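A privilege audit can be sketched as a simple comparison between the privileges each account actually holds and an approved baseline, reporting any excess grants (the account names and privileges below are illustrative, not from the source):

```python
# Sketch of a privilege audit: compare the privileges actually
# assigned to each account against an approved baseline and
# report any excess grants (hypothetical data, for illustration).

approved = {
    "jsmith": {"read_reports"},
    "mlee":   {"read_reports", "edit_config"},
}
actual = {
    "jsmith": {"read_reports", "edit_config"},   # excess grant
    "mlee":   {"read_reports", "edit_config"},
}

def audit_privileges(approved, actual):
    """Return {account: excess_privileges} for accounts that
    hold privileges not present in the approved baseline."""
    findings = {}
    for account, held in actual.items():
        excess = held - approved.get(account, set())
        if excess:
            findings[account] = excess
    return findings

print(audit_privileges(approved, actual))  # {'jsmith': {'edit_config'}}
```

A real audit would pull the "actual" side from the directory service and the "approved" side from documented policy; the comparison logic stays the same.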
Usage audits assist with ensuring that all computer systems and software are being used appropriately and consistently with organizational policies.
A usage audit may entail physically inspecting systems, verifying software configurations, and conducting other activities intended to prove that resources are being used appropriately.
Escalation audits verify that the organization has the necessary procedures, policies and tools for dealing with emergencies, catastrophes, and other needs for management intervention. This type of auditing can assist in ensuring that an organization's procedures are operating correctly.
Disaster recovery plans, business continuity plans, and other plans are tested and verified for accuracy.
It is important to document the procedures undertaken during the classification of information and who is involved in that process. You must also document who is involved in investigations when it is suspected that something is awry, along with the procedures they follow; this is known as due diligence.
Auditing and Log Files
The security log of Event Viewer contains all security events based on your auditing configuration.
The Event Viewer logs should be backed up on a daily basis.
You should enable the Do not overwrite events option for the Security log. This option configures the log so that when it is full, events are not overwritten; the log must be cleared manually by an administrator.
The Maximum event log size option configures the maximum size of the event log.
When designing an audit policy for your company, follow these steps:
1. Develop the company’s security policy.
2. Plan the audit strategy.
3. Conduct the audit.
4. Evaluate the audit results.
5. Communicate the results and needed changes.
6. Conduct follow up.
To configure the audit, you should enable auditing, configure auditing on the objects, and then review event logs.
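The final step, reviewing the event logs, can be sketched in Python: scan a set of audit records and flag failed access attempts for follow-up (the log format below is hypothetical, for illustration only; a real review would read the Security log from Event Viewer):

```python
# Sketch of the "review event logs" step: scan audit records and
# flag repeated failed access attempts (hypothetical record format).

audit_log = [
    {"user": "jsmith", "object": "payroll.xlsx", "result": "failure"},
    {"user": "mlee",   "object": "budget.xlsx",  "result": "success"},
    {"user": "jsmith", "object": "payroll.xlsx", "result": "failure"},
]

def failed_attempts(log):
    """Count failed access attempts per (user, object) pair."""
    counts = {}
    for entry in log:
        if entry["result"] == "failure":
            key = (entry["user"], entry["object"])
            counts[key] = counts.get(key, 0) + 1
    return counts

print(failed_attempts(audit_log))  # {('jsmith', 'payroll.xlsx'): 2}
```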
Reporting to Management
An audit should always conclude with a report to management. This report should outline any organizational strengths and weaknesses as they existed at the time of the audit. The audit should also explain any violations of policy, recommendations for improvement, and recommendations for the organization overall.
Implementation of access management is based on one of two models: centralized or decentralized.
Both the group-based and role-based methods of access control have a centralized database of accounts and roles or groups to which the accounts are assigned.
Decentralized security management is less secure but more scalable. Responsibilities are delegated and employees at different locations are made responsible for managing privileges within their administrative areas.
The three primary methods of access control are Mandatory (MAC), Discretionary (DAC), and Role-Based (RBAC). A fourth method, Rule-Based Access Control is gaining in popularity.
Each of these methods has advantages and disadvantages to the organization from a security perspective.
Authorization and Access Control (3:59)
Mandatory Access Control
The Mandatory Access Control (MAC) method enforces a rigid, nondiscretionary model of security control.
This method is centrally controlled and managed. All access capabilities are predefined and users have no influence on permissions, nor can they share information that has not been previously established by administrators.
MAC involves the assignment of labels to resources and accounts. Documents and users are assigned to security levels, such as Confidential, Secret, and Top Secret.
If the labels on the account and resource do not match, the resource remains unavailable in a nondiscretionary manner.
A user can read documents at or below the user’s assigned security level, and the user can only write documents at or above the user’s security level.
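The read-down/write-up rule above can be sketched as a simple label comparison (a minimal illustration of MAC-style label checking; the level names come from the text, the function names are assumptions):

```python
# Sketch of MAC label comparison:
# a subject may read at or below its level ("read down") and
# write only at or above its level ("write up").

LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Reading is allowed at or below the subject's level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """Writing is allowed at or above the subject's level."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("Secret", "Confidential"))   # True  (read down)
print(can_read("Secret", "Top Secret"))     # False (no read up)
print(can_write("Secret", "Top Secret"))    # True  (write up)
print(can_write("Secret", "Confidential"))  # False (no write down)
```

Because the labels are assigned centrally and the comparison is fixed, no user can loosen these rules, which is what makes the model nondiscretionary.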
Discretionary Access Control
The Discretionary Access Control (DAC) method allows some flexibility in information-sharing capabilities within the network. It enables users to set permissions when necessary, but within specific guidelines.
In DAC, a subject has complete control over the objects that it owns. The owner assigns security levels based on objects and subjects.
An access control list (ACL) is used. An ACL for a document indicates the users and groups that are granted or denied access to a file or resource.
While the DAC method allows greater flexibility than the MAC method, the risk of unauthorized disclosure is increased.
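A DAC-style ACL check can be sketched as follows (the resource, accounts, and permission names are illustrative; real systems such as NTFS store ACLs with the resource itself):

```python
# Sketch of a DAC access control list: the owner of a document
# grants or denies specific accounts access (illustrative data).

acl = {
    "report.docx": {
        "allow": {"jsmith": {"read", "write"}, "mlee": {"read"}},
        "deny":  {"guest"},
    }
}

def check_access(resource, user, permission):
    """Deny entries win; otherwise look up the allow list."""
    entry = acl.get(resource, {})
    if user in entry.get("deny", set()):
        return False
    return permission in entry.get("allow", {}).get(user, set())

print(check_access("report.docx", "mlee", "read"))   # True
print(check_access("report.docx", "mlee", "write"))  # False
print(check_access("report.docx", "guest", "read"))  # False
```

The flexibility, and the risk, comes from the fact that the resource owner edits this list directly rather than a central administrator.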
Role-Based Access Control
In role-based access control (RBAC), access is managed based on job function or responsibility. Each employee has one or more roles that allow access to specific information.
Access rights are first assigned to roles, and accounts are then assigned those roles.
This type of access is typically implemented with groups, so that each group member account inherits the role’s permissions.
If a person moves from one role to another, the access for the previous role will no longer be available.
Many systems offer a hybrid of DAC and RBAC. In some cases, the operating system might use DAC, whereas applications such as SQL Server use roles to determine access permission to data in tables and the database itself.
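The role-to-permission indirection can be sketched briefly (role and permission names below are illustrative): permissions attach to roles, accounts attach to roles, and changing a user's role automatically revokes the access tied to the old one.

```python
# Sketch of RBAC: permissions are attached to roles, and accounts
# are assigned roles; moving a user to a new role automatically
# revokes access tied to the old one (illustrative names).

role_permissions = {
    "accountant": {"view_ledger", "post_entries"},
    "auditor":    {"view_ledger"},
}
user_roles = {"jsmith": {"accountant"}}

def permitted(user, permission):
    """A user holds a permission if any assigned role grants it."""
    return any(permission in role_permissions.get(r, set())
               for r in user_roles.get(user, set()))

print(permitted("jsmith", "post_entries"))  # True
user_roles["jsmith"] = {"auditor"}          # role change
print(permitted("jsmith", "post_entries"))  # False
```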
Rule-Based Access Control
Rule-based access control uses the settings in pre-configured security policies to make all decisions.
The rules defined usually include connection times and days.
Access rights may vary by account, by time of day, or through other forms of conditional testing.
The most common form of rule-based access control involves testing against an access control list (ACL) that details systems and accounts with access rights and the limits of their access to the resource. This type of access control is used by remote access connections.
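Combining an ACL lookup with a conditional test such as connection time can be sketched as follows (the resource name, users, and hours are illustrative; real systems read such rules from preconfigured security policies):

```python
# Sketch of rule-based access control: the decision combines an
# ACL lookup with a conditional test, here the hour of the
# connection attempt (illustrative data).

rules = {
    "vpn": {"allowed_users": {"jsmith", "mlee"},
            "allowed_hours": range(8, 18)},   # 08:00-17:59
}

def allowed(resource, user, hour):
    """Grant access only if both the account and the time match."""
    rule = rules.get(resource)
    if rule is None:
        return False
    return user in rule["allowed_users"] and hour in rule["allowed_hours"]

print(allowed("vpn", "jsmith", 9))   # True
print(allowed("vpn", "jsmith", 22))  # False (outside connection hours)
print(allowed("vpn", "guest", 9))    # False (not in the ACL)
```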