
Chapter 11 Advanced NOS Administration


  1. Chapter 11 Advanced NOS Administration 11.1 Backups 11.2 Drive Mapping 11.3 Partition and Processes Management 11.4 Monitoring Resources 11.5 Analyzing and Optimizing Network Performance

  2. Backups

  3. Overview of Backup Methods • The backup process involves copying data from one computer to another reliable storage medium for safekeeping. • Once the data has been archived, the system administrator can restore data to the system from any previously recorded backup. • Considerations that are relevant when choosing a storage device: • Cost • Size • Manageability • Reliability

  4. Overview of Backup Methods • There are four types of backup procedures that define how the backup will take place: • Full - backs up everything on the hard drive at the scheduled time • Partial - backs up selected files only • Incremental - backs up only the files that have changed since the last backup • Differential - backs up files created or changed since the last normal or incremental backup

  5. Installing Backup Software • With the release of Windows 2000, the backup utility that existed in Windows NT was greatly enhanced. • The enhanced Backup utility replaces RDISK and is now used to back up the Registry. • It is no longer confined to tape as the output medium and can write to a number of other media as well.

  6. Installing Backup Software • The Linux OS includes two utilities that can perform backups: tar and cpio. • The tar utility (tape archiver) combines multiple files into a single archive file that can be copied to the backup media. • The cpio utility (copy in/out) copies files into and out of archives. • It can perform three basic actions, one of which must be specified: • -i To extract from an archive • -o To make a new archive • -p To pass files through (copy them directly to another directory tree)
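
A minimal sketch of both utilities, assuming the first SCSI tape drive appears as /dev/st0 and that /home is the directory being backed up:

tar -cvf /dev/st0 /home    # create (c) a verbose (v) archive on the tape device (f)
tar -tvf /dev/st0          # list the contents of the tape to verify the backup

find /home -depth -print | cpio -o > /tmp/home.cpio    # -o: make a new archive from the file list on stdin
cpio -i < /tmp/home.cpio                               # -i: extract the files from the archive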

  7. Backup Hardware • The most common backup hardware device, both in the past and today, is some form of magnetic tape drive. • Tape devices are known for their long-lasting performance, due in part to the robust drive mechanics that some systems include. • There are a variety of tape devices that use different tape formats for storing data. • Many tape drives can also compress the data before it is stored on the tape. • In most cases the compression ratio is 2:1.

  8. Backup Hardware • Quarter Inch Cartridge (QIC, pronounced “quick”) is a tape standard. • Travan tape drives have a higher storage capacity than the older QIC tape drives. The most recent Travan standards added hardware compression.

  9. Backup Hardware • 8mm tape technology uses a tape similar to 8mm videotape and the same helical scan system used by a VCR. • Mammoth 8mm tape technologies are an improvement on the original 8mm tape technologies, with higher storage capacities and faster transfer speeds. • AIT technology uses 8mm tapes with the same helical scan recording hardware, much like a VCR, and includes memory in the tape cartridge, known as Memory-In-Cassette (MIC).

  10. Backup Hardware • The Digital Audio Tape (DAT) standard uses 4mm digital audiotapes to store data in the Digital Data Storage (DDS) format. • Digital Linear Tape (DLT) technology offers high capacity and relatively high-speed tape backup capabilities. DLT tapes record information on the tape in a linear format. • Linear Tape-Open (LTO) technology comes in two distinct forms. • One is designed for high storage capacity (Ultrium), and the other is designed for fast access (Accelis).

  11. Backup Strategies • There are several accepted backup strategies, and many of them are based on the popular Grandfather-Father-Son (also known as Child-Parent-Grandparent) method. • This backup strategy uses three sets of tapes for daily, weekly, and monthly backups.

  12. Automating Backups • System backups should be performed at regular intervals to provide consistent data protection. • Automation of such regularly scheduled events is an important backup consideration. • Automation not only increases backup consistency but also gives system administrators more time to address other important network issues. • The most common automated backup procedure consists of a system administrator inserting a tape into the drive at the end of the day. • The system performs the regularly scheduled nightly backup of data onto the tape. • The system administrator ejects that tape the next day and stores it for a predetermined period of time.
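
As an illustration, the nightly backup can be automated with a single line in root's crontab (added with crontab -e); the tape device /dev/st0 and the /home directory are assumptions:

# minute hour day month weekday   command
30 23 * * *   tar -cf /dev/st0 /home    # write /home to tape at 23:30 every night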

  13. Drive Mapping

  14. What is Drive Mapping? • Drive mapping is a useful tool that allows an administrator to share resources that are stored on a server. • Client computers connected to the network assign a drive letter that acts as a direct path for accessing those resources over the network. • After a user identifies a network resource to be used locally, the resource can be "mapped" as a drive.

  15. Mapping Drives in Windows Networks • To map a drive with Windows Explorer, navigate to the folder on the remote system by selecting Network Neighborhood > Server name > Shared folder name. • Another way to do this is to choose the Tools menu, and then choose Map Network Drive. • The net use command can be used instead of mapping drives through Windows Explorer. • net use can also be incorporated into a login script that automatically runs when the user logs in to the network.
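
For example, with a hypothetical server and share name:

net use Z: \\servername\sharedfolder    # map drive Z: to the shared folder
net use Z: /delete                      # remove the mapping when it is no longer needed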

  16. Mapping Drives in Linux Networks • A client computer running Linux must be mapped in a slightly different way. • Use the mount command to establish a connection to the shared directory on the server. • The following syntax will map a drive to a Linux/UNIX share: mount servername:/directory/subdirectory /localdirectory • The local directory (the second part of the command), which points to the remote share named in the first part, is called the directory mount point. • The mount point location must already exist before a share can be mapped to it.
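
A usage sketch with hypothetical names, where the server fileserver exports /export/projects:

mkdir -p /mnt/projects                             # the mount point must exist first
mount fileserver:/export/projects /mnt/projects    # map the remote share to the local directory
umount /mnt/projects                               # disconnect the share when finished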

  17. Mapping Drives in Novell Networks • Mapping a drive to a share on a NetWare server can be done using Windows Explorer. • If the NetWare client machines are running Windows 95, 98, NT, 2000, or XP, follow the same process for mapping drives on a Microsoft network. • A drive can be mapped at the command line by using the map command. • This map command can also be put in a login script that executes automatically as the user logs into the network.
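
A sketch of the map command, assuming a server named FS1 with the standard SYS volume:

MAP G:=FS1/SYS:PUBLIC    # map drive G: to the PUBLIC directory on the SYS volume
MAP DEL G:               # delete the drive mapping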

  18. Partition and Processes Management

  19. Using fdisk, mkfs, and fsck • fdisk is a text-based utility and requires the use of one-letter commands to manipulate the options. • Type m or ? at the fdisk prompt to obtain a list of the commands that can be used. • Once the partition changes have been made, a filesystem must be created on the partition. • This is also referred to as formatting the partition. • Use the mkfs utility to create a filesystem in Linux.
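
A sketch of the sequence, assuming the second IDE disk (/dev/hdb) is being partitioned and formatted:

fdisk /dev/hdb           # n creates a new partition, p prints the partition table, w writes the changes
mkfs -t ext2 /dev/hdb1   # create (format) an ext2 filesystem on the new partition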

  20. Using fdisk, mkfs, and fsck • The fsck utility is used to check file systems for errors, which occur more frequently than the need to add, remove, or format partitions. • It is a good idea to use this utility often to check for file system integrity.
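
For example, to check the partition created above (the filesystem should be unmounted first):

umount /dev/hdb1    # never run fsck on a mounted filesystem
fsck /dev/hdb1      # check the filesystem and prompt before repairing any errors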

  21. Managing System Processes with cron Jobs • The way to schedule tasks to run at regular intervals on a Linux system is with cron programs. • Also known as cron jobs, they schedule system maintenance tasks that are performed automatically. • System cron jobs are controlled via the /etc/crontab file. • The file begins with a set of environmental variables, which set certain parameters for the cron jobs such as the PATH and MAILTO. • The other lines in this file specify the minute, hour, date, month, and day of the week the job will run.
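
A sketch of /etc/crontab; the maintenance script name is hypothetical:

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root

# minute hour day-of-month month day-of-week user command
0 2 * * 0 root /usr/local/bin/weekly-maintenance.sh    # run at 02:00 every Sunday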

  22. Core Dumps • A core dump is a recording of the memory that a program was using at the time it crashed. • The purpose of core dumps is to allow programmers to study the file to figure out exactly what caused the program to crash.
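
A sketch of how a core dump is typically enabled and examined; the program name is hypothetical:

ulimit -c unlimited     # allow processes started from this shell to write core files of any size
./myprogram             # when the program crashes, a file named core is written to the working directory
gdb ./myprogram core    # load the binary and the core file into the debugger for analysis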

  23. Viewing Running Processes • The processes that are currently running on a Linux system can be viewed by using the ps command. • The ps command has a variety of options that can be used to manipulate its output. • These options can be combined to display specific output. • A command such as ps -a --forest can generate considerable output.
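
Some common forms of the command:

ps -a             # processes associated with a terminal, for all users
ps aux            # every process on the system, with owner, CPU, and memory columns
ps -a --forest    # display parent/child relationships as a tree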

  24. Viewing Running Processes • The top command functions much like the Windows 2000 Performance tool by providing detailed information regarding CPU and RAM usage. • Sometimes a process will cause the system to lock up. • The kill command can be used to terminate the process. • The signal option specifies the signal that is sent to the process. • There are 63 different signals that can be sent to a process.
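
For example, assuming the runaway process has PID 1234:

top              # identify the process consuming CPU or memory
kill -15 1234    # send SIGTERM to ask the process to exit cleanly
kill -9 1234     # send SIGKILL if the process ignores SIGTERM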

  25. Assigning Permissions for Processes • Typically, a program has the same types of permission and can read the same files as the user who runs it. • There are certain programs that require additional permissions to do their work. • For example, the su command needs root account privileges to switch user identities, privileges that regular users do not have. • Programs such as these are installed with the SUID or SGID bit set, which allows them to be run under the permissions of another user (typically root).
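
The su binary illustrates this; the chmod line shows how the SUID bit would be set on a hypothetical program:

ls -l /bin/su                      # -rwsr-xr-x ... root ... the 's' in the owner field is the SUID bit
chmod u+s /usr/local/bin/myprog    # set the SUID bit so the program runs with its owner's permissions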

  26. Monitoring Resources

  27. Disk Management • By regularly using error-checking and defragmentation programs and continually managing free disk space, the system administrator can maintain healthy hard drives. • One preventive disk management tool available to system administrators is the use of "quotas" for user accounts. • A quota acts as a storage ceiling that limits the amount of data each user can store on the network.
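
A sketch using the standard Linux quota tools, assuming quota support is enabled on the filesystem; the username is hypothetical:

edquota -u jsmith    # edit the soft and hard limits for user jsmith
repquota -a          # report current usage and limits on all quota-enabled filesystems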

  28. Memory Usage • Memory diagnostic tools that allow RAM intensive applications to be discovered, and stopped if necessary, are typically built into most NOS platforms. • System administrators can compensate for the lack of physical memory through the use of "virtual memory". • Virtual memory allocates space on the hard drive (swap partition or file) and treats it as an extension of the system RAM.
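
Two commands that show how physical and virtual memory are being used on a Linux system:

free -m      # physical RAM and swap usage, in megabytes
swapon -s    # list the swap partitions and swap files currently in use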

  29. CPU Usage • All information used by the NOS, including the code of the NOS itself, is processed by the CPU, which executes millions of instructions per second. • Built-in tools are commonly provided to allow system administrators to monitor the current level of CPU activity. • This feedback is often presented as the percentage of the CPU currently being used and is refreshed at frequent intervals.

  30. Reviewing Daily Logs • Most computer programs, servers, and login processes, as well as the system kernel, record summaries of their activities in log files. • These summaries can be reviewed for various purposes, including spotting software that might be malfunctioning or attempts to break into the system. • In Windows 2000, the Computer Management tool allows users to browse the logged events generated by the NOS.

  31. Reviewing Daily Logs • Linux uses log daemons to control the events that are entered in the system log. • Most of the Linux system’s log files are located in the /var/log directory. • The log files that are located in this directory are maintained by the system log daemon (syslogd) and the kernel log daemon (klogd). • These two daemons are configured using the syslog.conf file.
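
A sketch of a typical syslog.conf rule, followed by a quick way to watch the resulting log:

# /etc/syslog.conf: log informational and higher messages, except mail and authpriv, to one file
*.info;mail.none;authpriv.none    /var/log/messages

tail -f /var/log/messages    # follow new log entries as they are written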

  32. Checking Resource Usage on Windows 2000 and Windows XP • System resources are monitored in Windows 2000 and Windows XP with the Performance tool. • This application is found under the Start Menu > Programs > Administrative Tools > Performance menu option. • Users can then right-click on the graph and select Add Counters to specify which system resources to monitor in the graph.

  33. Checking Resource Usage on Linux • The df command is used to display the amount of disk space currently available to the various filesystems on the machine. • When a directory name is specified, the du command returns the disk usage for both the contents of the directory and the contents of any subdirectories beneath it. • The top command functions much like the Windows 2000 Performance tool by providing detailed information regarding CPU and RAM usage.
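
Typical invocations:

df -h             # free and used space on each mounted filesystem, human-readable
du -sh /home/*    # summarized usage for each directory under /home
top               # live CPU and memory usage per process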

  34. Analyzing and Optimizing Network Performance

  35. Key Concepts in Analyzing and Optimizing Network Performance • The network administrator should make time to devise a proactive plan for managing the network. • This plan enables the detection of small problems before they become large ones. • The three key concepts in analyzing and optimizing network performance include: • Bottlenecks • Baselines • Best practices

  36. Bottleneck • A bottleneck is the point in the system that limits the data throughput, which is the amount of data that can flow through the network. • The primary performance-monitoring tool for Microsoft’s Windows 2000 Server is called Performance. • Performance can monitor nearly all hardware and software components on a Windows 2000 server.

  37. Bottleneck • The various versions of the UNIX/Linux operating systems have command-line utilities that can be used to monitor performance of the UNIX/Linux network server. • The primary tools are sar, vmstat, iostat, and ps. • The flags used by these commands can vary among the different versions of UNIX/Linux. • Use the UNIX/Linux man command to get specifics about the use of these commands. • The information displayed by the man command also tells how to interpret the output generated by the command.
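
Typical invocations (flags may differ between UNIX versions; on Linux, sar usually requires the sysstat package):

vmstat 5 5    # memory, swap, and CPU statistics: 5 samples at 5-second intervals
iostat 5 3    # disk I/O statistics: 3 samples at 5-second intervals
sar -u 5 5    # CPU utilization samples
man sar       # full description of the flags and output fields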

  38. Baselines • Determine how efficiently a network is performing by comparing various measurements to the same measurements taken at an earlier time. • This point of comparison is called a baseline, which is the level of performance that is acceptable when the system is handling a typical workload. • The baseline measurements should include statistics for the processor, memory, disk subsystem, and network queue length.

  39. Determining Internet Connection Speed • The speed of a connection is limited by its lowest-speed component, the bottleneck. • This means that even if the equipment is capable of a 50-kbps connection, the connection will run at the slower speed if the remote modem supports only 33.6 kbps.

  40. Network Monitoring Software • The network monitor that comes with Windows NT and Windows 2000 is a functional and useful tool for performing routine protocol analysis. • Network Monitor can be used to display the individual frames of captured data. • A typical capture includes packets for several different protocols, such as TCP, UDP, and SMB.

  41. Network Monitoring Software • The Sniffer products enable sophisticated filtering based on pattern matches, IP/IPX addresses, and so on. • Sniffer Pro includes a traffic generator to assist in testing new devices or applications. • It can be used to simulate network traffic or to measure response times and hop counts. • Sniffer uses a dashboard-style interface.

  42. Network Management Software • Microsoft SMS is a high-end network management package that provides hardware and software inventory by installing the client agent on target computers. • One of the most useful features of SMS is its software distribution feature. • The distribution package contains the information used by SMS to coordinate the distribution of the software.

  43. Network Management Software • ManageWise includes an alarm/notification feature, NetWare LANalyzer agent, the management agent, Intel LANDesk Manager, Desktop Manager, and LANDesk virus protection. • Tivoli is capable of providing a complete view of the network topology. • Reporting tools enable customization of the view in which the data is presented, and "smart sets" can be created that group data logically.

  44. Management Software for Small and Medium-Sized Networks • SNMP is a protocol that is included in most implementations of TCP/IP. It has several advantages: • Simplicity, low cost, relative ease of implementation, low overhead on the network, and support by most network hardware devices • It uses a hierarchical database called a Management Information Base (MIB) to organize the information it gathers about the network.
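
A sketch using the net-snmp command-line tools (an assumption; the device address and community string are placeholders):

snmpwalk -v 2c -c public 192.168.1.1 system        # walk the system subtree of the device's MIB
snmpget -v 2c -c public 192.168.1.1 sysUpTime.0    # read a single value from the MIB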

  45. Management Software for Small and Medium-Sized Networks • Common Management Information Protocol (CMIP) was designed to improve on SNMP and expand its functionality. • It works in much the same way as SNMP, but it has better security features. • Also, it enables notification when specified events occur. • Since the overhead for CMIP is considerably greater than that which is required for SNMP, it is less widely implemented. • CMIP is based on the OSI protocol suite.

  46. Management Service Provider (MSP) • A new development in network management is the Management Service Provider (MSP). • A company subscribes to an MSP service, which provides performance monitoring and network management. • This saves the organization the cost of buying, installing, and learning to use monitoring and management software.

  47. SNMP Concepts and Components • There are two main parts to SNMP: • Management side – The management station is the centralized location from which users can manage SNMP. • Agent – The agent station is the piece of equipment from which users are trying to extract data. • At least one management system is needed to use the SNMP Service. • Certain commands can be issued specifically at the management system.

  48. SNMP Concepts and Components • The SNMP agent is responsible for complying with the requests and responding to the SNMP manager accordingly. • Generally, the agent is a router, server, or hub. • The agent is usually a passive component only responding to a direct query. • In one particular instance, however, the agent is the initiator, acting on its own without a direct query. • This special case is called a trap. • A trap is set up from the management side on the agent.
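
A sketch of how an agent might be configured, assuming the net-snmp agent and a hypothetical management-station name:

# /etc/snmp/snmpd.conf
rocommunity public                          # allow read-only queries using the 'public' community
trap2sink mgmtstation.example.com public    # send SNMPv2 traps to the management station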

  49. SNMP Structure and Functions • The data that the management system requests from an agent is contained in a Management Information Base (MIB). • The namespace for MIB objects is hierarchical. • It is structured in this manner so that each manageable object can be assigned a globally unique name. • After the SNMP agent is installed, it must be configured with an SNMP community name. • The default SNMP community name is public.

  50. SNMP Structure and Functions • Key points about how SNMP operates: • A community is a group of hosts running the SNMP Service to which they all belong. • There is no real security established with SNMP, and the data is not encrypted. • Notification is possible using SNMP, and the details vary from operating system to operating system.
