

  1. Chapter 7 Switches & VLANs

  2. Learning Objectives • Explain the features and benefits of Fast Ethernet • Describe the guidelines and distance limitations of Fast Ethernet • Define full- and half-duplex Ethernet operation • Distinguish between cut-through and store-and-forward LAN switching • Define the operation of the Spanning Tree Protocol and its benefits • Describe the benefits of virtual LANs

  3. Chapter Overview • In this chapter, you will revisit some of the concepts surrounding Ethernet operations. • Specifically, you will learn about Ethernet performance and methods for improving it. • Standard and Fast Ethernet will be part of this discussion, as will half- and full-duplex Ethernet operations. • The concepts central to LAN switching (such as switch operations, forwarding techniques, and VLANs) will also be explained.

  4. Ethernet Operations • Ethernet is a network access method. • It is described by IEEE 802.3. • Ethernet is the most pervasive LAN technology in use and continues to be the most commonly implemented media access method in new LANs. • Many companies and individuals are continually working to improve the performance and increase the capabilities of Ethernet technology.

  5. CSMA/CD • Ethernet uses Carrier Sense Multiple Access with Collision Detection (CSMA/CD) as its contention method. • Any station connected to the network can transmit any time that there is not already a transmission on the wire. • After each transmitted signal, each station must wait a minimum of 9.6 microseconds before transmitting another packet. • This is called the interframe gap or interpacket gap (IPG).
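As a side note, the 9.6-microsecond figure is the interframe gap on 10 Mbps Ethernet; the underlying IEEE 802.3 definition is 96 bit times, so the gap shrinks as the data rate rises. A minimal sketch of that arithmetic (the function name is illustrative):

```python
# Sketch: the interframe gap is defined as 96 bit times, so its duration
# in microseconds depends on the data rate of the Ethernet network.
IFG_BIT_TIMES = 96

def interframe_gap_us(data_rate_mbps: float) -> float:
    """Interframe gap in microseconds for a given Ethernet data rate."""
    bit_time_us = 1.0 / data_rate_mbps   # one bit time, in microseconds
    return IFG_BIT_TIMES * bit_time_us

print(interframe_gap_us(10))    # 9.6  -> 10 Mbps Ethernet
print(interframe_gap_us(100))   # 0.96 -> Fast Ethernet
```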

  6. Collisions • Two stations could listen to the wire simultaneously and not sense a carrier signal. • In such a case, both stations might begin to transmit their data simultaneously. • Shortly after the simultaneous transmissions, a collision would occur on the network wire. • The stations would detect the collision as their transmitted signals collided with one another.

  7. Collisions Continued • Once a collision is detected, the sending stations transmit a 32-bit jam signal that tells all other stations not to transmit for a brief period (9.6 microseconds or slightly more). • The jam signal enforces the collision so that all stations on the wire detect it. • After the jam signal is transmitted, the two stations that caused the collision use an algorithm to enter a backoff period, which causes them not to transmit for a random interval. • The backoff period is an attempt to ensure that those two stations do not immediately cause another collision.
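A minimal sketch of the backoff idea described above, using the truncated binary exponential backoff that standard Ethernet employs (the helper name and the example calls are illustrative):

```python
import random

MAX_ATTEMPTS = 16  # a station gives up on a frame after 16 failed attempts

def backoff_slots(collision_count: int) -> int:
    """Random backoff delay, in slot times, after the nth consecutive collision.

    The range of possible delays doubles with each collision (capped at 10
    doublings), so repeated collisions spread retransmissions further apart.
    """
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1)

print(backoff_slots(1))   # 0 or 1 slot times after the first collision
print(backoff_slots(3))   # 0-7 slot times after the third collision
```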

  8. Collision Domain • A collision domain is the physical area in which a packet collision might occur. • This concept is related to network segmentation, which is essentially the division of collision domains. • Repeaters do not segment the network and therefore do not divide collision domains. • Routers, switches, bridges, and gateways do segment networks and thus create separate collision domains.

  9. Collision Domain Continued • If a station transmits at the same time another station in the same collision domain transmits, there will be a collision. • The 32-bit jam signal that is transmitted when the collision is discovered prevents all stations on that collision domain from transmitting. • If the network is segmented, the collision domain is also divided, and the 32-bit jam signal will only affect those stations that operate within that collision domain. • Stations that operate within remote segments are not subject to the collisions or frame errors that occur on the local segment.

  10. Latency • The time that a signal takes to travel from one point to another point on the network affects the performance of the network. • Latency, or propagation delay, is the length of time that is required to forward, send, or otherwise propagate a data frame. • Latency differs depending on the resistance offered by the transmission medium and, in the case of a connectivity device, the amount of processing that must be done on the packet. • For example, sending a packet across a copper wire does not introduce as much latency as sending a packet across an Ethernet switch.

  11. Latency Continued • The time that it takes a packet sent from one host to be received by another host is called the transmission time. • The latency of the devices and media between the two hosts affects the transmission time; the more processing a device must perform on a data packet, the higher the latency. • The maximum propagation delay for an electronic signal to traverse a 100-meter section of Category 5 unshielded twisted-pair (UTP) or shielded twisted-pair (STP) cable is 111.2 bit times. • A bit time is the time to transmit one data bit on the network, which is 100 nanoseconds on a 10 Mbps Ethernet network and 10 nanoseconds on a 100 Mbps Ethernet network.
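To make the bit-time arithmetic concrete, here is a small sketch using the figures quoted above (the function names are my own):

```python
def bit_time_ns(data_rate_mbps: float) -> float:
    """One bit time in nanoseconds: 100 ns at 10 Mbps, 10 ns at 100 Mbps."""
    return 1000.0 / data_rate_mbps

def delay_in_bit_times(delay_ns: float, data_rate_mbps: float) -> float:
    """Express a propagation delay (in nanoseconds) as a number of bit times."""
    return delay_ns / bit_time_ns(data_rate_mbps)

print(bit_time_ns(10), bit_time_ns(100))   # 100.0 ns and 10.0 ns
# 111.2 bit times on a 100 Mbps network corresponds to about 1112 ns of delay
# over 100 meters of Category 5 cable.
print(delay_in_bit_times(1112, 100))       # ~111.2
```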

  12. Maximum Propagation Delays • Table 7.1 illustrates the maximum propagation delays for various media and devices on an Ethernet network.

  13. Bit Times and Slot Time • Slot time (512 bit times) is an important specification because it limits the physical size of each Ethernet collision domain. • Slot time specifies that all collisions should be detected from anywhere in the network in less time than is required to place a 64-byte frame on the network. • Slot time is the reason the IEEE created the 5-4-3 rule, which limits collision domains to 5 segments, 4 repeaters, and 3 populated segments between any two stations. • If a station at one end of the Ethernet network didn't receive the jam signal before transmitting a frame on the network, another collision could occur as soon as the jam signal and newly transmitted frame crossed paths.
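A quick sketch of the 5-4-3 rule as a simple check (illustrative only; a real design review also has to account for repeater classes and cable lengths):

```python
def obeys_5_4_3_rule(segments: int, repeaters: int, populated_segments: int) -> bool:
    """Check a path between two stations against the 5-4-3 rule:
    at most 5 segments, 4 repeaters, and 3 populated segments."""
    return segments <= 5 and repeaters <= 4 and populated_segments <= 3

print(obeys_5_4_3_rule(5, 4, 3))   # True  - right at the limit
print(obeys_5_4_3_rule(6, 5, 3))   # False - too many segments and repeaters
```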

  14. Ethernet Errors • Different errors and different causes for errors exist on Ethernet networks. • Most errors are caused by defective or incorrectly configured equipment. • Errors impede the performance of the network and the transmission of useful data. • The next slides describe several Ethernet packet errors and their potential causes.

  15. Frame Size Errors • An Ethernet packet sent between two stations should be between 64 bytes and 1518 bytes. Frames that are shorter or longer than that are considered errors: • Short frame or runt: A frame that is shorter than 64 bytes; caused by a collision, a faulty network adapter, corrupt NIC software drivers, or a repeater fault. • Long frame: A frame that is larger than 1518 bytes but under 6000 bytes; caused by a collision, a faulty network adapter, an illegal hardware configuration, a transceiver or cable fault, a termination problem, corrupt NIC software drivers, a repeater fault, or noise. • Giant: An error similar to the long frame, except that its size exceeds 6000 bytes; the causes are the same as for a long frame. • Jabber: Another classification for giant or long frames; a frame longer than Ethernet standards allow, with an incorrect FCS.
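The size thresholds above can be collected into a small classification helper (a sketch; the labels and cut-offs are taken from the slide):

```python
def classify_frame(length_bytes: int) -> str:
    """Classify an Ethernet frame by its length in bytes."""
    if length_bytes < 64:
        return "runt (short frame)"
    if length_bytes <= 1518:
        return "valid"
    if length_bytes < 6000:
        return "long frame"
    return "giant"

print(classify_frame(32))     # runt (short frame)
print(classify_frame(1518))   # valid
print(classify_frame(2000))   # long frame
print(classify_frame(9000))   # giant
```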

  16. Frame Check Sequence Errors • An FCS error, which indicates that bits of the frame were corrupted during transmission, can be caused by any of the previously listed errors. • An FCS error is detected when the calculation at the end of the packet doesn't agree with the number and sequence of bits in the frame, which means there was some type of bit loss or corruption. • An FCS error can be present even if the packet is within the accepted size parameters for Ethernet transmission. • A frame with an FCS error and a missing octet is called an alignment error.
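The FCS is a CRC-32 computed over the frame's contents. A minimal sketch of the verification idea, assuming the standard Ethernet CRC-32 polynomial (which Python's zlib.crc32 implements) and a frame whose last four bytes carry the FCS in little-endian byte order:

```python
import zlib

def fcs_is_valid(frame: bytes) -> bool:
    """Recompute the CRC-32 over the frame body and compare it with the
    four-byte FCS carried at the end of the frame."""
    body, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(body) == int.from_bytes(fcs, "little")

# Example: build a minimal 64-byte frame (60 bytes of body + 4-byte FCS),
# then corrupt one body byte and watch the check fail.
body = bytes(60)
frame = body + zlib.crc32(body).to_bytes(4, "little")
print(fcs_is_valid(frame))                 # True
print(fcs_is_valid(b"\x01" + frame[1:]))   # False - first byte corrupted
```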

  17. Collision Errors • Network administrators should expect collisions to occur on an Ethernet network. • Most administrators consider collision rates above 5% to be too high. • The more devices on a collision domain, the higher the chance that there will be a significant number of collisions. • Reducing the number of devices per collision domain will usually solve the problem. • Reduce the number of devices per collision domain by segmenting the network with a router, a bridge, or a switch.

  18. NIC Errors • A transmitting station will attempt to send its packet 16 times before discarding it as a NIC error. • A network with a high rate of collisions, which prompts multiple retransmissions, may also have a high rate of NIC errors. • Replacing bad NICs is the solution for errors caused by bad NICs.

  19. Late Collision Errors • Another Ethernet error related to collisions is called a late collision. • A late collision occurs when two stations transmit their entire frames without detecting a collision. • This can occur when there are too many repeaters on the network or when the network cabling is too long. • A late collision means that the slot time of 512 bit times (64 bytes) has been exceeded. • A station can distinguish between a late and a normal collision because a late collision occurs after the first 64 bytes of the frame have been transmitted.

  20. Late Collision Solution • The solution for eliminating late collisions is to determine which part of the Ethernet configuration violates design standards. • As previously mentioned, this usually involves too many repeaters or populated segments, or excessive cable lengths. • Occasionally, a network device malfunction could cause late collisions. • When such problems are located, the device must be replaced.

  21. Broadcasts • Broadcasting is necessary to carry out normal network tasks such as IP address to MAC address resolution. • When there is too much broadcast traffic on a segment, utilization increases and network performance suffers. • Slower file transfers, e-mail access delays, and slower Web access can be the result when broadcast traffic is above 10% of the available network bandwidth. • Reducing the number of services that servers provide on your network and limiting the number of protocols in use on your network will mitigate performance problems.

  22. Broadcasts Continued • Limiting the number of services will help because each computer that provides a service, such as file sharing, broadcasts its service at a periodic interval over each protocol it has configured. • Limiting the number of protocols in use on stations that share files can reduce the amount of broadcast traffic on the network because, typically, each service is broadcast for each protocol configured. • Many operating systems will allow you to selectively bind the service to only a specific protocol.

  23. Broadcast Storms • If a broadcast from one computer causes multiple stations to respond with additional broadcast traffic, it could result in a broadcast storm. • Broadcast storms will slow down or completely stop network communications because no other traffic can be transmitted on the network. • A broadcast storm occurs on an Ethernet collision domain when there are 126 or more broadcast packets per second. • Software faults with network card drivers or computer operating systems are the typical causes of broadcast storms. • You can use a protocol analyzer to locate the device causing the broadcast storm.
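A sketch of how a monitoring script might apply the 126-broadcasts-per-second figure mentioned above (the counting interface is hypothetical; real numbers would come from a protocol analyzer or switch counters):

```python
BROADCAST_STORM_THRESHOLD = 126   # broadcast packets per second, per the slide

def is_broadcast_storm(broadcast_count: int, interval_seconds: float) -> bool:
    """Flag a broadcast storm when the broadcast rate meets or exceeds the threshold."""
    return broadcast_count / interval_seconds >= BROADCAST_STORM_THRESHOLD

print(is_broadcast_storm(700, 5))   # True  - 140 broadcasts per second
print(is_broadcast_storm(300, 5))   # False - 60 broadcasts per second
```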

  24. Half-Duplex Communications • In half-duplex communications, devices can send and receive signals, but not simultaneously. • In half-duplex Ethernet communications, when a twisted-pair NIC sends a transmission, the card loops back that transmission from its transmit wire pair onto its receive pair. • The transmission is also sent out of the card. • It travels along the network through the hub to all other stations on the collision domain, as shown on the next slide. • Half-duplex NICs cannot transmit and receive simultaneously, so all stations on the collision domain will listen to the transmission before sending another.

  25. Half-Duplex Example

  26. Full-Duplex Communications • In full-duplex communications, devices can send and receive signals simultaneously. Full-duplex communications use one set of wires to send and a separate set to receive. • 10Base-T, 10Base-F, 100Base-FX, and 100Base-TX Ethernet networks can utilize equipment that supports half- and full-duplex communications. • Since full-duplex network devices conduct the transmit and receive functions on different wire pairs and do not loop back transmissions as they are sent, collisions cannot occur in full-duplex Ethernet communications. • Full-duplex effectively doubles the throughput between devices because there are two separate communication paths.

  27. Full-Duplex Continued • 10BaseT full-duplex network cards can transfer data at an effective rate of 20 Mbps, double that of half-duplex 10BaseT cards. • The benefits of using full-duplex are listed below: • Time is not wasted retransmitting frames, because there are no collisions. • The full bandwidth is available in both directions because the send and receive functions are separate. • Stations do not have to wait until other stations complete their transmissions, because there is only one transmitter for each twisted pair.

  28. Fast Ethernet • When a 10BaseT network is experiencing congestion, upgrading to Fast Ethernet can reduce congestion considerably. • Fast Ethernet uses the same network access method as common 10BaseT Ethernet, but provides ten times the data transmission rate (100 Mbps). • Frames can be transmitted in 90% less time with Fast Ethernet than with standard Ethernet. • All network cards, hubs, and other connectivity devices that are expected to operate at 100 Mbps must be upgraded. • If the 10BaseT network is using Category 5 or higher cable, however, that cable can still be used for Fast Ethernet operations.
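The "90% less time" claim follows directly from the ten-fold increase in data rate; a quick check with a maximum-size frame (serialization time only, ignoring propagation and device latency):

```python
def transmission_time_us(frame_bytes: int, data_rate_mbps: float) -> float:
    """Time to place a frame on the wire, in microseconds."""
    return frame_bytes * 8 / data_rate_mbps

t10 = transmission_time_us(1518, 10)     # ~1214.4 us on 10BaseT
t100 = transmission_time_us(1518, 100)   # ~121.4 us on Fast Ethernet
print(f"{(1 - t100 / t10):.0%} less time on Fast Ethernet")   # 90% less time
```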

  29. Fast Ethernet Continued • A 10 Mbps Ethernet adapter can function on a Fast Ethernet network because the Fast Ethernet hub or switch to which the 10 Mbps device attaches will automatically negotiate a 10 Mbps connection. • The Fast Ethernet hub will continue to operate at 100 Mbps with the other Fast Ethernet devices. • Fast Ethernet devices are also capable of full-duplex operation, which allows them to obtain effective throughput of 200 Mbps. • Fast Ethernet, which is defined under the IEEE 802.3u standard, has three defined implementations.

  30. Fast Ethernet Implementations • 100Base-TX: Uses two pairs of either Category 5 unshielded twisted-pair (UTP) or shielded twisted-pair (STP) cable; one pair is used for transmit (TX) and the other is used for receive (RX). The maximum segment length is 100 meters; 200 with repeaters. • 100Base-T4: Uses four pairs of Category 3, 4, or 5 UTP cable; one pair is used for TX, one pair for RX, and two pairs are used as bidirectional data pairs. The maximum segment length is 100 meters; 200 with repeaters. • 100Base-FX: Uses multimode fiber-optic (MMF) cable with one TX and one RX strand per link. The maximum segment length is 412 meters.

  31. Repeaters • IEEE 802.3u specifies two types of repeaters: Class I and Class II. Class I repeaters have higher latency than Class II repeaters, as shown in Table 7.1 on a previous slide. • When two Class II repeaters are deployed on a twisted-pair network, the specification allows for an additional 5-meter patch cord to connect the repeaters. This means that the maximum distance between two stations can be up to 205 meters. • When two Class II repeaters are used on a fiber-optic network, the maximum distance is 412 meters or less, because repeaters introduce latency. • Latency increases the propagation delay, which means that the maximum distance possible between stations must be reduced to ensure the slot time is maintained.

  32. Quick Quiz • Ethernet uses which network access method? • Which devices create collision domains? • What is the correct frame size range for Ethernet? • How does this chapter suggest broadcast traffic can be reduced? • What are the benefits of upgrading to Fast Ethernet?

  33. LAN Segmentation • You can improve the performance of your Ethernet network by reducing the number of stations per collision domain. • Typically, network administrators implement bridges, switches, or routers to segment the network and divide the collision domain. • This segmentation and division reduces the number of devices per collision domain. • In your previous studies, you learned about using bridges, switches, and routers to segment a network. • First, you will review the concepts behind segmenting a LAN with bridges and routers. Next, you will learn how to use switches to segment a LAN.

  34. Segmenting With Bridges • Bridges divide a network into segments and only forward a packet from one segment to another if the packet is a broadcast or has the MAC address of a station on the opposite segment. • Bridges learn MAC addresses by reading packets as the packets are passed across the bridge. • The MAC addresses are contained in the header information inside each packet. If the bridge does not recognize a MAC address, it will forward the packet to all segments. • The bridge maintains a bridging table to keep track of the different hardware addresses on each segment. • The table maps the MAC addresses to the port on the bridge that leads to the segment containing that device.
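A minimal sketch of the learn-and-forward behavior described above, for a two-port bridge (class and method names are my own; a real bridge does this in hardware and also ages out old table entries):

```python
class LearningBridge:
    """Toy two-port bridge: learn source MACs, then filter or forward frames."""

    def __init__(self):
        self.table = {}   # MAC address -> port it was last seen on

    def handle_frame(self, src_mac: str, dst_mac: str, in_port: int) -> list:
        self.table[src_mac] = in_port             # learn the sender's location
        out_port = self.table.get(dst_mac)
        if out_port == in_port:
            return []                             # same segment: filter the frame
        if out_port is None or dst_mac == "ff:ff:ff:ff:ff:ff":
            return [p for p in (0, 1) if p != in_port]   # unknown or broadcast: flood
        return [out_port]

bridge = LearningBridge()
print(bridge.handle_frame("aa", "bb", in_port=0))   # [1] - 'bb' unknown, so flood
print(bridge.handle_frame("bb", "aa", in_port=1))   # [0] - 'aa' was learned on port 0
print(bridge.handle_frame("aa", "bb", in_port=0))   # [1] - 'bb' now known on port 1
```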

  35. Segmenting With Bridges Continued • Bridges increase latency by 10 to 30 percent, but since they divide the collision domain, this does not affect slot time. • When you segment a LAN with one or more bridges, remember these points: • Bridges reduce collisions by segmenting the LAN and filtering traffic. • A bridge does not reduce broadcast and multicast traffic. • A bridge can extend the useful distance of the Ethernet LAN because distance limitations apply to collision domains and a bridge separates collision domains. • The bandwidth for the new individual segments is increased because they can operate separately at 10 Mbps or 100 Mbps, depending on the technology. • Bridges can be used to limit traffic for security purposes by keeping traffic segregated.

  36. Segmenting With Routers • A router operates at layer 3 of the OSI reference model. • It interprets the Network layer protocol and makes forwarding decisions based on the layer 3 address. • Routers typically do not propagate broadcast traffic; thus, they reduce network traffic even more than bridges. • Routers maintain routing tables that include the Network layer addresses of different segments. • The router forwards packets to the correct segment or another router based on those Network layer addresses. • Since the router has to read the layer 3 address and determine the best path to the destination station, latency is higher than with a bridge or repeater.

  37. Segmenting With Routers Continued • Keep in mind that when you segment a LAN with routers, routers will: • Decrease collisions by filtering traffic. • Reduce broadcast and multicast traffic by blocking or selectively filtering packets. • Support multiple paths and routes between them. • Provide increased bandwidth for the newly created segments. • Increase security by preventing packets between hosts on one side of the router from propagating to the other side of the router. • Increase the effective distance of the network by creating new collision domains. • Provide layer 3 routing, packet fragmentation and reassembly, and traffic flow control. • Have a higher latency than bridges because they have more to process.

  38. LAN Switching • Although switches are similar to bridges in several ways, using a switch on the LAN has a different effect on the way network traffic is propagated. • The remainder of this chapter focuses on the ways in which a switch can affect LAN communications. • First, you will learn how a switch segments the LAN. The benefits and drawbacks of using a switch on the LAN also will be described. • Next, you will learn how a switch operates and the switching components that are involved. • Finally, you will learn how you can use switches to create virtual LANs.

  39. Segmentation With Switches • Bridges and switches are similar, so much so that switches are often called multiport bridges. • The main difference between a switch and a bridge is that the switch typically connects multiple stations individually, giving each port its own segment, whereas a bridge typically divides the LAN into only two segments. • Although a switch propagates broadcast and multicast traffic to all ports, it performs microsegmentation on unicast traffic, as shown in Figure 7-2 on the next slide. • Microsegmentation means that the switch sends a packet with a specific destination directly to the port to which the destination host is attached.

  40. Microsegmentation Example

  41. Microsegmentation • In the figure, when Host A sends a unicast to Host D, the switch receives the unicast packet on the port to which Host A is attached. • The switch then opens the data packet, reads the destination MAC address, and passes the packet directly to the port to which Host D is attached. • When Host B sends a broadcast packet, the switch forwards the packet to all devices attached to the switch. Figure 7-3 on the next slide shows the inherent logic of this process. • Given the number of steps that a switch must perform on each packet, its latency is typically higher than that of a repeater.

  42. Microsegmentation Logic Example

  43. Microsegmentation Continued • Faster processors and a variety of switching techniques make many switches faster than bridges. • Since switches microsegment most traffic, bandwidth on the collision domain improves. • When one host is communicating directly with another host, the hosts can utilize the full bandwidth of the connection. • For example, with a 10 Mbps switch on a 10BaseT LAN, the switch provides 10 Mbps connections between each host that is attached. • If a half-duplex hub were used instead of a switch, all devices on the collision domain would share the 10 Mbps connection.

  44. Benefits of Switching • Switches provide the following benefits: • Reduction in network traffic and collisions • Increase in available bandwidth per station because stations can communicate in parallel • Increase in the effective distance of the LAN by dividing it into multiple collision domains • Increased security because unicast traffic is sent directly to its destination and not to all other stations on the collision domain

  45. Switch Operations • A switch learns the hardware address of devices to which it is attached by reading the source address of packets as they are transmitted across the switch. • The switch matches the source MAC address with the port from which the frame was sent. The MAC to switch port mapping is stored in the switch's content addressable memory (CAM). • The switch refers to the CAM when it is forwarding packets, and it updates the CAM continuously. • Each mapping receives a timestamp every time it is referenced. • Old entries, which are ones that are not referenced frequently enough, are removed from the CAM.
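A sketch of the CAM-table bookkeeping described above, including the timestamp refresh and aging behavior (the 300-second aging period is a placeholder of my own, not a figure from the slides):

```python
import time

AGING_SECONDS = 300   # placeholder aging period; real switches make this configurable

class CamTable:
    """Toy content addressable memory: MAC address -> (port, last-referenced time)."""

    def __init__(self):
        self.entries = {}

    def learn(self, mac: str, port: int) -> None:
        self.entries[mac] = (port, time.time())       # add or refresh a mapping

    def lookup(self, mac: str):
        entry = self.entries.get(mac)
        if entry is None:
            return None
        port, _ = entry
        self.entries[mac] = (port, time.time())       # referencing refreshes the timestamp
        return port

    def age_out(self) -> None:
        """Remove mappings that have not been referenced within the aging period."""
        cutoff = time.time() - AGING_SECONDS
        self.entries = {m: e for m, e in self.entries.items() if e[1] >= cutoff}

cam = CamTable()
cam.learn("aa:bb:cc:dd:ee:ff", port=3)
print(cam.lookup("aa:bb:cc:dd:ee:ff"))   # 3
```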

  46. Switch Memory • The switch uses a memory buffer to store frames as it determines to which port(s) the frame will be forwarded. • There are two different types of memory buffers that a switch can use: port-based memory buffering or shared memory buffering. • In port-based memory buffering, each port has a certain amount of memory that it can use to store frames. If a port is inactive, then its memory buffer is idle. • If a port is receiving a high volume of traffic near network capacity, the traffic may overload its buffer and other frames may be delayed or require retransmission.

  47. Shared Memory Buffering • Shared memory buffering offers an advantage over port-based memory buffering in that any port can store frames in the shared memory buffer. • The amount of memory that each port uses in the shared memory buffer is dynamically allocated based on the port's activity level and the size of frames transmitted. • Shared memory buffering works best when a few ports receive a majority of the traffic. • This situation occurs in client/server environments, because the ports to which servers are attached will typically see more activity than the ports to which clients are attached.

  48. Asymmetric Switching • Some switches can interconnect network interfaces of different speeds. These switches use asymmetric switching and, typically, a shared memory buffer. • The shared memory buffer allows switches to store packets from the ports operating at higher speeds when it is necessary to send that information to ports operating at lower speeds. • Asymmetric switching is also better for client/server environments when the server is configured with a network card that is faster than the network cards of the clients. • This allows the server to handle the clients' requests more quickly than if it were limited to 10 Mbps.

  49. Symmetric Switching • Switches that require all attached network interface devices to use the same transmit/receive speed use symmetric switching. • For example, a symmetric switch could require all ports to operate at 100 Mbps or all at 10 Mbps, but not at a mix of the two speeds.

  50. Switching Methods • All switches base packet-forwarding decisions on the packet's destination MAC address. • However, all switches do not forward packets in the same way. • There are actually two main methods for processing and forwarding packets. One is called cut-through and the other is called store-and-forward. • From those two methods, two additional forwarding methods were derived: fragment free and adaptive cut-through. • Cisco switches come with a menu system, which allows you to choose from the available switch options, as shown in Figure 7.4 on the next slide.
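As a preview of how the methods differ, the key distinction is how much of the frame the switch waits for before it begins forwarding. A simplified sketch (the byte counts reflect common descriptions of these methods, not figures from this chapter):

```python
def bytes_examined_before_forwarding(method: str, frame_length: int) -> int:
    """How much of a frame a switch reads before it starts forwarding it."""
    if method == "cut-through":
        return 6                  # only the destination MAC address
    if method == "fragment-free":
        return 64                 # one slot time's worth, so collision fragments are dropped
    if method == "store-and-forward":
        return frame_length       # the entire frame, so the FCS can be verified first
    raise ValueError(f"unknown switching method: {method}")

for m in ("cut-through", "fragment-free", "store-and-forward"):
    print(m, bytes_examined_before_forwarding(m, 1518))
```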
