Cisco Support Community – Live Webcast

Carlos Nivón

Cisco Catalyst 6500 Series Switches

Thank You for Attending Today
  • The presentation will include some questions for the audience.
  • We cordially invite you to participate actively in the questions we will ask during the session.
Copy of the Presentation
  • If you would like to download a copy of today's presentation, go to the link posted in the chat or use this address:
  • https://supportforums.cisco.com/docs/DOC-28341
Cat 6500 Slot Orientation

6509-NEBS-A

6513

6509-NEBS

(EOS)

6509

6506

6503

Vertically Aligned Slots

Horizontally Aligned Slots

Supervisor Engine 32

Access Layer

Supervisor 32

Supervisor Engine 720

Switch Fabric

Supervisor 720

with Integrated

Switch Fabric

Core Layer

Ethernet and WAN Line Cards

Ethernet Line Cards

WAN Line Cards

Advanced Services Modules

Security

Application Networking Services

Advanced Services Modules (Cont.)

Wireless Services

IP Telephony

Network Monitoring

Classic 32-Gbps Shared-Bus Backplane

Line Card

Multilayer Forwarding Table

32-Gbps Shared Switching Bus

PFC Switching System

Control Bus

Results Bus

Multilayer Switch Feature Card

Bus

ASIC

Port or Bus

ASIC

Local

Buffer

Fabric Arbitration

Port

ASIC

Network MGMT

NMP/MCP

Local Buffer

Supervisor Engine

10/100 Ethernet

Gigabit Ethernet

Crossbar Switch Fabric

Multilayer Forwarding Table

Crossbar

Fabric

ASIC

CEF256

Port ASIC

1 x 8 Gbps

PFC Switching System

dCEF256

Fabric

ASIC

Port ASIC

1 x 8 Gbps

Multilayer Switch Feature Card

Fabric

ASIC

Port ASIC

1 x 8 Gbps

Fabric Arbitration

CEF720

Port ASIC

Fabric

ASIC

1 x 20 Gbps

Network MGMT

NMP/MCP

1 x 20 Gbps

Fabric

ASIC

Port ASIC

Supervisor Engine 720


Fabric ASIC

Fabric ASIC

Slot6

Slot5

Crossbar Switch Fabric Layout Nine-Slot Chassis

Slot1

Slot2

Slot3

Slot4

Fabric ASIC

Fabric ASIC

Fabric ASIC

Fabric ASIC

Slot 5

Slot 6

Type of card in slot:

Fabric ASIC

Fabric ASIC

Fabric ASIC

= Fabric (SFM/Sup)

Slot7

Slot8

Slot9

= Line Card


Fabric ASIC

Fabric ASIC

Slot7

Slot8

Crossbar Switch Fabric 13-Slot Chassis

Slot1

Slot2

Slot3

Slot4

Slot5

Slot6

Fabric ASIC

Fabric ASIC

Fabric ASIC

Fabric ASIC

Fabric ASIC

Fabric ASIC

Slot 7

Slot 8

Type of card in slot:

Fabric ASIC

Fabric ASIC

Fabric ASIC

Fabric ASIC

Fabric ASIC

= Fabric (SFM/Sup)

Slot9

Slot10

Slot11

Slot12

Slot13

= Line Card


CEF

  • Hardware-based centralized forwarding
  • PFC on supervisor makes all forwarding decisions
  • Handles centralized forwarding up to 30 Mpps
CEF Forwarding Architectures

Features of CEF forwarding architectures include the following:

  • Hardware-based distributed forwarding
  • dCEF engine has a copy of the entire forwarding table at the line card
  • All traffic is switched at a sustained 48 Mpps (for DFC3 on CEF720)

dCEF

Supervisor Engine 720 Switch-Fabric Connectivity

30 to 400 Mpps Forwarding Performance

Supervisor Engine 720

MSFC3

Routing Table

PFC3

Hardware Fwd

Tables


CEF720 Series

dCEF720 Series

20

Optional

DFC3

Integrated

DFC3

20

20

Integrated Switch Fabric

20

20

8

8

8

32-Gbps Switching Bus

CEF256 Series

dCEF256 Series

Integrated

DFC3

Optional

DFC3

Classic Series

Supervisor Engine 32

Supervisor Engine 32 with Eight GE Uplinks

WS-SUP32-GE-3B

Supervisor Engine 32 with Two 10-GE Uplinks

WS-SUP32-10GE-3B

Supervisor Engine 32: Front Panel

8 x SFP based GE Uplink Ports

2 x USB Ports

Compact Flash

Slot

1 x 10/100/1000 GE

Uplink Port

RS-232

Console Port

Integrated PFC3

PFC3B

Supervisor Engine 32

Integrated MSFC2a

MSFC2a

Supervisor Engine 32

Supervisor Engine 32 Line Card Compatibility

Supervisor Engine 32

*OSM: Optical Services Module

Supervisor Engine 720 Overview

Console Port

Uplink Ports

Removable Storage Slots

Supervisor Engine 720 Options

Supervisor Engine 720-3B

Supervisor Engine 720-3BXL

Incorporates new PFC3B to provide the same features as the XL version but not as high a capacity for routes and flow information

Incorporates new PFC3BXL, extending hardware features and system capacity for routes and flow information

Supervisor Engine 720 Switch Fabric
  • Integrated 720-Gbps switch fabric.
  • CEF256 and dCEF256 line cards connect at 8 Gbps per fabric channel.
  • CEF720 and dCEF720 line cards connect at 20 Gbps per fabric channel.
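The bullets above can be tied together with a little arithmetic. A hedged sketch of where the "720 Gbps" figure comes from (assumption: the commonly cited breakdown of 18 fabric channels at 20 Gbps each, counted full duplex):

```python
# Hedged sketch: derivation of the 720-Gbps marketing figure.
# Assumption: 18 fabric channels at 20 Gbps each, counted full duplex.
channels = 18
gbps_per_channel = 20
duplex_factor = 2  # ingress and egress counted separately

total_gbps = channels * gbps_per_channel * duplex_factor
print(total_gbps)  # 720
```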

Switch Fabric

Supervisor Engine 720 Hardware Features

IPv6 Software Features

IPv6 addressing

ICMP for IPv6

DNS for IPv6

V6 MTU path discovery

SSH for IPv6

IPv6 Telnet

IPv6 traceroute

dCEF for IPv6

RIP for IPv6

IS-IS for IPv6

OSPF v3 for IPv6

BGP for IPv6

IPv6 Hardware Features

128,000 FIB entries

IPv6 load sharing up to 16 paths

EtherChannel hash across 48 bits

IPv6 policing/NetFlow/classification

STD and EXT V6 ACLs

IPv6 QoS lookups

IPv6 multicast

IPv6-to-IPv4 Tunneling

IPv6 edge over MPLS (6PE)

IPv6 function located

on PFC3

MPLS Hardware Features

MPLS applies to any Ethernet port on the following line cards:

Classic Ethernet Line Cards

CEF256 Ethernet Line Cards


dCEF256 Ethernet Line Cards

Up to 1000 MPLS VPNs

MPLS VPN (RFC 2547) on any Ethernet port

MPLS multicast VPN

MPLS label switch router (LSR)

MPLS label edge router (LER)

MPLS Traffic Engineering (TE)

MPLS Ethernet over MPLS (EoMPLS) on PFC3B

DSCP-to-EXP mapping

CEF720 Ethernet Line Cards

dCEF720 Ethernet Line Cards

MPLS function located

on PFC3


Catalyst 6500 Architecture Overview

Catalyst 6500 Line Cards

Catalyst 6500 Line Cards

Catalyst 6500

10/100BASE-TX and

100BASE-FX

10/100/1000BASE-TX

Gigabit Ethernet SFP

Line Cards

GE GBIC

10GE

WAN

Optical Services Modules

In-line Power

SIP

Classic and Crossbar Switch Fabric Line Cards

Shared Bus Connector

Crossbar Connector

Shared Bus Connector

Classic

CEF256


Classic

Line Cards

Line Card Types

32-Gbps Shared Bus

CEF256

Line Cards

CEF720

Line Cards

20

20

8

dCEF720

Line Cards

dCEF256

Line Cards

Supervisor

8

8

20

20

Switch Fabric Crossbar

Classic Line Card Architecture

Classic line cards support a connection to the 32-Gbps shared bus only.

32-Gbps Shared Bus

Gigabit Ethernet ASIC

10/100 ASIC

10/100 ASIC

10/100 ASIC

10/100 ASIC

Buffer

Buffer

Buffer

Buffer

Ports 1–12

Ports 13–24

Ports 25–36

Ports 37–48

48-Port 10/100-Mbps Line Card

CEF256 Line Card Architecture

Crossbar

CEF256 line cards support a connection to the 32-Gbps shared bus and an 8-Gbps connection to the switch fabric.

32-Gbps Shared Bus

8

Optional DFC

Daughter Card

FabricASIC

32 Gbps Local Switching Bus

Port ASIC

Port ASIC

Port ASIC

Port ASIC

512-KB Buffer

512-KB Buffer

512-KB Buffer

512-KB Buffer

Ports 1–4

Ports 5–8

Ports 9–12

Ports 13–16

16-Port Gigabit Ethernet Line Card

dCEF256 Line Card Architecture

Crossbar

8

8

dCEF256 line cards support two 8-Gbps connections to the switch fabric only.

Fabric ASIC

Fabric ASIC

Integrated DFC and DFC3

32-Gbps Local Bus

32-Gbps Local Bus

Port ASIC

Port ASIC

Port ASIC

Port ASIC

512-KB Buffer

512-KB Buffer

512-KB Buffer

512-KB Buffer

Ports 1–4

Ports 5–8

Ports 9–12

Ports 13–16

16-Port Gigabit Ethernet Line Card

CEF720 Line Card Architecture

Crossbar

20

20

32-Gbps Shared Bus

Optional DFC3

Daughter Card

Fabric

ASIC

Fabric

ASIC

Port ASIC

Port ASIC

Port ASIC

Port ASIC

Ports 1–12

Ports 13–24

Ports 25–36

Ports 37–48

48-Port Gigabit Ethernet Line Card

dCEF720 Line Card Architecture

Crossbar

20

20

dCEF720 line cards support two 20-Gbps connections to the switch fabric only.

Integrated

DFC

Fabric

ASIC

Fabric

ASIC

Port ASIC

Port ASIC

Port ASIC

Port ASIC

Ports 1–12

Ports 13–24

Ports 25–36

Ports 37–48

48-Port Gigabit Ethernet Line Card


4

3

2

1

Classic-to-Classic Centralized Forwarding

Layer 3 and Layer 4

Engine

Supervisor Engine 720

Red

D

Port

ASIC

Port

ASIC

Classic Module B

Layer 2 Engine

720-Gbps Switch Fabric

X

PFC3

DBUS

RBUS

Source

Destination

Blue VLAN

Red VLAN

Entire Packet

Packet Header

S

X

X

Classic Module A

Port

ASIC

Port

ASIC

D

S

Blue


8Gbps

4

1

6

5

3

2

CEF256-to-CEF256 Centralized Forwarding

D

Port ASIC

Port ASIC

Layers 3 and

4 Engine

Supervisor Engine 720

LCRBUS

LCDBUS

L2 Engine

720-Gbps Switch Fabric

Fabric Interface

CEF256 Module B

PFC3

DBUS

Source

Destination

Blue VLAN

Red VLAN

Entire packet

Packet header

S

RBUS

D

Fabric Interface

CEF256 Module A

8Gbps

LCDBUS

LCRBUS

X

X

Port ASIC

Port ASIC

Note: Packet flow for a CEF256-to-CEF720 is similar. The main differences are the CEF720 module architecture and the speed of the fabric channel to the CEF720 module.

Blue

S


1

4

2

3

5

CEF720 and DFC3-to-CEF720 and DFC3 Distributed Forwarding

Red

D

CEF720 Module B

and DFC3

Port ASIC

Port ASIC

Layers 3 and 4 Engine

DFC3

Supervisor Engine 720

Fabric Interface and Replication Engine

720-Gbps Switch Fabric

PFC3

20Gbps

Layer 2 Engine

20Gbps

Source

Destination

Blue VLAN

Red VLAN

Entire Packet

Packet Header

S

CEF720 Module A

and DFC3

D

Fabric Interface and Replication Engine

Layer 2 Engine

Layers 3 and

4 Engine

Port ASIC

Port ASIC

DFC3

Blue

S


Interface Type

Classic

CEF256

dCEF256

CEF720

10BASE-FL

10/100BASE-TX

100BASE-FX

10/100/1000BASE-TX

1000BASE GBIC

1000BASE SFP

10GE XENPAK

Services Modules

SIP

FlexWAN

OSMs*

Catalyst 6500 Line Card Options

* OSM: Optical Services Module

show Commands

The switch supports two supervisor engine slots. A CLI command is provided to allow the administrator to inspect which SFM is active:

6500# show fabric active

Active fabric card in slot 5

No backup fabric card in the system

The mode of operation in use by the SFM can also be inspected by issuing the following command:

6500# show fabric switching-mode

Fabric module is not required for system to operate

Modules are allowed to operate in bus mode

Truncated mode is not allowed unless threshold is met

Threshold for truncated mode operation is 2 SFM-capable cards

Module Slot Switching Mode

1 Crossbar

2 Crossbar

3 Crossbar

5 DCEF

show Commands (Cont.)

The status of the SFM can be inspected by using the following command:

6500# show fabric status

slot channel speed module fabric

status status

1 0 8G OK OK

2 0 8G OK OK

3 0 8G OK OK

5 0 20G OK OK

The utilization of the SFM can be inspected by using the following command:

6500# show fabric utilization

slot channel speed Ingress % Egress %

1 0 8G 28 0

2 0 8G 0 0

3 0 8G 0 25

5 0 20G 0 0
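Outputs like the one above lend themselves to quick scripted triage. A hedged Python sketch (the column layout is an assumption based on the sample shown; real output can vary by IOS version):

```python
import re

# Parse a captured "show fabric utilization" table into per-slot records
# and flag slots that are busy in either direction. Text mirrors the slide.
sample = """\
slot channel speed Ingress % Egress %
1 0 8G 28 0
2 0 8G 0 0
3 0 8G 0 25
5 0 20G 0 0
"""

rows = []
for line in sample.splitlines()[1:]:
    m = re.match(r"\s*(\d+)\s+(\d+)\s+(\S+)\s+(\d+)\s+(\d+)", line)
    if m:
        slot, channel, speed, ingress, egress = m.groups()
        rows.append({"slot": int(slot), "speed": speed,
                     "ingress_pct": int(ingress), "egress_pct": int(egress)})

# Illustrative threshold: flag slots above 20% utilization either way.
busy = [r["slot"] for r in rows if max(r["ingress_pct"], r["egress_pct"]) > 20]
print(busy)  # [1, 3]
```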

show Commands (Cont.)

During troubleshooting, the SFM can be inspected for transmission errors:

6500# show fabric errors

Module errors:

slot channel crc hbeat sync DDR sync

1 0 0 0 0 0

2 0 0 0 0 0

3 0 0 0 0 0

5 0 0 0 0 0

Fabric errors:

slot channel sync buffer timeout

1 0 0 0 0

2 0 0 0 0

3 0 0 0 0

5 0 0 0 0

6500#

System Capacity Planning

C6500# show platform hardware capacity ?

acl Show QoS/Security ACL capacity

cpu Show CPU resources capacity

eobc Show EOBC resources capacity

fabric Show Switch Fabric resources capacity

flash Show Flash/NVRAM resources capacity

forwarding Show forwarding engine capacity

interface Show Interface resources capacity

monitor Show SPAN resources capacity

multicast Show L3 Multicast resources capacity

netflow Show Netflow capacity

pfc Show PFC resources capacity

power Show Power resources capacity

qos Show QoS resources capacity

rate-limit Show CPU Rate Limiters capacity

system Show System resources capacity

vlan Show VLAN resources capacity

  • New CLI command that provides a dashboard view of system hardware capacity, as well as the current utilization of the system.
Simplified Campus Example

6:1

WS-X6548-GE-TX (CEF256)

48 ports, 8-Gbps backplane

4:1 oversubscription

WS-X6548-GE-TX (CEF256)

48 ports, 8-Gbps backplane

8:1 oversubscription

2x

= 16 Gb

8:1

Access

Supervisor Engine 720

2x 1-Gb uplinks

1x

= 2 Gb

1.2:1

WS-X6724-SFP (CEF720)

24 ports, 20-Gbps backplane

1.2:1 oversubscription

Aggregation

  • Total core-edge oversubscription ≈ 58:1
  • Traffic flows vertically, bidirectional
  • Low overall bandwidth requirements
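The per-card ratios quoted above are just front-panel bandwidth divided by fabric-channel bandwidth; a minimal sketch, assuming 1-Gbps access ports and the backplane figures on the slide:

```python
# Hedged sketch of the oversubscription arithmetic on this slide.
# Assumption: every access port is 1 Gbps.
def oversub(ports, port_gbps, backplane_gbps):
    """Front-panel bandwidth divided by fabric-channel bandwidth."""
    return ports * port_gbps / backplane_gbps

ws_x6548 = oversub(48, 1, 8)    # CEF256: 48 x 1G into an 8-Gbps channel
ws_x6724 = oversub(24, 1, 20)   # CEF720: 24 x 1G into a 20-Gbps channel
print(ws_x6548, ws_x6724)  # 6.0 1.2
```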
High CPU Utilization

Why should I be concerned about high CPU usage?

It is very important to protect the control plane for network stability, because resources (CPU, memory, and buffers) are shared by control-plane and data-plane traffic.

At what utilization level should I start troubleshooting?

It depends on the nature and level of the traffic. It is essential to establish a baseline CPU usage under normal working conditions and to start troubleshooting when usage rises above a specific threshold.

E.g., baseline RP CPU usage is 25%; start troubleshooting when the RP CPU usage is consistently at 40% or above.
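The baselining rule in the example above can be sketched as follows (the 40% threshold and the three-consecutive-samples rule are assumptions for illustration):

```python
# Hedged sketch of the baselining rule: troubleshoot only when CPU stays
# above the alert threshold for several consecutive samples, so a one-off
# spike does not trigger. Threshold values are illustrative.
BASELINE_PCT = 25   # normal working-conditions usage (per the example)
ALERT_PCT = 40      # "consistently at 40% or above"
CONSECUTIVE = 3     # assumption: require three sustained samples

def should_troubleshoot(samples, alert=ALERT_PCT, needed=CONSECUTIVE):
    streak = 0
    for cpu in samples:
        streak = streak + 1 if cpu >= alert else 0
        if streak >= needed:
            return True
    return False

print(should_troubleshoot([30, 45, 41, 42, 43]))  # True: sustained load
print(should_troubleshoot([30, 95, 20, 30, 25]))  # False: one-off spike
```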

What are the usual symptoms of high CPU usage?

  • Control-plane instability e.g., OSPF flap
  • Traffic loss
  • Reduced switching/forwarding performance
  • Slow response to Telnet / SSH
  • SNMP poll miss
High CPU Utilization

Commands used to set baseline

RP: show process cpu

RP: show ibc

RP: show msfc netint

1 Gbps

Inband

MSFC 3

Port ASIC

C

Flash

RP

CPU

DRAM

C

RP: show ip traffic

RP: show interfaces

SP

CPU

Flash

1 Gbps

Inband

DRAM

Sup720

SP: show msfc netint

SP: show ibc

SP: show process cpu

Monitor the CPU usage in DFCs also using “remote command module <mod#> show process cpu”

C

= Controller

High CPU Utilization

CPU utilization is due to:

  • Processes (e.g., recurring events, control-plane processes)
  • Interrupts (e.g., an inappropriate switching path)
  • Investigate CPU utilization via “show proc cpu” and determine whether the usage is due to processes or interrupts

DUT#show proc cpu

CPU utilization for five seconds: 99%/90%; one minute: 9%; five minutes: 8%

PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process

2 720 88 8181 9.12% 1.11% 0.23% 18 Virtual Exec

Total CPU usage (Process + Interrupt)

CPU usage due to Interrupt
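The two highlighted numbers can be extracted programmatically. A hedged sketch (assumes the first line of `show proc cpu` always carries the `total%/interrupt%` pair shown above):

```python
import re

# Hedged sketch: split "show proc cpu"'s five-second figure into total vs
# interrupt CPU. In "99%/90%" the first number is total utilization, the
# second is time spent at interrupt level; the difference is process level.
header = ("CPU utilization for five seconds: 99%/90%; "
          "one minute: 9%; five minutes: 8%")

m = re.search(r"five seconds: (\d+)%/(\d+)%", header)
total, interrupt = int(m.group(1)), int(m.group(2))
process_level = total - interrupt
print(total, interrupt, process_level)  # 99 90 9
```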

High CPU Utilization – Process

Process: ARP Input

  • Caused by ARP flooding.
  • A static route configured with an interface instead of a next-hop IP address generates an ARP request for every destination that is not reachable via a more specific route.

ip route 0.0.0.0 0.0.0.0 GigabitEthernet 2/5

DUT#show ip traffic | begin ARP

ARP statistics:

Rcvd: 6512 requests, 2092 replies, 0 reverse, 0 other

Sent: 258 requests, 707 replies (0 proxy), 0 reverse

Drop due to input queue full: 20

<snip>

DUT#show interfaces | include line protocol|rate

Vlan501 is up, line protocol is up

5 minute input rate 23013521 bits/sec, 2535 packets/sec

5 minute output rate 0 bits/sec, 0 packets/sec

Incrementing at very high rate

Look for abnormal input rate

High CPU Utilization – Process

Process: IP Input

  • Caused by traffic that needs to be process-switched or is destined to the CPU

Common Reasons:

      • Traffic with IP-options enabled
      • Fragmentation (due to MTU mismatch)
      • Broadcast storm
      • Traffic that needs further CPU processing, e.g., ACL logging
      • Traffic that requires an ICMP redirect or unreachable reply, e.g., TTL=1, ACL deny

Configure Optimized ACL Logging (OAL) on PFC3 and later

High CPU Utilization – Traffic to RP CPU

DUT#show ip traffic

IP statistics:

Rcvd: 81676 total, 20945 local destination

0 format errors, 0 checksum errors, 41031 bad hop count

0 unknown protocol, 19609 not a gateway

0 security failures, 0 bad options, 120 with options

Frags: 0 reassembled, 0 timeouts, 0 couldn't reassemble

0 fragmented, 0 couldn't fragment

Bcast: 417 received, 0 sent

Mcast: 11423 received, 52655 sent

Sent: 61340 generated, 0 forwarded

Drop: 0 encapsulation failed, 0 unresolved, 0 no adjacency

0 no route, 0 unicast RPF, 0 forced drop

0 options denied, 0 source IP address zero

ICMP statistics:

Rcvd: 0 format errors, 0 checksum errors, 17 redirects, 112 unreachable

812 echo, 812 echo reply, 0 mask requests, 0 mask replies, 0 quench

0 parameter, 0 timestamp, 0 info request, 0 other

0 irdp solicitations, 0 irdp advertisements

0 time exceeded, 0 timestamp replies, 0 info replies

ARP statistics:

Rcvd: 3518120 requests, 3636408 replies, 0 reverse, 0 other

  • TTL<2
  • IP options
  • Fragmentation
  • Broadcasts
  • ARP not resolved
  • Ping Request
  • Punts to generate ICMP redirect
  • ARPs

It also displays stats for BGP, EIGRP, TCP, UDP, PIM, IGMP, and OSPF.

Run this command a few times to find the fastest-growing counter.
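"Find the fastest-growing counter" amounts to diffing two snapshots taken a few seconds apart. A hedged sketch (the second snapshot's values are hypothetical, invented for illustration only):

```python
# Hedged sketch: diff two counter snapshots and report the fastest grower.
# snap2's values are hypothetical, invented for illustration only.
snap1 = {"arp_requests": 3518120, "echo": 812, "unreachable": 112}
snap2 = {"arp_requests": 3629004, "echo": 815, "unreachable": 113}

deltas = {name: snap2[name] - snap1[name] for name in snap1}
fastest = max(deltas, key=deltas.get)
print(fastest, deltas[fastest])  # arp_requests 110884
```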

High CPU Utilization – Traffic to RP CPU

Find the interface that's holding most of the buffers

Commands to see packets getting punted

DUT#show buffers assigned

Header DataArea Pool Rcnt Size Link Enc Flags Input Output

46FDBC14 8029784 Small 1 77 36 1 200 Vl100 None

46FE0010 802CBC4 Small 1 77 36 1 200 Vl100 None

. . .

DUT#show buffers input-interface vlan 100 dump

Buffer information for RxQ3 buffer at 0x378B3BC

data_area 0x7C05EF0, refcount 1, next 0x0, flags 0x200

linktype 7 (IP), enctype 1 (ARPA), encsize 14, rxtype 1

if_input 0x46C7C68 (Vlan100), if_output 0x0 (None)

inputtime 2d03h (elapsed 00:00:01.024)

outputtime 00:00:00.000 (elapsed never), oqnumber 65535

datagramstart 0x7C05F36, datagramsize 62, maximum size 2196

mac_start 0x7C05F36, addr_start 0x7C05F36, info_start 0x0

network_start 0x7C05F44, transport_start 0x7C05F58, caller_pc 0x6C1564

source: 137.34.219.3, destination: 224.0.0.2, id: 0x0000, ttl: 1,

TOS: 192 prot: 17, source port 1985, destination port 1985

0: AFACEFAD 00000000 00000000 /,o-........

12: 00000000 00000000 00000000 00000000 ................

28: 00000000 00000000 00000000 00000000 ................

44: 00000000 0000CC43C00C0002000200A0 ......LC@......

60: 00420000 12FF74D5 00000000 00000100 .B....tU........

76: 5E00000218A90518 00850800 45C00030 ^....)......E@.0

92: 00000000 011174D58922DB03E0000002 ......tU."[.`...

108: 07C107C1001CECB4 00001001 04640100 .A.A..l4.....d..

124: 63697363 6F0000008922DB01 41920450 cisco...."[.A..P

. . .

Find the traffic. Remember that the traffic seen may be normal control-plane traffic that is expected to be sent to the RP CPU.

Packet details

Remember, this command shows only the process-switched traffic

High CPU Utilization – Interrupt

How to troubleshoot high CPU due to interrupts ?

DUT#show proc cpu

CPU utilization for five seconds: 99%/90%; one minute: 9%; five minutes: 8%

Most of the time, packets punted to the CPU share common factors:

  • Packets received on the same VLAN/interface, on interfaces in the same module, or in the same VRF
  • Packets with a specific destination, or destination prefixes learned from a specific neighbor
  • Packets with the same L4 source or destination ports
  • Anything else in common?

Details on all supported Packet Capture Tools

High CPU Utilization – Interrupt

Verify CEF is enabled globally and on all interfaces

DUT#show cef state

CEF Status:

RP instance

common CEF enabled

IPv4 CEF Status:

CEF enabled/running

dCEF enabled/running

CEF switching enabled/running

DUT#show ip interfaces | include line pro|CEF switching

Vlan2 is up, line protocol is up

IP CEF switching is enabled

Vlan3 is up, line protocol is up

IP CEF switching is enabled

Verify if CEF is enabled globally and per interface

High CPU Utilization – Interrupt

Switching path statistics – per interface basis

DUT#show interface gig7/4 stats

GigabitEthernet7/4

Switching path Pkts In Chars In Pkts Out Chars Out

Processor 4406750 353281375 32881 12422509

Route cache 74026 4589612 0 0

Distributed cache 0 0 0 0

Total 4480776 357870987 32881 12422509

DUT#show interface switching

GigabitEthernet2/2

Protocol Path Pkts In Chars In Pkts Out Chars Out

IP Process 11594 717908 16 1838

Cache misses 0

Fast 0 0 0 0

Auton/SSE 0 0 0 0

ARP Process 94 5640 5 560

Cache misses 0

Fast 0 0 0 0

Auton/SSE 0 0 0 0

. . . .

Process switched

SW CEF switched

Hw-switched

Process name

Process switched

Distributed switched packets
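The `show interface ... stats` output above can be reduced to a single punt ratio. A hedged sketch using the numbers from the gig7/4 sample:

```python
# Hedged sketch: fraction of input packets that were process-switched
# (punted to the CPU) versus switched in the route cache / hardware.
# Numbers are taken from the gig7/4 sample output above.
processor_in = 4406750
route_cache_in = 74026
distributed_in = 0

total_in = processor_in + route_cache_in + distributed_in
punt_pct = 100 * processor_in / total_in
print(round(punt_pct, 1))  # 98.3 -> almost everything is punted
```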

NetDriver (Netdr) Debug

Be as specific as possible (on the SP, use remote login switch, then run the same set of commands).

DUT#debug netdr capture ?

acl (11) Capture packets matching an acl

and-filter (3) Apply filters in an and function: all must match

continuous (1) Capture packets continuously: cyclic overwrite

destination-ip-address (10) Capture all packets matching ip dst address

dstindex (7) Capture all packets matching destination index

ethertype (8) Capture all packets matching ethertype

interface (4) Capture packets related to this interface

or-filter (3) Apply filters in an or function: only one must match

rx (2) Capture incoming packets only

source-ip-address (9) Capture all packets matching ip src address

srcindex (6) Capture all packets matching source index

tx (2) Capture outgoing packets only

vlan (5) Capture packets matching this vlan number

<cr>

This debug should not be service-impacting

Does the CPU Inband Driver See the Packet?

DUT#show netdr captured-packets

A total of 289 packets have been captured

The capture buffer wrapped 0 times

Total capture capacity: 4096 packets

------- dump of incoming inband packet -------

interface Vl1000, routine mistral_process_rx_packet_inlin

dbus info: src_vlan 0x3E8(1000), src_indx 0x45(69), len 0x40(64)

bpdu 0, index_dir 0, flood 1, dont_lrn 0, dest_indx 0x43E8(17384)

80000401 03E80400 00450000 40800000 E0000000 00000000 00000008 43E80000

mistral hdr: req_token 0x0(0), src_index 0x45(69), rx_offset 0x76(118)

requeue 0, obl_pkt 0, vlan 0x3E8(1000)

destmac FF.FF.FF.FF.FF.FF, srcmac 00.A0.CC.21.94.C4, protocol 0806

layer 3 data: 00010800 06040001 00A0CC21 94C40500 01660000 00000000

05000102 00000000 00000000 00000000 00000000 000001FE

00000006 00000000 000003E8

...

DUT#undebug netdr

DUT#debug netdr clear-capture

Example of inbound packet on interface VLAN 1000

ARP packet

Make sure to turn it off afterwards

Make sure to clear memory used up by captured packets

Crashes
  • Crashes will require TAC involvement
  • Open a TAC service request and collect the following info:
    • Crashinfo file
    • Core file (if configured so)
    • Show tech-support
    • What you were doing that made it crash!!
Example of Process Crash Output

Crashing process ID

Crashing process name

00:05:29: %DUMPER-3-PROCINFO: pid = 16427: (sbin/tcp.proc), terminated due to signal SIGTRAP, trace trap (not reset when caught) (Signal from user)

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: zero at v0 v1

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: R0 00000000 00000000 00000004 00000000

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: a0 a1 a2 a3

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: R4 7BC22298 00000000 00000000 00000000

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: t0 t1 t2 t3

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: R8 00000000 00000000 00000000 00000000

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: t4 t5 t6 t7

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: R12 00000000 00000000 00000000 00000000

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: s0 s1 s2 s3

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: R16 00FDDFA0 00000000 00000000 00000000

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: s4 s5 s6 s7

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: R20 00000000 00000000 00000000 00000000

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: t8 t9 k0 k1

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: R24 00000000 722B3F4C 00000000 00000000

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: gp sp s8 ra

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: R28 7828FF90 00FDDF60 00000000 72297450

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: sr lo hi bad

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: R32 1001FC73 00000000 00000000 78288970

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: cause pc epc

00:05:29: %DUMPER-3-REGISTERS_INFO: 16427: R36 00800020 722B3F5C 00000000

00:05:29: %DUMPER-3-TRACE_BACK_INFO: 16427: (libc.so+0x2EF5C) (libc.so+0x12450) (s72033_rp-adventerprisek9_wan-58-dso-p.so+0x17C00) (libc.so+0x127AC)

00:05:30: %DUMPER-3-CRASHINFO_FILE_NAME: 16427: Crashinfo for process sbin/tcp.proc at bootflash:/crashinfo_tcp.proc-20050910-012841

00:05:30: %DUMPER-3-CORE_FILE_NAME: 16427: Core for process sbin/tcp.proc at disk0:/tcp.proc.012842.dmp.Z

00:05:31: %DUMPER-5-DUMP_SUCCESS: 16427: Core dump success

00:05:31: %SYSMGR-3-ABNORMTERM: tcp.proc:1 (jid 91) abnormally terminated, restarted scheduled

Crashinfo filename and location

Core filename and location

Example of What Files to Collect After Crash
  • For the tcp.proc process crash on the previous slide, you need to collect the following files:

Cat6K#dir bootflash:

Directory of bootflash:/

4 -rw- 139528 Sep 9 2008 19:28:42 -06:00 crashinfo_tcp.proc-20050910-012841

65536000 bytes total (64979832 bytes free)

Cat6K#dir disk0:

Directory of disk0:/

1 -rw- 111923344 Sep 1 2008 10:26:54 -06:00 s72033-adventerprisek9_wan_dbg-vz.PP_R31_INTEG_050829

2 -rw- 112078968 Sep 9 2008 14:50:54 -06:00 s72033-adventerprisek9_wan_dbg-vz.pikespeak_r31_0908_1

3 -rw- 107608208 Sep 9 2008 18:50:04 -06:00 s72033-adventerprisek9_wan-vz.122-99.SX1010

4 -rw- 131517 Sep 9 2008 19:28:42 -06:00 tcp.proc.012842.dmp.Z

512040960 bytes total (180281344 bytes free)

Both filenames encode the process that crashed

Crashinfo filename and location

Cisco 6500 System Reliability

Resiliency (Layer 2 or Layer 3): SSO, NSF

Fault Detection

GOLD

Soft HA

NetworkElementRedundancy

Operations

OIR of Line Cards

OIR of Sup

OIR of PSU, Modules

TDR

NAIS

Redundancy

Supervisor

Switch Fabric

Service Modules

Clock

Fans

Power Supplies

Network Resilience

Operational Processes

Protection Schemes: HSRP/GLBP/VRRP, EtherChannel, 802.1s/w, PVST+

RPR and RPR+

The Catalyst 6500 supports failover between two supervisors installed in the switch. Two fault-tolerant modes can be configured: Route Processor Redundancy (RPR) and Route Processor Redundancy Plus (RPR+).

Catalyst 6500

RPR

RPR provides failover generally within 2 to 4 minutes

RPR+ requires both supervisors to be the same, and both must run the same IOS image.

Sup720-A

Sup720-B

RPR+

RPR+ provides failover generally within 30-60 seconds

PSU

PSU

Configuring RPR and RPR+

Configuration of RPR and RPR+ is achieved by entering redundancy configuration mode, then choosing the mode you wish to run.

6500# conf t

Enter configuration commands, one per line. End with CNTL/Z.

6500(config)# redundancy

6500(config-red)# mode ?

rpr Route Processor Redundancy

rpr-plus Route Processor Redundancy Plus

RPR

RPR+

6500(config-red)# mode rpr

6500(config-red)# mode rpr-plus

Confirming RPR, RPR+ Status

The redundant configuration status of the switch can be viewed using the following command:

6500# show redundancy states

my state = 13 -ACTIVE

peer state = 1 -DISABLED

Mode = Simplex

Unit = Primary

Unit ID = 5

Redundancy Mode (Operational) = Route Processor Redundancy Plus

Redundancy Mode (Configured) = Route Processor Redundancy Plus

Split Mode = Disabled

Manual Swact = Disabled Reason: Simplex mode

Communications = Down Reason: Simplex mode

client count = 11

client_notification_TMR = 30000 milliseconds

keep_alive TMR = 9000 milliseconds

keep_alive count = 0

keep_alive threshold = 18

RF debug mask = 0x0

Redundant State Configured

SSO Overview
  • Active and standby supervisors run in synchronized mode.
  • Redundant MSFC is in hot-standby mode.
  • Switch processors synchronize STP, port and VTP states.
  • PFCs synchronize Layer 2 and Layer 3 FIB, Netflow and ACL tables.
  • DFCs are not repopulated with Layer 2 and Layer 3 FIB, Netflow and ACL tables.
  • Very fast failover (0 to 3 seconds) between supervisors, but routes still need to be rebuilt on external routers.

Sup

MSFC

PFC

Active Supervisor

Sup

MSFC

PFC

Standby Supervisor

Line Card

DFC

Line Card

DFC

DFC

Line Card

SRM with SSO Overview

Active

Standby

Standby

Active

RP

RP

RP

RP

New RP builds table and

reestablishes neighbor

relationships.

SP

SP

SP

SP

STP, Port, VTP States

STP, Port, VTP States

Layer 3 traffic forwards on the last-known FIB in hardware.

PFCx

PFCx

PFCx

PFCx

Layer 2 and Layer 3 FIB, Netflow, ACL Tables

Layer 2 and Layer 3 FIB, Netflow, ACL Tables

DFCx

DFCx

DFCs not affected by supervisor failover

Layer 2 and Layer 3 FIB, Netflow, ACL Tables

Layer 2 and Layer 3 FIB, Netflow, ACL Tables

Before Failover

After Failover

NSF Overview

Catalyst 6500

NSF-aware neighbor

Linecard 1

Linecard 3

Linecard 3

Failover time: 0 to 3 seconds

NSF-capable router

Linecard 4

Primary Supervisor 720

NSF-aware neighbor

Redundant Supervisor 720

Linecard 7

Linecard 8

  • Predictable traffic path
  • No route flap

Linecard 9

PSU

1

PSU

2

  • NSF-aware neighbors do not reconverge.
  • NSF-aware neighbors help the NSF-capable router restart.
  • NSF-aware neighbors continue forwarding traffic to the restarting router.
  • NSF-capable router rebuilds Layer 3 routing protocol database from neighbor.
  • Data is forwarded in hardware based on preswitchover CEF information while routing protocols reconverge.
NSF Configuration
  • To configure SSO to use NSF:
  • 6500(config)# redundancy
  • 6500(config-red)# mode sso
  • To verify the configuration:
  • 6500# show redundancy states
BGP NSF Configuration
  • To configure BGP NSF:
  • 6500(config)# router bgp as-number
  • 6500(config-router)# bgp graceful-restart
  • To verify the configuration:
  • 6500# show ip bgp neighbors x.x.x.x
OSPF NSF Configuration
  • To configure OSPF NSF:
  • 6500(config)# router ospf processID
  • 6500(config-router)# nsf
  • To verify the configuration:
  • 6500# show ip ospf
IS-IS NSF Configuration
  • To configure IS-IS NSF:
  • 6500(config)# router isis tag
  • 6500(config-router)# nsf [cisco | ietf]
  • To verify the configuration:
  • 6500# show running-config
  • 6500# show isis nsf
EIGRP NSF Configuration
  • To configure EIGRP NSF:
  • 6500(config)# router eigrp as-number
  • 6500(config-router)# nsf
  • To verify the configuration:
  • 6500# show running-config
  • 6500# show ip protocols
DoS Protection: Control Plane Protection
  • High rates of link-level broadcast traffic impact the switch CPU and the stability of the network:
    • Storm control limits the rate of broadcast traffic received by the distribution switch.
    • Broadcast traffic within the local switch remains unrestrained.
    • Local subnet devices may still be affected, but the network remains alive.

CONST_DIAG-SP-6-HM_MESSAGE: High traffic/CPU util seen on Module 5 [SP=40%,RP=99%,Traffic=0%]

DoS Protection: Storm Control
  • Storm control is also known as broadcast suppression:
    • limits the volume of broadcast, multicast and/or unicast traffic
    • protects the network from intentional and unintentional flood attacks and STP loops
    • limits the combined rate of broadcast and multicast traffic to normal peak loads

[Graph: traffic quantity over time (0 to 3 seconds); packets above the configured threshold in each interval are dropped.]

Protecting the Distribution Layer
  • Configure storm control on distribution downlinks. Limit broadcast and multicast traffic to 1.0% of a GigE link to ensure the distribution CPU remains in the safe zone.

! Enable storm control

storm-control broadcast level 1.0

storm-control multicast level 1.0
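As a back-of-the-envelope check: a storm-control level is a percentage of the port's bandwidth, so a 1.0% level on a GigE link caps the suppressed traffic class at roughly 10 Mb/s. A rough Python sketch (the function names are invented, and the packet-rate estimate ignores Ethernet preamble and inter-frame gap):

```python
# Rough arithmetic sketch: convert a storm-control level (a percentage
# of link bandwidth) into the allowed bit rate and an approximate
# packet rate for a given frame size.

def storm_control_threshold_bps(link_bps, level_percent):
    """Bits per second allowed before storm control starts dropping."""
    return link_bps * level_percent / 100.0

def threshold_pps(link_bps, level_percent, frame_bits=64 * 8):
    """Approximate packets/s for a given frame size (64-byte frames here)."""
    return storm_control_threshold_bps(link_bps, level_percent) / frame_bits

gige = 1_000_000_000  # 1 Gb/s
assert storm_control_threshold_bps(gige, 1.0) == 10_000_000  # 10 Mb/s
print(round(threshold_pps(gige, 1.0)))  # ~19531 64-byte frames per second
```

Even a 1.0% level therefore still admits tens of thousands of small broadcast frames per second, which is well within what the supervisor CPU can absorb.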

[Graph: Sup720 CPU load versus broadcast traffic rate, with a conservative maximum marked.]

Configuring Storm Control

Storm control suppression is configured in interface configuration mode as follows:

6500(config-if)# storm-control ?

broadcast Broadcast address storm control

multicast Multicast address storm control

unicast Unicast address storm control

6500(config-if)# storm-control broadcast ?

level Set storm suppression level on this interface

6500(config-if)# storm-control broadcast level ?

<0 - 100> Enter Integer part of storm suppression level

6500(config-if)# storm-control multicast level ?

<0 - 100> Enter Integer part of storm suppression level

6500(config-if)# storm-control unicast level ?

<0 - 100> Enter Integer part of storm suppression level

Configuring Storm Control (Cont.)

Statistics for storm control suppression can be displayed as follows:

6500# show interface g1/9 counters broadcast

Port TotalSuppDiscards

Gi1/9 1033

6500# show interface g1/9 counters multicast

Port TotalSuppDiscards

Gi1/9 12

6500# show interface g1/9 counters unicast

Port TotalSuppDiscards

Gi1/9 204

6500#

Fault Management on the Catalyst 6500

Improving resiliency in redundant and nonredundant deployments:

[Diagram: fault management applies detection, isolation and correction to misconfigured systems, memory corruption, software inconsistencies and hardware faults, delivering enhanced system and network stability.]

  • Software enhancements for better fault detection
  • Mechanisms to detect and correct soft failures in the system
  • Proactive fault detection and isolation
  • Routines to detect failures that the runtime software may not be able to detect
Fault Management Framework

  • Call Home, syslogs, SNMP: report faults and take action.
  • EEM: automates actions based on events that have occurred; TCL-based configurable fault policy.
  • GOLD: detects system problems proactively.
  • Soft high availability: detects and corrects soft failures.
  • Troubleshooting: provides intelligent troubleshooting and debugging mechanisms.

Generic Online Diagnostics

GOLD implements a number of health checks, both at system startup and while the system is running. It complements existing HA features such as NSF/SSO by running in the background and alerting those features when a disruption occurs.
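Conceptually, the health-monitoring side of GOLD behaves like a background loop of non-disruptive checks that triggers a corrective action when a check fails. The sketch below is an illustrative Python model, not GOLD's implementation; all names are invented:

```python
# Illustrative model of a background health monitor (not GOLD code):
# run each non-disruptive check periodically and invoke a corrective
# action, e.g. an HA failover hook or Call Home, when a check fails.

def run_health_monitoring(checks, on_failure, cycles):
    """Run each named check once per cycle; collect and report failures."""
    failures = []
    for _ in range(cycles):
        for name, check in checks.items():
            if not check():
                failures.append(name)
                on_failure(name)  # corrective/alerting hook
    return failures


alerts = []
checks = {
    "inband_ping": lambda: True,      # healthy path to supervisor CPU
    "fabric_channel": lambda: False,  # simulated fabric fault
}
failed = run_health_monitoring(checks, alerts.append, cycles=1)
assert failed == ["fabric_channel"]
assert alerts == ["fabric_channel"]
```

The key property mirrored here is that the checks themselves are non-disruptive: monitoring runs continuously without affecting forwarded traffic.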

Bootup Diagnostics: check the operational status of components.

Run-Time Diagnostics:
  • On-demand diagnostics triggered manually by an administrator
  • Scheduled diagnostics that run at a specific time
  • Non-disruptive health diagnostics running in the background

Diagnostic Results are reported via SYSLOG messages, for example:

%DIAG-SP-3-MAJOR: Module 2: Online Diagnostics detected a Major Error. Please use 'show diagnostic result Module 2' to see test results.

Diagnostic Action: invoke an action to resolve the issue, e.g. reset the component, invoke an HA action, Call Home, etc.

GOLD

Fault detection framework for high availability:

Bootup Diagnostics:
  • Quick go/no-go tests
  • Disruptive and nondisruptive tests

Health Monitoring Diagnostics:
  • Periodic background tests
  • Nondisruptive tests

Proactive diagnostics serve as high-availability triggers and take faulty hardware out of service.

Troubleshooting Tools: On-Demand and Scheduled Diagnostics

Reactive diagnostics for troubleshooting:
  • Can run all the tests
  • Include disruptive tests used in manufacturing
GOLD Test Suite

On-Demand Diagnostics:

  • Exhaustive memory test
  • Exhaustive TCAM search test
  • Stress testing
  • All bootup and health monitoring tests can be run on demand

Scheduled Diagnostics:

  • All bootup and health monitoring tests can be scheduled
  • Scheduled switchover

Bootup Diagnostics:

  • EARL learning tests (Sup and DFC)
  • L2 tests (channel, BPDU, capture)
  • L3 tests (IPv4, IPv6, MPLS)
  • Span and multicast tests
  • CAM lookup tests (FIB, NetFlow, QoS CAM)
  • Port loopback test (all cards)
  • Fabric snake tests

Health Monitoring Diagnostics:

  • SP-RP inband ping test (Sup's SP/RP, EARL L2 and L3, rewrite engine)
  • Fabric channel health test (fabric-enabled line cards)
  • MAC notification test (DFC line cards)
  • Non-disruptive loopback test
  • Scratch registers test (PLD and ASICs)
Trivia

What do the FIFA Confederations Cup and Cisco Catalyst switches have in common?

Question and Answer Session

The expert will answer some of the submitted questions verbally. Use the Q&A panel to ask the experts your questions now.

We want your feedback!

There will be a raffle among those who complete the evaluation questionnaire: three attendees will receive a surprise gift.

To complete the evaluation, click the link in the chat. It will also open automatically when you close the session browser.

Ask the Expert

If you have additional questions, ask them here:

https://supportforums.cisco.com/message/3790884#3790884

Carlos will answer from Tuesday, December 4 through Friday, December 14, 2012.

Upcoming Webcast in Portuguese

Topic: Troubleshooting the Session Initiation Protocol (SIP)

  • Tuesday, December 6
  • 7:00 a.m. Mexico City
  • 8:30 a.m. Caracas
  • 10:00 a.m. Buenos Aires
  • 2:00 p.m. Madrid
  • Presenter: Michelle Jardim
  • http://tools.cisco.com/gems/cust/customerSite.do?METHOD=E&LANGUAGE_ID=P&SEMINAR_CODE=S17480&PRIORITY_CODE=
Answer to the Trivia

What do the FIFA Confederations Cup and Cisco Catalyst switches have in common?

In 1999, Cisco launched the Cisco Catalyst 6000 family of intelligent multigigabit switches. That same year, Mexico became the first nation to win the FIFA Confederations Cup on home soil.

Thank You for Attending

Please complete the evaluation survey for this event for a chance to win prizes.