
ITD Overview


Presentation Transcript


  1. ITD Overview. Mouli Vytla, Samar Sharma, Rajendra Thirumurthi

  2. ITD: Multi-Terabit Load-balancing with N5k/N6k/N7k
  • ASIC-based L4 load-balancing at line rate
  • Every N7k port can be used for load-balancing
  • Redirects line-rate traffic to any device, for example web cache engines, Web Accelerator Engine (WAE), WAAS, VDS-TC, etc.
  • No service module or external L4 load-balancer needed
  • Provides IP stickiness and resiliency (like resilient ECMP)
  • NAT (available for EFT)
  • Weighted load-balancing
  • Nexus 5k/6k (EFT/PoC for now)
  • Can create clusters of devices, e.g., firewalls, IPS, or Web Application Firewalls (WAF)
  • Performs health monitoring and automatic failure handling
  • Provides ACL, redirection, and load balancing simultaneously
  • Order-of-magnitude reduction in configuration and ease of deployment
  • The servers/appliances do not have to be directly connected to the N7k
  • Supports both IPv4 and IPv6

  3. ITD Deployment example
  (Topology diagram: client traffic enters the N7k running ITD; an ACL selects the traffic destined to the VIP, which is redirected and load-balanced across devices reached over port-channels Po-5 through Po-8.)
  Note: the devices don't have to be directly connected to the N7k.

  4. ITD feature advantages (slide 1 of 3)
  • Scales to a large number of nodes
  • Significant reduction in configuration complexity: e.g., a 32-node cluster would require ~300 configuration lines without ITD; the ITD configuration requires only 40 lines
  • N + M redundancy; health monitoring of servers/appliances
  • DCNM support (EFT/PoC)
  • IP stickiness, resiliency
  • Supports both IPv4 and IPv6, with VRF awareness
  • Zero-touch appliance deployment
  • No certification, integration, or qualification needed between the appliances and the Nexus 7k switch

  5. ITD feature advantages (slide 2 of 3)
  • Simultaneously use heterogeneous appliances (different models / vendors)
  • Flow-coherent symmetric traffic distribution
  • Flow coherency for bidirectional flows: the same device receives the forward and reverse traffic
  • Traffic selection by ACL or by VIP/protocol/port
  • Not dependent on N7k hardware architecture: independent of line-card types, ASICs, Nexus 7000 vs. Nexus 7700, etc.
  • Customers do not need to be aware of "hash-modulo" or "rotate" options for port-channel configuration
  • ITD does not add any load to the supervisor CPU
  • ITD uses orders of magnitude less hardware TCAM resources than WCCP

  6. ITD feature advantages (slide 3 of 3)
  • CAPEX: wiring, power, rack-space, and cost savings
  • Automatic failure handling: traffic going toward a failed node is dynamically reassigned to a standby node; no manual configuration or intervention required if a link or server fails
  • Migration from N7000 to N7700 and F3: ITD is hardware agnostic, so the feature works seamlessly after an upgrade
  • Complete transparency to the end devices
  • Simplified provisioning and ease of deployment
  • Debuggability: ITD does not have WCCP-like handshake messages
  • The solution handles an unlimited number of flows

  7. Why & Where Do We Need This Feature: Network Deployment Examples

  8. ITD use-cases
  • Use with clustering (services load-balancing), e.g., firewalls, Hadoop/Big Data, Web Application Firewalls (WAF), IPS, or load-balancing to Layer 7 load-balancers
  • Redirecting, e.g., Web Accelerator Engines (WAE), web caches
  • Server load-balancing, e.g., application servers, web servers, VDS-TC (video transparent caching)
  • Replace PBR
  • Replace ECMP, port-channel
  • DCI disaster recovery
  Please note that ITD is not a replacement for a Layer 7 load-balancer (URL, cookies, SSL, etc.).

  9. ITD use-case: Clustering
  • There is a performance gap between the switch and the servers/appliances
  • Appliance vendors try to scale capacity by stacking or clustering; both models have deficiencies
  • Stacking solution (port-channel, ECMP) drawbacks:
    • Manual configuration with a large number of steps
    • Application-level node failure is not detected
    • Ingress/egress failure handling across a pair of switches requires manual intervention
    • Traffic black-holing can easily occur
    • Doesn't scale to a large number of nodes
  • Clustering solution drawbacks:
    • Redirection of traffic among cluster nodes
    • Typically doesn't scale above 8 nodes
    • Dedicated control link between nodes
    • Dedicated port(s) reserved on each node for control-link traffic
    • Very complex to implement and debug

  10. ITD comparison with Port-channel, ECMP, PBR

  11. ITD use-case: Web Accelerator Engines
  • Traffic redirection to devices such as web caches and video caches
  • Appliance vendors try to redirect using WCCP or PBR; both models have deficiencies
  • WCCP solution drawbacks:
    • The appliance has to support the WCCP protocol
    • Explosion in the number of TCAM entries due to WCCP
    • Complex protocol between switch and appliance
    • Troubleshooting involves both switch and appliance
    • The user cannot choose the load-balancing method
    • Appliances have to be aware of the health of other appliances
    • Supervisor CPU utilization becomes high
    • Only IPv4 is supported on the N7k
  • PBR solution drawbacks:
    • Very manual and error-prone method
    • Very limited probing
    • No automatic failure detection and correction (failaction)
    • Doesn't scale

  12. ITD comparison with WCCP

  13. ITD use-case: Server Load-Balancing
  • Servers are migrating from 1G to 10G
  • The largest load-balancers today can support ~100G, while large data centers need multi-Terabit load-balancing
  • ITD can perform ACL + VIP + redirection + load-balancing on each packet at line rate
  • ITD also provides support for advertising the VIP to the network
  • ITD allows wild-card VIPs and L4 port numbers
  • Server health monitoring
  • Example: load-balance traffic to 256 servers of 10G each
  • Weighted load balancing to distribute load proportionately (see the sketch below)
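
  A minimal sketch of such a service, assembled from the commands shown in the configuration section later in this deck; the group name, service name, ingress interface, and addresses are hypothetical, and the final "no shutdown" activates the service:
    feature itd
    itd device-group WEBFARM
      node ip 20.20.20.2 weight 2          ! receives twice the share of traffic
      node ip 20.20.20.3
      probe icmp
    itd WebTraffic
      ingress interface e3/1               ! client-facing interface (hypothetical)
      device-group WEBFARM
      load-balance method src ip
      virtual ip 210.10.10.100 255.255.255.255 advertise enable
      no shutdown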

  14. ITD comparison with Traditional Load-balancer

  15. ITD Clustering: one-ARM mode topology
  (Topology diagram: client traffic enters the N7k running ITD, which load-balances on source IP across appliances reached over port-channels Po-5 through Po-8.)
  Note: the devices don't have to be directly connected to the N7k. A configuration sketch for this topology follows below.
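
  A one-ARM configuration sketch for this topology, using only commands shown later in this deck; the interface and node addresses are hypothetical:
    feature itd
    itd device-group FW-CLUSTER
      node ip 10.1.1.1
      node ip 10.1.1.2
      node ip 10.1.1.3
      node ip 10.1.1.4
      probe icmp
    itd FW-ONE-ARM
      ingress interface e3/1               ! client-facing interface (hypothetical)
      device-group FW-CLUSTER
      load-balance method src ip           ! IP-sticky, flow-coherent distribution
      failaction node reassign
      no shutdown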

  16. ITD Clustering: Sandwich Mode topology
  (Topology diagram: two switches, N7k-1 facing the outside/clients and N7k-2 facing the inside, each run an ITD service toward the same cluster; one side load-balances on source IP and the other on destination IP so that both directions of a flow reach the same node.)
  A two-service configuration sketch follows below.
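
  A sketch of the two cooperating services, one per switch; names, interfaces, and addresses are hypothetical, and which side hashes on source versus destination IP depends on where the clients sit:
    ! On N7k-1 (client-facing side): hash on the client (source) IP
    itd device-group FW-CLUSTER
      node ip 10.1.1.1
      node ip 10.1.1.2
      probe icmp
    itd FW-OUTSIDE
      ingress interface e3/1
      device-group FW-CLUSTER
      load-balance method src ip
      no shutdown
    ! On N7k-2 (server-facing side): hash on the same client IP, now the destination
    itd device-group FW-CLUSTER
      node ip 10.2.2.1
      node ip 10.2.2.2
      probe icmp
    itd FW-INSIDE
      ingress interface e4/1
      device-group FW-CLUSTER
      load-balance method dst ip
      no shutdown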

  17. ITD Clustering: Sandwich Mode with NAT
  (Topology diagram: the same sandwich design with NAT on the cluster nodes. Outside, packets carry Src IP = client, Dest IP = VIP and are translated to Src IP = client, Dest IP = RS (real server); on the return path, Src IP = RS, Dest IP = client is translated back to Src IP = VIP, Dest IP = client. Clients include external, internal, and mobile devices.)

  18. ITD Clustering: Sandwich Mode (two VDCs)
  (Topology diagram: the same sandwich design built with two VDCs on a single chassis instead of two switches; one VDC's ITD service load-balances on source IP and the other's on destination IP.)

  19. ITD Clustering: one-ARM mode, vPC topology
  (Topology diagram: N7k-1 and N7k-2 form a vPC pair, each running ITD; the appliance cluster is attached over port-channels Po-1 through Po-4.)

  20. ITD Load-balancing: VIP mode
  (Topology diagram: clients send traffic to VIP 210.10.10.100 hosted on the N7k; ITD load-balances it across servers reached over Po-1, Po-2, and Po-3.)
  A configuration sketch for this topology follows below.
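
  A VIP-mode sketch that load-balances only TCP/80 traffic sent to the VIP, following the "virtual" command syntax shown later in the deck; the service name, interface, and node addresses are hypothetical:
    itd device-group WEB-NODES
      node ip 20.20.20.2
      node ip 20.20.20.3
      node ip 20.20.20.4
      probe tcp port 80 frequency 10 retry-count 5 timeout 5
    itd HTTP-VIP
      ingress interface e3/1               ! client-facing interface (hypothetical)
      device-group WEB-NODES
      load-balance method src ip
      virtual ip 210.10.10.100 255.255.255.255 tcp 80 advertise enable
      no shutdown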

  21. ITD: Load-balance selective traffic (ACL + VIP + Redirect + LB)
  (Topology diagram: an ACL selects the client traffic destined to the VIP; ITD redirects it and load-balances on source IP across web-cache/video-cache/CDN appliances reached over port-channels Po-5 through Po-8.)

  22. Traditional data center (without ITD)
  (Topology diagram: clients on the outside pass through a firewall load-balancer; on the inside, dedicated L4 server load-balancers front the web-server tier and the app-server tier.)

  23. ITD-enabled data center
  (Topology diagram: the same data center with ITD on the switch taking over the roles of the firewall load-balancer and the L4 server load-balancers for the web-server and app-server tiers.)

  24. N7K: NAT
  (Packet-flow diagram: Client-1 at 51.51.51.2 sends traffic to load-balancing VIP 210.10.10.100; ITD translates the destination to real server 50.50.50.100 behind Po-1 and reverses the translation on the return path; steps 1-4 mark the forward and return packets.)

  25. N7K ITD: VIP load-balancing with NAT
  (Packet-flow diagram: Client-1 at 51.51.51.2 and Client-2 at 51.51.51.3 send traffic to load-balancing VIP 210.10.10.100; ITD load-balances and NATs the flows to real servers 50.50.50.100 and 50.50.50.101 behind Po-1; steps 1-4 mark the forward and return packets.)

  26. ITD Clustering: Use with VMs
  (Topology diagram: clients reach web-server VIP 210.10.10.100 through the N7k (interface e3/1, VLAN 2000) running ITD; the nodes are VMs hosted on Cisco UCS behind vNIC/vSwitch instances, with addresses in the 210.10.10.x and 220.10.10.x ranges.)

  27. Feature Specs & Details

  28. ITD Feature Sizing
  (The sizing table is not reproduced in this transcript.) Note: these figures are for the 6.2(10) NX-OS release.

  29. Configuration & Troubleshooting

  30. ITD: Enabling the Feature
  Command syntax: [no] feature itd
  • Executed in CLI config mode
  • Enables/disables the ITD feature
  N7k# conf t
  Enter configuration commands, one per line. End with CNTL/Z.
  N7k(config)# feature itd
  N7k# sh feature | grep itd
  itd          1          enabled

  31. ITD: Service creation steps
  Three primary steps to configure an ITD service (a complete example is sketched below):
  • Create a device group
  • Create the ITD service
  • Attach the device group to the ITD service
  NOTE:
  • ITD is a conditional feature and needs to be enabled via "feature itd"
  • An EL2 license is required
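
  Putting the three steps together, a minimal end-to-end sketch assembled from the commands on the following slides; the group name, service name, interface, and addresses are hypothetical:
    feature itd
    ! Step 1: create a device group and add the nodes
    itd device-group WEBSERVERS
      node ip 20.20.20.2
      node ip 20.20.20.3
      probe icmp
    ! Step 2: create the ITD service
    itd WebTraffic
      ingress interface e3/1
    ! Step 3: attach the device group to the service and activate it
      device-group WEBSERVERS
      load-balance method src ip
      no shutdown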

  32. ITD: Creating a Device Group
  Provides a template to group devices. A device group contains:
  • Node IP addresses
  • Active or standby mode for each node
  • The probe to use for health monitoring of the nodes
  N7k(config)# itd device-group FW-INSPECT                       (create a device group)
  N7k(config-device-group)# node ip 4.4.4.4                      (configure an active node)
  N7k(config-device-group)# node ip 5.5.5.5 mode hot-standby     (configure a standby node)
  N7k(config-device-group)# probe ?
    icmp  ITD probe icmp
    tcp   ITD probe tcp
    udp   ITD probe udp
    dns   ITD DNS probe
  N7k(config-device-group)# probe icmp frequency 10 retry-count 5 timeout 3
  N7k(config-device-group)# probe tcp port 80 frequency 10 retry-count 5 timeout 5
  N7k(config-device-group)# probe udp port 53 frequency 10 retry-count 5 timeout 5
  Note: for TCP/UDP probes, the destination port number can be specified.

  33. ITD: Configuring a Device Group
  Command syntax: [no] itd device-group <device-group-name>
  • Executed in CLI config mode
  • Creates/deletes a device group
  N7k(config)# feature itd
  N7k(config)# itd device-group WEBSERVERS
  N7k(config-device-group)# node ip 20.20.20.2
  N7k(config-device-group)# node ip 20.20.20.3
  N7k(config-device-group)# node ip 20.20.20.4
  N7k(config-device-group)# node ip 20.20.20.5

  34. ITD: Configuring a Device Group w/ group-level standby
  Command syntax: [no] itd device-group <device-group-name>
  • Executed in CLI config mode
  • Creates/deletes a device group
  N7k(config)# feature itd
  N7k(config)# itd device-group WEBSERVERS
  N7k(config-device-group)# node ip 20.20.20.2
  N7k(config-device-group)# node ip 20.20.20.3
  N7k(config-device-group)# node ip 20.20.20.4
  N7k(config-device-group)# node ip 20.20.20.5
  N7k(config-device-group)# node ip 20.20.20.6 mode hot-standby

  35. ITD: Configuring a Device Group w/ node-level standby
  Command syntax: [no] itd device-group <device-group-name>
  • Executed in CLI config mode
  • Creates/deletes a device group
  N7k(config)# feature itd
  N7k(config)# itd device-group WEBSERVERS
  N7k(config-device-group)# node ip 20.20.20.2 standby 20.20.20.6
  N7k(config-device-group)# node ip 20.20.20.3
  N7k(config-device-group)# node ip 20.20.20.4
  N7k(config-device-group)# node ip 20.20.20.5

  36. ITD: Configuring a Device Group w/ weights for load distribution
  Command syntax: [no] itd device-group <device-group-name>
  • Executed in CLI config mode
  • Creates/deletes a device group
  N7k(config)# feature itd
  N7k(config)# itd device-group WEBSERVERS
  N7k(config-device-group)# node ip 20.20.20.2 weight 2
  N7k(config-device-group)# node ip 20.20.20.3 weight 4
  N7k(config-device-group)# node ip 20.20.20.4
  N7k(config-device-group)# node ip 20.20.20.5

  37. ITD: Configuring a Probe
  Command syntax:
  [no] probe icmp [frequency <freq> | timeout <timeout> | retry-count <retry-count>]
  [no] probe [tcp | udp] <port-num> [frequency <freq> | timeout <timeout> | retry-count <retry-count>]
  • Executed in CLI config mode, as a sub-mode of the ITD device-group CLI
  • Used for health monitoring of nodes
  N7k(config)# itd device-group WEBSERVERS
  N7k(config-device-group)# node ip 20.20.20.2
  N7k(config-device-group)# node ip 20.20.20.3
  N7k(config-device-group)# node ip 20.20.20.4
  N7k(config-device-group)# node ip 20.20.20.5
  N7k(config-device-group)# probe icmp

  38. ITD: Creating an ITD Service
  ITD service attributes:
  • device-group: associate a device group with the service
  • ingress interface: specify the list of ingress interfaces
  • load-balance: select the load-distribution method
  • virtual: configure a virtual IP
  N7k(config)# itd <service-name> ?
    device-group  ITD device group
    failaction    ITD failaction
    ingress       ITD Ingress interface
    load-balance  ITD Loadbalance scheme
    peer          Peer for sandwich mode
    virtual       ITD virtual ip configuration
    vrf           ITD service vrf
    nat           Network Address Translation
  N7k(config-itd)# load-balance method ?
    dst  Destination based parameters
    src  Source based parameters
  N7k(config-itd)# load-balance method src ?
    ip         IP
    ip-l4port  IP and L4 port
  N7k(config-itd)# virtual ip 4.4.4.4 255.255.255.255 ?
    advertise  Advertise
    tcp        TCP Protocol
    udp        UDP Protocol

  39. ITD: Configuring a Service
  Command syntax: [no] itd <service-name>
  • Executed in CLI config mode
  • Creates/deletes an ITD service
  N7k(config)# itd WebTraffic

  40. ITD: Configuring Ingress Interfaces
  Command syntax: [no] ingress interface <interface 1>, <interface 2>, <interface range>
  • Executed in CLI config mode, as a sub-mode of the ITD service CLI
  • Specifies the list of ingress interfaces for the ITD service
  N7k(config)# itd WebTraffic
  N7k(config-itd)# ingress interface e3/1, e4/1-10

  41. ITD: Associating a Device Group
  Command syntax: [no] device-group <device-group-name>
  • Executed in CLI config mode, as a sub-mode of the ITD service CLI
  • Specifies the device group to associate with the ITD service
  N7k(config)# itd WebTraffic
  N7k(config-itd)# ingress interface e3/1, e4/1-10
  N7k(config-itd)# device-group WEBSERVERS

  42. ITD: Configuring the Loadbalance method
  Command syntax: [no] load-balance method [src | dst] [ip | ip-l4port [tcp | udp] range <start> <end>]
  • Executed in CLI config mode, as a sub-mode of the ITD service CLI
  • Specifies the load-balancing method (see the L4-port example below)
  N7k(config)# itd WebTraffic
  N7k(config-itd)# ingress interface e3/1, e4/1-10
  N7k(config-itd)# device-group WEBSERVERS
  N7k(config-itd)# load-balance method src ip
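
  For finer-grained distribution, the same command can hash on the IP address plus an L4 port range, following the syntax above; the range values here are hypothetical:
    N7k(config-itd)# load-balance method src ip-l4port tcp range 1024 65535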

  43. ITD: Configuring Loadbalance buckets
  Command syntax: [no] load-balance method [src | dst] buckets <bucket> mask-position <mask>
  • Executed in CLI config mode, as a sub-mode of the ITD service CLI
  • Specifies the number of load-balancing buckets
  N7k(config)# itd WebTraffic
  N7k(config-itd)# ingress interface e3/1, e4/1-10
  N7k(config-itd)# device-group WEBSERVERS
  N7k(config-itd)# load-balance buckets 16

  44. Loadbalance buckets
  • The load-balance bucket option lets the user specify the number of buckets (ACLs) created per service.
  • The bucket value must be configured in powers of 2.
  • When more buckets are configured than there are active nodes, the buckets are assigned to the nodes in round-robin fashion (see the example below).
  • Bucket configuration is optional; by default the value is computed based on the number of configured nodes.
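
  As an illustration of the round-robin assignment (the node count, bucket count, and exact bucket-to-node numbering here are hypothetical), 16 buckets over 4 active nodes give each node 4 buckets:
    itd device-group WEBSERVERS            ! 4 active nodes
      node ip 20.20.20.2                   ! e.g. buckets 1, 5, 9, 13
      node ip 20.20.20.3                   ! e.g. buckets 2, 6, 10, 14
      node ip 20.20.20.4                   ! e.g. buckets 3, 7, 11, 15
      node ip 20.20.20.5                   ! e.g. buckets 4, 8, 12, 16
    itd WebTraffic
      device-group WEBSERVERS
      load-balance buckets 16              ! 16 buckets distributed round-robin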

  45. ITD: Configuring Loadbalance mask-position
  Command syntax: [no] load-balance mask-position <mask>
  • Executed in CLI config mode, as a sub-mode of the ITD service CLI
  • Specifies the position of the load-balancing mask
  N7k(config)# itd WebTraffic
  N7k(config-itd)# ingress interface e3/1, e4/1-10
  N7k(config-itd)# device-group WEBSERVERS
  N7k(config-itd)# load-balance mask-position 8

  46. ITD: Configuring a VIP
  Command syntax: [no] virtual [ip | ipv6] <ip-address> [<net mask> | <prefix>] [ip | tcp <port-num> | udp <port-num>] [advertise enable | disable]
  • Executed in CLI config mode, as a sub-mode of the ITD service CLI
  • Used to host a VIP on the N7k
  N7k(config)# itd WebTraffic
  N7k(config-itd)# ingress interface e3/1, e4/1-10
  N7k(config-itd)# device-group WEBSERVERS
  N7k(config-itd)# load-balance method src ip
  N7k(config-itd)# virtual ip 210.10.10.100 255.255.255.255

  47. ITD: Configuring a VIP with advertise
  Command syntax: [no] virtual [ip | ipv6] <ip-address> [<net mask> | <prefix>] [ip | tcp <port-num> | udp <port-num>] [advertise enable | disable]
  • Executed in CLI config mode, as a sub-mode of the ITD service CLI
  • Used to host a VIP on the N7k with advertisement enabled
  • Advertise enable is RHI for ITD: it creates static routes for the configured VIP
  • The static routes can then be redistributed into the user-configured routing protocol (see the sketch below)
  N7k(config)# itd WebTraffic
  N7k(config-itd)# ingress interface e3/1, e4/1-10
  N7k(config-itd)# device-group WEBSERVERS
  N7k(config-itd)# load-balance method src ip
  N7k(config-itd)# virtual ip 210.10.10.100 255.255.255.255 advertise enable
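
  One way to redistribute those VIP static routes into a routing protocol, sketched with standard NX-OS routing commands; the OSPF instance, route-map, and prefix-list names are hypothetical and are not part of the ITD feature itself:
    feature ospf
    ip prefix-list ITD-VIPS seq 5 permit 210.10.10.100/32
    route-map ADV-ITD-VIPS permit 10
      match ip address prefix-list ITD-VIPS
    router ospf 1
      redistribute static route-map ADV-ITD-VIPS   ! advertises the VIP route created by ITD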

  48. ITD: Configuring a VIP with NAT
  Command syntax: [no] nat destination
  • Executed in CLI config mode, as a sub-mode of the ITD service CLI
  • Used to translate the destination IP (the VIP) to the node selected by ITD, and to reverse the translation on the return path
  N7k(config)# itd WebTraffic
  N7k(config-itd)# ingress interface e3/1, e4/1-10
  N7k(config-itd)# device-group WEBSERVERS
  N7k(config-itd)# load-balance method src ip
  N7k(config-itd)# virtual ip 210.10.10.100 255.255.255.255 advertise enable
  N7k(config-itd)# nat destination

  49. ITD: Configuring failaction node reassign
  Command syntax: [no] failaction node reassign
  • Executed in CLI config mode, as a sub-mode of the ITD service CLI
  • Used to reassign traffic to an active node when a node fails
  • ITD probe configuration is mandatory; failaction is supported only for IPv4 addresses
  • Once the failed node comes back, the recovered node starts receiving traffic again
  N7k(config)# itd WebTraffic
  N7k(config-itd)# ingress interface e3/1, e4/1-10
  N7k(config-itd)# device-group WEBSERVERS
  N7k(config-itd)# failaction node reassign

  50. Failaction node reassign (continued)
  Failaction reassign without a standby:
  • When a node goes down (probe failure), its traffic is reassigned to the first available active node.
  • When the failed node comes back up (probe success), it starts handling its connections again.
  • If all the nodes are down, the packets simply get routed automatically.
  Failaction reassign with a standby:
  • When a node goes down (probe failure) and there is a working standby node, its traffic is directed to the first available standby node.
  • When all nodes are down, including the standby node, the traffic is reassigned to the first available active node.
  • When the failed node comes back up (probe success), it starts handling its connections again.
  • If all the nodes are down, the packets simply get routed automatically.
  Node status after a failure can be verified with the show commands sketched below.
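
  Node status and traffic reassignment can be checked with the ITD show commands; a minimal sketch (the service name is hypothetical, and the available keywords and output columns vary by release):
    N7k# show itd                          ! all ITD services, device groups, node and probe status
    N7k# show itd WebTraffic brief         ! condensed view of a single service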
