
NPAR Dell - QLogic

Presentation Transcript


  1. NPAR Dell - QLogic, October 2011

  2. Dell and QLogic Drive Next-generation Blade Server I/O Virtualization with NPAR*
  *Based on QLogic VMflex™ Technology

  3. Agenda
  • Why NPAR (NIC Partitioning)?
  • Highlights
  • Operations
  • Configurations
  Hardware shown: QME8242-k adapter, PowerEdge M-Series blade server, PowerConnect M8424-k converged network switch, PowerEdge M1000e modular blade enclosure

  4. Why NPAR?
  • Lowers TCO
    • Consolidates cables, infrastructure, and I/O
    • Saves server resources
    • Reduces operational complexity
    • Flexible SAN and LAN personality
    • VM-to-VM NIC traffic without an external switch
  • Efficient I/O utilization
    • Allows dynamic bandwidth provisioning
    • Minimizes bandwidth waste
    • Scales I/O workloads and connections
    • Scale-out performance for virtualized servers
    • Provides finer control for SLA / on-demand services
  • Simpler deployment
    • Solution not dependent on OS or switch
    • Configuration at pre-boot or OS level
  Maximize data center efficiency.

  5. Key Attributes
  • No OS or BIOS changes required
  • NIC controls the transmit flow rate
  • User configurable
  • Dynamic bandwidth allocation
  • Storage and NIC personalities (function type)
  • Full offload for iSCSI and FCoE with NPAR
  • Concurrent FCoE, iSCSI, and NIC support
  • Minimum bandwidth allows fine-grained QoS
  An OS- and switch-agnostic solution delivers the highest levels of interoperability.

  6. NPAR Theory of Operation
  Up to 4 physical functions (PFs) on each physical port:

  Physical Port 0 | Physical Port 1 | Function type | Default state
  PF0             | PF1             | NIC*          | Enabled (NIC)
  PF2             | PF3             | NIC           | Disabled
  PF4             | PF5             | iSCSI / NIC   | iSCSI
  PF6             | PF7             | FCoE / NIC    | FCoE

  *The NIC function on PF0 and PF1 is always enabled.

  7. NPAR Theory of Operation
  • Functions 0 & 1 (Port 0 and Port 1): always present, always NIC
  • Functions 2 & 3: NIC or disabled
  • Functions 4 & 5: iSCSI, NIC, or disabled
  • Functions 6 & 7: FCoE, NIC, or disabled
  • One iSCSI and/or one FCoE function per physical port
  • NIC, iSCSI, and FCoE have fixed function numbers
  • Functions 2-7 can be independently disabled
  The sketch below models these rules.
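A minimal sketch of the rules above, in Python (not QLogic code; the names and structure are illustrative assumptions):

ALLOWED = {
    0: {"NIC"},                       # PF0/PF1: always present, always NIC
    1: {"NIC"},
    2: {"NIC", "DISABLED"},           # PF2/PF3: NIC or disabled
    3: {"NIC", "DISABLED"},
    4: {"iSCSI", "NIC", "DISABLED"},  # PF4/PF5: iSCSI, NIC, or disabled
    5: {"iSCSI", "NIC", "DISABLED"},
    6: {"FCoE", "NIC", "DISABLED"},   # PF6/PF7: FCoE, NIC, or disabled
    7: {"FCoE", "NIC", "DISABLED"},
}

def validate(personalities: dict[int, str]) -> None:
    """Raise ValueError if a PF-to-personality map breaks the NPAR rules."""
    for pf, kind in personalities.items():
        if kind not in ALLOWED[pf]:
            raise ValueError(f"PF{pf} cannot be {kind}")
    for port in (0, 1):  # even PFs sit on port 0, odd PFs on port 1
        kinds = [k for pf, k in personalities.items() if pf % 2 == port]
        if kinds.count("iSCSI") > 1 or kinds.count("FCoE") > 1:
            raise ValueError(f"port {port}: at most one iSCSI and one FCoE")

# The default state from slide 6 passes the check:
validate({0: "NIC", 1: "NIC", 2: "DISABLED", 3: "DISABLED",
          4: "iSCSI", 5: "iSCSI", 6: "FCoE", 7: "FCoE"})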

  8. NPAR Configuration Options
  • Enable / disable NPAR functions
  • Change the function type (personality)
  • Allocate minimum and maximum bandwidth
  A simple data model for these per-partition settings is sketched below.
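Purely for illustration (the field names are assumptions, not a QLogic API), the three options map naturally onto one record per partition:

from dataclasses import dataclass

@dataclass
class Partition:
    function: int          # PF number, 0-7
    personality: str       # "NIC", "iSCSI", "FCoE", or "DISABLED"
    min_bw_pct: int = 0    # guaranteed share, as % of link speed
    max_bw_pct: int = 100  # hard cap, as % of link speed

    def enabled(self) -> bool:
        return self.personality != "DISABLED"

# Example: PF4 as an iSCSI partition with a 20% floor and an 80% cap.
pf4 = Partition(function=4, personality="iSCSI", min_bw_pct=20, max_bw_pct=80)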

  9. NPAR Configuration – QLogic Utility
  Invoke QLogic Fast!UTIL during POST by typing <Ctrl>Q.

  10. NPAR Configuration – QLogic Utility
  • Open the configuration utility
  • Change the function type
  • Allocate minimum bandwidth
  • Save changes and reboot

  11. NPAR Configuration – Dell USC
  UEFI → F10 → USC → Advanced Configuration → select the port for NPAR

  12. NPAR Configuration – Dell USC

  13. NPAR Configuration – Dell USC

  14. NPAR Configuration – Windows Properties Page
  • NPAR configured for three NICs and FCoE
  • NPAR configured for three NICs and iSCSI

  15. NPAR Configuration – QCC GUI
  The web-based tool provides the same interface for Windows and Linux.

  16. NPAR Configuration – QCC CLI
  The CLI tool provides the same interface for Windows and Linux.

  17. NPAR Configuration – ESX Server
  [Diagram: VM pairs with vNICs connect through per-port vSwitches to NIC functions PF0-PF7 on Physical Ports 0 and 1; vDisks reach the iSCSI (PF4/PF5) and FCoE (PF6/PF7) functions through the SCSI layer; each port's eSwitch feeds the TX/RX uplink and PHY to the external switch port.]
  • Independently configured for each port
  • eSwitch used for VM-to-VM NIC communication

  18. NPAR Configuration – ESX Server
  [Diagram: eight VMs attach via vNICs and vSwitches to functions PF0, PF2, PF4, and PF6 on NIC Port 0; the port's eSwitch carries VM-to-VM traffic ahead of the TX/RX path and PHY to the external switch; NIC Port 1 is shown alongside.]

  19. NPAR Configuration – vCenter Plugin (1 of 2)
  • Enable the NIC function type for Function_2 using the pull-down menu
  • Save the NPAR configuration
  • Reboot to initiate the change

  20. NPAR Configuration – vCenter Plugin (2 of 2)
  Function_2 is now enabled as a NIC.

  21. NPAR Configuration – Bandwidth Allocation
  • Minimum bandwidth
    • Minimum guaranteed bandwidth, specified as a % of link speed
    • The minimums across all partitions may add up to the link's full bandwidth
    • A partition may exceed its minimum value, up to its maximum value, if excess bandwidth is available on the physical port
  • Maximum bandwidth
    • Hard cap on a partition's bandwidth, specified as a % of link speed
    • A partition is never allowed to exceed its maximum value, even if excess bandwidth is available on the physical port
  The sketch below shows how the two limits interact.
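One plausible way to picture these semantics, as a Python sketch (an illustration under assumptions, not the adapter's actual scheduler; the partition names and sharing policy are invented here):

def allocate(demands: dict[str, float],
             mins: dict[str, float],
             maxs: dict[str, float]) -> dict[str, float]:
    """Share 100% of a port's bandwidth among partitions: each one is
    guaranteed its minimum (capped by demand), and leftover bandwidth is
    handed out evenly, never pushing a partition past its demand or max."""
    # Pass 1: guarantee each partition its minimum, capped by its demand.
    alloc = {p: min(demands[p], mins[p]) for p in demands}
    spare = 100.0 - sum(alloc.values())
    # Pass 2: hungry partitions split the spare, up to demand and max.
    hungry = [p for p in demands if alloc[p] < min(demands[p], maxs[p])]
    while spare > 1e-9 and hungry:
        share = spare / len(hungry)
        for p in list(hungry):
            extra = min(share, min(demands[p], maxs[p]) - alloc[p])
            alloc[p] += extra
            spare -= extra
            if alloc[p] >= min(demands[p], maxs[p]):
                hungry.remove(p)
    return alloc

# iSCSI has a 20% floor; FCoE is capped at 50% and cannot exceed it even
# though spare bandwidth exists on the port.
print(allocate(demands={"NIC": 10.0, "iSCSI": 30.0, "FCoE": 90.0},
               mins={"NIC": 10.0, "iSCSI": 20.0, "FCoE": 20.0},
               maxs={"NIC": 100.0, "iSCSI": 100.0, "FCoE": 50.0}))
# -> {'NIC': 10.0, 'iSCSI': 30.0, 'FCoE': 50.0}; 10% of the link stays idle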

  22. NPAR Configuration – Bandwidth Allocation Using Dell USC

  23. NPAR Configuration – Bandwidth Allocation with Dell USC
  • From Global Bandwidth Allocation, select the partition
  • The default allocation is shown
  • Set the relative bandwidth weighting
  • Set the maximum bandwidth

  24. NPAR Configuration – Bandwidth Allocation with Dell USC
  • Configure the minimum bandwidth
  • Configure the maximum bandwidth

  25. NPAR Configuration – Bandwidth Allocation
  • Right-click Function 0 to enable the Bandwidth Configuration window
  • Configure the minimum and maximum values dynamically

  26. NPAR Configuration – Bandwidth Allocation with vCenter Plugin

  27. NPAR Configuration – Oversubscription
  Without oversubscription:
  • Bandwidth is fixed
  • Unused bandwidth is wasted
  With oversubscription:
  • Unused bandwidth remains available
  • It can be claimed automatically
  • Partitions use it when needed
  An NPAR-enabled 10Gb port can be configured to allow each NIC partition to claim up to 100% of the bandwidth going unused by the other NIC partitions on the same port; the arithmetic is sketched below.
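A small standalone Python sketch of that arithmetic (the partition names and demand figures are illustrative assumptions):

LINK_GBPS = 10.0
demand = {"P1": 9.0, "P2": 0.5, "P3": 0.0, "P4": 0.0}  # current demand, Gbps

# Without oversubscription: each partition is pinned to a fixed 2.5 Gbps slice.
fixed = {p: min(d, LINK_GBPS / 4) for p, d in demand.items()}
print(fixed, "wasted:", LINK_GBPS - sum(fixed.values()), "Gbps")
# -> P1 capped at 2.5 Gbps; 7.0 Gbps of the port sits idle

# With oversubscription (every partition's max set to 100%): P1 may claim
# whatever its neighbors leave unused, up to the full link rate.
others = sum(d for p, d in demand.items() if p != "P1")
print("P1 with oversubscription:", min(demand["P1"], LINK_GBPS - others), "Gbps")
# -> 9.0 Gbps; nothing is wasted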
