
  1. Overview
     Dan Braden and Bill Wiegand
     IBM Advanced Technical Skills
     March 15, 2012

  2. Power VIO and SVC Disk Path Design
     Dan Braden – IBM Power ATS
     Bill Wiegand – IBM Storage ATS
     December 14, 2011

  3. Agenda
     • Determine and examine all possible 4-path designs
       • Dual VIOS, dual SAN fabric, and SVC
     • Examine each design for its availability characteristics
     • Handling multiple VIOCs
     • Extending to more SVC IO groups
     • Increasing bandwidth

  4. Objective
     The objective of this presentation is to show alternative disk path designs for setting up a dual-VIOS environment with SVC storage, and their availability characteristics. We examine zoning with 4 vFCs, each connected to a single SVC port, to generate 4 paths per LUN.

  5. Definitions
     • SAN link – a cable connecting a storage port or a server port to the SAN
     • Disk path – a logical path from a specific host port to a specific storage port
     • WWPN – World Wide Port Name – a unique 16-hex-digit ID for a host or storage port
     • Active/passive disk subsystem – a disk subsystem with dual controllers/processors where one controller handles all IOs for a LUN except under failure scenarios
       • Typically half the LUNs are owned by each controller so that all resources are used
     • VIOS – Virtual IO Server
     • VIOC – Virtual IO Client
     • vFC – Virtual Fibre Channel adapter, created with VIO for a VIOC
       • Tied to a specific real FC adapter in a VIOS
       • Has an online and an offline WWPN to facilitate LPM
       • The active WWPN changes with each LPM
     • LPM – Live Partition Mobility – the ability to move a running VIOC LPAR from one Power system to another
       • Requires all IO for the VIOC to be virtualized through VIOSs
     • WWPN zone – typically a single-initiator (host port) zone listing the storage ports with which the host port may communicate:
       • host port WWPN, storage port WWPN, storage port WWPN, …

  6. Zoning nomenclature
     • Single-initiator zone – a zone with one host port and one or more storage ports
       • A best practice
     • Alias – used to give a descriptive name to a WWPN
     • Examples:
       • vFC0_myhost – the first virtual adapter port on the myhost LPAR
       • LPARa_fcs0 – fcs0 on an LPAR, representing a host port
       • N1_P3 – SVC node 1, port 3
     • This presentation assumes WWPN zoning
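As an illustration of this nomenclature (not part of the original deck), here is a minimal Python sketch that builds alias and single-initiator zone strings from the conventions above. The WWPN values and the zone name are made-up placeholders; a real fabric would use the switch vendor's own CLI or GUI.

```python
# Illustrative only: compose alias and single-initiator zone strings that
# follow the naming conventions above. WWPNs are made-up placeholders.
def alias(name: str, wwpn: str) -> str:
    return f"alias {name} = {wwpn}"

def single_initiator_zone(name: str, host_alias: str, *storage_aliases: str) -> str:
    # exactly one host port (the initiator) plus one or more storage ports
    return f"zone {name}: " + "; ".join((host_alias,) + storage_aliases)

print(alias("N1_P3", "50:00:00:00:00:00:00:01"))        # SVC node 1, port 3
print(alias("vFC0_myhost", "c0:00:00:00:00:00:00:02"))  # first vFC port on myhost
print(single_initiator_zone("myhost_to_N1P3", "vFC0_myhost", "N1_P3"))
```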

  7. Availability requirements
     • We'd like to set up the zoning so that we can survive a double failure
       • Failure of any two of: a VIOS, a SAN fabric, or an SVC node
       • But not failure of both VIOSs, both SAN fabrics, or both SVC nodes, which always causes an outage
     • With 2 ports per VIOS or SVC node, we can ignore failure of individual ports
       • Two ports failing in a VIOS or SVC node is equivalent to the VIOS/node failing
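These requirements can be made concrete with a short sketch (an illustration, not from the deck): of the 15 ways to pick two failing components, the 3 pairs that take out both members of a redundant pair are excluded, leaving 12 double failures a design should survive.

```python
from itertools import combinations

# Any two components may fail together, except both members of a redundant
# pair, which is always an outage regardless of zoning.
components = ["VIOS1", "VIOS2", "SAN1", "SAN2", "node1", "node2"]
always_fatal = [{"VIOS1", "VIOS2"}, {"SAN1", "SAN2"}, {"node1", "node2"}]

to_survive = [set(pair) for pair in combinations(components, 2)
              if set(pair) not in always_fatal]
print(len(to_survive), "double failures the zoning should survive")  # prints 12
```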

  8. Why 4 paths?
     • Minimizing the number of paths minimizes the time AIX needs to determine that paths have failed and to recover
     • AIX must distinguish between a slow IO and a path failure
       • We don't want to mark paths as failed if the IO is just slow

  9. SAN cabling
     [Diagram: a Power Server with a VIOC behind VIOS1 and VIOS2 (server ports 1–4), dual fabrics SAN1 and SAN2, and one SVC IO group with SVC node1 and node2; port naming: N1P1 = Node 1 Port 1, N2P3 = Node 2 Port 3]
     • Assumes 2 host ports per VIOS
       • Additional ports can be used for additional bandwidth
     • Up to 4 vFCs may be created here per VIOC
     • Available host HBAs have 1, 2, or 4 ports
     • Each SVC node has 4 ports

  10. How do we get 4 paths?
      • Given 4 host ports and 4 storage ports, we have 16 potential paths (see the sketch below)
        • Reduced by half by the dual SAN fabric
        • Can be further reduced by SAN zoning at the switch
        • Can be further reduced by LUN masking at the SVC
      • A LUN is served from one SVC IO group only
        • Connections to other IO groups do not increase the number of paths
      • Options for 4 paths:
        • 4 vFCs, each connected to a different storage port
        • 2 vFCs, each connected to two storage ports
          • Doesn't meet the availability requirements: each vFC uses only one SAN fabric, so failure of one SAN fabric plus the real adapter the other vFC uses causes an outage
        • 1 vFC connected to 4 storage ports
          • Doesn't meet the availability requirements, as this can use only one SAN fabric
      • Thus we'll use sets of ports that include 4 unique host ports and 4 unique storage ports
      • We'll examine all the possible ways to use 4 vFCs connected to 4 storage ports
        • Skip to slide 28 to see the designs meeting the requirement
        • Or go through them all to understand which designs to avoid, and why
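A minimal sketch of the path-count arithmetic (illustrative, not from the deck); the 1:1 zoning pairing used at the end is just one of the designs examined later:

```python
from itertools import product

# Each vFC and each SVC port sits on exactly one fabric; a path can only
# exist where the host port and the storage port share a fabric.
host_fabric = {"vFC1": "SAN1", "vFC2": "SAN2", "vFC3": "SAN1", "vFC4": "SAN2"}
storage_fabric = {"N1P1": "SAN1", "N1P3": "SAN2", "N2P1": "SAN1", "N2P3": "SAN2"}

potential = list(product(host_fabric, storage_fabric))   # 4 x 4 = 16 combinations
reachable = [(h, s) for h, s in potential
             if host_fabric[h] == storage_fabric[s]]     # dual fabric halves it to 8

# SAN zoning then pins each vFC to a single storage port (a 1:1 pairing),
# leaving 4 paths/LUN; the pairing below is just one example.
zoning = {"vFC1": "N1P1", "vFC2": "N2P3", "vFC3": "N2P1", "vFC4": "N1P3"}
zoned = [(h, s) for h, s in reachable if zoning[h] == s]

print(len(potential), len(reachable), len(zoned))        # 16 8 4
```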

  11. How many ways are there to set this up?
      • Given 4 host ports and 4 storage ports, with each host port connected to one storage port, there are 4! = 24 ways to pair up the ports
        • Not all of them can use all 4 storage ports
        • Many are equivalent

      Table entries are (host port#, storage port#) for each design:
      Designs 1–6:    11 22 33 44 | 11 22 34 43 | 11 23 32 44 | 11 23 34 42 | 11 24 32 43 | 11 24 33 42
      Designs 7–12:   12 21 33 44 | 12 21 34 43 | 12 23 31 44 | 12 23 34 41 | 12 24 31 43 | 12 24 33 41
      Designs 13–18:  13 21 32 44 | 13 21 34 42 | 13 22 31 44 | 13 22 34 41 | 13 24 31 42 | 13 24 32 41
      Designs 19–24:  14 21 32 43 | 14 21 33 42 | 14 22 31 43 | 14 22 33 41 | 14 23 31 42 | 14 23 32 41

      • Since there's no difference between the ports on a VIOS, or the ports on an SVC node, we can redo the table showing the VIOS# and SVC node# for the ports

      Table entries are (VIOS#, SVC node#) for each design:
      Designs 1–6:    11 11 22 22 | 11 11 22 22 | 11 12 21 22 | 11 12 22 21 | 11 12 21 22 | 11 12 22 21
      Designs 7–12:   11 11 22 22 | 11 11 22 22 | 11 12 21 22 | 11 12 22 21 | 11 12 21 22 | 11 12 22 21
      Designs 13–18:  12 11 21 22 | 12 11 22 21 | 12 11 21 22 | 12 11 22 21 | 12 12 21 21 | 12 12 21 21
      Designs 19–24:  12 11 21 22 | 12 11 22 21 | 12 11 21 22 | 12 11 22 21 | 12 12 21 21 | 12 12 21 21

  12. How many ways are there to set this up?
      • Examining the designs for unique ones, and reducing each design name to (SVC node#, SVC node#, SVC node#, SVC node#) for the connections from VIOS1, VIOS1, VIOS2, VIOS2 respectively, yields these 6 designs, or simply:
        1. 1122
        2. 1212
        3. 1221
        4. 2112
        5. 2121
        6. 2211
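This reduction is easy to verify mechanically. A small sketch (not from the deck), assuming storage ports 1 and 2 sit on SVC node 1 and ports 3 and 4 on node 2, as in the tables above:

```python
from itertools import permutations

# Storage ports 1 and 2 are on SVC node 1; ports 3 and 4 are on node 2.
node = lambda port: 1 if port in (1, 2) else 2

# Host ports are taken in the fixed order vFC1..vFC4; each permutation of
# the storage ports is one of the 4! = 24 pairings from the previous slide.
designs = sorted({"".join(str(node(p)) for p in perm)
                  for perm in permutations((1, 2, 3, 4))})
print(designs)  # ['1122', '1212', '1221', '2112', '2121', '2211']
```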

  13. Design 1 with 4 paths/LUN – 1122
      [Diagram: LPAR1 (vFC1–vFC4) behind VIOS1/VIOS2, fabrics SAN1/SAN2, SVC node1/node2 in one IO group]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N1P1
        • Zone2: vFC1 WWPN2, N1P1
        • Zone3: vFC2 WWPN1, N1P3
        • Zone4: vFC2 WWPN2, N1P3
        • Zone5: vFC3 WWPN1, N2P1
        • Zone6: vFC3 WWPN2, N2P1
        • Zone7: vFC4 WWPN1, N2P3
        • Zone8: vFC4 WWPN2, N2P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N1P1
        • vFC2 – N1P3
        • vFC3 – N2P1
        • vFC4 – N2P3
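The zone list follows a regular pattern: one zone per (vFC, WWPN) pair, where the WWPN2 zones are assumed here to be for the inactive LPM WWPNs. A small generator sketch (illustrative, not from the deck):

```python
# The eight zones for design 1 (1122): each vFC has an active WWPN and an
# inactive (LPM) WWPN, and both get an otherwise identical zone.
pairing = {"vFC1": "N1P1", "vFC2": "N1P3", "vFC3": "N2P1", "vFC4": "N2P3"}

zone_no = 1
for vfc, storage_port in pairing.items():
    for wwpn in ("WWPN1", "WWPN2"):   # WWPN2 assumed to be the LPM (inactive) one
        print(f"Zone{zone_no}: {vfc} {wwpn}, {storage_port}")
        zone_no += 1
```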

  14. Design 1 with 4 paths/LUN – VIOS failure
      [Diagram: same layout as slide 13, with VIOS1 failed; lost zones and paths shown in red]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N1P1
        • Zone2: vFC1 WWPN2, N1P1
        • Zone3: vFC2 WWPN1, N1P3
        • Zone4: vFC2 WWPN2, N1P3
        • Zone5: vFC3 WWPN1, N2P1
        • Zone6: vFC3 WWPN2, N2P1
        • Zone7: vFC4 WWPN1, N2P3
        • Zone8: vFC4 WWPN2, N2P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N1P1
        • vFC2 – N1P3
        • vFC3 – N2P1
        • vFC4 – N2P3
      • With VIOS1 failed, the surviving vFC3 and vFC4 both reach SVC node2 only, so a subsequent failure of SVC node2 results in an outage

  15. Design 2 with 4 paths/LUN – 1212
      [Diagram: LPAR1 (vFC1–vFC4) behind VIOS1/VIOS2, fabrics SAN1/SAN2, SVC node1/node2 in one IO group]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N1Px
        • Zone2: vFC1 WWPN2, N1Px
        • Zone3: vFC2 WWPN1, N2Px
        • Zone4: vFC2 WWPN2, N2Px
        • Zone5: vFC3 WWPN1, N1Px
        • Zone6: vFC3 WWPN2, N1Px
        • Zone7: vFC4 WWPN1, N2Px
        • Zone8: vFC4 WWPN2, N2Px
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N1Px
        • vFC2 – N2Px
        • vFC3 – N1Px
        • vFC4 – N2Px
      • Not possible to zone this design so that it uses all of the storage ports

  16. Design 3 with 4 paths/LUN – 1221
      [Diagram: LPAR1 (vFC1–vFC4) behind VIOS1/VIOS2, fabrics SAN1/SAN2, SVC node1/node2 in one IO group]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N1P1
        • Zone2: vFC1 WWPN2, N1P1
        • Zone3: vFC2 WWPN1, N2P3
        • Zone4: vFC2 WWPN2, N2P3
        • Zone5: vFC3 WWPN1, N2P1
        • Zone6: vFC3 WWPN2, N2P1
        • Zone7: vFC4 WWPN1, N1P3
        • Zone8: vFC4 WWPN2, N1P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N1P1
        • vFC2 – N2P3
        • vFC3 – N2P1
        • vFC4 – N1P3

  17. Design 3 with 4 paths/LUN – VIOS failure
      [Diagram: same layout as slide 16, with one VIOS failed]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N1P1
        • Zone2: vFC1 WWPN2, N1P1
        • Zone3: vFC2 WWPN1, N2P3
        • Zone4: vFC2 WWPN2, N2P3
        • Zone5: vFC3 WWPN1, N2P1
        • Zone6: vFC3 WWPN2, N2P1
        • Zone7: vFC4 WWPN1, N1P3
        • Zone8: vFC4 WWPN2, N1P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N1P1
        • vFC2 – N2P3
        • vFC3 – N2P1
        • vFC4 – N1P3
      • With either VIOS failed, the two surviving vFCs still reach both SVC nodes over both fabrics, so any further single failure is survivable

  18. Design 3 with 4 paths/LUN – SAN fabric failure
      [Diagram: same layout as slide 16, with one SAN fabric failed]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N1P1
        • Zone2: vFC1 WWPN2, N1P1
        • Zone3: vFC2 WWPN1, N2P3
        • Zone4: vFC2 WWPN2, N2P3
        • Zone5: vFC3 WWPN1, N2P1
        • Zone6: vFC3 WWPN2, N2P1
        • Zone7: vFC4 WWPN1, N1P3
        • Zone8: vFC4 WWPN2, N1P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N1P1
        • vFC2 – N2P3
        • vFC3 – N2P1
        • vFC4 – N1P3
      • With either SAN fabric failed, the two surviving vFCs still span both VIOSs and both SVC nodes

  19. Design 3 with 4 paths/LUN – SVC node failure
      [Diagram: same layout as slide 16, with one SVC node failed]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N1P1
        • Zone2: vFC1 WWPN2, N1P1
        • Zone3: vFC2 WWPN1, N2P3
        • Zone4: vFC2 WWPN2, N2P3
        • Zone5: vFC3 WWPN1, N2P1
        • Zone6: vFC3 WWPN2, N2P1
        • Zone7: vFC4 WWPN1, N1P3
        • Zone8: vFC4 WWPN2, N1P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N1P1
        • vFC2 – N2P3
        • vFC3 – N2P1
        • vFC4 – N1P3
      • With either SVC node failed, the two surviving vFCs still span both VIOSs and both fabrics

  20. Design 4 with 4 paths/LUN – 2112
      [Diagram: LPAR1 (vFC1–vFC4) behind VIOS1/VIOS2, fabrics SAN1/SAN2, SVC node1/node2 in one IO group]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N2P1
        • Zone2: vFC1 WWPN2, N2P1
        • Zone3: vFC2 WWPN1, N1P3
        • Zone4: vFC2 WWPN2, N1P3
        • Zone5: vFC3 WWPN1, N1P1
        • Zone6: vFC3 WWPN2, N1P1
        • Zone7: vFC4 WWPN1, N2P3
        • Zone8: vFC4 WWPN2, N2P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N2P1
        • vFC2 – N1P3
        • vFC3 – N1P1
        • vFC4 – N2P3

  21. Design 4 with 4 paths/LUN – VIOS failure
      [Diagram: same layout as slide 20, with one VIOS failed]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N2P1
        • Zone2: vFC1 WWPN2, N2P1
        • Zone3: vFC2 WWPN1, N1P3
        • Zone4: vFC2 WWPN2, N1P3
        • Zone5: vFC3 WWPN1, N1P1
        • Zone6: vFC3 WWPN2, N1P1
        • Zone7: vFC4 WWPN1, N2P3
        • Zone8: vFC4 WWPN2, N2P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N2P1
        • vFC2 – N1P3
        • vFC3 – N1P1
        • vFC4 – N2P3
      • With either VIOS failed, the two surviving vFCs still reach both SVC nodes over both fabrics

  22. Design 4 with 4 paths/LUN – SAN fabric failure
      [Diagram: same layout as slide 20, with one SAN fabric failed]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N2P1
        • Zone2: vFC1 WWPN2, N2P1
        • Zone3: vFC2 WWPN1, N1P3
        • Zone4: vFC2 WWPN2, N1P3
        • Zone5: vFC3 WWPN1, N1P1
        • Zone6: vFC3 WWPN2, N1P1
        • Zone7: vFC4 WWPN1, N2P3
        • Zone8: vFC4 WWPN2, N2P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N2P1
        • vFC2 – N1P3
        • vFC3 – N1P1
        • vFC4 – N2P3
      • With either SAN fabric failed, the two surviving vFCs still span both VIOSs and both SVC nodes

  23. Design 4 with 4 paths/LUN – SVC node failure
      [Diagram: same layout as slide 20, with one SVC node failed]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N2P1
        • Zone2: vFC1 WWPN2, N2P1
        • Zone3: vFC2 WWPN1, N1P3
        • Zone4: vFC2 WWPN2, N1P3
        • Zone5: vFC3 WWPN1, N1P1
        • Zone6: vFC3 WWPN2, N1P1
        • Zone7: vFC4 WWPN1, N2P3
        • Zone8: vFC4 WWPN2, N2P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N2P1
        • vFC2 – N1P3
        • vFC3 – N1P1
        • vFC4 – N2P3
      • With either SVC node failed, the two surviving vFCs still span both VIOSs and both fabrics

  24. Design 5 with 4 paths/LUN – 2121
      [Diagram: LPAR1 (vFC1–vFC4) behind VIOS1/VIOS2, fabrics SAN1/SAN2, SVC node1/node2 in one IO group]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N2Px
        • Zone2: vFC1 WWPN2, N2Px
        • Zone3: vFC2 WWPN1, N1Px
        • Zone4: vFC2 WWPN2, N1Px
        • Zone5: vFC3 WWPN1, N2Px
        • Zone6: vFC3 WWPN2, N2Px
        • Zone7: vFC4 WWPN1, N1Px
        • Zone8: vFC4 WWPN2, N1Px
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N2Px
        • vFC2 – N1Px
        • vFC3 – N2Px
        • vFC4 – N1Px
      • Not possible to zone this design so that it uses all of the storage ports

  25. Design 6 with 4 paths/LUN – 2211
      [Diagram: LPAR1 (vFC1–vFC4) behind VIOS1/VIOS2, fabrics SAN1/SAN2, SVC node1/node2 in one IO group]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N2P1
        • Zone2: vFC1 WWPN2, N2P1
        • Zone3: vFC2 WWPN1, N2P3
        • Zone4: vFC2 WWPN2, N2P3
        • Zone5: vFC3 WWPN1, N1P1
        • Zone6: vFC3 WWPN2, N1P1
        • Zone7: vFC4 WWPN1, N1P3
        • Zone8: vFC4 WWPN2, N1P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N2P1
        • vFC2 – N2P3
        • vFC3 – N1P1
        • vFC4 – N1P3

  26. Design 6 with 4 paths/LUN – VIOS failure
      [Diagram: same layout as slide 25, with one VIOS failed]
      • SAN zoning
        • Zone1: vFC1 WWPN1, N2P1
        • Zone2: vFC1 WWPN2, N2P1
        • Zone3: vFC2 WWPN1, N2P3
        • Zone4: vFC2 WWPN2, N2P3
        • Zone5: vFC3 WWPN1, N1P1
        • Zone6: vFC3 WWPN2, N1P1
        • Zone7: vFC4 WWPN1, N1P3
        • Zone8: vFC4 WWPN2, N1P3
        • Italicized zones are for the inactive WWPNs
      • Disk paths
        • vFC1 – N2P1
        • vFC2 – N2P3
        • vFC3 – N1P1
        • vFC4 – N1P3
      • With either VIOS failed, both surviving vFCs reach the same SVC node, so a subsequent failure of that node results in an outage

  27. Designs meeting availability requirements
      • Only designs 3 and 4 provide availability in case of a double failure
      • There are inferior ways to zone
      [Diagram: Design 3 and Design 4 side by side, each a Power Server with LPAR1 (vFC1–vFC4) behind VIOS1/VIOS2, fabrics SAN1/SAN2, and SVC node1/node2 in an SVC IO group]

  28. Designs meeting availability requirements
      • Only designs 3 and 4 meet the availability requirements – avoid the others
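This conclusion can be checked mechanically. The sketch below (not from the deck) models each vFC as a (VIOS, fabric, node) triple, using the cabling assumed throughout: vFC1/vFC2 on VIOS1, vFC3/vFC4 on VIOS2, vFC1/vFC3 on SAN1, vFC2/vFC4 on SAN2, and node port 1 on SAN1, port 3 on SAN2. Only 1221 (design 3) and 2112 (design 4) survive every allowed double failure while also using all four storage ports:

```python
from itertools import combinations

COMPONENTS = {"VIOS1", "VIOS2", "SAN1", "SAN2", "node1", "node2"}
ALWAYS_FATAL = [{"VIOS1", "VIOS2"}, {"SAN1", "SAN2"}, {"node1", "node2"}]

def paths(design):
    """(VIOS, fabric, node) for vFC1..vFC4 under the given design string."""
    vios = ("VIOS1", "VIOS1", "VIOS2", "VIOS2")
    san = ("SAN1", "SAN2", "SAN1", "SAN2")
    return [(vios[i], san[i], "node" + design[i]) for i in range(4)]

def survives_all_doubles(design):
    for failed in map(set, combinations(COMPONENTS, 2)):
        if failed in ALWAYS_FATAL:
            continue  # losing both halves of a redundant pair is always an outage
        if not any(failed.isdisjoint(path) for path in paths(design)):
            return False  # every path touches a failed component
    return True

for design in ("1122", "1212", "1221", "2112", "2121", "2211"):
    ports = {(san, node) for _, san, node in paths(design)}  # (fabric, node) = storage port
    print(design, "survives all double failures:", survives_all_doubles(design),
          "| unique storage ports:", len(ports))
```

The port count in the output also makes visible why designs 2 and 5 fall short of the earlier requirement of four unique storage ports.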

  29. Adding more VIOC LPARs
      [Diagram: two options side by side, each a Power Server with LPAR1 and LPAR2 (vFC1–vFC4 per LPAR) behind VIOS1/VIOS2, fabrics SAN1/SAN2, and one SVC IO group (node1, node2); the left option uses 4 host ports, the right option 8 host ports]
      • Using the same FCs on the VIOSs (4 host ports): rotate LPARs across SVC ports
      • Using other FCs on the VIOSs (8 host ports): rotate LPARs across port sets
      • Using a balanced resource approach is cost effective
        • Balanced meaning the bandwidth of the host ports equals the bandwidth of the storage ports
        • Port bandwidth varies depending on the model
        • Latest HW: SVC port bandwidth of about 40,000 IOPS; host port bandwidth around 50,000 IOPS
      • The option on the right is more balanced
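Using the per-port figures quoted above, a back-of-the-envelope comparison (illustrative only; real sizing depends on the workload and the hardware model):

```python
# Balance check with the per-port figures from the slide: ~40,000 IOPS per
# SVC port and ~50,000 IOPS per host port. One IO group has 2 nodes x 4
# ports = 8 SVC ports. Illustrative only.
SVC_PORT_IOPS, HOST_PORT_IOPS = 40_000, 50_000
svc_total = 8 * SVC_PORT_IOPS  # 320,000 IOPS for the IO group

for host_ports in (4, 8):
    host_total = host_ports * HOST_PORT_IOPS
    print(f"{host_ports} host ports: {host_total:,} IOPS vs {svc_total:,} IOPS at the SVC")
# 4 host ports leave the host side as the bottleneck; 8 host ports come
# closer to matching the IO group's aggregate bandwidth.
```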

  30. Adding SVC IO groups
      • Allows additional SVC resources to be used
        • Node processors and cache
      • LUNs are served up from one IO group only
        • Still 4 paths/LUN
      • The zoning changes:
        • Zone1: vFC1 WWPN1, N1P1, N3P1
        • Zone2: vFC1 WWPN2, N1P1, N3P1
        • Zone3: vFC2 WWPN1, N2P3, N4P3
        • Zone4: vFC2 WWPN2, N2P3, N4P3
        • Zone5: vFC3 WWPN1, N2P1, N4P1
        • Zone6: vFC3 WWPN2, N2P1, N4P1
        • Zone7: vFC4 WWPN1, N1P3, N3P3
        • Zone8: vFC4 WWPN2, N1P3, N3P3
        • Italicized zones are for the inactive WWPNs
      [Diagram: LPAR1 (vFC1–vFC4) behind VIOS1/VIOS2, fabrics SAN1/SAN2, and two SVC IO groups: node1/node2 and node3/node4]
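The two-IO-group zone list keeps the design-3 pattern and simply adds the matching port on the second IO group (nodes 3 and 4) to each zone. A small sketch (illustrative, not from the deck):

```python
# Design-3 pairing extended with the matching port on a second IO group;
# still 4 paths/LUN, since any one LUN is served by only one IO group.
pairing = {"vFC1": ("N1P1", "N3P1"), "vFC2": ("N2P3", "N4P3"),
           "vFC3": ("N2P1", "N4P1"), "vFC4": ("N1P3", "N3P3")}

for i, (vfc, ports) in enumerate(pairing.items()):
    for w in (1, 2):  # active and inactive (LPM) WWPN
        print(f"Zone{2 * i + w}: {vfc} WWPN{w}, {', '.join(ports)}")
```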

  31. Adding IO bandwidth to a VIOC LPAR
      • To add bandwidth to a VIOC, one must add more physical adapter ports
        • Add one port, giving 1 additional path, or
        • Add a set of ports
      • Zone the new ports like the old ports
      • Then either:
        • The storage administrator assigns LUNs to the new host ports, or
        • Move half (or some) of the LUNs to the new port set
      [Diagram: LPAR1 now using vFC1–vFC8 behind VIOS1/VIOS2, fabrics SAN1/SAN2, and one SVC IO group (node1, node2)]

  32. Error message from SVC
      • The current SVC code (as of 12/2011) displays a "Degraded" path warning when a host port is zoned to only one node in an I/O group, even if the second host port is zoned to the other node in the same I/O group, providing path redundancy to a volume:
        • The GUI panel shows the 4 paths to the inactive host WWPNs used by LPM as "Offline"
          • This is normal and expected behavior with LPM
        • The GUI panel also shows the 4 active host paths used for I/O as "Degraded"
          • This is expected behavior, but development is looking at a change in the future
        • The "# Nodes Logged In" column shows how many nodes in the cluster see the host ports

  33. References
      • Disk Path Design tech doc: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101914
      • SDDPCM manual: http://www-01.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S7000303
      • Dynamically adding a FC adapter to a partition tech doc: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105218
      • vFC adapter properties using NPIV tech doc: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FQ128819
      • Systems Hardware information center, Virtual Fibre Channel: http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/iphat/iphblconfigvfc.htm
