
Power6 vSCSI to Power7 vFC (NPIV) migration


Presentation Transcript


  1. Power6 vSCSI to Power7 vFC (NPIV) migration. Rajesh Shah, UNIX Systems Administrator III

  2. NPIV Overview

  3. Backup prior to migration. Important: Ensure all backups of the source client LPAR are in place … before migration.
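  As a minimal sketch of such a backup (this environment uses Sysback, so adjust to your own tooling; the NFS server and paths below are hypothetical), an OS image of the source client LPAR could be taken with mksysb:
    # mount nimserver:/export/mksysb /mnt/mksysb   (hypothetical NIM/NFS backup repository)
    # mksysb -i /mnt/mksysb/sourcelpar.mksysb      (-i regenerates /image.data before the backup)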

  4. Basic steps when migrating from vSCSI to NPIV
  - Remove the vscsi / vhost devices on the source (can be done later)
  On the target:
  - Create a virtual fibre channel server adapter at the VIO
  - Create a virtual FC client adapter at the LPAR
  - Run vfcmap
  - Locate the WWPNs for the destination client Fibre Channel adapter from the HMC, and remap the SAN storage that was originally mapped to the source partition to the WWPNs of the destination partition (see the HMC sketch below)
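  For the WWPN step, a sketch of reading the client virtual WWPNs from the HMC command line, reusing the HMC, managed-system, and LPAR placeholder names that appear later in this deck:
    us6thmc01:~> lssyscfg -r prof -m CCRUTCEC01 --filter "lpar_names=lparname" -F virtual_fc_adapters
  The output lists each virtual FC client adapter in the profile together with its pair of virtual WWPNs, which is the information handed to the SAN/Storage team.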

  5. STEPS DONE BEFORE AND AFTER MIGRATION on the client LPAR

  During the outage window, on the OLD P6 (source) LPAR:
  a) TL updates
  b) Install SAN multipath code as specified by the SAN vendor (EMC ODM)
  c) EtherChannel drop
  d) NIC reconfiguration on a single en interface
  e) Disable NFS/NAS mounts
  f) Reboot
  g) Shutdown

  Steps AFTER migration, once the (NEW) target P7 LPAR is booted up:
  a) Enable NFS/NAS mounts
  b) Change/verify LUN attributes
  c) REBOOT
  d) SYSTEM verification
  e) APPLICATION verification
  f) The big OK here (migration complete)
  g) Cleanup

  NOTE: The source P6 client LPAR's LUN serial numbers and the target P7 LPAR's virtual WWPNs are given to the Storage team for LUN masking and for rezoning the target LUNs to the virtual WWPNs on the P7 LPAR.

  6. [Diagram: P5/P6 LPAR, P7 LPAR, VIO, SVC, IBM SAN, EMC SAN]
  In the background, the SAN team uses the SVC hot array-level migration method to migrate all real-time data ONLINE from the IBM SAN to the EMC SAN until the real outage window.

  7. [Diagram: P6 LPAR, P7 LPAR, VIO, SVC, IBM SAN, EMC SAN]
  During the REAL OUTAGE WINDOW, the SAN team rezones the EMC SAN storage to the destination P7 LPAR's virtual WWPNs.

  8. Migrated LUN

  9. Prerequisites for a multipath NPIV configuration
  • IBM POWER6 or later processor-based hardware
  • 8Gb PCIe dual-port FC adapter
  • NPIV-enabled SAN switches
  • VIOS version 2.1 or later
  • AIX 5.3 Technology Level 09 or later
  • AIX 6.1 Technology Level 02 or later
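  A quick sketch for verifying the software side of these prerequisites, on the VIOS padmin shell and on the AIX client respectively:
    $ ioslevel        (VIOS: should report 2.1 or later)
    $ lsnports        (VIOS: fabric=1 means the attached switch port is NPIV-capable)
    # oslevel -s      (AIX client: should report 5300-09 / 6100-02 or later)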

  10. Basic steps when migrating from vSCSI to NPIV
  - Remove the source vscsi / vhost devices (can be done later)
  - Create a virtual fibre channel server adapter at the VIO
  - Create a virtual FC client adapter at the LPAR
  - Run vfcmap
  - Locate the WWPNs for the destination client Fibre Channel adapter from the HMC, and remap the SAN storage that was originally mapped to the source partition to the WWPNs of the destination partition

  11. Setting up NPIV containers on the P7 (target) prior to migration
  1) Create a virtual fibre adapter in the VIO server
  2) Create a virtual fibre adapter in your VIO client
  3) Map the virtual adapter in the VIO server to the physical fibre adapter using the vfcmap command (sketched below)
  4) Give the virtual worldwide port names (WWPNs) to your SAN team
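  A minimal sketch of step 3 on the VIOS, assuming the new virtual FC server adapter appears as vfchost0 and the 8Gb HBA port is fcs0:
    $ cfgdev                                 (discover the newly added vfchost device)
    $ lsdev -vpd | grep vfchost              (confirm which slot each vfchost device occupies)
    $ vfcmap -vadapter vfchost0 -fcp fcs0    (map the virtual server adapter to the physical port)
  For step 4, the virtual WWPNs can be read from the partition profile on the HMC, as sketched earlier in this deck.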

  12. VIOS configuration. You have to create dual VIOS instances on the System p server to get maximum redundancy for the paths. Each VIOS should own at least one 8Gb FC adapter (FC #5735) and one Virtual FC server adapter to map to the client LPAR. NOTE: Before doing any NPIV-related configuration, the VIOS and AIX LPAR profile configuration has to be completed with all the other details required for running the VIOS and the AIX LPAR, such as the proper Virtual SCSI adapter and/or Virtual Ethernet adapter mappings and the physical SCSI/RAID adapter assignments to the VIOS and/or AIX LPAR. This is because a Virtual FC server-to-client adapter mapping requires both the VIOS and the AIX LPAR to already exist on the system, unlike Virtual SCSI adapter mapping, where you only provide slot numbers and the existence of the client is not checked.

  13. Creating Virtual FC server adapters. Open the first VIOS's profile for editing, navigate to the Virtual Adapters tab, and click Actions > Create > Fibre Channel Adapter as shown in the figure below. Input the Virtual FC server adapter slot number, select from the dropdown the client LPAR that will be connected to this server adapter, and input its planned Virtual FC client adapter slot number as shown below.
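  For reference only, a sketch of adding the same server adapter from the HMC command line instead of the GUI; the VIOS name, client LPAR name, and slot numbers below are hypothetical, and the managed system is the one used elsewhere in this deck:
    us6thmc01:~> chhwres -r virtualio --rsubtype fc -m CCRUTCEC01 -o a -p vios1 -s 32 -a "adapter_type=server,remote_lpar_name=lparname,remote_slot_num=32"
  This adds the adapter dynamically; the partition profile still needs the matching entry so the adapter survives a profile activation.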

  14. (screenshot)

  15. In the same way, edit the profile of the second VIOS and add the Virtual FC server adapter details.

  16. (screenshot)

  17. MAP the virtual FC server adapters to physical HBA ports. Once the Virtual FC server adapters are created, they have to be mapped to the physical HBA ports that connect them to the SAN fabric. This connects the Virtual FC client adapters to the SAN fabric through the Virtual FC server adapter and the physical HBA port. Before mapping the Virtual FC server adapters to an HBA port, that HBA port should already be connected to a SAN switch with NPIV support enabled. You can run the 'lsnports' command at the VIOS '$' prompt to see whether the connected SAN switch is enabled for NPIV. If the value of the 'fabric' parameter is '1', that HBA port is connected to a SAN switch supporting the NPIV feature. If the SAN switch does not support NPIV, this value will be '0', and if there is no SAN connectivity there will be no output from 'lsnports' at all.
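  Illustrative 'lsnports' output (the location codes and counts below are made up; the fabric column is the value that matters for this check):
    $ lsnports
    name   physloc                      fabric tports aports swwpns awwpns
    fcs0   U78A0.001.XXXXXXX-P1-C1-T1        1     64     63   2048   2046
    fcs1   U78A0.001.XXXXXXX-P1-C1-T2        1     64     63   2048   2046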

  18. If the value of 'fabric' is '1', we can go ahead and map the Virtual FC adapters to those HBA ports. In the VIOS, the Virtual FC adapters show up as 'vfchost' devices. Run 'lsdev -vpd | grep vfchost' to see which device represents the Virtual FC adapter in any specific slot.
  vfcmap -vadapter vfchost10 -fcp fcs0
  vfcmap -vadapter vfchost11 -fcp fcs1
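  A quick verification sketch after the mappings, assuming the same vfchost10/vfchost11 devices as above:
    $ lsmap -all -npiv                    (lists every vfchost with its backing fcs port and client LPAR)
    $ lsmap -vadapter vfchost10 -npiv     (Status changes to LOGGED_IN once the client LPAR is up)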

  19. (screenshot)

  20. Repeat the above steps in the second VIOS LPAR as well. If you have more client LPARs, repeat the steps for all of them in both VIOS LPARs.

  21. AIX LPAR Configuration. Now we need to create the Virtual FC Client Adapters in the AIX LPAR.

  22. Creating Virtual FC Client Adapters. Open the LPAR profile for the AIX LPAR for editing and add the Virtual FC Client Adapters by navigating to the Virtual Adapters tab and selecting Actions > Create > Fibre Channel Adapter as shown.

  23. Input the first Virtual FC client adapter slot number details as shown in the slide below. Make sure the slot numbers entered exactly match the slot numbers entered while creating the Virtual FC server adapter in the first VIOS LPAR. This adapter will be the first path for SAN access in the AIX LPAR.

  24. (screenshot)

  25. Do the same mapping for the second VIO. Create the second Virtual FC client adapter and make sure the slot numbers match the ones entered in the second VIOS LPAR while creating the Virtual FC server adapter.

  26. (screenshot)

  27. ACTIVATE TARGET P7 LPAR USING HMC in SMS menu

  28. From the HMC, push all active and inactive WWPNs to the switch. Use the correct CEC and LPAR name:
  us6thmc01:~> chnportlogin -m CCRUTCEC01 -o login -p lparname -n default
  Ensure the above command completes with exit status 0.
  us6thmc01:~> lsnportlogin -m CCRUTCEC01 --filter "profile_names=default" | grep lpar

  29. (screenshot)

  30. AIX TL update before migration

  31. shutdown -Fr
  After the reboot of the client LPAR, ensure 'oslevel -s' reports the correct level and make sure you can log in from the network.
  Take an OS backup so we have the latest modified Sysback image with the EMC ODM filesets, the new NIC configuration, and the TL updates.
  Disable the NFS/NAS mount points.
  Shut down the source LPAR: # shutdown -F now
  THIS is the start of the real outage window.
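  A condensed sketch of those final source-LPAR commands (the NFS mount point shown is a hypothetical example):
    # oslevel -s              (confirm the expected TL after the shutdown -Fr reboot)
    # mount | grep nfs        (identify the active NFS/NAS mounts)
    # umount /nas/app_data    (disable each NFS/NAS mount point; path is hypothetical)
    # shutdown -F now         (halt the source P6 LPAR; the real outage window starts here)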

  32. SAN TEAM WILL REZONE VMAX LUNs TO TARGET VIRTUAL WWPNs
  During the actual outage window they will rezone the VMAX LUNs and present them to the P7 target LPAR's virtual WWPNs.

  P7 TARGET LPAR STEPS
  Activate the P7 target LPAR and let it boot on its own.
  If the LPAR cannot find the boot disk on its own, then:
  - Shut down the LPAR
  - Activate the LPAR into the SMS menu
  - Select Boot Options
  - Configure Boot Device Order
  - Select the 1st Boot Device -> Hard Drive -> SAN -> List all devices
  - Select the first path disk in the list to boot from
  - Exit the SMS menu and let it boot
  Once the P7 LPAR is booted:
  - Change/verify the LUN attributes (you can change hcheck_interval=60 for all LUNs; see the sketch below)
  - Enable the NFS/NAS mounts
  - SYSTEM VERIFICATION
  - REBOOT
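  A minimal sketch of the LUN attribute step, assuming hdisk2 is one of the migrated LUNs; the -P flag defers the change until the reboot that follows:
    # lsattr -El hdisk2 -a hcheck_interval -a reserve_policy    (check the current values)
    # chdev -l hdisk2 -a hcheck_interval=60 -P                  (takes effect at the next reboot)
    # lspath -l hdisk2                                          (all paths should show Enabled)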

  33. (screenshot)

  34. THEN, HERE IT IS: THE BIG OK. MIGRATION COMPLETE …

  35. CLEANUP on the target client LPAR
  Remove the old definitions of hdisk and vscsi devices; 'lsdev -Cc disk' and 'lsdev -Cc adapter' will list them all.
  Remove the old adapters left in the Defined state, e.g. rmdev -dl vscsi0 (and the same for vscsi1).
  # cfgmgr -> ensure they are gone
  NOTE: hdisk numbering will change after the migration for rootvg and the other VGs.
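  A short sketch of that cleanup, assuming the stale devices are all left in the Defined state after the first boot on NPIV:
    # lsdev -Cc adapter | grep -i defined     (expect the old vscsi adapters here)
    # for d in $(lsdev -Cc disk | awk '$2 == "Defined" {print $1}'); do rmdev -dl $d; done
    # rmdev -dl vscsi0 ; rmdev -dl vscsi1
    # cfgmgr                                  (re-walk the configuration; the removed devices should not return)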

  36. End of Presentation
