
OpenVMS Storage

hp OpenVMS Storage Technology


Presentation Transcript


    2. hp OpenVMS Storage Technology
    - Review of V7.3-1 Storage Features
    - New Storage Products
    - New SAN Technology
    - Post V7.3-1 Projects
    - Itanium® Based Systems IO Plans
    - Longer Term Storage Interconnects

    3. hp OpenVMS V7.3-1 Storage Features
    - Disk Failover to the MSCP Served Path
    - Static Multipath Path Balancing
    - Multipath Poller Enhancements
    - Multipath Tape Support
    - Fibre Channel IOLOCK8 Hold Time Reduction
    - Fibre Channel Interrupt Coalescing
    - Distributed Interrupts
    - KZPEA Fastpath
    - Smartarray Support

    4. hp OpenVMS V7.3-1 Storage Features
    - V7.2-1 through V7.3 support failover among direct paths (FibreChannel and SCSI)
    - V7.3-1 allows failover to an MSCP served path if all direct paths are down
    - Automatic failback when a direct path is restored
    - No failback after a manual path switch to the MSCP path
    - Supported for multihost FibreChannel and SCSI connections
    - Multipath failover is enabled via MPDEV_ENABLE = 1 (default)
    - MSCP path failover is enabled via MPDEV_REMOTE = 1 (default)
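
    A minimal DCL sketch of inspecting and setting these parameters with SYSGEN
    (the parameter names are from the slide; whether WRITE CURRENT or WRITE ACTIVE
    applies depends on whether the parameter is dynamic on your version):

        $ RUN SYS$SYSTEM:SYSGEN
        SYSGEN> USE CURRENT
        SYSGEN> SHOW MPDEV_ENABLE      ! 1 = multipath failover enabled (default)
        SYSGEN> SHOW MPDEV_REMOTE      ! 1 = MSCP served path failover enabled (default)
        SYSGEN> SET MPDEV_REMOTE 1
        SYSGEN> WRITE CURRENT          ! or WRITE ACTIVE if the parameter is dynamic
        SYSGEN> EXIT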

    5. Typical FibreChannel Configuration

    6. hp OpenVMS V7.3-1 Storage Features
    - To monitor path availability, a poller checks the status of all paths every 60 seconds
    - Only one disk connected to a given path is selected for polling
    - If the “polled” disk goes offline, other disks connected to the same path are polled
    - If all disks on a given path go offline, the poller interval is decreased to 30 seconds
    - “set dev $1$dga100 /nopoll” disables polling to a specific device
    - Poller is enabled via MPDEV_POLLER = 1 (default)
    - The poller is used to drive failback of MSCP served paths
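
    The per-device polling controls as a short DCL sketch (the device name is
    illustrative, and /POLL as the re-enabling form is an assumption):

        $ SET DEVICE $1$DGA100: /NOPOLL    ! exclude this device from path polling
        $ SET DEVICE $1$DGA100: /POLL      ! resume polling (assumed positive form)
        $ RUN SYS$SYSTEM:SYSGEN
        SYSGEN> SHOW MPDEV_POLLER          ! 1 = multipath poller enabled (default)
        SYSGEN> EXIT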

    7. hp OpenVMS V7.3-1 Storage Features
    - The poller reports broken paths in 3 ways:
    - OPCOM messages (the only notification prior to V7.3-1)
    - SHOW DEVICE /FULL:
          Path PGB0.5000-1FE1-0015-2C5C (CLETA), not responding.
            Error count 0    Operations completed 1
      ($ pipe show dev /full | search sys$input: responding)
    - SHOW DEVICE /MULTIPATH:
          Device                 Device    Error            Current
          Name                   Status    Count   Paths    Path
          $1$DGA4001: (CLETA)    Mounted   0       7/ 9     PGB0.5000-1FE1-0015-2C58

    8. hp OpenVMS V7.3-1 Storage Features
    - Prior to V7.3-1, multipath selected the first path it found as the “primary” and “current” path (often causing disks to switch ports of the storage controller)
    - In V7.3-1, multipath still selects the first path found as “primary”, but it may now switch the “current” path in an attempt to balance disks across “online” paths
    - Initial path balancing occurs at startup
    - At mount time (and when mount verification occurs), the “online” path with the least number of active connections is selected as “current”
    - Path balancing only occurs within a given node (not cluster aware)
    - Path balancing will not select “offline” paths (if possible)
    - There is no failback capability if paths are later switched due to error conditions
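
    Because balancing is per-node and there is no automatic failback after an
    error-driven switch, paths can also be inspected and rebalanced by hand; a
    hedged sketch (device and path names are illustrative):

        $ SHOW DEVICE /MULTIPATH $1$DGA4001:    ! display path counts and current path
        $ SET DEVICE $1$DGA4001: /SWITCH /PATH=PGA0.5000-1FE1-0015-2C58
        $ SHOW DEVICE /FULL $1$DGA4001:         ! confirm the new current path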

    9. hp OpenVMS V7.3-1 Storage Features
    - Basic Fibre Channel tape support via MDR was introduced in V7.3 and backported to V7.2-2: connection via a single FC path; tapes could be configured across the 2 possible MDR FC paths using MDR SSP; tapes served to non-FC nodes via TMSCP
    - V7.3-1 supports full tape multipath capability: automatic failover on path error, plus static path balancing via “set dev $2$mgax /switch/path=xxx”
    - Failover between direct paths and TMSCP paths is not supported
    - The MDR/NSR and the tape drive are still single points of failure
    - Tape robot failover doesn’t work yet and will be fixed via a TIMA kit
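
    The static balancing command in fuller form (the tape device and path names
    below are illustrative):

        $ SHOW DEVICE /FULL $2$MGA4:       ! list the available paths
        $ SET DEVICE $2$MGA4: /SWITCH /PATH=PGB0.5000-1FE1-0015-2C5C
        $ ! automatic failover still covers path errors;
        $ ! direct-to-TMSCP failover is not supported for tapes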

    10. Typical FibreChannel Tape Configuration

    11. hp OpenVMS V7.3-1 Storage Features
    - OpenVMS currently synchronizes all IO activity with the systemwide SCS/IOLOCK8 spinlock; this can become a significant bottleneck on an SMP system
    - Current Alpha systems will use ~13-18us of IOLOCK8 per FibreChannel disk IO
    - Max system IO rate of ~30K IO/sec (if all you do is disk IO and have a really good tailwind)
    - FibreChannel driver optimization reduces IOLOCK8 hold time by 3-6us per IO
    - This optimization combined with interrupt coalescing cuts IOLOCK8 time by 50% and allows a 2x+ increase in maximum IO/second
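
    A back-of-envelope check of the ~30K ceiling, assuming roughly 33us of
    IOLOCK8-serialized work per IO in total (the 13-18us driver hold plus the
    other IOLOCK8 consumers):

        1,000,000 us/sec / ~33 us per IO  ~=  30,000 IO/sec, regardless of CPU count

    Halving the serialized time therefore roughly doubles the achievable
    ceiling, consistent with the 2x+ claim above.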

    12. hp OpenVMS V7.3-1 Storage Features
    - Aggregates IO completion interrupts in the host bus adapter: saves passes through the interrupt handler and reduces IOLOCK8 hold time
    - Initial tests show a 25% reduction of IOLOCK8 hold time (3-4us per IO), resulting in a direct 25% increase in maximum IO/second for high IO workloads
    - Controlled with sys$etc:fc$cp; default is OFF in V7.3-1
    - Suggested setting: 8 IOs or 1ms before interrupt
    - Only effective at around 5K IO/sec or more on a single KGPSA
    - Can be controlled per KGPSA
    - This feature may negatively impact performance of applications dependent on high-speed single-stream IO
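
    A hedged sketch of enabling coalescing; the slide names the utility but not
    its argument syntax, so the port name and argument values below are
    assumptions only (see the V7.3-1 release notes for the real invocation):

        $ ! Illustrative only: coalesce after 8 IOs or 1ms on one KGPSA port (FGA0)
        $ MCR SYS$ETC:FC$CP FGA0 8 1000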

    13. hp OpenVMS V7.3-1 Storage Features
    - Allows hardware interrupts to be targeted directly at the “preferred” fastpath CPU
    - Frees up CPU cycles on the primary processor
    - Avoids the IP interrupt overhead of redirecting the interrupt to the “preferred” fastpath CPU
    - Automatically enabled on all “fastpath” devices
    - Distributed interrupts can only be disabled by turning off fastpath (FAST_PATH = 0)

    14. hp OpenVMS V7.3-1 Storage Features
    - Fastpath capability now available for the KZPEA
    - Reduces IOLOCK8 by ~50%
    - Allows secondary processors to run much of the IO stack
    - Enabled via bit 2 = 0 in FAST_PATH_PORTS (enabled by default)
    - No plans to support SCSI clusters with the KZPEA
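
    A minimal SYSGEN sketch (the slide’s “FASTPATH_PORTS” appears in SYSGEN as
    FAST_PATH_PORTS; the bit-2 semantics are as stated above):

        $ RUN SYS$SYSTEM:SYSGEN
        SYSGEN> SHOW FAST_PATH_PORTS
        SYSGEN> SET FAST_PATH_PORTS 0    ! bit 2 clear = KZPEA fastpath enabled (default)
        SYSGEN> WRITE CURRENT
        SYSGEN> EXIT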

    15. hp OpenVMS V7.3-1 Storage Features
    - SmartArray 5300 backplane RAID adapter
    - 2/4 Ultra3 SCSI channels
    - Up to 56 drives
    - ~15K IO/sec, 200MB/sec
    - Configured with console utility or host Web based GUI
    - Available Q3 2002

    16. hp OpenVMS V7.3-1 Storage Features
    - Dynamic per-UCB command logging
    - Controlled by SDA
    - Data display/analysis via SDA
    - Intended for use by engineering to solve complicated field issues
    - No data is logged or analyzed by default

    17. New Storage Products

    18. New Storage Products
    - Enables 2Gb front end FC ports
    - 200MB/sec to a single volume (from VMS)
    - 400MB/sec+ across the whole array (from VMS)
    - Enables multi-level snapshots
    - SSSU for host based control
    - Supports 15Krpm 36GB disks
    - Requires V2 of the EVA Element Manager
    - Supports CA/DRM for OpenVMS

    19. Higher utilization requires less capacity to be purchased to support applications

    20. New Storage Products
    - VMS support claimed by HP prior to merger
    - Current support is for V7.2-2; expanded support as required
    - Qualification is very expensive
    - Performance vis-à-vis EVA is totally unknown
    - Significant performance is possible: huge 32GB cache, up to 24 FC ports, up to 1024 spindles
    - Very nice performance tools
    - OpenVMS working with Storage to qualify larger cluster configurations

    21. New Storage Products
    - 2Gb FibreChannel front-end
    - 4 U160 SCSI backend ports
    - 4U rackmount with 14 drives; 28 additional drives with 2 external storage shelves
    - Works in existing SANs
    - Low cost 2 node clusters with embedded 3 port FC-AL hub (V7.3-1 only)
    - Supported with V7.2-2, V7.3, V7.3-1
    - Available Q2 CY2003

    22. MSA1000 SAN Solution

    23. New Storage Products
    - High performance architecture: 200MB/sec throughput, 25,000 IO per second
    - Redundant controller support: Active/Standby in initial product, Active/Active in future
    - RAID 0, 1, 0+1, 5 and ADG
    - LUN masking (SSP)
    - 2Gb/1Gb auto-sense host ports
    - Dual cache modules, upgradeable to 512MB (per controller)
    - Serial line config and management

    24. Compaq StorageWorks Enclosures
    - Ultra3 with data transfer rates up to 160MB/s
    - Hot-pluggable drives
    - Integrated LCD EMU
    - Longer SCSI cable lengths: Wide Ultra2/3 LVD supports lengths up to 39.4 ft (12m); Fast Wide Ultra supports lengths up to 12 ft (3.7m)
    - LED indicator
    - High availability
    - Ultra robust SCA direct connect drive carrier
    - 3U rack height

    25. Low End FC-AL Based Cluster

    26. New Storage Products
    - Smart Array 5300
    - 2/4 U160 SCSI channels
    - Up to 56 drives (4TB)
    - RAID 0/1/5/ADG
    - Up to 256MB cache
    - Supported on V7.3-1
    - 300MB/sec, 20K IO/sec
    - Available now
    - Doesn’t support forced error commands, so full shadowing support is an issue: shadowing works fine, but a member will be ejected if an unrecoverable disk error occurs on one member and an error cannot be forced on the shadow copy

    27. HSG / MSA / EVA Volume Construction

    28. Storage Positioning

    29. New Storage Products
    - 1U Fibre Channel-to-SCSI router
    - 2Gb FC support
    - 4 module slots: 2x 2Gb Fibre Channel, 4x LVD/SE SCSI, 4x HVD SCSI
    - Web based management
    - Embedded product for tape libraries (E2400)
    - Supported in V7.3-1
    - TIMA kit for V7.3 (fibre_scsi v300)
    - TIMA kit for V7.2-2 (fibre_scsi v300)

    30. New Storage Products
    - 1U Fibre Channel-to-SCSI router
    - High performance: 2Gb FC support, 200MB/sec of information throughput
    - 1 2Gb Fibre Channel port
    - 2 U160 LVD SCSI ports
    - Web based management
    - Embedded product for tape libraries (E1200)
    - Supported in V7.3-1
    - TIMA kit for V7.3 (fibre_scsi v300)
    - TIMA kit for V7.2-2 (fibre_scsi v300)

    31. New Storage Products
    - ESL 9595: up to 16 SDLT/LTO drives, up to 4 2Gb FC ports, up to 595 cartridges (95TB of uncompressed data); 900GB/hour uncompressed backup rate (SDLT 160/320); 1.7TB/hour uncompressed backup rate (LTO-2)
    - MSL 5052: 4 SDLT/LTO drives, 52 cartridges
    - MSL 5026: 2 SDLT/LTO drives, 26 cartridges

    32. New Storage Products
    - VMS will never support Ultrium 1 drives: they do not support transfers of an odd number of bytes
    - VMS supports the Ultrium 2 (LTO460) drives
    - Testing completed on all current AlphaServer systems
    - Supported in both direct-attach SCSI and behind the FC bridges (NSR, MDR)
    - Currently testing these drives with ESL/MSL libraries; a support statement on those is coming soon

    33. Recent Backup Performance Measurements

    34. New SAN Technology

    35. New SAN Features
    - Full family of 2Gb switches
    - Brocade SAN Switch 2/8, 2/16, 2/32, 2/64
    - 2Gb firmware 3.0.2f will interoperate with the installed base of the 1Gb SAN Switch family running v2.6.0c firmware
    - Does not interoperate with the original 1Gb switch (DSGGA-AA/AB)
    - McData Edge Switch 2/16, 2/24, 2/32; SAN Director 2/64, 2/140
    - McData qualified on VMS, but most testing occurs on Brocade
    - Interoperability of Brocade/McData not supported

    36. New SAN Features
    - SAN Switch 2/32: 32 port monolithic switch; 4 port 8Gb/sec ISL trunking available
    - Core Switch 2/64: 2 x 64 port blade type switch, 16 ports per blade, total of 128 2Gb ports; 4 port 8Gb/sec ISL trunking available; redundant everything; highest performance, highest $$$ SAN option

    37. New SAN Features
    - Emulex LP9802 will be the next generation Fibre Channel HBA
    - PCI-X capable
    - 1 2Gb port
    - 35K IO/sec
    - Mid 2003 availability
    - Positioned as replacement for the LP9002
    - V7.2-2, V7.3, V7.3-1 support

    38. Post V7.3-1 Projects

    39. Post V7.3-1 Projects
    - Storage arrays today can dynamically expand volumes (HSG/HSV/MSA), but OpenVMS cannot utilize the expanded volume without re-initializing
    - OpenVMS dynamic volume expansion will allow allocation of extra bitmap space at INIT time, and later expansion of the volume size while the device is mounted:
          $ init /limit $1$dga100:          ! Allocates a 1TB bitmap
          (change the volume size using storage subsystem commands)
          $ set volume $1$dga100: /size=xxxxxx
    - For volumes initialized prior to Opal, there will be the capability to dismount the volume, expand the size, and re-mount (without re-initializing the volume)
    - This same project will also enable shadowing of different sized volumes
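
    An end-to-end sketch of that sequence (the volume label and size value are
    illustrative; on volumes initialized before Opal, substitute the
    dismount/expand/re-mount path):

        $ INITIALIZE /LIMIT $1$DGA100: DATA      ! reserves bitmap space for up to 1TB
        $ MOUNT /SYSTEM $1$DGA100: DATA
        $ ! grow the LUN with the storage subsystem's own tools (HSG/HSV/MSA)
        $ SET VOLUME $1$DGA100: /SIZE=41943040   ! new size in blocks (illustrative value)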

    40. Post V7.3-1 Projects
    - In a SAN there are many reasons for “normal” mount verifications: a path switch by another cluster node; dropped FC packets (not the norm, but it does happen); a rezone of the SAN (causes in-flight IO to be dropped)
    - These result in mount verification messages that alarm users
    - “Quiet” mount verification will allow infrequent, immediately recovered mount verifications to be suppressed from operator logs
    - A sysgen parameter will allow the current behavior to be retained

    41. Post V7.3-1 Projects
    - Recent cancellation of the HSG80 Write History Logging project redirected the MiniMerge plans
    - Working on design/prototype of a host-based MiniMerge solution, based on Write Bit Map technology (from MiniCopy)
    - Will support ALL FC types (HSG, HSV, MSA, XP)
    - Does NOT require any storage firmware assists
    - Schedule to be published in June 2003

    42. Post V7.3-1 Projects
    - Development project underway to prototype SCS traffic over FibreChannel
    - Provides LAN over FC; uses PEDRIVER to provide SCS communication
    - Goal is to provide stretch clusters without requiring an additional cluster interconnect
    - May also have “cleaner” failure characteristics because SCS and storage will fail as a single unit
    - CI/MC class latency is not a goal (and not possible)
    - Providing general TCP/IP or DECnet over FC links is a non-goal (but we’ll probably get it for free)
    - Prototype working since Sept 2002
    - Release goal is Opal + TIMA kit

    43. Itanium® Based Systems IO Plans

    44. OpenVMS Itanium® Based Systems IO Plans
    - SCSI: U160 controller (LVD only); U320 controller (LVD only); no plans for multi-host SCSI
    - Host Bus Adapter RAID: Smartarray family
    - FibreChannel: 2Gb adapter; storage & cluster interconnect
    - Storage Arrays: HSG / EVA / MSA / XP

    45. Longer Term Storage Interconnects

    46. Long Term Storage Interconnects
    - 10Gb FibreChannel: 2004??? Very expensive infrastructure costs at first
    - iSCSI: the industry has stagnated somewhat in 2002; has some promise as a low-cost way to connect PCs to a SAN; host performance overhead is the main issue today
    - Next Generation SCS Cluster Interconnect: Infiniband and iWARP are being investigated; looking at the interaction of SCS with storage in these new interconnects; the target for these is only Itanium®-based OpenVMS systems
