
VERITAS Storage Foundation  for Networks


Presentation Transcript


  1. VERITAS Storage Foundation for Networks 2004-08-02 Mike Dutch

  2. VERITAS Storage Foundation Storage Foundation for Oracle RAC Storage Foundation for Databases Storage Foundation File System Volume Manager Storage Foundation for Networks

  3. What is VSFN?
  • A network-based disk controller
  • Advantages of in-band and out-of-band virtualization
  • Leverages the stability of proven virtualization technology
  • Complements host-based volume management
  • Host focus: applications and installation policies
  • Network focus: decoupling and offloading
  • Storage focus: device optimization
  • Part of the storage foundation for utility computing

  4. What problems does VSFN solve?
  • Freedom to choose disk storage hardware
  • Decouples technology for better business alignment
  • Customer is in a stronger position to negotiate price
  • Leverages current storage resources and skills
  • Single tool to centrally manage multi-vendor storage
  • Separate administration of servers and storage
  • Reduce or delay capital expenses
  • Increase utilization by pooling storage across hosts
  • Increase utilization by not dedicating storage for snapshots
  • Reduce scheduling conflicts by sharing physical devices
  • Enable use of legacy/JBOD storage for more applications

  5. Sample VSFN configuration
  [Diagram: an Administrator Interface and Management Server on the LAN manage MDS 9500 directors; the VSFN software resides in the MDS supervisors and operates in the Advanced Services Modules; servers connect over the SAN to disk arrays and JBOD storage.]

  6. Storage network hardware
  • Cisco MDS 9506/9509 Multilayer Directors: dual supervisors; 1 to 4/7 modules; 16 to 128/224 ports
  • Cisco MDS 9216 Multilayer Fabric Switch: supervisor + 16 ports built-in; 0 or 1 module; 16 to 48 ports
  Cisco MDS 9000 Family modules:
  • 9500 Supervisor Module
  • Caching Services Module (no ports)
  • Fibre Channel Switching Modules (16 or 32 ports)
  • Advanced Services Module (32 ports)
  • IP Storage Services Module (4/8 GbE ports)

  7. Cisco MDS 9509 Multilayer Director
  • Two 16-port FC Switching Modules
  • Two 32-port FC Switching Modules
  • Two Supervisor Modules
  • Two Advanced Services Modules (VSFN operates within each ASM)
  • 8-port IP Storage Services Module
  • Two power supplies

  8. Distributed processing
  • Each ASM port is associated with a Data Path Processor (DPP); each ASM also contains a Control Plane Processor (CPP)
  • Binding virtual targets to physical ASM ports tells the fabric which DPP will respond to virtual target (VT) requests
  • If the HBA is attached to an ASM, the data is sent directly to the physical disks after "strategizing" with the VLUN owner; otherwise, the data is sent to the virtualization owner (VO)
  • In all cases, non-I/O requests are sent to the VS that imported the disk group
  [Diagram: bound virtual targets and VO, VS, and VE ports layered over an imported disk group.]
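The routing rule this slide describes can be sketched in a few lines. This is an illustrative model only, not VERITAS code; the function and parameter names are assumptions, but the decision logic follows the slide: control requests go to the importing VS, I/O arriving on an ASM port is handled by that port's DPP, and other I/O is forwarded to the VLUN owner.

```python
# Hedged sketch of VSFN request routing (names are illustrative assumptions).

def route_request(request_type: str, ingress_on_asm: bool) -> str:
    """Return which component services a request to a virtual target."""
    if request_type != "io":
        return "VS"      # non-I/O (control) requests go to the VS that
                         # imported the disk group, in all cases
    if ingress_on_asm:
        return "DPP"     # the local DPP strategizes with the VLUN owner,
                         # then sends data directly to the physical disks
    return "VO"          # otherwise data is forwarded to the VLUN owner

print(route_request("inquiry", ingress_on_asm=True))   # VS
print(route_request("io", ingress_on_asm=True))        # DPP
print(route_request("io", ingress_on_asm=False))       # VO
```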

  9. VLUN I/O strategy
  [Diagram: time sequence of a write through a virtual target. The host (VEP N-port) issues the Write; the VO turns the VLUN I/O strategy into per-disk I/O strategies; the DPP returns Transfer Ready, moves the data to Disk0 and Disk1 over the data path, and returns Status to the host once the disk I/Os are done and the VLUN I/O completes.]

  10. Key features of VSFN
  • Centrally manage multi-vendor storage pools to improve productivity and media utilization
  • Simplify networked storage management with flexible and granular LUN configuration
  • Optimize device sharing by virtualizing enclosures, ports, storage, and command sets
  • Improve application availability via network-based dynamic multipathing, copy services, and non-disruptive intelligent switch failover

  11. Key opportunities
  • Mergers and acquisitions: VSFN helps merge disparate infrastructures
  • Cut the enterprise array tax (e.g., SRDF/TimeFinder licenses, dedicated BCVs, and the meta-LUN performance penalty)
  • Ongoing IT consolidation: VSFN enables transparent device migration
  • Datacenter infrastructure servers: VSFN supports Windows without host software

  12. Where to do virtualization? EVERYWHERE
  [Diagram: virtualization can live in the server (application integration), in the intelligent switch, or in the storage array (intensive I/O operations, performance).]

  13. Why virtualize at multiple levels?
  • Virtual resources are unbreakable, scalable, and adaptable; physical resources break, have fixed capacity, and have bounded performance
  • Servers: optimize applications
  • Network: offload and decouple
  • Storage: optimize the physical device

  14. Storage virtualization
  [Diagram: storage virtualization spanning the server, network, and storage layers, from mid-range arrays down to JBOD.]

  15. Foundation for utility computing: consolidated management
  HOST:
  • Support for DAS and SAN
  • Database & file system integration
  • Online storage provisioning
  • Server-based dynamic multipathing
  • OS and enclosure-based naming
  • Data mobility (PDC, …)
  • Service level automation (ISP, …)
  NETWORK:
  • Virtual enclosures, targets, LUNs
  • Network-based RAID (0, 1, 0+1, 1+0)
  • Network-based copy services
  • Network dynamic multipathing
  • Highly available network services
  STORAGE:
  • Commodity storage (JBOD/RAID)
  • Caching
  • Vendor LUN configuration
  • Array-specific copy services

  16. Common questions
  • Why virtualize resources?
  • How does this help create a storage utility?
  • Don't "exotic" switches cost more than servers?
  • Why do I need VSF if I already have VSFN?
  • Why do I need VSFN if I already have VSF?
  • Why virtualize in more than one place?
  • Why change VM for storage networks?

  17. Reduce costs
  • Pool heterogeneous storage for flexible provisioning to heterogeneous servers
  • Tier storage to align storage service levels with business objectives
  • Leverage storage management skills across heterogeneous server and storage platforms
  • Manage complexity while providing quality storage services in an affordable, manageable, and secure manner

  18. Increase revenues and profits
  • Eliminate single points of failure to increase the availability of revenue-generating activities
  • Insulate applications from disruptive storage events and the impact of errors or misdeeds
  • Offload data-intensive operations to the network, freeing application servers to deliver more transactions and better performance
  • Increase the timeliness and marketability of information by frequently refreshing business intelligence data without restricting the IT architecture to specific storage platforms

  19. Complement strategic initiatives
  • Enhance data protection with tiered storage and more affordable copy services
  • Champion regulatory compliance directives with enhanced networked storage security and centralized control of corporate assets
  • Encourage the rapid integration of disparate infrastructures after mergers and acquisitions
  • Automate flexible, data-center-wide storage management policies

  20. Eliminate single points of failure
  Access a virtual LUN from:
  • Multiple servers
  • Multiple HBA ports
  • Multiple virtual targets
  • Multiple virtual fabrics
  Mirror a virtual LUN across:
  • Multiple enclosures
  • Multiple ports per enclosure
  Service groups allow transparent recovery from network failures.

  21. Service groups
  • Automatic failover
    • Triggered by ASM or software (VEC/VES/VSHA/vxconfigd) failures, or by an ASM reboot
    • All disk groups contained in the service group are disabled
    • Data in a disabled disk group is unavailable
    • 99% of the time a disk group is disabled due to disk failure
    • A disk group is enabled when imported (reads the private regions)
  • Manual failover
    • Concurrent maintenance
    • Customer-initiated
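The failover rules above can be modeled as a small state machine. This is a hedged sketch under assumed names, not VERITAS code: on an ASM or software failure the service group moves to a standby and its disk groups are disabled (data unavailable) until each is re-imported, which re-reads the private regions.

```python
# Illustrative service-group failover model; class and method names assumed.

class ServiceGroup:
    def __init__(self, disk_groups, node="asm-1"):
        # disk group name -> "enabled" / "disabled"
        self.disk_groups = {dg: "enabled" for dg in disk_groups}
        self.node = node                        # ASM currently hosting the group

    def on_failure(self, standby: str):
        """Automatic failover: disable all contained disk groups, move host."""
        for dg in self.disk_groups:
            self.disk_groups[dg] = "disabled"   # data now unavailable
        self.node = standby

    def import_disk_group(self, dg: str):
        """Importing re-reads the private regions and re-enables the group."""
        self.disk_groups[dg] = "enabled"

sg = ServiceGroup(["dg_oracle", "dg_mail"])
sg.on_failure(standby="asm-2")                  # e.g. vxconfigd died or ASM rebooted
print(sg.node, sg.disk_groups["dg_oracle"])     # asm-2 disabled
sg.import_disk_group("dg_oracle")               # data available again
```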

  22. Example: run 9i Real Application Cluster on legacy disk
  [Diagram: Storage Foundation for Networks presents shared virtual LUNs with SCSI-3 PGR to the RAC nodes, backed by physical arrays/JBOD without SCSI-3 PGR (HDS 7700E, Unisys ESM700).]

  23. Example: off-host copy services
  [Diagram: OLTP, BI, backup, and recovery-site workloads served from GOLD and SILVER QoSS tiers using snapshot, replication, and resynchronization.]

  24. Deployment guidelines
  • VSF and VSFN
    • Most large heterogeneous environments
  • VSF only
    • Single host
    • Multiple hosts access a single storage array
    • Host does not access storage over a storage network
    • Small configurations where storage management is not complex
    • VSFN not supported on the storage network platform
    • Do not want to install and manage intelligent storage networks
    • Want to wait for intelligent storage networks to mature
  • VSFN only
    • VSF not supported on the OS platform (old releases, unsupported platforms, NAS gateways); still need to qualify the configuration from a support perspective
    • Do not want to install and manage VSF on the host
    • Affinity to competitive host software
    • Service provider not allowed to touch client hosts
    • Flexible LUN management meets customer requirements

  25. Common management (GCS) capabilities
  • Virtualization can be managed in a common way across heterogeneous servers and networks (and disks with xVM)
  • Integrates with CommandCentral enterprise resource management
  • Manage data-center-wide policies (as opposed to single-host policies)
  • "Set and forget" physical device management

  26. VSF capabilities
  • Pool storage across multiple storage systems from multiple vendors
  • Tier storage to match storage capabilities with business requirements
  • Provide visibility into how application storage maps to physical devices
  • Database and file system integration
  • Application-coordinated copy services
  • Host-based dynamic multipathing (virtual I/O path from the server)
  • Support for DAS (internal/external) and SAN
  • Online storage provisioning
    • Optimize use of resources (striping, re-size, re-layout, hot-spot detection)
    • Online media protection (mirroring, hot sparing, hot relocation)
    • OS and enclosure-based volume naming
  • Data mobility
    • Volume Replicator
    • Portable Data Containers
  • Service level automation
    • Relocation policies (Quality of Storage Service)
    • Intelligent Storage Provisioning

  27. Capabilities added by host SAN-VM
  • Offload configuration tasks from application hosts to management servers (separates the management path from the data and control paths)
  • Increase media utilization by letting volumes from multiple hosts allocate blocks from ("use") the disks in the same disk group
  • Offload data movement from application hosts (the VCs) to another server (the VSs, an XCOPY engine, or VSFN)
  • Minimize application host upgrades (features can be upgraded on the VS rather than on all the VCs that use them)

  28. SAN-VM technology
  [Diagram: with plain Storage Foundation, each application host both accesses and manages its own disk group and volumes; with SAN-VM, application hosts access the volumes while a separate management host manages the shared disk group.]

  29. VSFN capabilities
  • Common point between network-attached servers and storage
  • Simultaneous support for heterogeneous servers and storage; no hardware agenda
  • Create virtual devices (fabrics, enclosures, ports, logical units, SCSI command set) for interoperability and for leveraging current hardware
  • Common implementation of SCSI used by applications
  • Simulate errors for testing and audit-readiness purposes
  • Create virtual devices to allow parallel use of physical resources
  • Insulate server administration from storage administration (enhanced security)
  • Free the application server from data-intensive operations
  • Eliminate single points of failure (including service groups for transparent switch failover)
  • Network-based RAID (0, 1, 0+1, 1+0) for availability and performance
  • Network-based dynamic multipathing
  • Flexible LUN configuration performed by the administrator rather than the vendor
  • Integrated SAN management
  • In the future: a SAN file system

  30. Storage management software market
  2003 market share:
  • Storage Infrastructure: 40.2% (2002-2007 CAGR = 7.6%)
  • Data Management: 37.7% (2002-2007 CAGR = 4.1%)
  • Enterprise SRM: 22.1% (2002-2007 CAGR = 12.8%)
  Distributed systems: 81.7%; mainframe: 18.3%
  Source: Gartner Dataquest, April 2004 (Report #120422) and April 2003 (Report #114628)

  31. VSFN routes to market
  [Diagram: Cisco sells MDS/ASM hardware through distributors, OSMs, and ATP partners/VARs/SIs to the end user, with MDS/ASM SmartNet support along the channel; VERITAS provides VSFN and VSFN support through the same routes.]

  32. Focus customers
  Bell Canada, Bell South, Partner Orange, SBC, Telcordia, Astra Zeneca, Glaxo, Novartis, Santa Clara Hospital, CGI, GVS, ITXC, T-Online, DMDC, DISA, SOCOM, Allianz, Banca Intesa, BCI, Deutsche Bank, Downey Savings, Fairbanks Capital, HSBC, Lehman Brothers, Morgan Stanley, National Australian Bank, NYFIX, UBS, Air Products, Cisco IT, Exxon, Wal-Mart

  33. VSFN 1.1 pricing
  SKUs for Basic Support (-000112) and Extended Support (1/2/3 years: -000212 / -000224 / -000236)

  34. VERITAS virtualization roadmap
  • Storage Foundation™: 4.0 in 2004 (intelligent storage management), 4.1+ in 2005 (next-generation virtualization), …
  • Storage Foundation™ for Networks: 1.1 in 2004 (highly available network-based virtualization), 2.0 in 2005 (network-based enhanced copy services), …

  35. VSFN, Cisco roadmap
  VSFN (VERITAS) build / SAN-OS (Cisco) release pairings: 414.148 / 1.2(1.4), 417.203 / 1.3.1, 519.311 / 1.3.4, 5xx.xxx / 2.0(1)
  • VSFN 1.0 (RTS 9/30/03): RAID, sparing, relocation; striping, re-size, re-layout; dynamic multipathing in the fabric; split-mirror snapshots; virtual controller services
  • VSFN 1.1 (internal 11/10/03, announce 11/24/03, 12/01/03): Global Configuration Services; service groups; integrated SAN management; online help; vxvm CLI; enhancements/fixes
  • 1.0 MP1 (maintenance path): ~110 enhancements/bug fixes
  • 1.1 FP1 (feature path, 1H04): mid-range disk support; IPS Module qualification; dual-fabric support; enhancements/fixes
  • VSFN 2.0 (2H04): replication; instant snapshot; consolidated management; non-disruptive migration; host toolkit/stack integration; enhancements/fixes
  • 2005: follow-on releases

  36. New capabilities in VSFN 1.1 FP1
  • iSCSI VLUN access via the IP Storage Services Module (4/8 GbE ports)
  • Mid-range disk support (CX, Thunder, FAStT)
  • Optimized volume recovery
  • Reduced data traffic over ISLs

  37. Intelligent switch A/P configuration
  PLUN = LU in a physical enclosure; VLUN = LU in a virtual enclosure
  [Diagram: a host in the host VSAN reaches VLUNs through virtual targets (VT2, VT3) with host-side DMP; network DMP in the xP/VO and VS/VES layers maps the VLUNs onto PLUN 1 and PLUN 2 in an active/passive disk array in the disk VSAN; configuration, control, and data paths are shown separately.]

  38. Dual fabric support

  39. Replication
  • Synchronous or asynchronous/periodic
  • Server- or network-based
  • Consistency groups
  • SCSI data carried over IP between sites

  40. Instant snapshot
  • Split-mirror snapshots
    • Up to 32 mirrors per VLUN
    • Split after synchronization
    • Resynchronize original/replica
  • Instant snapshots
    • Copy-on-write
    • Full data copy
    • Space-optimized
  [Diagram: snapshot lifecycle: prepare, create, snapshot, reattach, with clear/abort transitions.]
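To make the copy-on-write variant concrete, here is a minimal sketch, not the VSFN implementation: on the first write to a block after the snapshot is taken, the old contents are saved aside, so the snapshot keeps presenting the point-in-time image while the origin VLUN continues to change. All names are illustrative.

```python
# Hedged copy-on-write snapshot sketch (illustrative names, dict-as-volume).

class CowSnapshot:
    def __init__(self, origin: dict):
        self.origin = origin          # block number -> data (the live VLUN)
        self.saved = {}               # original data preserved on first write

    def write(self, block: int, data: bytes):
        if block not in self.saved:   # copy-on-write: save the original once
            self.saved[block] = self.origin.get(block)
        self.origin[block] = data     # then let the origin change

    def read_snapshot(self, block: int):
        # Snapshot view: the saved copy if the block changed, else the origin.
        return self.saved.get(block, self.origin.get(block))

vol = {0: b"AAAA", 1: b"BBBB"}
snap = CowSnapshot(vol)
snap.write(0, b"ZZZZ")
print(vol[0])                  # b'ZZZZ'  (origin sees the new data)
print(snap.read_snapshot(0))   # b'AAAA'  (snapshot still shows the old data)
```

A full-data-copy snapshot would instead duplicate every block up front; space-optimized snapshots trade that capacity cost for the per-write copy shown here.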

  41. Consolidated management
  • Centrally manage all Storage Foundation products without disrupting the environment
  • Single point of administration
  • Enhanced scalability to manage large numbers of objects
  • Distributed and common CLI
  • Unified licensing model
  • Single sign-on support
  • Authorization and access control
  • Centralized package/patch distribution and updates
  • Quick glance at the data center storage environment
  • Single location to view all alerts and events

  42. Licensing changes for VSFN 2.0
  • Current licensing
    • Host ports, disk ports, fabric-level functionality
    • Restricts access from host ports to 32 per ASM; can become an issue for iSCSI hosts
    • GCS1 manages a single fabric
  • Proposed changes
    • License each ASM
    • Options for RAID, Snapshot, Replication, Mobility
    • GCS2 manages any number of fabrics
    • GCS2 manages all Storage Foundation products

  43. Non-disruptive migration
  • Volume encapsulation / tunneling
    • Allows storage managed by Storage Foundation to be managed by Storage Foundation for Networks, and vice versa
    • Data remains on the original physical storage device
  • LUN migration
    • Does not move data through the application server
    • Data is moved to a different physical storage device

  44. Host toolkit
  • SAN management host agents
  • Host dynamic multipathing for VLUNs
  • VSS/VDS providers (Windows)
  • Transparent and secure CLI for snapshots

  45. Host stack integration
  • CommandCentral integration
    • CC 4.1 ships the GCS2 Web GUI (for VxVM only)
  • Storage Foundation
    • Quiesce/resume support for VLUN snapshots
    • Storage mapping for VLUNs/VLUN snapshots
    • Volumes on physically separate VLUNs
  • NBU integration
    • VxFIS, VxAQ, VxMS, and VxFI infrastructure

  46. Enhancement examples
  • Enhanced DMP load-balancing algorithms
    • Balanced path (A/A)
    • Round-robin (A/P-C)
    • Single active (A/P)
    • Minimum queue length (JBOD)
    • Priority-based (customer policy)
    • Adaptive priority (varying I/O loads)
  • Enhanced compatibility
    • Broaden qualified server environments
    • Broaden qualified storage devices
    • Support for evolving standards (SMI-S, FDMI, …)
  • Bug fixes
  • Template-based allocation

  47. VSFN, Cisco futures (2006)
  • Fabric Application Interface Standard (FAIS)
  • Active/Active ASM HA
  • File services
  • Temporal volumes (any point-in-time access)
  • Ongoing improvements and tighter integration

  48. VSFN, Brocade roadmap
  • Brocade SilkWorm AP7420 Fabric Application Platform: VSFN 2.0
  • Brocade SilkWorm 24000 with Fabric Application blades: VSFN 2.1+ (2005)
  • All VSFN platforms plan the same content (as hardware permits)

  49. VERITAS Storage Foundation™ for Networks, Brocade
  [Diagram: application servers and a management server connect through the Brocade SilkWorm AP7420 Fabric Application Platform running VSFN to tiered storage; the volume server and GCS may be shared with VSF; redundant components provide high availability; future: 24000 Fabric Application Blade.]

  50. Why VERITAS?
  • Market leadership in open systems software
  • Heterogeneous: no hardware agenda
  • Committed to utility computing
