
Introduction to EVA
Keith Parris, Systems/Software Engineer, HP Services, Multivendor Systems Engineering
Budapest, Hungary, May 2003
Presentation slides on this topic courtesy of Chet Jacobs, Senior Technical Consultant, and Karen Fay, Senior Technical Consultant.
HSV110 storage system.


Presentation Transcript


  1. Introduction to EVA • Keith Parris • Systems/Software Engineer • HP Services • Multivendor Systems Engineering • Budapest, Hungary • May 2003 • Presentation slides on this topic courtesy of: • Chet Jacobs, Senior Technical Consultant, and Karen Fay, Senior Technical Consultant

  2. HSV110 storage system virtualization techniques

  3. HSV110 virtualization: subjects covered • distributed virtual RAID versus conventional RAID • disk group characteristics • virtual disk ground rules • virtual disk leveling • distributed sparing • redundant storage sets • Snapshot and SnapClone implementation • configuration remarks

  4. HSV110 virtualization: distributed versus conventional RAID
  conventional RAID: • performance limited by the # of disk drives in the StorageSet • possible to find customer data if one knows the LBN and chunk size • load balancing of applications and databases required over the available backend (SCSI) busses • I/Os balanced across the StorageSet
  distributed virtual RAID: • performance limited by the # of disk drives in the disk group • customer data distributed across all disks in the group • eliminates load-balancing procedures for applications and databases • I/Os balanced across the disk group
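
A minimal sketch of why the two approaches differ. With conventional striping, the disk holding a given LBN is a direct function of the LBN, chunk size, and member count, which is why customer data can be located by hand; distributed virtual RAID resolves each chunk through a mapping table, so data can be spread across every disk in the group. The names, chunk size, and mapping-table shape below are illustrative assumptions, not EVA internals.

```python
CHUNK_BLOCKS = 4096  # assumed chunk size in blocks, not the actual EVA value

def conventional_raid_disk(lbn: int, n_disks: int) -> int:
    """Conventional striping: the disk is a pure function of LBN and chunk size."""
    return (lbn // CHUNK_BLOCKS) % n_disks

def distributed_virtual_raid_disk(lbn: int, chunk_map: dict) -> int:
    """Distributed virtual RAID: the controller resolves the chunk through a
    mapping table, so data can be spread (and re-leveled) across the whole group."""
    return chunk_map[lbn // CHUNK_BLOCKS]

print(conventional_raid_disk(lbn=10000, n_disks=6))   # always disk 2 for this LBN
print(distributed_virtual_raid_disk(10000, {2: 57}))  # wherever the map points
```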

  5. HSV110 virtualization: conventional versus distributed virtual RAID
  [Diagram: on the HSG80, the RAID 5, RAID 0, and RAID 1 RAIDsets sit on dedicated backend SCSI busses (SCSI Bus 1 through 6); with HSV110 distributed virtual RAID, the same RAID 5, RAID 0, and RAID 1 volumes have their workload evenly distributed across all spindles.]

  6. HSV110 virtualization: disk group characteristics • minimum: 8 physical disk drives • VRAID5 requires a minimum of 5 physical disk spindles (no problem) • VRAID1 uses an even number of spindles • maximum: all physical disk drives present • spindles are automatically chosen across shelves (in V2) • maximum # of disk groups per subsystem: 16 • net capacity: TBD (it will change as disk capacities grow); includes the spare disk space • spare protection against 0, 1, or 2 disk failures, called “none, single or double” in the element manager • chunk size: 2 MB (fixed), called a PSEG

  7. HSV110 virtualization: virtual disk ground rules • virtual disk redundancy: • VRAID0 (none): data is striped across all physical disks in the disk group. • VRAID5 (moderate): data is striped, with parity, across all physical disks in the disk group; always 5 (4+1) physical disks are used per stripe. • VRAID1 (high): data is striped and mirrored across all physical disks (an even number of them) in the disk group; established pairs of physical disks mirror each other.

  8. conventional RAID5 algorithm
  [Diagram: a virtual disk address space (LBN 000-299, 300-599, 600-999, 1000-1299, 1300-1599) mapped onto a fixed set of five disks (disk 0 through disk 4); each stripe holds four data chunks plus one parity chunk, e.g. CHUNK 00-03 with Parity 00,01,02,03, CHUNK 04-07 with Parity 04,05,06,07, CHUNK 08-11 with Parity 08,09,10,11, and so on.]

  9. VRAID5 algorithm
  [Diagram: a virtual disk address space (LBN 000-399, 400-799, 800-1199, 1200-1599, 1600-1999) with its data and parity chunks distributed across the spindles of the disk group.]
  • always 4+1 RAID5 • guaranteed to have each PSEG on a separate spindle in the disk group
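
A simplified model of the 4+1 placement rule above: every stripe uses exactly five PSEGs (four data plus one parity), and the five PSEGs of a stripe always land on five different spindles of the, possibly much larger, disk group. This is an illustrative sketch, not the controller's actual allocator.

```python
import itertools

def place_vraid5_stripes(n_stripes: int, disks: list) -> list:
    """Assign each 4+1 stripe to five distinct spindles, rotating through the group."""
    assert len(disks) >= 5, "VRAID5 needs at least 5 spindles in the disk group"
    rotation = itertools.cycle(disks)
    stripes = []
    for _ in range(n_stripes):
        members = [next(rotation) for _ in range(5)]  # 4 data PSEGs + 1 parity PSEG
        assert len(set(members)) == 5                 # each PSEG on a separate spindle
        stripes.append(members)
    return stripes

# A 12-disk group still uses exactly five spindles per stripe,
# but successive stripes rotate across the whole group.
for stripe in place_vraid5_stripes(3, [f"disk{i}" for i in range(12)]):
    print(stripe)
```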

  10. VRAID1 algorithm
  [Diagram: a virtual disk address space (LBN 000-299 through 1300-1599) with its data chunks striped and mirrored across the disks of the disk group; no parity chunks are used.]

  11. HSV110 virtualization: virtual disk leveling • goal is to provide proportional capacity leveling across all disk drives within the disk group • example 1: disk group = 100 drives, all 18 GB • all disks will contain 1% of the virtual disk • example 2: disk group = 100 drives, 50 * 72 GB and 50 * 36 GB • each 72 GB disk will contain > 1% of the virtual disk, approximately double the share of a 36 GB drive, because it has double the capacity • each 36 GB disk will contain < 1% of the virtual disk • load balancing is achieved through capacity leveling (see the sketch below)
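
A small sketch of proportional capacity leveling that reproduces the two examples above; the function name is an illustrative assumption.

```python
def leveling_shares(capacities_gb):
    """Each disk's share of a virtual disk is proportional to its raw capacity."""
    total = sum(capacities_gb)
    return [cap / total for cap in capacities_gb]

# Example 1: 100 drives of 18 GB -> every disk holds 1% of the virtual disk.
uniform = leveling_shares([18] * 100)
print(round(uniform[0] * 100, 2))                            # 1.0 (%)

# Example 2: 50 x 72 GB + 50 x 36 GB -> a 72 GB disk holds ~1.33%,
# a 36 GB disk ~0.67%, i.e. double the share for double the capacity.
mixed = leveling_shares([72] * 50 + [36] * 50)
print(round(mixed[0] * 100, 2), round(mixed[-1] * 100, 2))   # 1.33 0.67
```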

  12. HSV110 virtualization: virtual disk leveling • dynamic pool capacity changes • pool capacity can be added in small increments (1 disk minimum)
  [Diagram: when more capacity or performance is needed in a disk group, more spindles are added; the RAID 5, RAID 0, and RAID 1 volumes are re-leveled so that all disks run at optimum throughput (dynamic load balancing), leaving free space available for expansion.]

  13. HSV110 virtualization: distributed sparing • note: we no longer spare in separate spindles • chunks are allocated as spare space, but not dedicated, on all disk drives of the disk group, to survive 1 or 2 disk drive failures • allocation algorithm: • single (1) = capacity of 2 * the largest spindle in the disk group • double (2) = capacity of 4 * the largest spindle in the disk group • hint: spindles have a semi-permanent paired relationship for VRAID1; that is why the reservation is 2 times the largest spindle per protected failure

  14. HSV110 virtualization: distributed sparing • example #3: • disk group: 8 * 36 GB and 6 * 72 GB, protection level: single (1) • total disk group size? 720 GB • spare allocation? 144 GB • maximum size for total virtual disks in the disk group? 576 GB • note: minus overhead for metadata, formatted disk reduction, and binary-to-decimal conversion (see the sketch below)
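
A minimal sketch of the spare-capacity rule from the previous slide, reproducing example #3. Names are illustrative, and real usable capacity is further reduced by metadata, formatting, and binary-to-decimal conversion, as the note says.

```python
def spare_capacity_gb(disk_sizes_gb, protection_level):
    """single (1) reserves 2 x the largest spindle, double (2) reserves 4 x."""
    return 2 * protection_level * max(disk_sizes_gb)

group = [36] * 8 + [72] * 6                             # 8 x 36 GB + 6 x 72 GB
total = sum(group)                                      # 720 GB raw
spare = spare_capacity_gb(group, protection_level=1)    # 2 * 72 = 144 GB
usable = total - spare                                  # 576 GB before overhead
print(total, spare, usable)                             # 720 144 576
```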

  15. HSV110 virtualization: distributed sparing • after a disk failure, redundancy is temporarily compromised; virtual disk blocks are automatically regenerated to restore redundancy • data is regenerated and distributed across the virtual pool
  [Diagram: moderately redundant (RAID 5) and highly redundant (RAID 1) volumes before and after a failure; the available storage space shrinks from roughly 2 disks of virtual space to less than 1 disk once redundancy has been regenerated.]

  16. best practices for disk groups • when using mostly VRAID1, use even spindle counts in disk groups • if you need to isolate performance or disk-failure impacts, use separate disk groups; for example, the log file for a database should be in a different group than the data area • try to keep disk groups to like disk capacities and speeds • but… bring unlike drive capacities into a disk group in pairs

  17. HSV110 storage system point-in-time copy techniques

  18. HSV110 virtualization: Snapshot and SnapClone implementation • Snapshot: data is copied from the virtual disk to the Snapshot on demand (before it is modified on the parent volume) • space efficient - “virtually capacity free”: • chunks are allocated in the disk group on demand • the Snapshot is removed if the disk group becomes full • space guaranteed - “standard”: • chunks are allocated in the disk group at the moment of Snapshot creation • the Snapshot allocation remains available if the disk group becomes full • 7 active Snapshots per parent volume (in V2) • must live in the same disk group as the parent • “preferred” pathed by the same controller as the parent volume
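
A highly simplified model contrasting the two Snapshot flavours: “space guaranteed” reserves chunks for the whole snapshot at creation time, while “space efficient” allocates a chunk only when the corresponding parent chunk is about to be overwritten (copy-before-write). All class and method names are illustrative assumptions, not the controller's interface.

```python
class Snapshot:
    def __init__(self, parent, space_guaranteed):
        self.parent = parent
        self.saved = {}                    # chunks preserved by copy-before-write
        # "standard" snapshots reserve chunks at creation; "virtually capacity
        # free" snapshots allocate from the disk group only on demand.
        self.reserved_chunks = len(parent) if space_guaranteed else 0

    def write_parent(self, index, data):
        if index not in self.saved:        # copy the old chunk before it is modified
            self.saved[index] = self.parent[index]
        self.parent[index] = data

    def read(self, index):
        # The snapshot always presents the parent's contents as of creation time.
        return self.saved.get(index, self.parent[index])

volume = ["A", "B", "C"]
snap = Snapshot(volume, space_guaranteed=False)
snap.write_parent(1, "X")
print(volume[1], snap.read(1))             # X B
```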

  19. HSV110 virtualization: Snapshot and SnapClone implementation • SnapClone - “virtually instantaneous SnapClone” (COPY) (yuck!): data is copied from the virtual disk to the SnapClone in the background • chunks are allocated in the disk group at the moment of SnapClone creation • can be presented to a host and used immediately • any disk group may be the home for the SnapClone (in V2) • the SnapClone’s RAID level will match the parent volume (for now) • becomes an independent volume when fully realized • may be preferred-pathed to either controller
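
A minimal sketch of the SnapClone idea: the clone is presentable immediately, reads fall back to the parent until the background copy has materialised each chunk, and once every chunk is copied the clone is an independent volume. Copy-before-write on parent updates is omitted for brevity, and all names are illustrative.

```python
class SnapClone:
    def __init__(self, parent):
        self.parent = parent
        self.chunks = [None] * len(parent)         # space allocated at creation time

    def background_copy_step(self, index):
        """In the array this copy runs in the background after creation."""
        if self.chunks[index] is None:
            self.chunks[index] = self.parent[index]

    def read(self, index):
        # Usable immediately: chunks not yet copied are served via the parent.
        data = self.chunks[index]
        return data if data is not None else self.parent[index]

    def fully_realized(self):
        # Once every chunk has been copied, the clone is an independent volume.
        return all(chunk is not None for chunk in self.chunks)
```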

  20. HSV110 virtualization: space guaranteed Snapshot creation and utilization
  [Timeline diagram, 12:00 noon to 12:20: at noon the snap of volume “A” is created and the contents are identical; as volume “A” receives updates (T1) and then more updates (T3), the contents diverge while the snap keeps presenting the contents as of noon.]

  21. HSV110 virtualization: space efficient Snapshot creation and utilization
  [Timeline diagram, 12:00 noon to 12:20: the same sequence as the previous slide, but with a space efficient Snapshot; the snap of volume “A” still presents the contents as of noon while volume “A” receives updates and the contents diverge.]

  22. HSV110 virtualization: Snapshot versus SnapClone
