
XtremIO Data Protection (XDP) Explained




Presentation Transcript


  1. XtremIO Data Protection (XDP) Explained View this presentation in Slide Show mode

  2. XDP Benefits
• Combines the best traits of traditional RAID with none of its drawbacks
• Ultra-low 8% fixed capacity overhead
• No RAID levels, stripe sizes, chunk sizes, etc.
• High levels of data protection
• Sustains up to two simultaneous failures per DAE*
• Multiple consecutive failures (with adequate free capacity)
• “Hot Space” – spare capacity is distributed (no hot spares)
• Rapid rebuild times
• Superior flash endurance
• Predictable, consistent, sub-millisecond performance
*v2.2 encodes data for N+2 redundancy and supports a single rebuild per DAE. A future XIOS release will add double concurrent rebuild support.

  3. XDP Stripe – Logical View
The following slides show a simplified example of XDP; in reality, XDP uses a (23+2) x 28 stripe. The simplified stripe has 7 data columns (C1–C7) and 6 data rows, plus 2 parity columns:
• P – a column that contains one parity block per row (P1–P6)
• Q – a column that contains one parity block per diagonal (Q1–Q7)
Every block in the XDP stripe is 4KB in size. A sketch of the two parity columns follows below.
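To make the two parity columns concrete, here is a minimal Python sketch of the row and diagonal parity over the simplified 6 x 7 stripe. Blocks are modeled as integers (integer XOR stands in for bytewise XOR over 4KB blocks), and the diagonal mapping d = (r + c) mod 7 is an assumption for illustration; the slides do not specify XDP’s exact diagonal layout.

```python
from functools import reduce
from operator import xor

ROWS, COLS = 6, 7  # simplified stripe from the slide; the real stripe is 28 x 23

def row_parity(stripe):
    """P[r]: XOR of every data block in row r (blocks P1..P6 on the slide)."""
    return [reduce(xor, stripe[r]) for r in range(ROWS)]

def diagonal_parity(stripe):
    """Q[d]: XOR of every data block on diagonal d (blocks Q1..Q7 on the slide).

    Assumed mapping: block (r, c) lies on diagonal (r + c) % (ROWS + 1),
    giving ROWS + 1 = 7 diagonals of 6 blocks each.
    """
    q = [0] * (ROWS + 1)
    for r in range(ROWS):
        for c in range(COLS):
            q[(r + c) % (ROWS + 1)] ^= stripe[r][c]
    return q
```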

  4. Physical View
Although each column is represented in the diagram as a logical block, the system can read or write at a granularity of 4KB or less. The stripe’s columns are randomly distributed across the SSDs to avoid hot spots and congestion, and each SSD contains the same number of P and Q columns.
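The random placement can be sketched as a per-stripe shuffle. This is a hypothetical routine (the seed, SSD count, and column names are illustrative, not XDP’s actual mapping), but it shows why no drive becomes a dedicated parity hot spot:

```python
import random

COLUMNS = ["C1", "C2", "C3", "C4", "C5", "C6", "C7", "P", "Q"]

def place_stripe(stripe_id, num_ssds=9):
    """Map each column of one stripe to a distinct SSD.

    Shuffling independently per stripe means that, over many stripes, each
    SSD holds roughly the same number of P and Q columns, avoiding the
    dedicated-parity-drive congestion of schemes like RAID 4.
    """
    ssds = list(range(num_ssds))
    random.Random(stripe_id).shuffle(ssds)  # deterministic per stripe, for the example
    return dict(zip(COLUMNS, ssds))
```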

  5. SSD Failure
If the SSD where C1 is stored has failed, let’s see how XDP efficiently recovers the stripe:
• XDP always reads the first two rows in a stripe and recovers C1’s blocks using the row parity stored in P.
• Next, XDP recovers data using the diagonal parity Q. It first reads the parity information from the Q column.
• The data is recovered using the Q parity and the data blocks from C2 and C3 that are already in the Storage Controller memory.
• The system reads the rest of the diagonal data (columns C5, C6 and C7) and computes the value of C1.
• The remaining data blocks are recovered using the diagonal parity, blocks previously read and stored in controller memory, and minimal additional reads from SSD.
The expedited recovery process completes with fewer reads and parity compute cycles: XDP reduces the reads required to recover data by 25% (30 vs. 42), increasing rebuild performance compared with traditional RAID. A recovery sketch follows below.
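The walkthrough can be sketched in Python on the simplified stripe, reusing ROWS, COLS, and the parity helpers (and the assumed diagonal mapping) from the earlier sketch. The split between row recovery and diagonal recovery is fixed here for clarity; the real XDP picks whichever mix minimizes SSD reads:

```python
def recover_column(stripe, p, q, fail_col, rows_via_p=2):
    """Rebuild every block of one failed column (e.g. C1 is fail_col=0).

    The first `rows_via_p` blocks are rebuilt from row parity P; the rest
    are rebuilt from diagonal parity Q, as in the slide's walkthrough.
    """
    recovered = []
    for r in range(ROWS):
        if r < rows_via_p:
            # Row recovery: missing block = P[r] XOR surviving blocks of row r.
            block = p[r]
            for c in range(COLS):
                if c != fail_col:
                    block ^= stripe[r][c]
        else:
            # Diagonal recovery: missing block = Q[d] XOR surviving blocks on
            # diagonal d. Many of those blocks were already read (and held in
            # controller memory) during the row-recovery step above.
            d = (r + fail_col) % (ROWS + 1)
            block = q[d]
            for rr in range(ROWS):
                cc = (d - rr) % (ROWS + 1)
                if not (rr == r and cc == fail_col):
                    block ^= stripe[rr][cc]
        recovered.append(block)
    return recovered
```

For example, with p = row_parity(stripe) and q = diagonal_parity(stripe), recover_column(stripe, p, q, 0) returns the six lost blocks of C1.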

  6. XDP Rebuilds & Hot Space
XDP allows SSDs to fail in place, with rapid rebuilds and no performance impact after a rebuild completes, for up to five failed SSDs per X-Brick:
• 3 failed SSDs: ~330K IOPS
• 4 failed SSDs: ~330K IOPS
• 5 failed SSDs: ~330K IOPS

  7. Stripe Update at 80% Utilization
The diagram shows an array that is 80% full; the example shows new I/Os overwriting addresses that hold existing data, so there is no net increase in capacity consumed (space frees up in other stripes).
• The system ranks stripes according to utilization level and always writes to the stripe that is most free.
• It writes to SSD as soon as enough blocks arrive to fill the entire emptiest stripe in the system (in this example, 17 blocks are required).
• Stripes are then re-ranked according to their percentage of free blocks, and subsequent updates are performed using the same algorithm.
• At least one stripe is guaranteed to be 40% empty, so hosts benefit from the performance of a 40% empty array rather than a 20% empty array.
A sketch of this write path follows below.
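A minimal sketch of the ranking-and-fill behaviour, assuming a simple in-memory buffer (the names free_slots and buffered are illustrative):

```python
def emptiest_stripe(free_slots):
    """Return the id of the stripe with the most free 4KB block slots.

    free_slots: dict mapping stripe id -> count of free block slots.
    """
    return max(free_slots, key=free_slots.get)

def on_incoming(free_slots, buffered, new_blocks):
    """Accumulate incoming blocks; flush one full-stripe write as soon as
    the buffer can completely fill the emptiest stripe, then re-rank."""
    buffered += new_blocks
    target = emptiest_stripe(free_slots)
    while free_slots[target] and buffered >= free_slots[target]:
        buffered -= free_slots[target]        # one full write to the emptiest stripe
        free_slots[target] = 0                # that stripe is now full
        target = emptiest_stripe(free_slots)  # re-rank for the next flush
    return buffered

# Stripes at 40%, 20%, 20% free, as in the diagram (42 data blocks per
# simplified stripe, so the 40%-empty stripe has the 17 free slots
# mentioned on the slide).
free = {"s1": 17, "s2": 8, "s3": 8}
print(on_incoming(free, buffered=0, new_blocks=20))  # fills s1; 3 blocks stay buffered
```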

  8. XDP Stripe – the Real Numbers
The real stripe spans 25 SSDs: 23 data columns plus the P and Q parity columns, with 28 data rows. RAID overhead (of P and Q) = 57/701 = 8%. The arithmetic is worked below.
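The 8% figure can be checked directly. The 57 parity blocks are consistent with one P block per row (28) plus one Q block per diagonal (28 + 1 = 29); that split is inferred from the diagonal scheme rather than stated on the slide:

```python
rows, data_cols = 28, 23
data_blocks = rows * data_cols      # 644 data blocks of 4KB each
parity_blocks = rows + (rows + 1)   # 28 P blocks + 29 Q blocks = 57 (inferred split)
overhead = parity_blocks / (data_blocks + parity_blocks)
print(f"{parity_blocks}/{data_blocks + parity_blocks} = {overhead:.1%}")  # 57/701 = 8.1%
```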

  9. Update Overhead Compared
