
Scalable Logging System for DRAM based Storage


Presentation Transcript


  1. Scalable Logging System for DRAM based Storage (Seoul National Univ. DCSLab, 최찬호)

  2. Motivation • The emergence of DRAM-based storage • RAMcloud • Kaminario K2 • JSM

  3. Problem • Risk of data loss when a power crash occurs • Use SSDs? Higher cost • Use a log file system? Throughput drops during garbage collection (G.C)

  4. Objectives • Continuously sustain DRAM SSD-level bandwidth using commodity HDDs • Scalability • Maintain throughput during garbage collection

  5. Design • Eliminate random I/O on the HDDs • Log-structured mega I/O (see the sketch below) • Block versioning: blocks are stored by time period • Garbage collection using the DRAM SSD: all reads for G.C are served from DRAM
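
  A minimal sketch of the append-only write path this design implies, assuming 1MB mega blocks are flushed to the disk pool at strictly sequential offsets. The names log_append and log_tail are illustrative, not from the presentation.

    /* Illustrative only: append one 1 MB mega block to the HDD log at the
       current tail offset, so the disk sees purely sequential writes. */
    #include <sys/types.h>
    #include <unistd.h>

    #define MEGA_BLOCK_SIZE (1UL << 20)   /* 1 MB mega I/O unit */

    static off_t log_tail;                /* next free offset in the disk pool */

    int log_append(int disk_fd, const void *mega_block)
    {
        /* One large sequential write: no seeks, no random I/O on the HDD. */
        ssize_t n = pwrite(disk_fd, mega_block, MEGA_BLOCK_SIZE, log_tail);
        if (n != (ssize_t)MEGA_BLOCK_SIZE)
            return -1;
        log_tail += MEGA_BLOCK_SIZE;      /* old versions stay behind in the log */
        return 0;
    }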

  6. Design Example • [Diagram: disk pool, current working block on the DRAM SSD, WRITE and READ paths]

  7. G.C Detail • [Diagram: garbage collection target block, DRAM SSD, current working block; valid data is written out according to the valid data table (see the sketch below)]
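
  A hedged sketch of the G.C step on this slide, assuming the valid data table lists the live extents of the victim block: live data is read from the DRAM SSD copy and appended to the current working block, so the HDDs never serve G.C reads. The names valid_entry and gc_copy_valid are assumptions for illustration.

    /* Illustrative G.C copy loop: move the victim block's live data from the
       DRAM SSD into the current working block; the HDD is never read. */
    #include <stdint.h>
    #include <string.h>

    struct valid_entry { uint32_t addr; uint32_t size; };   /* from the valid data table */

    void gc_copy_valid(const uint8_t *dram_victim,          /* victim block, DRAM SSD copy */
                       const struct valid_entry *table, int nvalid,
                       uint8_t *working_block, uint32_t *working_off)
    {
        for (int i = 0; i < nvalid; i++) {
            /* Read from DRAM, append into the current working block. */
            memcpy(working_block + *working_off,
                   dram_victim + table[i].addr, table[i].size);
            *working_off += table[i].size;
        }
        /* The victim block in the disk pool can now be reclaimed as a free block. */
    }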

  8. Process • [Diagram: R/W requests arrive at /dev/mapper/mylog; components include the log path (Merger, mega block, DRAM SSD, Distributer, valid data table), the G.C path (Garbage Collector, block invalid counter), and the disk pool; G.C is triggered when FreeBlock < 3. See the sketch below.]
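
  A small sketch of the trigger logic suggested by the diagram, assuming G.C starts once fewer than three blocks are free and, as a guess at the policy behind the block invalid counter, the most-invalidated block becomes the victim.

    /* Illustrative trigger check: consult the free-block count and the
       per-block invalid counters before handing work to the collector. */
    #define GC_FREE_BLOCK_THRESHOLD 3     /* "FreeBlock < 3" from the diagram */

    int pick_gc_victim(const unsigned *invalid_counter, int nblocks, int free_blocks)
    {
        if (free_blocks >= GC_FREE_BLOCK_THRESHOLD)
            return -1;                    /* enough free blocks, no G.C needed */

        int victim = 0;                   /* assumed policy: most-invalidated block */
        for (int i = 1; i < nblocks; i++)
            if (invalid_counter[i] > invalid_counter[victim])
                victim = i;
        return victim;
    }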

  9. Mega Block I/O (1MB) Design • Metadata block (4KB) • Global timestamp (8 bytes) • Local timestamp (8 bytes) • Block size (8 bytes) • Pointer array (max 255 entries): address (4 bytes), size (4 bytes) • Data block (1020KB): data only (see the layout sketch below)
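
  The layout above maps naturally onto a C structure. This is a sketch under the assumption that the pointer array region is padded so the metadata block fills exactly 4KB (8 + 8 + 8 bytes of header plus 255 x 8 bytes of pointers is 2,064 bytes, leaving 2,032 bytes of padding); field names are illustrative.

    /* Illustrative layout of one 1 MB mega block (4 KB metadata + 1020 KB data). */
    #include <stdint.h>

    struct mega_pointer {
        uint32_t address;                 /* offset of a record inside the data area */
        uint32_t size;                    /* length of that record */
    };

    struct mega_block {
        /* --- metadata block, 4 KB --- */
        uint64_t global_timestamp;
        uint64_t local_timestamp;
        uint64_t block_size;
        struct mega_pointer ptr[255];     /* max 255 entries */
        uint8_t  pad[4096 - 3 * 8 - 255 * sizeof(struct mega_pointer)];

        /* --- data block, 1020 KB: data only --- */
        uint8_t  data[1020 * 1024];
    };
    /* sizeof(struct mega_block) == 1 MB under these assumptions. */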

  10. Block Size Decision • Splitting each disk into an appropriate number of blocks • DRAM SSD capacity: D, number of disks: M, capacity of each disk: S, bandwidth degradation ratio: P, minimum number of partitions per disk: x

  11. Evaluation • Experiment Setup • Intel i7 CPU 3.07GHz (quad-core) • 8GB memory • Taejin Infotech DRAM SSD, 67.6GB • Six 2TB Seagate 7200RPM SATA3 HDDs • Linux kernel 2.6.32 • Benchmark • iozone3

  12. Evaluation • Logging System Setup • Block Size: 8GB • Number of Blocks: 8 • Test set Size: 32GB

  13. Evaluation • Throughput (pure), measured with iozone: 5% overhead

  14. Evaluation • Throughput (during G.C), measured with iozone: 0.7% overhead

  15. Evaluation • Scalability

  16. Recovery • Plain recovery: replay all blocks in full, starting from the oldest • Fast recovery: read only each mega block's header, then read only the valid data (see the sketch below)
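
  A hedged sketch of fast recovery, assuming recovery walks the disk pool in timestamp order, reads only the 4KB metadata block of each mega block, and then replays just the extents named in its pointer array. The struct fields beyond the slide's layout (nptrs) and the replay callback are assumptions for illustration.

    /* Illustrative fast recovery: per mega block, read only the header,
       then replay only the valid extents it points to. */
    #include <stdint.h>
    #include <sys/types.h>
    #include <unistd.h>

    struct mega_block_meta {              /* assumed to mirror the 4 KB metadata block */
        uint64_t global_timestamp, local_timestamp, block_size;
        struct { uint32_t address, size; } ptr[255];
        uint16_t nptrs;                   /* assumed count of used pointers (<= 255) */
    };

    void fast_recover(int fd, const off_t *block_offsets, int nblocks,
                      void (*replay)(uint32_t addr, uint32_t size))
    {
        for (int b = 0; b < nblocks; b++) {          /* oldest block first */
            struct mega_block_meta meta;
            /* Read only the header, never the 1020 KB data area. */
            if (pread(fd, &meta, sizeof meta, block_offsets[b]) != (ssize_t)sizeof meta)
                continue;                            /* skip unreadable blocks */
            for (int i = 0; i < meta.nptrs && i < 255; i++)
                replay(meta.ptr[i].address, meta.ptr[i].size);   /* valid data only */
        }
    }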

  17. Conclusion • Logging System for DRAM based Storage • Performance Degradation: Average 5% • G.C overhead < 0.7%

  18. Q&A
