
Ceph: de facto storage backend for OpenStack


  1. Ceph: de facto storage backend for OpenStack • OpenStack Summit 2013 Hong Kong

  2. Whoami 💥 Sébastien Han 💥 French Cloud Engineer working for eNovance 💥 Daily job focused on Ceph and OpenStack 💥 Blogger • Personal blog: http://www.sebastien-han.fr/blog/ • Company blog: http://techs.enovance.com/ 💥 Worldwide office coverage – we design, build and run clouds, anytime, anywhere

  3. Ceph What is it?

  4. The project • Unified distributed storage system • Started in 2006 as a PhD by Sage Weil • Open source under LGPL license • Written in C++ • Build the future of storage on commodity hardware

  5. Key features • Self managing/healing • Self balancing • Painless scaling • Data placement with CRUSH

  6. CRUSH: Controlled Replication Under Scalable Hashing • Pseudo-random placement algorithm • Statistically uniform distribution • Rule-based configuration
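
To give a flavour of what "rule-based configuration" means, here is a minimal sketch of a replicated rule in the decompiled CRUSH map text format of that era; the rule name and the "default" root are illustrative, and your map's bucket types may differ:

    # Illustrative CRUSH rule: spread replicas across hosts
    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default                    # start at the root of the hierarchy
        step chooseleaf firstn 0 type host   # pick one OSD per distinct host
        step emit
    }

Changing the "type host" step to, say, "type rack" is all it takes to make replicas rack-diverse, which is the point of the rule-based approach.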

  7. Overview

  8. Building a Ceph cluster General considerations

  9. How to start? ➜ Use case • IO profile: bandwidth? IOPS? Mixed? • Guaranteed I/O: how many IOPS or how much bandwidth do I want to deliver per client? • Usage: do I use Ceph standalone or combined with another software solution? ➜ Amount of data (usable, not raw) • Replica count • Failure ratio: how much data am I willing to re-balance if a node fails? • Do I have a data growth plan? ➜ Budget :-)
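
As a back-of-the-envelope illustration of "usable, not raw", a tiny Python sketch with made-up numbers: the raw capacity has to cover the replica count plus enough headroom to re-balance the data of a failed node.

    # Hypothetical sizing example: usable capacity -> raw capacity to buy
    usable_tb = 100            # what clients should be able to store
    replicas = 3               # replica count (pool "size")
    node_count = 10            # planned number of OSD nodes
    headroom = 1 / node_count  # keep room to re-balance one failed node

    raw_tb = usable_tb * replicas / (1 - headroom)
    print(f"raw capacity needed: {raw_tb:.0f} TB")  # ~333 TB in this example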

  10. Things that you must not do ➜ Don't put a RAID underneath your OSDs • Ceph already manages replication • A degraded RAID hurts performance • It reduces the usable space of the cluster ➜ Don't build high-density nodes with a tiny cluster • Consider failures and the amount of data to re-balance • Risk of a full cluster ➜ Don't run Ceph on your hypervisors (unless you're broke)
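
The "no RAID underneath your OSD" advice follows from replication being a per-pool property rather than a per-disk one; each disk runs its own OSD, and redundancy is set like this (pool name is illustrative, syntax from the Dumpling/Havana era):

    # Replication is configured per pool, so RAID under the OSD is redundant
    ceph osd pool set volumes size 3       # keep 3 copies of every object
    ceph osd pool set volumes min_size 2   # stay writeable with 2 copies online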

  11. State of the integration Including Havana’s best additions

  12. Why is Ceph so good? It unifies OpenStack components

  13. Havana’s additions • Complete refactor of the Cinder driver: • librados and librbd usage • Flatten volumes created from snapshots • Clone depth • Cinder backup with a Ceph backend: • Backing up within the same Ceph pool (not recommended) • Backing up between different Ceph pools • Backing up between different Ceph clusters • Support for RBD stripes • Differential backups • Nova libvirt_images_type = rbd • Directly boot all the VMs in Ceph • Volume QoS
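
The "flatten volumes created from snapshots" and "clone depth" items map onto plain librbd calls now that the Cinder driver uses the Python bindings directly. A minimal sketch with the Ceph Python bindings follows; it assumes a running cluster and an existing format-2 image 'volume-src' in a 'volumes' pool, and the names are illustrative rather than Cinder's own naming scheme:

    import rados
    import rbd

    # Connect to the cluster and open the volumes pool
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')

    # Snapshot the source volume and protect the snapshot so it can be cloned
    src = rbd.Image(ioctx, 'volume-src')
    src.create_snap('snap1')
    src.protect_snap('snap1')
    src.close()

    # Copy-on-write clone of the snapshot (layering is what makes this cheap)
    rbd.RBD().clone(ioctx, 'volume-src', 'snap1', ioctx, 'volume-clone',
                    features=rbd.RBD_FEATURE_LAYERING)

    # Flattening copies the parent data into the clone, detaching it from the
    # snapshot; this is what keeps the clone depth bounded
    clone = rbd.Image(ioctx, 'volume-clone')
    clone.flatten()
    clone.close()

    ioctx.close()
    cluster.shutdown()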

  14. Today’s Havana integration
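
In configuration terms, the Havana integration boils down to pointing Glance, Cinder and Nova at RBD pools. A rough sketch of the relevant options is below; pool names, users and the secret UUID are placeholders, and the exact option names should be checked against your release:

    # glance-api.conf: store images in RBD
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf

    # cinder.conf: RBD-backed volumes and Ceph-backed backups
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_secret_uuid = <libvirt secret UUID>
    glance_api_version = 2
    backup_driver = cinder.backup.drivers.ceph
    backup_ceph_pool = backups
    backup_ceph_user = cinder-backup

    # nova.conf: boot every VM directly in Ceph
    libvirt_images_type = rbd
    libvirt_images_rbd_pool = vms
    libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf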

  15. Is Havana the perfect stack? …

  16. Well, almost…

  17. What’s missing? • Direct URL download for Nova • Already in the pipeline, probably for 2013.2.1 • Nova snapshot integration • Use Ceph snapshots instead: https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd

  18. Icehouse and beyond Future

  19. Tomorrow’s integration

  20. Icehouse roadmap • Implement “bricks” for RBD • Re-implement the snapshotting function to use RBD snapshots • RBD on Nova bare metal • Volume migration support • RBD stripes support ➜ Potential “J” release roadmap • Manila support

  21. Ceph, what’s coming up? Roadmap

  22. Firefly • Tiering: cache pool overlay • Erasure coding • ZFS support for the Ceph OSD • Full support of OpenStack Icehouse
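
For context, both the cache pool overlay and erasure-coded pools are driven from the ceph CLI once Firefly lands. A sketch with illustrative pool names and PG counts (syntax as introduced in Firefly, verify against your release):

    # Erasure-coded base pool with a replicated cache pool overlaid on top
    ceph osd pool create cold 128 128 erasure
    ceph osd pool create hot 128 128
    ceph osd tier add cold hot
    ceph osd tier cache-mode hot writeback
    ceph osd tier set-overlay cold hot   # clients now hit the cache pool first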

  23. Many thanks! Questions? Contact: sebastien@enovance.com Twitter: @sebastien_han IRC: leseb
