Presentation Transcript

  1. Session ID: O24 • Enterprise Volume Management System for Linux • Steve Dobbelstein • Lake Buena Vista, FL • September 8-12, 2003 • © IBM Corporation 2003

  2. Trademark Notes • The following terms are trademarks of International Business Machines Corporation in the United States and/or other countries: • IBM • AIX • OS/2 • System/390 • Linux is a trademark of Linus Torvalds • Other company, product, and service names may be trademarks or service marks of others.

  3. Agenda • Volume management basics • Overview of volume management schemes • Enterprise Volume Management System (EVMS) • Overview • Demonstration

  4. What is Volume Management? • Provide a logical abstraction of the physical storage devices • File systems and applications do not need to know about the organization of the physical devices

  5. Where To Use Volume Management • Systems with lots of physical storage • Lots of disks (tens, hundreds, thousands) • Combine many disks into a single pool of storage • Increased total storage space • Redundancy to protect against hardware failures • Systems with little physical storage • Single disk (most PCs, laptops) • Divide up disk to provide logically separate storage for different uses

  6. Disk Partitioning • Divide a disk into one or more logical sections • Simple, widely used • Fixed sizes, difficult to resize • No redundancy
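
A partition table is essentially a list of fixed (start, size) ranges on the disk. The sketch below is a hypothetical model (not tied to any real partitioning tool) that illustrates why resizing is awkward: a partition can only grow into unallocated space, and the next partition usually starts immediately after it.

    # Minimal model of a partition table: fixed (start, size) ranges in sectors.
    partitions = [
        {"name": "part1", "start": 63, "size": 1_000_000},
        {"name": "part2", "start": 1_000_063, "size": 4_000_000},
    ]

    def can_grow(table, index, extra_sectors):
        """A partition can only grow if the sectors after it are unallocated."""
        part = table[index]
        new_end = part["start"] + part["size"] + extra_sectors
        for other in table[index + 1:]:
            if new_end > other["start"]:
                return False  # would run into the next partition
        return True

    print(can_grow(partitions, 0, 10))      # False: part2 starts right after part1
    print(can_grow(partitions, 1, 10_000))  # True: free space follows part2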

  7. RAID • Redundant Array of Inexpensive Disks • Combine several disks • Increase total storage space • Provide redundancy • Improve performance • Can be done in hardware or software

  8. RAID Linear • Linear concatenation of several disks • Increased total storage space • No redundancy or performance improvement
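
A hypothetical sketch of linear concatenation: logical blocks run through the first disk, then the second, and so on, so capacities add up but a single I/O stream still hits one disk at a time.

    def linear_map(block, disk_sizes):
        """Map a logical block to (disk index, block on that disk) for a
        linear concatenation of disks with the given sizes in blocks."""
        for disk, size in enumerate(disk_sizes):
            if block < size:
                return disk, block
            block -= size
        raise ValueError("block beyond end of device")

    # Three disks of 100, 200, and 50 blocks form one 350-block device.
    print(linear_map(150, [100, 200, 50]))  # (1, 50)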

  9. RAID 0 • Striping • Data are interleaved across all disks • Increased total storage space • Improved performance with parallel I/O • No redundancy
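
A sketch of the striping address calculation, assuming equal-sized disks and a fixed chunk size: consecutive chunks land on different disks, which is what makes parallel I/O possible.

    def stripe_map(block, n_disks, chunk_blocks):
        """Map a logical block to (disk, block on that disk) for RAID 0."""
        chunk, offset = divmod(block, chunk_blocks)  # which chunk, offset within it
        disk = chunk % n_disks                       # chunks rotate across the disks
        chunk_on_disk = chunk // n_disks             # chunks already placed on that disk
        return disk, chunk_on_disk * chunk_blocks + offset

    # 4 disks, 8-block chunks: blocks 0-7 on disk 0, blocks 8-15 on disk 1, ...
    print(stripe_map(9, n_disks=4, chunk_blocks=8))   # (1, 1)
    print(stripe_map(33, n_disks=4, chunk_blocks=8))  # (0, 9)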

  10. RAID 1 • Mirroring • Redundancy • Multiple copies of all data • No extra storage space • Device size is equal to a single disk • Improved read performance • Reduced write performance
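
A minimal mirroring sketch: every write goes to every member (hence the write penalty), any member can serve a read, and the usable size is that of a single disk.

    class Mirror:
        """Toy RAID 1: each 'disk' is a dict of block number -> data."""
        def __init__(self, n_disks):
            self.disks = [dict() for _ in range(n_disks)]
            self.next_read = 0

        def write(self, block, data):
            for disk in self.disks:  # every write hits every member
                disk[block] = data

        def read(self, block):
            disk = self.disks[self.next_read]  # round-robin reads spread the load
            self.next_read = (self.next_read + 1) % len(self.disks)
            return disk[block]

    m = Mirror(2)
    m.write(7, b"hello")
    print(m.read(7))  # b'hello', served from either copy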

  11. RAID 4 • Striping with parity • Redundancy, but less than with mirroring • One "chunk" of parity bits per stripe • Increased storage space (minus size of one disk) • Improved performance, but at the cost of CPU overhead
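
The parity chunk is simply the XOR of the data chunks in the stripe, which is why any single missing chunk can be rebuilt from the others. A small sketch:

    def xor_chunks(chunks):
        """XOR equal-length byte strings together."""
        result = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, byte in enumerate(chunk):
                result[i] ^= byte
        return bytes(result)

    data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]  # data chunks of one stripe
    parity = xor_chunks(data)                       # stored on the parity disk

    # Disk holding data[1] fails: rebuild its chunk from the survivors plus parity.
    rebuilt = xor_chunks([data[0], data[2], parity])
    assert rebuilt == data[1]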

  12. RAID 5 • RAID 4 creates a bottleneck on the parity disk • Spread parity among all disks for better performance • Same total size as RAID 4
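
A sketch of one possible parity rotation (real implementations differ in the exact layout): the parity chunk moves to a different disk on each stripe, so no single disk becomes the parity bottleneck.

    def parity_disk(stripe, n_disks):
        """One possible RAID 5 rotation: parity shifts one disk left per stripe."""
        return (n_disks - 1 - stripe) % n_disks

    for stripe in range(5):
        print(stripe, parity_disk(stripe, n_disks=4))
    # stripe 0 -> disk 3, stripe 1 -> disk 2, stripe 2 -> disk 1,
    # stripe 3 -> disk 0, stripe 4 -> disk 3, ...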

  13. Volume Groups • A collection of devices (disks, partitions, RAID) • The space of all devices is combined in the group, but not directly available as a device.

  14. Volume Groups • Combined space is divided into fixed-sized chunks • Physical Extents (PEs) • Similar to memory page frames

  15. Volume Groups • Create volumes from free space in the group • Volumes consist of Logical Extents (LEs) • Each LE maps to a PE

  16. Volume Groups • Simple resizing of groups • Add new devices to the group to expand total available free space • Remove devices that aren't used by any volumes

  17. Volume Groups • Simple resizing of volumes • Add or remove extents at the end of the volume
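
A conceptual sketch of slides 13-17 (hypothetical names, not the LVM or EVMS API): devices contribute fixed-size physical extents to a pool, a volume is an ordered list of logical extents each mapped to a PE, and resizing is just appending or releasing extents at the end of the volume.

    class VolumeGroup:
        """Toy volume group: a pool of (device, physical extent) slots."""
        def __init__(self, extent_size_mb=4):
            self.extent_size_mb = extent_size_mb
            self.free_pes = []   # list of (device name, PE number)
            self.volumes = {}    # volume name -> list of PEs, one per LE, in LE order

        def add_device(self, name, size_mb):
            pes = size_mb // self.extent_size_mb
            self.free_pes += [(name, pe) for pe in range(pes)]

        def create_volume(self, name, size_mb):
            self.volumes[name] = []
            self.extend_volume(name, size_mb)

        def extend_volume(self, name, extra_mb):
            needed = extra_mb // self.extent_size_mb
            for _ in range(needed):  # new LEs are appended at the end of the volume
                self.volumes[name].append(self.free_pes.pop(0))

    vg = VolumeGroup()
    vg.add_device("sda3", 100)
    vg.add_device("md0", 200)
    vg.create_volume("home", 60)
    vg.extend_volume("home", 40)  # grow the volume without touching the disks
    print(len(vg.volumes["home"]), "LEs,", len(vg.free_pes), "free PEs")  # 25 LEs, 50 free PEs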

  18. Snapshots • Frozen image of a live volume • Useful for performing consistent backups without needing to take the file system offline • Copy-On-Write to save old data • "Origin" volume is always up-to-date • Snapshot capacity can be smaller than origin • Multiple simultaneous snapshots of same origin

  19. Snapshots • Origin volume and snapshot storage are divided into equal-sized "chunks"

  20. Snapshots • Write to the origin volume

  21. Snapshots • Write to the origin volume • Write request is queued to be finished later • Chunk is read from the origin

  22. Snapshots • Write to the origin volume • Write chunk to snapshot storage

  23. Snapshots • Write to the origin volume • New COW table entry: Map chunk 4 to chunk 1

  24. Snapshots • Write to the origin volume • Release all I/Os waiting on this copy

  25. Snapshots • Read from the snapshot volume • Unmapped chunk: get data from origin volume

  26. Snapshots • Read from the snapshot volume • Re-mapped chunk: get data from snapshot
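
The copy-on-write bookkeeping walked through in slides 19-26 can be summarized in a short sketch (a hypothetical structure, not EVMS's actual on-disk format): the first write to an origin chunk copies the old data into the snapshot store and records a COW table entry, and a snapshot read consults that table to decide whether the old data lives in the store or still on the origin.

    class Snapshot:
        """Toy copy-on-write snapshot of an 'origin' list of chunks."""
        def __init__(self, origin):
            self.origin = origin   # live volume, always up to date
            self.store = []        # snapshot storage chunks
            self.cow_table = {}    # origin chunk number -> index in self.store

        def write_origin(self, chunk_no, data):
            if chunk_no not in self.cow_table:
                # First write to this chunk: save the old contents before overwriting.
                self.store.append(self.origin[chunk_no])
                self.cow_table[chunk_no] = len(self.store) - 1
            self.origin[chunk_no] = data

        def read_snapshot(self, chunk_no):
            if chunk_no in self.cow_table:   # re-mapped: old data lives in the store
                return self.store[self.cow_table[chunk_no]]
            return self.origin[chunk_no]     # unmapped: origin still holds the old data

    origin = ["A", "B", "C", "D", "E"]
    snap = Snapshot(origin)
    snap.write_origin(3, "D2")  # overwrite a chunk on the live volume
    print(origin[3], snap.read_snapshot(3))  # D2 D -- origin updated, snapshot frozen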

  27. Bad Block Relocation • Detect I/O errors • Remap bad blocks to a reserved area of the device
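
A sketch of the relocation idea (hypothetical and much simpler than a real implementation): a small reserved area at the end of the device, plus a table that redirects any block that has returned an I/O error.

    class BadBlockRemapper:
        """Toy bad block relocation: redirect failed blocks into a reserved area."""
        def __init__(self, device_blocks, reserved_blocks):
            self.data_blocks = device_blocks - reserved_blocks
            self.spares = list(range(self.data_blocks, device_blocks))
            self.remap = {}  # bad block -> replacement block in the reserved area

        def resolve(self, block):
            """Return the block that should actually be accessed."""
            return self.remap.get(block, block)

        def mark_bad(self, block):
            """Called after an I/O error: assign a spare block and remember it."""
            if block not in self.remap:
                self.remap[block] = self.spares.pop(0)
            return self.remap[block]

    r = BadBlockRemapper(device_blocks=1000, reserved_blocks=8)
    r.mark_bad(42)
    print(r.resolve(42), r.resolve(43))  # 992 43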

  28. Enterprise Volume Management System • Modular, extensible system for managing storage on Linux • Integrates all aspects of volume management into a single package • Disk Partitioning (fdisk, Disk Druid) • Volume Groups (LVM) • Software RAID (MD) • File Systems (mkfs, fsck, resizefs) • More

  29. EVMS Architecture

  30. EVMS Architecture • Engine • Core library • Provides API for user interfaces • Coordinates all activities with the plug-ins • Defines common set of possible tasks • Creation • Deletion • Resize • Configuration changes

  31. EVMS Engine Architecture

  32. EVMS Engine Architecture • Volume discovery bubbles up through the layers • Local Disk Manager plug-in discovers all disk devices • Each plug-in examines the current list of devices • Claims a device by removing it from the list • Creates new devices and adds them to the list
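
The bottom-up discovery can be pictured as a loop over plug-ins, each of which consumes the devices it recognizes and emits the higher-level objects it builds from them. The sketch below is purely illustrative (a hypothetical plug-in interface, not the actual EVMS Engine API).

    class PartitionPlugin:
        """Illustrative plug-in: claims whole disks and emits their partitions."""
        def discover(self, objects):
            remaining, produced = [], []
            for obj in objects:
                if obj.startswith("disk:"):
                    disk = obj.split(":", 1)[1]
                    produced += [f"partition:{disk}1", f"partition:{disk}2"]
                else:
                    remaining.append(obj)  # not ours, leave it for other plug-ins
            return remaining + produced

    class RaidPlugin:
        """Illustrative plug-in: combines the first two partitions into a RAID device."""
        def discover(self, objects):
            parts = [o for o in objects if o.startswith("partition:")]
            others = [o for o in objects if not o.startswith("partition:")]
            if len(parts) >= 2:
                return others + parts[2:] + [f"raid:md0({parts[0]},{parts[1]})"]
            return objects

    # The Local Disk Manager seeds the list; discovery then bubbles up plug-in by plug-in.
    objects = ["disk:sda", "disk:sdb"]
    for plugin in (PartitionPlugin(), RaidPlugin()):
        objects = plugin.discover(objects)
    print(objects)
    # ['partition:sdb1', 'partition:sdb2', 'raid:md0(partition:sda1,partition:sda2)']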

  33. EVMS Engine Architecture • Plug-ins • Each plug-in recognizes a specific volume format • Partitions • Volume Groups • Software RAID • EVMS Features • File Systems

  34. EVMS Plug-ins

  35. EVMS User Interfaces • Communicate with the Engine through a well-defined API • Graphical User Interface (evmsgui) • Point-and-click • Intuitive displays • Text-Mode (evmsn) • Graphical-like for terminal windows • Same look and feel as the GUI • Command-Line (evms) • Create scripts for common tasks

  36. Clustering Support in EVMS • EVMS can operate in a cluster of machines • Assign ownership to a group of disks • Private to one cluster node • Shared by all cluster nodes • Reassign ownership during fail-over • Remote Administration • Ability to administer other nodes in the cluster from a single machine

  37. Clustering Support in EVMS • Cluster Managers • Provide membership and communication services • Used to specify fail-over policies • Linux-HA • Open-source, High-Availability cluster manager • RSCT • Reliable Scalable Cluster Technology • OpenGFS (in development) • Open-source cluster file system • Mount a file system on all cluster nodes simultaneously

  38. EVMS Platform Support • EVMS has been tested on Linux running on: • xSeries • x86 • IA64 • pSeries • PPC • PPC64 • zSeries • s390 • s390x

  39. Summary of EVMS Capabilities • EVMS integrates all aspects of disk, partition, and volume management into a single, easy-to-use system. • EVMS has no design limits on the number of disks, partitions, or volumes that it can handle. • EVMS minimizes down time due to configuration changes. • EVMS is very extensible, due to its modular design and support of plug-in modules. • (continued)

  40. Summary of EVMS Capabilities • EVMS is designed to be scalable, including use in clustering environments. • EVMS can read, write, and manipulate volumes created by other volume managers (given the proper plug-ins). • EVMS can reduce the costs associated with migrating data to Linux.

  41. EVMS Project Information • Hosted on SourceForge • http://evms.sf.net • Live CVS tree • Mailing lists • evms-announce@lists.sf.net • evms-devel@lists.sf.net • Installation instructions • Documentation • http://evms.sf.net/docs/EVMS-xSeries-2003.ppt • IRC: irc.freenode.net, #evms

  42. EVMS Demonstration