
The Ceph Distributed Storage System

Slides of my introduction to the Ceph Distributed Storage System for the SAGE@GUUG meetup in Hamburg, Germany - http://guug.de/lokal/hamburg/index.html

Lenz Grimmer

May 24, 2018

Transcript

  1. Ceph Overview
     • A distributed storage system
     • Object, block, and file storage in one unified system
     • Designed for performance, reliability and scalability
  2. Ceph Motivating Principles
     • Everything must scale horizontally
     • No single point of failure (SPOF)
     • Commodity (off-the-shelf) hardware
     • Self-manage (whenever possible)
     • Client/cluster instead of client/server
     • Avoid ad-hoc high availability
     • Open source (LGPL)
  3. Ceph Architectural Features
     • Smart storage daemons
       - Centralized coordination of dumb devices does not scale
       - Peer-to-peer, emergent behavior
     • Flexible object placement
       - “Smart” hash-based placement (CRUSH)
       - Awareness of hardware infrastructure and failure domains
     • No metadata server or proxy for finding objects
     • Strong consistency (CP instead of AP)
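
The “no metadata server for finding objects” point can be illustrated with a small sketch: a client computes an object's placement group and target OSDs purely from the object name and a small cluster map, so nothing is looked up centrally. This is a deliberately simplified stand-in for CRUSH, not the real algorithm; NUM_PGS and the helper names are made up for illustration.

```python
# Simplified illustration of hash-based placement: the location of an
# object is computed from its name plus a small cluster map, so no
# central metadata server has to be asked. This is NOT the real CRUSH
# algorithm; NUM_PGS and the helpers are illustrative.
import hashlib

NUM_PGS = 128  # hypothetical number of placement groups in a pool


def object_to_pg(object_name: str) -> int:
    """Map an object name to a placement group by hashing."""
    digest = hashlib.md5(object_name.encode()).hexdigest()
    return int(digest, 16) % NUM_PGS


def pg_to_osds(pg_id: int, osd_ids: list, replicas: int = 3) -> list:
    """Pick `replicas` distinct OSDs for a PG (toy stand-in for CRUSH)."""
    ranked = sorted(
        osd_ids,
        key=lambda osd: hashlib.md5(f"{pg_id}:{osd}".encode()).hexdigest(),
    )
    return ranked[:replicas]


# Every client computes the same answer independently, without a lookup:
pg = object_to_pg("my-object")
print(pg, pg_to_osds(pg, osd_ids=list(range(12))))
```
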
  4. MONs (Monitors)
     • Track and monitor the health of the cluster
     • Maintain a master copy of the cluster map
     • Provide consensus for distributed decision making
     • MONs do NOT serve data to clients
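
A sketch of what “MONs do not serve data” looks like from a client: with the librados Python binding (python3-rados), a client connects and queries cluster state through the monitors, while object I/O goes to the OSDs. The ceph.conf path is an assumption for a typical deployment.

```python
# Sketch: asking the MONs about cluster state with the librados Python
# binding (python3-rados). The ceph.conf path is an assumption; MONs
# answer these queries but never serve object data themselves.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Cluster-wide usage statistics
print(cluster.get_cluster_stats())

# Equivalent of `ceph status --format json`, routed through the monitors
ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "status", "format": "json"}), b'')
status = json.loads(outbuf)
print(status["health"])

cluster.shutdown()
```
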
  5. OSDs (Object Storage Daemons)
     • Store the actual data as objects on physical disks
     • Serve stored data to clients
     • Replication mechanism included
     • Minimum of 3 OSDs recommended for data replication
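
A minimal sketch of the OSD data path from a client's point of view, again using the librados Python binding: write one object into a pool and read it back. The pool name 'demo-pool' is an assumption and must already exist.

```python
# Sketch: writing and reading back one object with librados
# (python3-rados). The pool name 'demo-pool' is an assumption; the OSDs
# replicate the object transparently according to the pool settings.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('demo-pool')

ioctx.write_full('greeting', b'hello from librados\n')
print(ioctx.read('greeting'))
print(ioctx.get_stats())  # per-pool object/byte counters

ioctx.close()
cluster.shutdown()
```
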
  6. Placement Groups (PGs)
     • Help balance the data across the OSDs
     • One PG typically spans several OSDs
     • One OSD typically serves many PGs
     • Recommended: ~150 PGs per OSD
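
The per-OSD recommendation translates into a pool-level PG count via the usual rule-of-thumb formula: (number of OSDs × target PGs per OSD) ÷ replica count, rounded up to a power of two. A small sketch, using the slide's ~150 target; the helper name is illustrative.

```python
# Sketch of the rule-of-thumb PG sizing calculation:
# total PGs ~= (number of OSDs x target PGs per OSD) / replica count,
# rounded up to a power of two. The 150-per-OSD target follows the slide.
def suggested_pg_count(num_osds, replicas=3, target_pgs_per_osd=150):
    raw = num_osds * target_pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power


print(suggested_pg_count(12))  # 12 OSDs, 3x replication -> 1024
```
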
  7. CRUSH Map
     • CRUSH: Controlled Replication Under Scalable Hashing
     • MONs maintain the CRUSH map
     • The topology of any environment can be modeled (row, rack, host, dc, ...)
  8. Ceph Object Store Features
     • RESTful interface
     • S3- and Swift-compliant APIs
     • S3-style subdomains
     • Unified S3/Swift namespace
     • User management
     • Usage tracking
     • Striped objects
     • Cloud solution integration
     • Multi-site deployment
     • Multi-site replication
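
Because the object store (RADOS Gateway) speaks the S3 API, a stock S3 client works against it. A sketch with boto3; the endpoint URL and credentials are placeholders for an existing RGW user.

```python
# Sketch: talking to the S3-compatible API of the RADOS Gateway with
# boto3. Endpoint URL and credentials are placeholders for an RGW user
# created beforehand (e.g. with radosgw-admin).
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:7480',  # assumed RGW endpoint
    aws_access_key_id='ACCESS_KEY_PLACEHOLDER',
    aws_secret_access_key='SECRET_KEY_PLACEHOLDER',
)

s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello rgw\n')
obj = s3.get_object(Bucket='demo-bucket', Key='hello.txt')
print(obj['Body'].read())
```
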
  9. Ceph Block Device Features
     • Thin-provisioned
     • Images up to 16 exabytes
     • Configurable striping
     • In-memory caching
     • Snapshots
     • Copy-on-write cloning
     • Kernel driver support
     • KVM/libvirt support
     • Back-end for cloud solutions
     • Incremental backup
     • Disaster recovery (multi-site asynchronous replication)
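
A sketch of the thin-provisioning, snapshot, and copy-on-write clone workflow through the rbd Python binding (python3-rbd); the pool and image names are assumptions.

```python
# Sketch: thin-provisioned image, snapshot, and copy-on-write clone with
# the rbd Python binding (python3-rbd). Pool and image names are assumptions.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')

rbd_inst = rbd.RBD()
rbd_inst.create(ioctx, 'base-image', 10 * 1024**3)  # 10 GiB, allocated lazily

with rbd.Image(ioctx, 'base-image') as image:
    image.create_snap('golden')
    image.protect_snap('golden')  # a snapshot must be protected before cloning

# Copy-on-write clone sharing unmodified data with the parent snapshot
rbd_inst.clone(ioctx, 'base-image', 'golden', ioctx, 'clone-of-base')

ioctx.close()
cluster.shutdown()
```
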
  10. Ceph Filesystem Features
      • POSIX-compliant semantics
      • Separates metadata from data
      • Dynamic rebalancing
      • Subdirectory snapshots
      • Configurable striping
      • Kernel driver support
      • FUSE support
      • NFS/CIFS deployable
      • Use with Hadoop (replace HDFS)
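
For completeness, a sketch of file I/O through the libcephfs Python binding (python3-cephfs); in practice CephFS is usually mounted with the kernel driver or ceph-fuse, and the paths below are illustrative.

```python
# Sketch: basic file I/O through the libcephfs Python binding
# (python3-cephfs). In practice CephFS is usually mounted with the kernel
# driver or ceph-fuse; the paths below are illustrative.
import os
import cephfs

fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()

fs.mkdir('/demo', 0o755)
fd = fs.open('/demo/hello.txt', os.O_CREAT | os.O_WRONLY, 0o644)
fs.write(fd, b'hello cephfs\n', 0)
fs.close(fd)

fs.unmount()
fs.shutdown()
```
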