Slide 1

Kubernetes volume provisioning using Ceph distributed storage GCPUG.TW Kyle Bai R&D @ inwinSTACK www.inwinstack.com © 2017 Kyle Bai Theme. All Rights Reserved. kairen([email protected]) https://kairen.github.io/

Slide 2

About Me Kyle Bai Job: R&D @ inwinSTACK Description: In school I mainly wrote Java and Objective-C and focused on mobile application development, with four years of development experience; I also studied the Hadoop and Spark data-processing frameworks and Linux-related technologies. During my R&D substitute military service I have focused on OpenStack, Ceph, containers, and other Cloud Native technologies and open-source projects; in my spare time I take part in and contribute to the related project communities, and keep notes on my blog, GitHub, and GitBook. Drink Coffee Mobile Love Good! #7727 Buttocks

Slide 3

Agenda: today I will talk about
• Thursday - 7:30 PM: Ceph storage. Ceph uniquely delivers object, block, and file storage in one unified system.
• Thursday - 7:35 PM: Ceph on Kubernetes. Deploy a Ceph cluster onto a Kubernetes cluster.
• Thursday - 7:40 PM: Ceph in Kubernetes. Kubernetes integration with Ceph storage.
• Thursday - 7:55 PM: Let’s Go

Slide 4

Ceph storage Ceph uniquely delivers object, block, and file storage in one unified system.

Slide 5

Ceph distributed storage • “A Scalable, High-Performance Distributed File System” • “performance, reliability, and scalability” • A distributed object, block, and file storage platform • All components scale horizontally • No single point of failure • Hardware agnostic, commodity hardware • No vendor lock-in! • Open source (LGPL) • Total downloads: 160,015,454 • Commits: 77,799 • Contributors: 700+

Slide 6

Ceph RADOS http://events.linuxfoundation.org/sites/events/files/slides/2015-03-13-vault_0.pdf

Slide 7

Ceph cluster http://events.linuxfoundation.org/sites/events/files/slides/2015-03-13-vault_0.pdf

Slide 8

Ceph Monitor • ~5 per cluster (small odd number) • Maintain cluster membership and state • Consensus for decision making • Not part of data path

Slide 9

Ceph OSD • 10s to 1000s per cluster • One per disk (HDD, SSD, NVMe) • Serve data to clients • Intelligently peer for replication & recovery

Slide 10

Ceph MDS • Manages metadata for a POSIX-compliant shared filesystem • MDS stores metadata in RADOS • Clients stripe file data in RADOS • Only required for shared filesystem

Slide 11

Ceph data placement http://events.linuxfoundation.org/sites/events/files/slides/2015-03-13-vault_0.pdf

Slide 12

Ceph on Kubernetes Deploy a Ceph cluster onto a Kubernetes cluster.

Slide 13

K8S Cluster with Local Storage

Slide 14

K8S Cluster with External Storage

Slide 15

K8S Cluster with Converged Storage

Slide 16

K8S Cluster with External K8S Storage

Slide 17

Ceph on Kubernetes • ceph-docker: Docker files and images to run Ceph in containers. • openstack-helm: Helm charts for deploying OpenStack on Kubernetes. • rook.io: File, Block, and Object Storage Services for your Cloud-Native Environment.
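As a rough, hedged illustration of the ceph-docker route only, a single Ceph monitor can be run from the ceph/daemon image roughly as below. Everything here is an assumption-laden sketch (hostNetwork, MON_IP taken from the node, a placeholder CEPH_PUBLIC_NETWORK, emptyDir instead of persistent state); the real ceph-docker examples and the openstack-helm/rook charts handle configuration, OSDs, and persistence properly.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-mon
  labels:
    app: ceph-mon
spec:
  hostNetwork: true                   # monitors are usually exposed on the node network
  containers:
  - name: ceph-mon
    image: ceph/daemon                # image from the ceph-docker project
    args: ["mon"]                     # tell the entrypoint to start a monitor
    env:
    - name: MON_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP    # advertise the node IP as the monitor address
    - name: CEPH_PUBLIC_NETWORK
      value: "10.1.0.0/24"            # placeholder: replace with your public network CIDR
    volumeMounts:
    - name: ceph-etc
      mountPath: /etc/ceph
    - name: ceph-lib
      mountPath: /var/lib/ceph
  volumes:
  - name: ceph-etc
    emptyDir: {}                      # demo only; real deployments persist this state
  - name: ceph-lib
    emptyDir: {}
```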

Slide 18

Ceph running in Kubernetes

Slide 19

Ceph in Kubernetes Kubernetes integration with Ceph storage. https://github.com/kairen/ceph-in-k8s
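A minimal sketch of the prerequisite for the examples that follow: the Ceph client key stored as a Kubernetes Secret. The secret name ceph-secret, the namespace, and the monitor address 10.1.0.1:6789 used throughout are placeholder assumptions, not values from the talk.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret                  # assumed name, referenced by the examples below
  namespace: default
type: kubernetes.io/rbd              # secret type expected by the rbd provisioner
data:
  # base64 of the output of: ceph auth get-key client.admin
  key: "<base64-encoded Ceph client key>"
```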

Slide 20

Block vs. shared file
RBD (block): • Stripe images across entire cluster (pool) • Copy-on-write clones • Read-only snapshots • Incremental backup (relative to snapshots) • Back-end for cloud solutions • KVM/libvirt support
CephFS (shared file): • POSIX-compliant semantics • Separates metadata from data • Dynamic rebalancing • FUSE support • NFS/CIFS deployable • Use with Hadoop (replace HDFS)

Slide 21

Volumes
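A minimal sketch, assuming the in-tree rbd volume plugin of this Kubernetes era: an already-created RBD image is attached directly in the Pod spec. Monitor address, pool, and image name are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /mnt/rbd            # the RBD image is mounted here in the container
  volumes:
  - name: data
    rbd:
      monitors:
      - 10.1.0.1:6789                # placeholder Ceph monitor address
      pool: rbd                      # pool holding the image
      image: k8s-demo                # image created beforehand, e.g. rbd create k8s-demo --size 1024
      user: admin
      secretRef:
        name: ceph-secret            # Secret sketched earlier
      fsType: ext4
      readOnly: false
```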

Slide 22

Volumes
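For the shared-filesystem case, a comparable sketch with the in-tree cephfs volume plugin mounts a CephFS path into the Pod; the monitor address and path are again placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /mnt/cephfs         # the CephFS subtree is mounted here
  volumes:
  - name: data
    cephfs:
      monitors:
      - 10.1.0.1:6789                # placeholder Ceph monitor address
      path: /                        # path within CephFS to mount
      user: admin
      secretRef:
        name: ceph-secret
      readOnly: false
```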

Slide 23

Persistent Volumes for CephFS
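A hedged sketch of a statically created PersistentVolume backed by CephFS; ReadWriteMany is the point of the shared filesystem, since many nodes can mount it at once. Capacity, monitor address, and secret name are placeholder assumptions.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 10Gi                    # placeholder size
  accessModes:
  - ReadWriteMany                    # CephFS can be mounted read-write by many nodes
  persistentVolumeReclaimPolicy: Retain
  cephfs:
    monitors:
    - 10.1.0.1:6789                  # placeholder Ceph monitor address
    path: /
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false
```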

Slide 24

Persistent Volumes for RBD
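The RBD counterpart, as a sketch: a PersistentVolume pointing at a pre-created image (RBD attaches to one node at a time, hence ReadWriteOnce) and a claim that binds to it. Pool, image, and size are placeholders.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rbd-pv
spec:
  capacity:
    storage: 10Gi                    # placeholder size
  accessModes:
  - ReadWriteOnce                    # an RBD image is attached to a single node at a time
  persistentVolumeReclaimPolicy: Retain
  rbd:
    monitors:
    - 10.1.0.1:6789                  # placeholder Ceph monitor address
    pool: rbd
    image: rbd-pv-image              # image created beforehand with the rbd CLI
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                  # matches the PV above, so the claim can bind to it
```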

Slide 25

Storage Classes for RBD
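Dynamic provisioning removes the hand-made PV: a StorageClass with the in-tree kubernetes.io/rbd provisioner creates an RBD image whenever a claim asks for one. A hedged sketch; the admin/user IDs, secret names, and pool below are assumptions that have to match the actual cluster.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
provisioner: kubernetes.io/rbd       # in-tree RBD dynamic provisioner
parameters:
  monitors: 10.1.0.1:6789            # placeholder Ceph monitor address
  adminId: admin                     # Ceph client used to create images
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: kube                         # pool where images are provisioned
  userId: kube                       # Ceph client used by nodes to map images
  userSecretName: ceph-user-secret   # separate secret, needed in each namespace that creates PVCs
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering
```

A PersistentVolumeClaim then only references the class (for example storageClassName: rbd) and Kubernetes provisions and binds a fresh RBD image automatically.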

Slide 26

Storage Classes for CephFS
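Kubernetes has no in-tree CephFS provisioner, so dynamic CephFS provisioning at this time typically relied on the external cephfs-provisioner from kubernetes-incubator/external-storage. The sketch below assumes that provisioner (registered as ceph.com/cephfs) is already deployed; its parameter names come from that project, not from core Kubernetes.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs         # external cephfs-provisioner, deployed separately
parameters:
  monitors: 10.1.0.1:6789            # placeholder Ceph monitor address
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
```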

Slide 27

Any questions or ideas? Let's discuss them together. Thank You!!