Slide 1

Our journey to persistent Kubernetes storage with Rook

Slide 2

Hi! I'm Andri
Ops One AG, Zurich
- on the market as a department since 2007, spun off in 2016
- Managed Server (PaaS) for PHP, Node, Python, Ruby, Java, ...
- Managed Applications (SaaS): Nextcloud, Matomo, Discourse, Gitlab, Atlassian Stack, ...
- Managed Kubernetes (CaaS): what we'll be talking about soon :)

Slide 3

Background

Slide 4

Platform Overview
- Cockpit: customer-facing interface, CRUD for servers and their config
- Hardware: in-house Nutanix cluster, or any cloud provider supporting Terraform, or manual on-premise installation
- Managed Server: VM configuration managed by Puppet; 7th evolution, perfected ever since

Slide 5

Nutanix
- hyperconverged infrastructure
- Supermicro-based appliance with service & support
- distributed compute & storage, high availability
- overlaps with some Kubernetes features

Slide 6

Docker
- in production since 2015
- images provided by vendors like Gitlab or Discourse
- own images for exotic needs
- tightly integrated into our Managed Servers

Slide 7

Kubernetes
- started playing around in 2016
- in production since 2018
- beginner courses since 2019

Slide 8

Approach
- started with simple and uncritical workloads
- played with distributions (Rancher, OpenShift, CoreOS) and other setups (NixOS)
- decided to get it running on our existing infrastructure
- started with single-node clusters
- evaluated network, storage & load balancer
- learned the meaning behind the "Kubernetes The Hard Way" tutorial

Slide 9

Storage Evaluation
Container Storage Interface (CSI) driver
- available for Nutanix and all major cloud providers
- does not work with bare-metal setups (on-premise, or maybe our own someday)
- vendor lock-in
Cloud-native storage solution
- looked at Rook, Longhorn, StorageOS
- decided to go with Rook because the building blocks behind it (Ceph, NFS) were already proven

Slide 10

No content

Slide 11

Rook: Overview
- cloud-native storage orchestrator
- framework for 3rd-party storage providers and solutions

« Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. »

Slide 12

Rook: Project
- open source (Apache 2.0)
- CNCF incubating project
- top contributors: Cloudical, Nexenta, Red Hat, SUSE, Upbound

Slide 13

Rook: Details
- extends Kubernetes with custom types and controllers
- controlled through the established operator pattern
Rook Operator
- the brain behind any storage: CRUD, upgrades, rebalancing, health & monitoring
- not on the data path, so it can be offline for some time
Storage Providers
- made available by their respective creators
- Ceph, EdgeFS, Cassandra, CockroachDB, Minio, NFS, YugabyteDB

Slide 14

Rook: Ceph Dashboard
- can be enabled in the cluster CRD
- will expose an HTTP port as a service
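A minimal sketch of how enabling the dashboard looks inside the CephCluster spec; the service name `rook-ceph-mgr-dashboard` and the optional fields follow the Rook documentation for this era, so verify against your Rook version:

```yaml
# Fragment of a CephCluster spec (namespace rook-ceph)
spec:
  dashboard:
    enabled: true
    # optional, assumed defaults: serve under a prefix / custom port
    # urlPrefix: /ceph-dashboard
    # port: 8443
```

Once applied, the operator creates a Service for the dashboard; one way to reach it locally is `kubectl -n rook-ceph port-forward service/rook-ceph-mgr-dashboard 8443:8443`.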

Slide 15

Rook: Ceph Monitoring
- built-in metrics collectors for Prometheus
- Grafana dashboards provided
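If you run the Prometheus Operator, scraping the Ceph manager's metrics endpoint can be sketched roughly like this; the resource name, label selector, and port name are assumptions based on what the Rook operator attaches to the mgr service, so adapt them to your setup:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rook-ceph-mgr          # hypothetical name
  namespace: rook-ceph
spec:
  selector:
    matchLabels:
      app: rook-ceph-mgr       # label the operator sets on the mgr service
  namespaceSelector:
    matchNames:
      - rook-ceph
  endpoints:
    - port: http-metrics       # mgr Prometheus module, default port 9283
      interval: 30s
```

With this in place, the provided Grafana dashboards can read the Ceph metrics straight out of Prometheus.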

Slide 16

Ceph example

Slide 17

Rook: Ceph example
CustomResourceDefinitions and operator:

    git clone https://github.com/rook/rook.git
    cd rook/cluster/examples/kubernetes/ceph
    kubectl create -f common.yaml
    kubectl create -f operator.yaml

Slide 18

Rook: Ceph example
define the CephCluster:

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      cephVersion:
        image: ceph/ceph:v14.2.4-20190917
      mon:
        count: 3
      dashboard:
        enabled: true
      storage:
        useAllNodes: true
        useAllDevices: true

Slide 19

Rook: Ceph example
Ceph agent, mgr, mon & osd running:

    $ kubectl -n rook-ceph get pod
    NAME                                  READY   STATUS      RESTARTS   AGE
    rook-ceph-agent-4zkg8                 1/1     Running     0          140s
    rook-ceph-mgr-a-d9dcf5748-5s9ft       1/1     Running     0          77s
    rook-ceph-mgr-a-dashboard-5s9ft       1/1     Running     0          77s
    rook-ceph-mon-a-7d8f675889-nw5pl      1/1     Running     0          105s
    rook-ceph-mon-b-856fdd5cb9-5h2qk      1/1     Running     0          94s
    rook-ceph-mon-c-57545897fc-j576h      1/1     Running     0          85s
    rook-ceph-operator-6c49994c4f-9csfz   1/1     Running     0          141s
    rook-ceph-osd-0-7cbbbf749f-j8fsd      1/1     Running     0          23s
    rook-ceph-osd-1-7f67f9646d-44p7v      1/1     Running     0          24s
    rook-ceph-osd-2-6cd4b776ff-v4d68      1/1     Running     0          25s
    rook-ceph-osd-prepare-node1-vx2rz     0/2     Completed   0          60s
    rook-ceph-osd-prepare-node2-ab3fd     0/2     Completed   0          60s
    rook-ceph-osd-prepare-node3-w4xyz     0/2     Completed   0          60s
    rook-discover-dhkb8                   1/1     Running     0          140s

Slide 20

Rook: Ceph example
CephBlockPool & StorageClass; as of now, Rook will satisfy any PersistentVolumeClaims:

    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: replicapool
      namespace: rook-ceph
    spec:
      replicated:
        size: 2
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: rook-ceph-block
    provisioner: ceph.rook.io/block
    parameters:
      blockPool: replicapool
      clusterNamespace: rook-ceph
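A claim against this class is then written like any other PVC; this sketch (the claim name is illustrative) is what "Rook will satisfy any PersistentVolumeClaims" means in practice:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                        # hypothetical name
spec:
  storageClassName: rook-ceph-block     # the StorageClass defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

On submission, the Rook provisioner allocates a block image in `replicapool` and binds a PersistentVolume to the claim, which a pod can then mount as a regular volume.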

Slide 21

Summary
- Rook is a controller for existing software like Ceph
- today, we're using Rook with Ceph and NFS
- EdgeFS sounds promising for multi-cloud setups
- more than happy with the project and its process
- still, pondering the pros and cons of using CSI

Slide 22

Thank you!