Rook: Intro and Deep Dive With Ceph
Satoru Takeuchi, Cybozu, Inc.
June 17th, 2025
Agenda
● Introduction to Rook and Ceph
● Block and Filesystem Storage
● Object Storage
● Other Features
● Project Health
Introduction to Rook and Ceph
What is Rook?
● An open source K8s operator to manage Ceph storage
● For Admins
○ Deploy, manage, and upgrade Ceph clusters via CRs (a minimal sketch follows below)
● For Users
○ Consume Ceph storage via PVC and OBC CRs
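A minimal CephCluster CR could look roughly like the sketch below; the name, namespace, and Ceph image tag are illustrative assumptions rather than values from the slides:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph                  # illustrative cluster name and namespace
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # example Ceph container image
  dataDirHostPath: /var/lib/rook   # where Rook keeps config/state on each node
  mon:
    count: 3                       # three MONs for quorum
  storage:
    useAllNodes: true              # let Rook create OSDs on every node
    useAllDevices: true            # ...and on every empty device it finds

Applying this single CR is what "deploy by CR" means in practice: the Rook operator reconciles it into MON, MGR, and OSD Pods.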
What is Ceph?
● All-in-one open source distributed storage platform

Name | Type
RBD | Block storage
CephFS | Large-scale shared filesystem storage
RGW | S3-compatible object storage
CephNFS | Exports CephFS and S3 objects as NFS
Remote Replications

Storage Type | Feature
RBD | RBD mirroring
CephFS | CephFS mirroring
RGW | RGW multisite

(diagram: a Ceph cluster in Region A replicates to a Ceph cluster in Region B)
Ceph’s Architecture
● OSD daemons
○ Manage data
● MON daemons
○ Manage the cluster’s state
● MGR daemons
○ Provide additional features

(diagram: a Ceph cluster with MON and MGR daemons, OSD daemons backed by disks, and storage pools exposed as network storage such as RBD)
Ceph’s Characteristics
● High scalability
○ Real example: ~1800 OSDs, over 5 PiB
● High durability
○ Replication or Erasure Coding
○ Configurable failure domains (e.g. rack); see the pool sketch below
● High availability
○ e.g. Add/remove/replace OSDs online
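Durability settings live on each pool; a hedged CephBlockPool sketch (the pool name, namespace, and rack failure domain are illustrative assumptions):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool        # illustrative pool name
  namespace: rook-ceph
spec:
  failureDomain: rack      # place each replica in a different rack
  replicated:
    size: 3                # three replicas; an erasureCoded stanza is the alternative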
Rook’s Architecture
● Rook operator
○ Manages Rook/Ceph clusters
○ Provisions a Pod for each Ceph daemon
● Ceph CSI
○ A CSI driver for Ceph
○ Provisions storage from Ceph

(diagram: the Rook operator manages MON, MGR, and OSD Pods in a Rook/Ceph cluster; Ceph CSI provisions network storage such as RBD from the storage pools)
Example: Provisioning and Expanding a Cluster
1. Deploy a minimum Ceph cluster
2. Expand the cluster (see the storage excerpt below)

(diagram: the Rook operator creates an OSD on disk0, then disk1 is added to expand the cluster)
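Step 2 can be expressed by editing the CephCluster CR; a sketch of its spec.storage section with illustrative node and device names:

# excerpt of a CephCluster spec; node0, disk0, and disk1 are illustrative
spec:
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
      - name: node0
        devices:
          - name: disk0    # used by the initial minimum cluster
          - name: disk1    # added later; the operator provisions a new OSD on it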
Supported Configurations

Storage | Volume Mode | Access Mode
RBD | Block, Filesystem | RWO, RWOP, ROX
CephFS | Filesystem | RWX, RWO, ROX, RWOP
CephNFS | Same as CephFS | Same as CephFS
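For example, an RBD volume in Block mode could be requested with a PVC like this sketch (the StorageClass name rook-ceph-block is an assumption):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce                   # RWO; RWOP and ROX are also supported for RBD
  volumeMode: Block                   # request a raw block device instead of a filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block   # assumed RBD-backed StorageClass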
Additional Features

Storage | Volume expansion, snapshot, and cloning | Static provisioning | QoS
RBD | ✅ | ✅ | ✅
CephFS | ✅ | ✅ |
CephNFS | ✅ | |
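As one example of these features, a snapshot of an RBD-backed PVC could be requested roughly as follows (the snapshot class and PVC names are assumptions):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass   # assumed class for the RBD CSI driver
  source:
    persistentVolumeClaimName: rbd-pvc               # assumed existing PVC to snapshot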
Example: Consuming a Block Volume
1. Create an RBD pool
2. Consume a block volume

(diagram: the Rook operator and Ceph CSI act on the Rook/Ceph cluster)
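A StorageClass roughly like the following ties Ceph CSI to the RBD pool from step 1; this is sketched after Rook's example manifests, and the namespace rook-ceph, pool replicapool, and secret names are assumptions:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com   # <operator namespace>.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph                    # namespace of the Rook/Ceph cluster
  pool: replicapool                       # the RBD pool created in step 1
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
allowVolumeExpansion: true
reclaimPolicy: Delete

A PVC that names this StorageClass (like the Block-mode PVC sketched earlier) then gets its volume carved out of the RBD pool by Ceph CSI.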
How to Provision and Consume RGW?
● Use the ObjectBucket (OB) and ObjectBucketClaim (OBC) CRs; a sketch follows below
○ Similar to PV and PVC for block and filesystem storage

(diagram: a user creates a PVC and gets a PV bound to block or filesystem storage; likewise, a user creates an OBC and gets an OB bound to an RGW bucket)
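A hedged ObjectBucketClaim sketch; the claim name and StorageClass name are assumptions, and the StorageClass would be one backed by a CephObjectStore:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket
spec:
  generateBucketName: my-bucket        # prefix for the generated bucket name
  storageClassName: rook-ceph-bucket   # assumed StorageClass pointing at an RGW object store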
Example: Consuming a Bucket
1. Create an RGW pool
2. Create a bucket
3. Consume the created bucket

(diagram: the Rook operator acts on the Rook/Ceph cluster)
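Step 1 maps to a CephObjectStore CR; a minimal sketch with illustrative names and replica counts:

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store           # illustrative object store name
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3              # matches the "RGW pool (3 replicas)" shown in the slides
  gateway:
    port: 80
    instances: 1           # number of RGW daemon Pods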
Step 3: Consume the Created Bucket

(diagram: the user creates the bucket; the Rook operator creates a Secret holding the access key and secret key and a ConfigMap holding the URL; the my-app Pod uses them as environment variables and accesses the bucket in the RGW pool with 3 replicas)

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest   # illustrative image; not shown on the slide
    envFrom:
    - configMapRef:
        name: my-bucket    # ConfigMap created by Rook (bucket URL)
    - secretRef:
        name: my-bucket    # Secret created by Rook (access key / secret key)
Another Interface to Access RGW
● OB and OBC are not the official K8s way
● Rook supports the Container Object Storage Interface (COSI)
○ The official K8s way
○ Similar to CSI for block and filesystem storage
● COSI will replace OB and OBC in the future
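A rough sketch of what a COSI bucket request looks like, assuming Rook's COSI driver and the COSI controller are deployed; the class and claim names are illustrative, and the COSI API is still alpha:

apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: my-cosi-bucket
spec:
  bucketClassName: rook-ceph-bucket-class   # assumed BucketClass backed by the Rook COSI driver
  protocols:
    - s3                                    # request an S3-protocol bucket from RGW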
Other Features
External Cluster
● Consume external Ceph clusters from a Kubernetes cluster

(diagram: application Pods in a K8s cluster consume PVCs and OBCs through Rook and Ceph CSI from an external Ceph cluster in a non-K8s environment, or from a Rook/Ceph cluster in another K8s cluster)
Remote Replications

Ceph Feature | Custom Resource
RBD mirroring | CephRBDMirror
CephFS mirroring | CephFilesystemMirror
RGW multisite | CephObjectRealm

(diagram: Rook/Ceph clusters in K8s clusters in region A and region B each have a CephRBDMirror CR, and an RBD pool is replicated between them)
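A hedged sketch of the RBD mirroring pieces on one side; names are illustrative and the peer bootstrap secret exchange is omitted:

apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror
  namespace: rook-ceph
spec:
  count: 1                 # number of rbd-mirror daemon Pods
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool        # the pool to replicate
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  mirroring:
    enabled: true
    mode: image            # mirror individual RBD images in this pool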
Managed PDB Configuration
● Rook creates PDBs for each failure domain
● Only one failure domain is allowed to be down at once
● e.g. when the failure domain is "node":

apiVersion: policy/v1
kind: PodDisruptionBudget
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: rook-ceph-osd
  …

(diagram: Rook creates the PDB; the admin drains node0, which hosts OSD0's Pod, and a concurrent drain of node1, hosting OSD1's Pod, is blocked)
Administration Tools
● Toolbox Pod
○ A Pod for running arbitrary Ceph commands
● kubectl rook-ceph krew plugin
○ Runs handy Ceph operations

(diagram: Rook's CRs cover a subset of all Ceph features; the admin reaches the remaining features via the toolbox Pod and kubectl rook-ceph)
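For instance, the krew plugin can pass commands straight to the Ceph CLI, along the lines of "kubectl rook-ceph ceph status" to check cluster health (the exact subcommand set depends on the plugin version).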
Project Health
Philosophy
● Support latest Ceph and K8s
● Make Ceph the best storage platform for K8s!
Stability
● Marked as stable 6 years ago
● Many upstream users running in production
● Many downstream deployments running in production
Release Cycle
● Major version: Always “1” for now
● Minor version: Once every 4 months
● Patch version: Biweekly or on demand
Active Community
● GitHub and Slack channel
● 500+ contributors to the GitHub project
○ e.g. Clyso, Cybozu, IBM/Red Hat, and Upbound
● Monthly community meeting
● CNCF Graduated project
GitHub | https://github.com/rook/rook
Slack | https://slack.rook.io
Containers and Helm charts | Docker Hub, Quay.io, GitHub Container Registry (GHCR)
Website and Docs | https://rook.io
Try Rook!