Local Persistent Storage Motivations

Not suitable for all use cases!
• Application availability becomes tied to node and storage availability

Data gravity (co-locating data and application)
• Distributed datastores and filesystems (Cassandra, GlusterFS, etc)
• Large caches

Cost
• Increase disk utilization in baremetal environments
• Reduce operator cost for managing distributed storage systems and supporting infrastructure (networking hardware, etc)

Performance
• Local SSDs in cloud environments
metadata:
  name: sleepypod
spec:
  nodeName: node-1
  volumes:
  - name: data
    hostPath:
      path: /mnt/some-disk
  containers:
  - name: sleepycontainer
    image: gcr.io/google_containers/busybox
    command:
    - sleep
    - "6000"
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: false

Portability
• Path (and data) is specific to the node
• Need to manually schedule pods to specific nodes, bypassing the scheduler
• Paths can change across clusters and different environments
Local Persistent Volumes Solution

• Persistent Volume and Persistent Volume Claim objects separate storage details from pod consumption (sketched below)

Accounting
• Only one Persistent Volume Claim can be bound to a Persistent Volume
• API objects with managed lifecycles

Security
• Only administrators can create Persistent Volumes
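To make the separation concrete, here is a minimal sketch (not from the original slides) of a pod consuming storage through a claim instead of a node-specific hostPath; the claim name, storage class, and size are illustrative assumptions:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-claim                # illustrative claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage  # assumed storage class
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: sleepypod
spec:
  containers:
  - name: sleepycontainer
    image: gcr.io/google_containers/busybox
    command:
    - sleep
    - "6000"
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-claim       # pod references the claim, not a node path

Unlike the hostPath example above, the pod spec carries no nodeName and no host-specific path; those details live in the administrator-created Persistent Volume that the claim binds to.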
1.7 Alpha Details

Local volume plugin
• Can only be used as a Persistent Volume
• Scheduler is aware of the volume's node constraints (see the example below)

External static provisioner for local volumes
• Runs as a DaemonSet on every node
• Discovers local volumes mounted under configurable directories
• Automatically creates, cleans up, and destroys local Persistent Volumes
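As a hedged sketch of what an administrator-created local Persistent Volume might look like under the 1.7 alpha, where the node constraint is expressed through the volume.alpha.kubernetes.io/node-affinity annotation; the PV name, node name, capacity, path, and storage class here are illustrative assumptions:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
  annotations:
    # alpha mechanism: node affinity serialized as JSON in an annotation,
    # so the scheduler keeps consuming pods on node-1
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          {"matchExpressions": [
            {"key": "kubernetes.io/hostname",
             "operator": "In",
             "values": ["node-1"]}
          ]}
        ]
      }
    }'
spec:
  capacity:
    storage: 100Gi                  # assumed size of the local disk
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # assumed storage class
  local:
    path: /mnt/disks/vol1           # mount point discovered on the node

When the external static provisioner is running, PVs like this are created automatically for each discovered mount; writing them by hand is only needed if the provisioner is not deployed.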
1.7 Limitations

• PVC binding happens before pod scheduling and doesn't consider pod resource and scheduling requirements (e.g., CPU requests, pod affinity)
• Cannot specify multiple local volumes in a single pod spec
• External provisioner cannot correctly detect volume capacity for new volumes created after the provisioner has started
Roadmap

• Block devices and raw partitions as a local volume source, and for pod consumption
• Local volume health monitoring, taints and tolerations
• Inline PV (use local disk as ephemeral storage)
• Dynamic provisioning