openSUSE Kubic is the SUSE Container as a Service Platform based on openSUSE Tumbleweed. In this talk, we want to present what it is, how it works, and how people can get involved.
requirements • VMs add significant resource and management overhead • Running apps in containers is cool, but we need new tools
Introducing Project Kubic
Why Project Kubic?
• Minimal images designed for one specific use case • Transactional Updates • Focused on large deployments • Reduced end-user interaction • Ready to run • Rolling release • Based on openSUSE Tumbleweed RPMs • Own installer, different setup
(all features including Kubernetes) • Containers for the Administration Dashboard are still missing • Installation images are building • Images for KVM, Xen, VMware are still missing • Integration into openQA is missing • Documentation: available only for SUSE CaaS Platform
• No simple bash login prompt where admins have to configure everything! • Btrfs with snapshots and rollback for transactional updates • Read-only filesystem with overlayfs for /etc • Cloud-init for initial configuration (Network, Accounts, Salt) • SALT for full system configuration • Administration Node with dashboard to manage cluster
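As a rough illustration of this design, the following Python sketch inspects a running node: it checks that the root filesystem is mounted read-only, that /etc is an overlayfs mount, and which snapshots snapper knows about. It assumes findmnt and snapper are available, as on a default Base OS install.

    #!/usr/bin/env python3
    # Sketch: inspect the Kubic filesystem layout at runtime.
    # Assumes findmnt and snapper are installed on the Base OS.
    import subprocess

    def mount_info(path):
        # findmnt prints "FSTYPE OPTIONS" for the mount backing 'path'
        out = subprocess.run(
            ["findmnt", "--noheadings", "--output", "FSTYPE,OPTIONS",
             "--target", path],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print("/    :", mount_info("/"))      # expected: btrfs, mounted ro
    print("/etc :", mount_info("/etc"))   # expected: overlay with upperdir

    # List the btrfs snapshots that snapper manages for rollback
    subprocess.run(["snapper", "list"], check=True)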
installation ISO image for Administration or Cluster Node • RPM for automatic mass installation of Cluster Nodes (PXE, USB) • Provides /srv/tftpboot files for PXE boot and installation • Can be used to create a bootable USB stick or DVD with syslinux • Cloud and virtualization: provide ready-to-run images • KVM, VMware, Xen
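To give an idea of what the provided /srv/tftpboot files enable, here is a sketch that writes a minimal pxelinux configuration from Python. The kernel and initrd names and the autoyast URL are placeholders, not the actual file names shipped in the RPM.

    #!/usr/bin/env python3
    # Sketch: write a minimal pxelinux config for mass installation.
    # Kernel/initrd names and the autoyast URL are placeholders.
    from pathlib import Path

    PXE_CONFIG = """\
    DEFAULT kubic
    PROMPT 0
    TIMEOUT 50

    LABEL kubic
      KERNEL linux
      APPEND initrd=initrd autoyast=http://admin.example.com/autoyast.xml
    """

    Path("/srv/tftpboot/pxelinux.cfg").mkdir(parents=True, exist_ok=True)
    Path("/srv/tftpboot/pxelinux.cfg/default").write_text(PXE_CONFIG)
    print("wrote /srv/tftpboot/pxelinux.cfg/default")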
Well-known format used by several major distributions • Proven toolchain from building RPMs to customer delivery • Many customers already have policies and a toolchain for RPM updates • Signed, easy to verify • Verification of the installed system possible • Delta-RPMs to save bandwidth
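Because everything is delivered as signed RPMs, the standard rpm tooling can be used for both checks; a minimal sketch (the package names are only examples):

    #!/usr/bin/env python3
    # Sketch: verify signatures and on-disk state of RPM packages.
    # Package names are placeholders.
    import subprocess

    # Check the GPG signature of a package file before installing it
    subprocess.run(["rpm", "--checksig", "some-package.rpm"])

    # Compare an installed package against the RPM database
    # (reports changed files, permissions, checksums, ...)
    subprocess.run(["rpm", "--verify", "some-installed-package"])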
as far as possible • Best case: no additional configuration by the sysadmin necessary • Cloud-init to configure the Base OS • Network • SALT • SSH host key and user keys • Password • Timezone • Keyboard • NTP
Can be done via SALT instead
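A sketch of what such a cloud-init configuration could look like, generated here as cloud-config YAML from Python. The hostname, SSH key, and Salt master address are placeholders, and the exact set of cloud-init modules used by Kubic may differ.

    #!/usr/bin/env python3
    # Sketch: generate a cloud-config user-data file for a Cluster Node.
    # All values (hostname, SSH key, Salt master) are placeholders.
    import yaml  # PyYAML, third-party

    user_data = {
        "hostname": "cluster-node-01",
        "timezone": "Europe/Berlin",
        "users": [{
            "name": "root",
            "ssh_authorized_keys": ["ssh-ed25519 AAAA... admin@example.com"],
        }],
        "ntp": {"servers": ["ntp1.example.com"]},
        # Point the Salt minion at the Administration Node
        "salt_minion": {"conf": {"master": "admin.example.com"}},
    }

    with open("user-data", "w") as f:
        f.write("#cloud-config\n")
        yaml.safe_dump(user_data, f, default_flow_style=False)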
and snapshots are read-only • Subvolumes to store data are read-write • Example: /var/log, /var/cache, /var/crash and similar directories • Use overlayfs for /etc (for cloud-init and salt) • Introduce /var/lib/overlay/{work,etc} for overlayfs • Full stateless system: • Remove content of /var/lib/overlay/etc/ and /var/lib/cloud => Not recommended, since even /etc/machine-id can get lost!
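The overlayfs construction can be sketched as follows. The directory names follow the slide; on a real system the mount is set up automatically early during boot, this only illustrates how the pieces fit together.

    #!/usr/bin/env python3
    # Sketch: how /etc is assembled as an overlay of the read-only
    # snapshot plus a writable upper directory. Normally done at boot.
    import subprocess

    lower = "/etc"                  # read-only /etc from the snapshot
    upper = "/var/lib/overlay/etc"  # writable layer, survives updates
    work  = "/var/lib/overlay/work" # overlayfs scratch directory

    subprocess.run([
        "mount", "-t", "overlay", "overlay",
        "-o", f"lowerdir={lower},upperdir={upper},workdir={work}",
        "/etc",
    ], check=True)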
subvolumes
→ /cloud-init-config - Configuration files for cloud-init
→ /var/lib/docker - Storage for containers
→ /var/{cache,crash,log} - Storage for system data
→ /var/lib/stateless/{work,etc} - Storage for overlayfs
→ /.snapshots/1/snapshot - Initial installation of Base OS
→ /.snapshots/2/snapshot - Base OS after first update
→ /.snapshots/3/snapshot - Base OS after second update
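A quick way to check this layout on a running system (sketch; requires root and btrfsprogs):

    #!/usr/bin/env python3
    # Sketch: list btrfs subvolumes and the default (bootable) snapshot.
    import subprocess

    # All subvolumes below the root volume
    subprocess.run(["btrfs", "subvolume", "list", "/"], check=True)

    # The snapshot the system will boot into next
    subprocess.run(["btrfs", "subvolume", "get-default", "/"], check=True)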
be disabled • Maintenance Window • Policy-defined updates • Standard RPMs with zypper, snapper and btrfsprogs • Delivery in the same way as for openSUSE Tumbleweed • Process to get updates, fixes and features is the same as for Tumbleweed • SMT as local proxy
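For the SMT case, nodes point their repositories at the local proxy instead of the public mirrors; a hedged sketch using plain zypper (URL and repository alias are placeholders, and a real SMT setup may instead use registration):

    #!/usr/bin/env python3
    # Sketch: point a node at a local SMT mirror instead of public repos.
    # URL and alias are placeholders for a real deployment.
    import subprocess

    subprocess.run([
        "zypper", "addrepo", "--refresh",
        "http://smt.example.com/repo/kubic",
        "kubic-local",
    ], check=True)

    subprocess.run(["zypper", "refresh"], check=True)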
that: • Is atomic • Either fully applied or not at all • The update does not influence your running system • Can be rolled back • If the upgrade fails or if the update is not compatible, you can quickly restore the situation as it was before the upgrade
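On Kubic this is built on top of btrfs snapshots: the update is installed into a new snapshot while the running system stays untouched, and the new snapshot only becomes active at the next reboot. A sketch of the administrator-facing flow, assuming the transactional-update tool and snapper as shipped with the system:

    #!/usr/bin/env python3
    # Sketch: apply a transactional update and roll it back if needed.
    # The update lands in a new btrfs snapshot; the running system is
    # not touched until the next reboot.
    import subprocess

    # Create a new snapshot and run the distribution upgrade inside it
    subprocess.run(["transactional-update", "dup"], check=True)

    # Activate the new snapshot
    subprocess.run(["systemctl", "reboot"], check=True)

    # If the new snapshot does not work: boot the old one and roll back
    # subprocess.run(["transactional-update", "rollback"], check=True)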
to boot the system • Necessary to configure and run the stack • Not a general-purpose OS • New packages will be introduced if needed • Packages will be removed if no longer needed • No guarantee of a stable ABI • Additional RPMs are installable, but can break automatic updates => Customer workload has to run in containers
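Since the Base OS stays minimal, anything beyond the stack itself is expected to be started as a container rather than installed as an RPM; a trivial sketch (the image name is only an example):

    #!/usr/bin/env python3
    # Sketch: run a customer workload as a container instead of
    # installing additional RPMs on the Base OS. Image name is an example.
    import subprocess

    subprocess.run([
        "docker", "run", "--detach", "--name", "web",
        "--publish", "80:80",
        "nginx:stable",
    ], check=True)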
or Leap • Installer usable for Administration and Cluster Nodes • RPM-based • Autoinstallation for Cluster Nodes • Create autoyast.xml on, or download it from, the Administration Node • Have a minimal image doing the AutoYaST installation by • booting from a USB disk • PXE boot
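For the autoinstallation case, the profile only needs to be reachable from the node at install time; a sketch that fetches autoyast.xml from the Administration Node and checks that it is well-formed XML before use (the URL is a placeholder):

    #!/usr/bin/env python3
    # Sketch: fetch an AutoYaST profile from the Administration Node and
    # check that it parses as XML. The URL is a placeholder.
    import urllib.request
    import xml.etree.ElementTree as ET

    URL = "http://admin.example.com/autoyast.xml"

    with urllib.request.urlopen(URL) as resp:
        profile = resp.read()

    ET.fromstring(profile)  # raises ParseError if the profile is broken
    print("autoyast.xml fetched and well-formed,", len(profile), "bytes")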
with SLES 12 • Only visible if no automatic proposal is possible • Create btrfs filesystem with subvolumes • No other root filesystem supported • Install Base OS into first snapshot • Configure and install bootloader
kdump initrd • No YaST module available, configuration during installation or manually • AutoYaST • Needs to run in first-stage-only mode • Configuration options from the second stage are not available