Slide 1

Slide 1 text

openSUSE Kubic – SUSE Container as a Service Platform on openSUSE Tumbleweed
Alexander Herzig – [email protected]
Federica Teodori – [email protected]
Thorsten Kukuk – [email protected]

Slide 2

Slide 2 text

Introducing Project Kubic – Why Project Kubic?
The application delivery supply chain has changed

Slide 3

Slide 3 text

Introducing Project Kubic – Why Project Kubic?
Micro-services are fun
● Monolithic approaches do not meet our requirements
● VMs add significant resource and management overhead
● Running apps in containers is cool, but we need new tools

Slide 4

Slide 4 text

Introducing Project Kubic – Why Project Kubic?
Setting up a cluster should be easy, and so should maintaining it

Slide 5

Slide 5 text

Introducing Project Kubic – Why Project Kubic?
● Container-based architectures are growing
● Containers change IT infrastructure
● Containers speed up application delivery
● Containers provide portability across environments
● Containers make applications scale faster

Slide 6

Slide 6 text

Introducing Project Kubic – Why Project Kubic?
● Re-designing the operating system for container needs – Kubic rethinks the OS design
● Combining a powerful stack based on:
  – openSUSE Tumbleweed
  – SaltStack
  – the Docker project
  – Kubernetes

Slide 7

Slide 7 text

Join Project Kubic – Build the next-generation container OS with us

Slide 8

Slide 8 text

openSUSE Kubic

Slide 9

Slide 9 text

openSUSE Kubic – What is this?
● SUSE Container as a Service Platform based on openSUSE Tumbleweed

Slide 10

Slide 10 text

In a nutshell:
● OS focused only on containers
● Minimal images designed for one specific use case
● Transactional updates
● Focused on large deployments
● Reduced end-user interaction
● Ready to run
● Rolling release
● Based on openSUSE Tumbleweed RPMs
● Own installer, different setup

Slide 11

Slide 11 text

As a picture: the OS is small but powerful enough to run as many containers as needed, and the containers get all the required support from the OS

Slide 12

Slide 12 text

System infrastructure overview: Administration Node, Cluster Nodes (etcd + Kubernetes Master), SCC/CDN, SMT, Log Server

Slide 13

Slide 13 text

How far are we?
● SUSE MicroOS is ported (all features, including Kubernetes)
● Containers for the Administration Dashboard are still missing
● Installation images are building
● Images for KVM, Xen, VMware are still missing
● Integration into openQA is missing
● Documentation: available only for SUSE CaaS Platform

Slide 14

Slide 14 text

Design

Slide 15

Slide 15 text

Highlights
● Ready to run out of the box
● No simple bash login prompt where admins have to configure everything!
● Btrfs with snapshots and rollback for transactional updates
● Read-only filesystem with overlayfs for /etc
● Cloud-init for initial configuration (network, accounts, Salt)
● Salt for full system configuration
● Administration Node with a dashboard to manage the cluster

Slide 16

Slide 16 text

Delivery (planned)
● Installer for bare metal:
  ● Standard installation ISO image for Administration or Cluster Nodes
  ● RPM for automatic mass installation of Cluster Nodes (PXE, USB)
    ● Provides /srv/tftpboot files for PXE boot and installation
    ● Can be used to create a bootable USB stick or DVD with syslinux
● Cloud and virtualization: provide ready-to-run images
  ● KVM, VMware, Xen
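As an illustration, writing the installation ISO to a USB stick could look like the sketch below; the ISO file name and the target device /dev/sdX are placeholders and depend on the actual download and hardware.

  # write the installation ISO to a USB stick (all data on the stick is lost)
  dd if=openSUSE-Kubic-DVD-x86_64-Current.iso of=/dev/sdX bs=4M status=progress && sync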

Slide 17

Slide 17 text

Package format: stay with RPM
● RPM advantages
  ● Well-known format used by several major distributions
  ● Proven, working toolchain from building RPMs to customer delivery
  ● Many customers already have policies and a toolchain for RPM updates
  ● Signed, easy to verify
  ● Verification of the installed system is possible
  ● Delta RPMs to save bandwidth

Slide 18

Slide 18 text

Two-step system configuration
● The system should be pre-configured as far as possible
  ● Best case: no additional configuration by the sysadmin is necessary
● Cloud-init to configure the base OS:
  ● Network
  ● Salt
  ● SSH (host key and user keys)
  ● Password
  ● Timezone
  ● Keyboard
  ● NTP (can be done via Salt instead)
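A minimal cloud-config sketch for this first step might look like the following; the hostname, the SSH key and the Salt master address are made-up placeholders, and the available modules depend on the cloud-init version shipped with Kubic.

  #cloud-config
  hostname: cluster-node-01              # placeholder node name
  timezone: Europe/Berlin
  ssh_authorized_keys:
    - ssh-rsa AAAA... admin@example      # placeholder public key
  salt_minion:
    conf:
      master: admin.example.com          # placeholder: the Administration Node as Salt master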

Slide 19

Slide 19 text

Btrfs as filesystem – Layout (1/2)
● The base OS and snapshots are read-only
● Subvolumes that store data are read-write
  ● Example: /var/log, /var/cache, /var/crash and similar directories
● Use overlayfs for /etc (for cloud-init and Salt)
  ● Introduce /var/lib/overlay/{work,etc} for overlayfs
● Fully stateless system:
  ● Remove the content of /var/lib/overlay/etc/ and /var/lib/cloud
  => Not recommended, since even /etc/machine-id can get lost!
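As a rough sketch, the /etc overlay corresponds to a mount like the one below, using the /var/lib/overlay/{work,etc} directories mentioned above; the exact mount options used by Kubic may differ.

  # /etc of the read-only snapshot is the lower layer, the writable upper layer lives in /var/lib/overlay
  mount -t overlay overlay -o lowerdir=/etc,upperdir=/var/lib/overlay/etc,workdir=/var/lib/overlay/work /etc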

Slide 20

Slide 20 text

Btrfs as filesystem – Layout (2/2)
/@/ - standard subvolumes
→ /cloud-init-config - configuration files for cloud-init
→ /var/lib/docker - storage for containers
→ /var/{cache,crash,log} - storage for system data
→ /var/lib/stateless/{work,etc} - storage for overlayfs
→ /.snapshots/1/snapshot - initial installation of the base OS
→ /.snapshots/2/snapshot - base OS after the first update
→ /.snapshots/3/snapshot - base OS after the second update
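These subvolumes can be inspected and managed with the standard btrfs tooling; the commands below are only a generic example, not part of the Kubic setup itself.

  # list all subvolumes of the root filesystem
  btrfs subvolume list /
  # create an additional read-write subvolume, e.g. for container storage
  btrfs subvolume create /var/lib/docker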

Slide 21

Slide 21 text

Update
● Transactional updates
● Automatic updates
  ● Can be disabled
  ● Maintenance window
  ● Policy-defined updates
● Standard RPMs with zypper, snapper and btrfsprogs
● Delivery in the same way as for openSUSE Tumbleweed
  ● The process to get updates, fixes and features is the same as for Tumbleweed
  ● SMT as a local proxy
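Assuming the automatic update is driven by the transactional-update.timer systemd unit (as on current openSUSE MicroOS/Kubic systems), enabling or disabling it would look like this:

  # enable automatic transactional updates
  systemctl enable --now transactional-update.timer
  # disable them, e.g. to apply updates only in a maintenance window
  systemctl disable --now transactional-update.timer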

Slide 22

Slide 22 text

Definition
A "transactional update" is an update that:
● Is atomic
  ● It is either fully applied or not at all
  ● The update does not influence the running system
● Can be rolled back
  ● If the update fails or is not compatible, you can quickly restore the state from before the update
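Conceptually, the mechanism can be sketched with plain snapper, zypper and btrfs calls; this is only an illustration of the idea, not the actual transactional-update implementation, and the snapshot number 42 is a placeholder.

  # 1. create a new writable snapshot of the current root filesystem
  snapper create --read-write --description "update" --print-number   # prints e.g. 42
  # 2. apply the update inside that snapshot; the running system stays untouched
  zypper --root /.snapshots/42/snapshot dup
  # 3. make the updated snapshot the default for the next boot, then reboot
  #    (exact syntax depends on the btrfs-progs version)
  btrfs subvolume set-default /.snapshots/42/snapshot
  reboot
  # rollback: point the default subvolume back to the previous snapshot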

Slide 23

Slide 23 text

Packages
● openSUSE Kubic contains all packages:
  ● necessary to boot the system
  ● necessary to configure and run the stack
● Not a general-purpose OS
  ● New packages will be introduced if needed
  ● Packages will be removed if no longer needed
  ● No guarantee of a stable ABI
● Additional RPMs are installable, but can break automatic updates
=> Customer workload has to run in containers
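So instead of adding RPMs to the host, a workload is started as a container; the image and port below are just an example.

  # run the workload as a container instead of installing it as an RPM on the host
  docker run -d --name web -p 80:80 nginx:stable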

Slide 24

Slide 24 text

Installer – Only for Bare Metal

Slide 25

Slide 25 text

Different kinds of setups
● Administration Node with Dashboard
● Cluster Nodes
  ● No services
● The installed RPMs are always the same
  ● The difference is the running services and containers

Slide 26

Slide 26 text

Installation media
● Installation media similar to openSUSE Tumbleweed or Leap
  ● Installer usable for Administration and Cluster Nodes
  ● RPM based
● Auto-installation for Cluster Nodes
  ● Create autoyast.xml on, or download it from, the Administration Node
  ● Have a minimal image doing the AutoYaST installation by:
    ● booting from a USB disk
    ● PXE boot
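A PXE boot entry for such an automatic Cluster Node installation could look roughly like the pxelinux fragment below; the server name, paths and the autoyast.xml URL are placeholders.

  # /srv/tftpboot/pxelinux.cfg/default (fragment)
  label kubic-node
    kernel boot/x86_64/loader/linux
    append initrd=boot/x86_64/loader/initrd install=http://admin.example.com/install/kubic autoyast=http://admin.example.com/autoyast.xml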

Slide 27

Slide 27 text

One-page installer
● Partition the hard disk
  ● Same behavior as with SLES 12
  ● Only visible if no automatic proposal is possible
● Create a Btrfs filesystem with subvolumes
  ● No other root filesystem is supported
● Install the base OS into the first snapshot
● Configure and install the bootloader

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

Differences compared to Tumbleweed

Slide 30

Slide 30 text

Differences to openSUSE Tumbleweed (1/2)
● Update of packages
  ● Read-only root filesystem
  ● transactional-update [up|dup|patch] instead of zypper
  ● Recommended: transactional-update dup
● Additional packages or PTFs
  ● transactional-update ptf [install|remove] ...
● Rollback
  ● transactional-update rollback [number] instead of snapper rollback [number]
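In practice, the day-to-day commands on a Kubic node look like this; the package name and the snapshot number are placeholders.

  transactional-update dup                    # full distribution update into a new snapshot
  transactional-update ptf install foo.rpm    # install an additional package or PTF
  transactional-update rollback 42            # make snapshot 42 the default again
  reboot                                      # changes become active after the next reboot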

Slide 31

Slide 31 text

Differences to openSUSE Tumbleweed (2/2)
● transactional-update will generate the kdump initrd
● No YaST module available; configuration during installation or manually
● AutoYaST
  ● Needs to run in first-stage-only mode
  ● Configuration options from the second stage are not available
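For reference, a first-stage-only AutoYaST profile skeleton might start like this; it is a minimal generic sketch, not a Kubic-specific profile.

  <?xml version="1.0"?>
  <!DOCTYPE profile>
  <profile xmlns="http://www.suse.com/1.0/yast2ns"
           xmlns:config="http://www.suse.com/1.0/configns">
    <general>
      <mode>
        <!-- fully automatic installation, no confirmation screen -->
        <confirm config:type="boolean">false</confirm>
      </mode>
    </general>
    <!-- partitioning, software selection etc. follow here -->
  </profile>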

Slide 32

Slide 32 text

Where to get it?
ISO image: http://download.opensuse.org/tumbleweed/iso/
Documentation: https://www.suse.com/betaprogram/caasp-beta/#documentation
Git: https://github.com/kubic-project
Everything else: standard openSUSE devel projects

Slide 33

Slide 33 text

Thank you. Questions?

Slide 34

Slide 34 text

No content