
From VM to Container To Serverless

William El Kaim

December 11, 2016

  1. This Presentation is part of the Enterprise Architecture Digital Codex

    http://www.eacodex.com/ Copyright © William El Kaim 2016 2
  2. Plan What is Virtualization? • From Virtual Machine to Container

    • What is Docker? • Docker Tools Ecosystem • The Open Container Initiative • From Container to Unikernel • What about Microcontainer? • The Rise of Serverless Computing • Conclusion Copyright © William El Kaim 2016 3
  3. The Need for Virtual Machine • Deployment of server applications

    is getting increasingly complicated since software can have many types of requirements: • Dependencies on installed software and libraries • Dependencies on running services • Dependencies on a specific operating system • Dependencies on resources • minimum amount of available memory ("requires 1GB of available memory") • ability to bind to specific ports ("binds to port 80 and 443") • To solve these issues, the main technical answer was: run each individual application on a separate virtual machine. Source: InfoQ Copyright © William El Kaim 2016 4
  4. What is Virtualization? • Virtualization offers a hardware abstraction layer

    that can adjust to the specific CPU, memory, storage, and network needs of applications on a per server basis • Without virtualization, the application and operations architecture teams design, acquire and install the servers, storage and networking needed for each application Copyright © William El Kaim 2016 5
  5. What is Virtualization? Operating System Virtualization • Only one OS

    at a time • Reduces OS sprawl • Reduces in-memory consumption • Best for • Applications that do not coexist well with others • Individual workloads • SaaS • Examples • Parallels Virtuozzo Containers • Sun Solaris Containers • OpenVZ • Unix chroot command • Linux V-Server OS Management OS Virtualization Technology Virtual Machine Virtual Machine Application(s) Application(s) x86 Server SAN / NAS / DAS Copyright © William El Kaim 2016 6
  6. What is Virtualization? Bare Metal Hypervisor • Best for heterogeneous

    environments • Development and testing environments • Virtual desktop • Legacy server consolidation • Virtualizes access to hardware (CPU, memory, storage) • assisted by Intel and AMD • Each VM has a guest OS • Reduces server sprawl • Examples • Citrix XenServer (Linux) • KVM • Parallels Server • VMware ESX • Microsoft Hyper-V • Xen Bare Metal Virtualization Technology (Hypervisor) Virtual Machine Virtual Machine Guest OS Guest OS Application(s) Application(s) x86 Server SAN / NAS / DAS Copyright © William El Kaim 2016 7
  7. Hypervisor Type 1 vs. Type 2 • In their 1974

    article, Formal Requirements for Virtualizable Third Generation Architectures, Gerald J. Popek and Robert P. Goldberg classified two types of hypervisor: • Type-1, native or bare-metal hypervisors • These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare metal hypervisors. • The first hypervisors, which IBM developed in the 1960s, were native hypervisors. • Ex: Citrix XenServer, Microsoft Hyper-V and VMware ESX/ESXi. • Type-2 or hosted hypervisors • These hypervisors run on a conventional operating system just as other computer programs do. A guest operating system runs as a process on the host. • Type-2 hypervisors abstract guest operating systems from the host operating system. • Ex: VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU. Copyright © William El Kaim 2016 8
  8. Virtualization Technologies • OS virtualization • Linux-Vserver, LXC, OpenVZ •

    Virtualization Software • KVM, QEMU, VMware, VirtualBox, VirtualPC, Xen, and Bochs • Bare Metal Hypervisor • Citrix XenServer, Hyper-V, KVM, Parallels RAS, Proxmox VE, VMware ESX (vSphere, vCloud), Xen Copyright © William El Kaim 2016 9 Magic Quadrant for x86 Server Virtualization Infrastructure
  9. Hyperconverged Integrated System • Integrated systems are combinations of server,

    storage and network infrastructure, sold with management software that facilitates the provisioning and management of the combined unit. • Integrated Stack System (ISS): Server, storage and network hardware integrated with application software to provide appliance or appliance-like functionality. • Examples include IBM PureApplication System, Oracle Exadata Database Machine and Teradata. • Integrated Infrastructure System (IIS): Server, storage and network hardware integrated to provide shared compute infrastructure. • Examples include VCE Vblock, HP ConvergedSystem and Lenovo Converged System (formerly PureFlex). • HyperConverged Integrated System (HCIS): Tightly coupled compute, network and storage hardware that dispenses with the need for a regular storage area network (SAN). • Examples include Gridstore, Nimboxx, Nutanix, Pivot3, Scale Computing and SimpliVity. Copyright © William El Kaim 2016 10 Magic Quadrant for Integrated Systems
  10. Plan • What is Virtualization? From Virtual Machine to Container

    • What is Docker? • Docker Tools Ecosystem • The Open Container Initiative • From Container to Unikernel • What about Microcontainer? • The Rise of Serverless Computing • Conclusion Copyright © William El Kaim 2016 11
  11. Virtual Machines Are Expensive in Two Ways • Money •

    You need to predict the instance size you will need, because if you need more resources later, you need to stop the VM to upgrade it (or over-pay for resources you don't end up needing). • Unless you use Solaris Zones, like on Joyent, which can be resized dynamically. • Time • Many operations related to virtual machines are typically slow! • Booting takes minutes, snapshotting can take minutes, creating an image takes minutes. • Enter the container. Source: InfoQ Copyright © William El Kaim 2016 12
  12. What Are Containers? • Containers, also known as operating-system-level

    virtualization, are a lightweight approach to virtualization that provides only the bare minimum that an application requires to run and function as intended. • Super-minimalist virtual machines that are not running on a hypervisor. • Items usually bundled into a container include: • Application, dependencies, libraries, binaries and configuration files • Containerizing an application enables it to run reliably in different environments by abstracting away the operating system and the physical infrastructure. • Containerized applications share the kernel of the host operating system with other containers, and the shared part of the OS is read-only. • Inside a container, there is often a single executable service or microservice. Copyright © William El Kaim 2016 13
  13. What Are Containers? • Containers are the products of operating

    system virtualization. • Lightweight virtual environment that groups and isolates a set of processes and resources such as memory, CPU, disk, etc., from the host and any other containers. • The isolation guarantees that any processes inside the container cannot see any processes or resources outside the container. • Only one app (or microservice) at a time • Run as isolated processes on the host operating system • Portable and efficient • Reduces in-memory consumption • Best for • Multi-tenant applications • Elastic applications (automatic scaling) • Examples • Docker, CoreOS, JeOS, RancherOS, Snappy Ubuntu Core, RedHat Atomic, Mesosphere DCOS, VMware Photon Container OS Container Container Application(s) Application(s) x86 Server SAN / NAS / DAS Copyright © William El Kaim 2016 14
  14. Containers vs. Virtual Machines • Virtual machines contain a complete

    operating system and applications. Hypervisor-based virtualization is resource intensive: a VM can take up several GB, depending on the guest OS. • Virtual machines use hypervisors to share and manage hardware, while containers share the kernel of the host OS to access the hardware. • Virtual machines have their own kernel and do not share the kernel of the host OS, hence they are isolated from each other at a deep level. • Virtual machines residing on the same server can run different operating systems. One VM can run Windows while the VM next door might be running Ubuntu. • Containers are bound by the host operating system: containers on the same server use the same OS. • Containers virtualize the underlying operating system while virtual machines virtualize the underlying hardware. Copyright © William El Kaim 2016 16 Source: Flow-CI
  15. Why Using Containers? • The average container size is within

    the range of tens of MB while VMs can take up several gigabytes. • Therefore a server can host significantly more containers than virtual machines. • Running containers is less resource intensive than running VMs, so you can add more computing workload onto the same server. • Provisioning containers takes only a few seconds or less; therefore, the data center can react quickly to a spike in user activity. • Containers enable you to easily allocate resources to processes and to run your application in various environments. • Using containers can decrease the time needed for development, testing, and deployment of applications and services. • Testing and bug tracking also become less complicated since there is no difference between running your application locally, on a test server, or in production. • Containers are a very cost-effective solution. They can potentially help you decrease your operating cost (fewer servers, less staff) and your development cost (develop for one consistent runtime environment). • Container-based virtualization is a great option for microservices, DevOps, and continuous deployment. Copyright © William El Kaim 2016 18 Source: Flow-CI
  16. Containers Risks • Security. • Containers share the kernel, other

    components of the host operating system, and they have root access. This means that containers are less isolated from each other than virtual machines, and if there is a vulnerability in the kernel it can jeopardize the security of the other containers as well. • Virtual machines only share the hypervisor, which has less functionality and is less prone to attacks than the shared kernels of the containers. The system hardware is presented to the VMs in a virtualized form, so intrusions, viruses, and other malicious activities cannot spread over to other VMs. • Less flexibility in operating systems. • You need to start a new server to be able to run containers with different operating systems, while virtual machines with any kind of OS can live next to each other on the same server. This might not be a problem for hosting providers, but for complex enterprise applications this can be a serious constraint. • Networking. • Deploying containers in a sufficiently isolated way while maintaining an adequate network connection is complex. Copyright © William El Kaim 2016 19
  17. Container Providers • Unix/Linux Based • Docker is the most

    popular container technology. • LXC • LXD • Solaris Zones • RKT • BSD Jails • Microsoft provides two types of container solutions • Windows Server Containers and Hyper-V Containers. • The main difference between the two is that Windows Server Containers, just like Docker, share the kernel with the container host and the other containers, while Hyper-V Containers do not. Copyright © William El Kaim 2016 21
  18. Container Orchestration • Container orchestration platforms empower users to easily

    deploy, manage, and scale multi-container applications in large clusters without having to worry about which server will host a particular container. • Container cluster orchestration is still a very competitive space: • Amazon ECS • Apache Mesos • Azure Container Service • CoreOS Fleet • Diego • Docker Swarm • Hashicorp Nomad • Kubernetes (from Google, now open source) • Marathon • OpenStack Magnum Copyright © William El Kaim 2016 23
  19. Container Orchestration The Cloud Native Computing Foundation (CNCF) • The

    Cloud Native Computing Foundation is a Linux Foundation project • Container packaged: In order to improve the overall developer experience, foster code reuse and simplify operations • Dynamically managed: Actively scheduled and managed by a central orchestrating process to radically improve machine efficiency • Micro-services oriented: Loosely coupled with dependencies explicitly described through service endpoints for overall agility and maintainability of applications • Just as the OCI targets container image portability, the CNCF targets cloud application portability… Copyright © William El Kaim 2016 24 Source: CNCF
  20. Container Orchestration The Cloud Native Computing Foundation (CNCF) • Projects

    under management • Kubernetes: an open-source system for automating deployment, scaling, and management of containerized applications. • Prometheus: an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Copyright © William El Kaim 2016 25
  21. Container Networking • There are currently two proposed standards for

    configuring network interfaces for Linux containers: • The Container Network Model (CNM) is a specification proposed by Docker, adopted by projects such as libnetwork, with integrations from projects and companies such as Cisco Contiv, Kuryr, Open Virtual Networking (OVN), Project Calico, VMware and Weave. • The Container Network Interface (CNI) is a container networking specification proposed by CoreOS and adopted by projects such as Apache Mesos, Cloud Foundry, Kubernetes, Kurma and rkt. There are also plugins created by projects such as Contiv Networking, Project Calico and Weave. • These models promote modularity, composability and choice by fostering an ecosystem of innovation by third-party vendors who deliver advanced networking capabilities. • The orchestration of network micro-segmentation can become simple API calls to attach, detach and swap networks. • Containers can belong to multiple networks, and each container can publish different services in different networks. Copyright © William El Kaim 2016 26 Source: TheNewStack
  22. Container Networking Container Network Model • Libnetwork is the canonical

    implementation of the CNM specification. • Libnetwork provides an interface between the Docker daemon and network drivers. • The network controller is responsible for pairing a driver to a network. • Each driver is responsible for managing the network it owns, including services provided to that network. • With one driver per network, multiple drivers can be used concurrently with containers connected to multiple networks. • Drivers are defined as being either native (built-in to libnetwork or Docker supported) or remote (third party plugins). • The native drivers are none, bridge, overlay and MACvlan. • Remote drivers may bring any number of capabilities. Drivers are also defined as having a local scope (single host) or global scope (multi-host). Copyright © William El Kaim 2016 27 Source: TheNewStack
  23. Container Networking Container Network Interface • CNI was created as

    a minimal specification, built alongside a number of network vendor engineers to be a simple contract between the container runtime and network plugins. • A JSON schema defines the expected input and output from CNI network plugins. • Multiple plugins may be run at one time with a container joining networks driven by different plugins. • Networks are described in configuration files, in JSON format, and instantiated as new namespaces when CNI plugins are invoked. • CNI plugins support two commands to add and remove container network interfaces to and from networks. • Add gets invoked by the container runtime when it creates a container. • Delete gets invoked by the container runtime when it tears down a container instance. Copyright © William El Kaim 2016 28 Source: TheNewStack
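    The JSON network configuration the slide mentions can be made concrete with a small, hedged sketch: the snippet below builds a minimal CNI-style network config and checks for the fields the specification requires of every network config (cniVersion, name, type); the network name, subnet and plugin choice are illustrative, not from a real deployment.

```python
import json

# A minimal CNI network configuration of the kind the container
# runtime reads before invoking a plugin. Values are illustrative.
cni_config = """
{
    "cniVersion": "0.3.1",
    "name": "example-net",
    "type": "bridge",
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
    }
}
"""

def validate_cni_config(text):
    """Parse a config and check the keys every CNI network config needs."""
    conf = json.loads(text)
    for key in ("cniVersion", "name", "type"):
        if key not in conf:
            raise ValueError(f"missing required key: {key}")
    return conf

conf = validate_cni_config(cni_config)
print(conf["type"])  # the plugin the runtime will invoke for ADD/DEL
```

    The "type" field names the plugin binary; the runtime executes it with an ADD command when a container is created and a DEL command when the container is torn down, as described above.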
  24. Container Networking CNI vs. CNM • Both democratize the selection

    of which type of container networking may be used • Both are driver-based, or plugin-based, models for creating and managing network stacks for containers. • Both allow multiple network drivers to be active and used concurrently • Each provides a one-to-one mapping of the network to that network’s driver. • Both models allow containers to join one or more networks and allow the container runtime to launch the network in its own namespace, segregating the application/business logic of connecting the container to the network to the network driver. • Both models provide separate extension points, aka plugin interfaces, for network drivers (to create, configure and connect networks) and IPAM (to configure, discover, and manage IP addresses). Copyright © William El Kaim 2016 29 Source: TheNewStack
  25. Container Networking CNI vs. CNM • CNM • does not

    provide network drivers access to the container’s network namespace. • The benefit here is that libnetwork acts as a broker for conflict resolution. • Is designed to support the Docker runtime engine only. • With CNI’s simpler approach, it’s been argued that it’s comparatively easier to create a CNI plugin than a CNM plugin. • CNI • does provide drivers with access to the container network namespace. • supports integration with third-party IPAM and can be used with any container runtime. Copyright © William El Kaim 2016 30 Source: TheNewStack
  26. Plan • What is Virtualization? • From Virtual Machine to

    Container What is Docker? • Docker Tools Ecosystem • The Open Container Initiative • From Container to Unikernel • What about Microcontainer? • The Rise of Serverless Computing • Conclusion Copyright © William El Kaim 2016 33
  27. Docker: Package Once Deploy Anywhere • Docker is an open

    platform for developers and sysadmins to build, ship, and run distributed applications. • Consists of • Docker Engine, a portable, lightweight runtime and packaging tool • Docker Hub, a cloud service for sharing applications and automating workflows. • Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. Copyright © William El Kaim 2016 34
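    The "package once, deploy anywhere" idea above can be sketched with a minimal, hypothetical Dockerfile; the base image, file names and port below are illustrative, not taken from the deck:

```dockerfile
# Illustrative only: bake the app and its dependencies into one image
FROM python:3-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# The single service this container is meant to run
EXPOSE 8000
CMD ["python", "app.py"]
```

    A typical flow would then be `docker build -t myapp .` to assemble the image with Docker Engine and `docker push` to share it through a registry such as Docker Hub, so the same artifact moves unchanged from development through QA to production.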
  28. Docker: Two Existing Technologies Bundled • LXC: Linux Containers, which

    allow individual processes to run at a higher level of isolation than regular Unix processes. • The term used for this is containerization: a process is said to run in a container. Containers support isolation at the level of: • File system: a container can only access its own sandboxed file system (chroot-like), unless specifically mounted into the container's file system. • User namespace: a container has its own user database (i.e. the container's root does not equal the host's root account) • Process namespace: within the container, only the processes that are part of that container are visible • Network namespace: a container gets its own virtual network device and virtual IP (so it can bind to whatever port it likes without taking up its host's ports). • AUFS: advanced multi-layered unification filesystem, which can be used to create union, copy-on-write file systems. Source: InfoQ Copyright © William El Kaim 2016 36
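    The AUFS copy-on-write idea can be illustrated with a toy model (not real filesystem code): each image layer is a dict mapping paths to contents, lookups fall through from the top layer down, and writes only ever land in the container's own top layer.

```python
from collections import ChainMap

# Toy model of a union (layered) filesystem such as AUFS: the shared,
# read-only base image sits below a per-container writable layer.
base_image = {"/etc/config": "defaults", "/bin/app": "v1 binary"}
container_layer = {}  # starts empty, unique to this container

# ChainMap reads fall through the layers in order; writes go to the
# first (top) map only -- the copy-on-write behavior.
fs = ChainMap(container_layer, base_image)

print(fs["/etc/config"])          # falls through to the base image

fs["/etc/config"] = "tuned"       # "write" lands in the container layer

print(fs["/etc/config"])          # now shadowed by the container's copy
print(base_image["/etc/config"])  # shared base image is unchanged
```

    This is why many containers can share one base image on disk: only their small top layers differ.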
  29. Docker: An Asset for DevOps • Docker allows each development

    team to implement services using whatever language, framework or runtime they deem appropriate. • The only requirement they have to get their service to production is to provide a Docker image (plus some basic run configuration in a YAML file) to the Ops Team. Source: Tom Leach Copyright © William El Kaim 2016 37
  30. Docker: An Asset for DevOps • The Ops Team’s responsibilities

    are now restricted to simply building and maintaining a pipeline for deploying Docker containers, without needing to concern themselves with what code each container actually contains. • The contents of the Docker image are solely the responsibility of the development team. • This allows the Ops Team to focus on core deployment problems. • Moreover, this arrangement allows engineering teams to scale. • You can add more and more development teams and, as long as every shippable service is bundled in a Docker image, you add no additional cognitive load to the Ops Team. Source: Tom Leach Copyright © William El Kaim 2016 38
  31. Docker Pros and Cons • Docker Advantages • Assets are

    baked into an immutable image at build time • No deploy-time dependencies on 3rd party repository • Docker registry is simple and easy to scale • Dependencies simple, explicit and direct • Rollback is trivial • Docker Misconceptions (source: LockerDome) • If I learn Docker then I don't have to learn the other systems stuff! • You should have only one process per Docker container! • If I use Docker then I don't need a configuration management (CM) tool! • I have to use Docker in order to get these speed and consistency advantages! Source: Tom Leach Copyright © William El Kaim 2016 39
  32. Docker Advantages • It's very lightweight. • Booting up a

    Docker container has very little CPU and memory overhead and is very fast, almost comparable to starting a regular process. • Not only is running a container fast; building an image and snapshotting the file system are as well. • Amazon Lambda is built on Docker and usage of the service is billed every 100 ms. • It works in already virtualized environments. • You can run Docker inside an EC2 instance, a Rackspace VM or VirtualBox. • On Mac and Windows, use Vagrant. • Docker containers are portable to any operating system that runs Docker. • Whether it's Ubuntu or CentOS, if Docker runs, your container runs. Source: InfoQ Copyright © William El Kaim 2016 40
  33. Plan • What is Virtualization? • From Virtual Machine to

    Container • What is Docker? Docker Tools Ecosystem • The Open Container Initiative • From Container to Unikernel • What about Microcontainer? • The Rise of Serverless Computing • Conclusion Copyright © William El Kaim 2016 41
  34. Microsoft and Docker • Hyper-V Containers will ensure code running

    in one container remains isolated and cannot impact the host operating system or other containers running on the same host • powered by Hyper-V virtualization • While Hyper-V containers offer an additional deployment option between Windows Server Containers and the Hyper-V virtual machine, you will be able to deploy them using the same development, programming and management tools you would use for Windows Server Containers Source: Azure Blog Copyright © William El Kaim 2016 50
  35. Microsoft and Docker • Nano Server: The Nucleus of Modern

    Apps and Cloud • OS for the primary purpose of powering born-in-the-cloud applications. • The result is Nano Server, a minimal footprint installation option of Windows Server that is highly optimized for the cloud, including containers. • This small footprint makes Nano Server an ideal complement for Windows Server Containers and Hyper-V Containers, as well as other cloud-optimized scenarios. • Nano Server focuses on two scenarios: • Born-in-the-cloud applications – support for multiple programming languages and runtimes. (e.g. C#, Java, Node.js, Python, etc.) running in containers, virtual machines, or on physical servers. • Microsoft Cloud Platform infrastructure – support for compute clusters running Hyper-V and storage clusters running Scale-out File Server. • You can read more about the technology on the Windows Server blog. Source: Azure Blog Copyright © William El Kaim 2016 51
  36. TIBCO BusinessWorks Container Edition Copyright © William El Kaim 2016

    52 http://www.tibco.com/products/businessworks-ce
  37. Other Docker Tools • Docker Load Balancing • Open Source:

    Nginx and HAProxy • Free: Netscaler CPX • Commercial: Appcito CAFE for Docker, Netscaler CPX, Nginx Plus • Cloud Computing Stack • Openstack / Eucalyptus • Docker Tools • Clocker is an open source project which lets you spin up a Docker Cloud. Copyright © William El Kaim 2016 53
  38. Other Docker Tools • Dockersh: Dockersh lets multiple users connect

    to a given box, with each user running a shell spawned from a separate Docker container. • https://github.com/Yelp/dockersh • DockerUI: a Web front end that lets you handle, from within a Web browser, many tasks normally managed from the command line. • https://github.com/crosbymichael/dockerui • Shipyard: Shipyard uses the Citadel cluster management toolkit to facilitate management of Docker container clusters that span multiple hosts. • https://github.com/shipyard/shipyard Source: InfoWorld Copyright © William El Kaim 2016 54
  39. Other Docker Tools • Kitematic: makes Docker useful as a

    desktop-environment developer’s tool for OS X-based programmers. Bought by Docker. • https://github.com/kitematic/kitematic • Other solutions for Mac: DVM, Docker OS X, and OS X Installer • Logspout: routes container-app logs to a single central location, such as a single JSON object or a streamed endpoint available through an HTTP API. • https://github.com/progrium/logspout • Autodock: deploys new containers as fast as possible by determining which servers in a given Docker cluster have the least load. • https://github.com/cholcombe973/autodock • DIND (Docker-in-Docker): a way for you to run Docker within Docker containers. • https://github.com/jpetazzo/dind Copyright © William El Kaim 2016 55 Source: InfoWorld
  40. Resources: The New Stack eBook Series Copyright © William El

    Kaim 2016 56 http://thenewstack.io/ebookseries/
  41. Plan • What is Virtualization? • From Virtual Machine to

    Container • What is Docker? • Docker Tools Ecosystem The Open Container Initiative • From Container to Unikernel • What about Microcontainer? • The Rise of Serverless Computing • Conclusion Copyright © William El Kaim 2016 57
  42. Open Container Initiative • The Open Container Initiative (OCI) •

    Is a lightweight, open governance structure (project) • Formed under the auspices of the Linux Foundation • For the express purpose of creating open industry standards around container formats and runtimes • Was launched on June 22, 2015. Copyright © William El Kaim 2016 58
  43. Open Container Initiative • OCI aims to meld ecosystems towards

    an open standard: • Users should be able to package their application once and have it work with any container runtime • The standard should fulfill the requirements of the most rigorous security and production environments • The standard should be vendor neutral and developed in the open • The OCI currently contains two specifications • Runtime Specification (runtime-spec): outlines how to run a "filesystem bundle" that is unpacked on disk. • Image Specification (image-spec). At a high-level an OCI implementation would download an OCI Image then unpack that image into an OCI Runtime filesystem bundle. • At this point the OCI Runtime Bundle would be run by an OCI Runtime. Copyright © William El Kaim 2016 59 https://www.opencontainers.org/
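    The runtime-spec's "filesystem bundle" described above is driven by a config.json file at the bundle root. The skeletal example below shows a few representative fields built and checked in Python; the entrypoint and env values are illustrative, not a complete runnable bundle.

```python
import json

# A skeletal OCI runtime-spec config.json, as unpacked from an OCI
# image into a filesystem bundle. Values are illustrative only.
oci_config = {
    "ociVersion": "1.0.0",
    "root": {"path": "rootfs", "readonly": True},   # unpacked filesystem
    "process": {
        "cwd": "/",
        "args": ["/bin/app"],            # hypothetical entrypoint
        "env": ["PATH=/usr/bin:/bin"],
    },
}

# An OCI runtime reads this file from the bundle directory and runs
# the described process inside the unpacked root filesystem.
bundle_config = json.dumps(oci_config, indent=2)
print(bundle_config.splitlines()[1])  # first field of the document
```

    The image-spec then standardizes how the image that produces such a bundle is laid out and distributed, which is what lets any OCI runtime run it.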
  44. To Fork or Not to Fork Docker • Discussions about

    a split from Docker are now underway among several Docker ecosystem vendors and end users. • Expressing frustration with Docker’s management of Docker Engine, technologists at these companies are exploring ways to address various issues around supporting enterprise Docker deployments. • Several options are under consideration, including the possibility of forking the open source Docker Engine altogether. • According to a number of sources close to recent discussions, representatives are involved from companies such as Red Hat, Google, CoreOS, Huawei and two large end-user customers. Copyright © William El Kaim 2016 61 Source: TheNewStack
  45. The Issue: Docker Product Mgt. • A worry we’ve heard

    repeated from the Docker ecosystem is how Docker’s aggressive release schedule puts third-party system providers at odds with their own customer base. • Docker consistently breaks backward compatibility • Docker uses its Docker Engine as a product, not as a component that the community uses to build out its own services • This product-based approach means that the operator is forced to fix backward compatibility that breaks when Docker introduces new innovations or merges components like Swarm into the Docker Engine. • The long-simmering frustration has culminated with Docker’s recent inclusion of Docker Swarm in Docker 1.12. • With Swarm, Docker Engine allows users to manage complex containerized applications without additional software, using the same command-line structure and syntax that developers are familiar with when using Docker containers. • The Docker orchestration capabilities are opt-in; they must be activated by the user, though not opting in may lead to backward compatibility issues down the road. Copyright © William El Kaim 2016 62 Source: TheNewStack
  46. Red Hat CRI-O (ex-OCID) • Red Hat CRI-O • Previously named Open

    Container Initiative Daemon (OCID) • Is a set of projects that provides Kubernetes with the ability to obtain and run container images by way of a version of the core of the Docker runtime (the "runC" project) that has been modified to fit Kubernetes' needs. • Is an implementation of the Kubernetes standard container runtime interface. In order to run containers, the daemon needs to be able to pull, store and execute the container images. Copyright © William El Kaim 2016 63
  47. Red Hat CRI-O (ex-OCID) • CRI-O could become a reference implementation

    of a container engine • provide a free and open source option for running OCI containers at scale. • It will include runc, the container runtime based on libcontainer, which Docker donated to the OCI for use as a free standard. • It will also include the code necessary for pushing images to and pulling them from repositories hosted by container registries. • It will support the Container Network Interface, built by CoreOS, for modeling plug-ins independently from the engine hosting the containers. • Yet the component that is OCID’s raison d’être is called oci-runtime. • An implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers. • OCID is not implemented to be able to build an OCI image, so you still need Docker for that. Copyright © William El Kaim 2016 64
  48. Red Hat CRI-O (ex-OCID) • At a high level, the scope

    of cri-o is: • Support multiple image formats including the existing Docker image format • Support for multiple means to download images including trust & image verification • Container image management (managing image layers, overlay filesystems, etc) • Container process lifecycle management • Monitoring and logging required to satisfy the CRI • Resource isolation as required by the CRI Copyright © William El Kaim 2016 65
  49. Plan • What is Virtualization? • From Virtual Machine to

    Container • What is Docker? • Docker Tools Ecosystem • The Open Container Initiative From Container to Unikernel • What about Microcontainer? • The Rise of Serverless Computing • Conclusion Copyright © William El Kaim 2016 66
  50. From Container to Unikernel • Cloud computing has been pioneering

    the business of renting computing resources in large data centers to multiple (and possibly competing) tenants. • The basic enabling technology for the cloud was OS virtualization • Allowing customers to multiplex VMs (virtual machines) on a shared cluster of physical machines (= a self-contained computer, booting a standard operating-system kernel and running unmodified applications just as if it were executing on a physical machine). • A key driver of the growth of cloud computing in the early days was server consolidation. • Existing applications were often installed on physical hosts that were individually underutilized, and virtualization made it feasible to pack them onto fewer hosts without requiring any modifications or code recompilation. • While operating-system virtualization was undeniably useful, it added another layer to an already highly layered software stack. • All of these layers sat beneath the application code. Copyright © William El Kaim 2016 67 Source: Amir Chaudhry
  51. From Container to Unikernel • Containers did represent a major

    progression from virtual machines: • Very small VMs allowing much higher server density by removing redundant or unnecessary operating system elements from the VMs themselves. • Nicely packaged VM stacks, which can easily be transferred, replicated, and controlled, ensuring high levels of portability. • Small VM software stacks, removing the problem and tedium of building a large stack of version-specific operating systems. • Extremely fast startup times that can facilitate a more flexible infrastructure, allowing greater latitude to respond to the needs of the moment. • Two main issues with containers • Security. The security attack surface of a “shared kernel” strategy has its weakest link in that “shared kernel” itself. If one malicious hacker manages to violate that shared kernel, all instances that employ that shared kernel are potentially compromised. • Container Management. The complexities of managing Docker and containers in production are one of the greater challenges that come with its adoption. Copyright © William El Kaim 2016 68 Source: Russell Pavlicek
  52. From Container to Unikernel • There was a need for

    an efficient, fast, easy-to-manage and secure solution: • Disentangling applications from the OS • Breaking up OS functionality into modular libraries • Linking only the system functionality your app needs • Targeting alternative platforms from a single codebase Copyright © William El Kaim 2016 69 Source: Xen Project
  53. What Are Unikernels? • Unikernels are specialized machine images built

    from a modular stack adding system libraries and configuration to application code • Every application is compiled into its own specialized OS that runs on the cloud or embedded devices • Also called “library operating systems” or “cloud operating systems” • Unikernels implement the bare minimum of the traditional operating system functions; just enough to enable the application it powers. • Combine many of the advantages of Docker-like container systems with the security footprint of hypervisors and a much smaller attack surface within each VM. Copyright © William El Kaim 2016 70
  54. Unikernel Landscape • Arrakis • Merged with BarrelFish • Barrelfish

    • Barrelfish is a new research operating system being built from scratch and released by ETH Zurich in Switzerland, originally in collaboration with Microsoft Research and now partly supported by HP Enterprise Labs, Huawei, Cisco, Oracle, and VMware. • Explore how to structure an OS for future multi- and many-core systems. • ClickOS • A high-performance, virtualized software middlebox platform based on open source virtualization. • Early performance analysis shows that ClickOS VMs are small (5MB), boot quickly (as little as 20 milliseconds), add little delay (45 microseconds) and more than 100 can be concurrently run while saturating a 10Gb pipe on an inexpensive commodity server. Copyright © William El Kaim 2016 71
  55. Unikernel Landscape • Clive • OS designed to work in

    distributed and cloud computing environments. • There is no software stack in the cloud. Applications and services are compiled along with libraries that permit them to run on the bare hardware. • System interfaces are designed along a CSP-like style. Applications and components talk through channels, and channels are bridged to/from the network, pipes, and any other I/O artifact. • Drawbridge (Microsoft) • A research prototype of a new form of virtualization for application sandboxing. • Drawbridge combines two core technologies: a picoprocess, which is a process-based isolation container with a minimal kernel API surface, and a library OS, which is a version of Windows enlightened to run efficiently within a picoprocess. • Fugue • System for building, optimizing and enforcing cloud infrastructure. • Consists of a modular, statically typed language (Ludwig), a “kernel” for automating cloud infrastructure operations (the Fugue Conductor), and a Command Line Interface (the Fugue CLI) to initiate processes. Copyright © William El Kaim 2016 72
  56. Unikernel Landscape • HaLVM • The Haskell Lightweight Virtual Machine

    (HaLVM) is a port of the Glasgow Haskell Compiler tool suite that enables developers to write high-level, lightweight VMs that can run directly on the Xen Project hypervisor. • IncludeOS • A minimal, service oriented, includeable library operating system for cloud services. Currently a research project for running C++ code on virtual hardware. • LING (also called ErlangOnXen) • is highly compatible with Erlang/OTP and understands .beam files. • Developers can create code in Erlang and deploy LING unikernels. • LING removes the majority of vector files, uses only three external libraries, does not support OpenSSL and provide a read only filesystem. Copyright © William El Kaim 2016 73
  57. Unikernel Landscape • MiniOS • Xen Project provides MiniOS, a

    basic Unikernel provided in source form which can be modified and expanded to jump start your own Unikernel project. • ClickOS and Rump kernels are among the Unikernel systems which leveraged MiniOS to start their own projects. • MirageOS • Incubated by Xen Project • A clean-slate library operating system that constructs unikernels for secure, high-performance network applications across a variety of cloud computing and mobile platforms. • There are now almost 100 MirageOS libraries and a growing number of compatible libraries within the wider OCaml ecosystem. • OSv • A new OS designed specifically for cloud VMs from Cloudius Systems. • Able to boot in less than a second, OSv is designed from the ground up to execute a single application on top of any hypervisor, resulting in superior performance, speed and effortless management. Support for C, JVM, Ruby and Node.js application stacks is available. Copyright © William El Kaim 2016 74
  58. Unikernel Landscape • Rumprun • A software stack which enables

    running existing unmodified POSIX software as a unikernel. • Rumprun supports multiple platforms, including bare hardware and hypervisors such as Xen and KVM. • It is based on rump kernels which provide free, portable, componentized, kernel quality drivers such as file systems, POSIX system call handlers, PCI device drivers, a SCSI protocol stack, virtio and a TCP/IP stack. • Runtime.js • An open-source library operating system for the cloud that runs JavaScript and can be bundled up with an application and deployed as a lightweight and immutable VM image. • It's built on the V8 JavaScript engine and uses an event-driven, non-blocking I/O model inspired by Node.js. At the moment KVM is the only supported hypervisor. Copyright © William El Kaim 2016 75
  59. Unikernel Landscape • UniK • UniK (pronounced you-neek) is a

    tool for simplifying compilation and orchestration of unikernels. • Similar to the way Docker builds and orchestrates containers, UniK automates compilation of popular languages (C/C++, Golang, Java, Node.js, Python) into unikernels. • UniK deploys unikernels as virtual machines on Virtualbox, QEMU, AWS, and vSphere. UniK incorporates work from the Rumprun and OSv projects. Copyright © William El Kaim 2016 76
  60. Resources • Unikernel.org • Articles • From Containers To Unikernels

    And Serverless Architectures • “Unikernels Offer a Stripped Down Version of Linux” by Nick Hardiman • “Unikernels: Who, What, Where, When and Why” • “The End of the General Purpose Operating System” • “The Next Generation Cloud: The Rise of the Unikernel” • “Containers vs Hypervisors: The Battle Has Just Begun” • Examples • Cassandra on OSV • Towards Heroku for Unikernels: “Part 1 - Automated deployment” and “Part 2 - Self Scaling Systems” Copyright © William El Kaim 2016 77
  61. Plan • What is Virtualization? • From Virtual Machine to

    Container • What is Docker? • Docker Tools Ecosystem • The Open Container Initiative • From Container to Unikernel What about Microcontainer? • The Rise of Serverless Computing • Conclusion Copyright © William El Kaim 2016 78
  62. Microcontainer • Docker enables you to package up your application

    along with all of the application’s dependencies into a nice self-contained image. • You can then use that image to run your application in containers. • The problem is you usually package up a lot more than what you need so you end up with a huge image and therefore huge containers. • Most people who start using Docker will use Docker’s official repositories for their language of choice, but unfortunately if you use them you’ll end up with very large images. • You simply don’t need all of the stuff that comes along with those images. Source: Travis Reeder Copyright © William El Kaim 2016 79
  63. Microcontainer • A Microcontainer contains only the OS libraries and

    language dependencies required to run an application and the application itself. Nothing more. • Rather than starting with everything but the kitchen sink, start with the bare minimum and add dependencies on an as needed basis. Source: Travis Reeder Copyright © William El Kaim 2016 80
  64. Microcontainer Advantages • Size • Small footprint • Fast/Easy Distribution

    • Because of its size, the image is much quicker to download from a registry, so it can be distributed to different machines much faster. • Improved Security • Less code and fewer programs in the container mean a smaller attack surface. And the base OS can be more secure. • These benefits are similar to the benefits of Unikernels, with none of the drawbacks. Source: Travis Reeder Copyright © William El Kaim 2016 81
  65. How to Build Microcontainer? • The base image for all

    Docker images is the `scratch` image. • It has essentially nothing in it. • This may sound useless, but you can actually use it to create the smallest possible image for your application, if you can compile your application to a static binary with zero dependencies like you can with Go or C. • That’s about as small as you can get. The scratch image + your application binary. • Not everyone is using Go so you’ll probably have more dependencies and you’ll want something with a bit more than the scratch image. • Enter Alpine Linux. • Alpine Linux is a security-oriented, lightweight Linux distribution with a nice package system for adding dependencies. • And Iron.io’s tiny Docker images provide the smallest possible images for every major programming language. Source: Travis Reeder Copyright © William El Kaim 2016 82
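The scratch-plus-static-binary approach described above can be sketched as a Dockerfile. This is a hedged illustration, not a recipe from the source: the binary name `app` and the Alpine package shown are assumptions, and the static binary must be compiled beforehand (e.g. `CGO_ENABLED=0 go build -o app` for Go).

```dockerfile
# Smallest possible image: the empty `scratch` base plus one static binary.
# Assumes `app` is a statically linked binary built outside the image.
FROM scratch
COPY app /app
ENTRYPOINT ["/app"]

# Alternative when the app needs more than a static binary: a small Alpine
# base restores a package manager (the package below is illustrative).
# FROM alpine:3.4
# RUN apk add --no-cache python
```

The resulting scratch-based image is only as large as the binary itself, which is what makes the microcontainer size and distribution benefits possible.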
  66. Plan • What is Virtualization? • From Virtual Machine to

    Container • What is Docker? • Docker Tools Ecosystem • The Open Container Initiative • From Container to Unikernel • What about Microcontainer? The Rise of Serverless Computing • Conclusion Copyright © William El Kaim 2016 83
  67. The Rise of Serverless Computing • In 2012, Ken Fromm

    wrote “Why The Future Of Software And Apps Is Serverless” on ReadWriteWeb • The term “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer have to think that much about them. • The focus of developers shifts from the server level to the task level. • Often called Functions as a Service, or FaaS Copyright © William El Kaim 2016 84 Source: Travis Reeder
  68. Why Now? New Application Architecture • The Front-End revolution •

    The power of thick-client native applications on mobile devices, along with HTML5-based rich browser web applications (using MVC frameworks, e.g. AngularJS), has enabled developers to write large-scale and complex applications by orchestrating any number of cloud services to replace their traditional server-based application backends. • Today almost any service you can imagine is HTTP enabled and supports standard authentication token exchange protocols for secure consumption. • Backend customization • Unique business requirements requiring custom code or a new way to orchestrate some other cloud services from the backend. • Need to be able to add “functions” able to react to events and to realize, complement or extend the “backend” core services. Copyright © William El Kaim 2016 85
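Such an event-reacting backend "function" can be sketched as follows. This is a hedged illustration in the style of a storage-triggered handler (e.g. S3-triggered AWS Lambda); the event structure and the `on_upload` name are assumptions, not any specific provider's API.

```python
# Hypothetical backend "function" reacting to a storage-upload event.
# The payload shape mimics an S3-style event notification (an assumption).

def on_upload(event, context=None):
    """Collect uploaded object keys; custom backend logic would follow."""
    keys = [record["s3"]["object"]["key"]
            for record in event.get("Records", [])]
    # e.g. resize an image, index a document, notify another service ...
    return keys

# Example event, shaped like a storage notification (structure assumed):
sample_event = {"Records": [{"s3": {"object": {"key": "photos/cat.jpg"}}}]}
print(on_upload(sample_event))  # → ['photos/cat.jpg']
```

The function holds no server state and runs only when an event arrives, which is exactly the "react to events" extension point the slide describes.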
  69. How Is Serverless Different Than PaaS? • PaaS could be

    considered the first iteration of serverless • you still have to think about the resources you need but you don’t have to manage them. With Serverless you don’t even need to think about how much capacity you need in advance. • With Serverless you break down your app into bite-sized pieces, instead of using a monolithic app that you could run on a PaaS • You have to break down your app into small self-contained programs, or functions. • For instance, each endpoint in your API could be a separate function. These functions are then run on demand, rather than full-time like an app running on a PaaS. • From an ops perspective, the benefit of breaking down an app into functions is that you can scale and deploy each function separately. Copyright © William El Kaim 2016 86 Source: Travis Reeder
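The "each endpoint as a separate function" idea above can be sketched like this. A minimal illustration in the style of AWS Lambda handlers behind an API gateway; the event field names and the function names are assumptions for the example:

```python
import json

# Each API endpoint becomes its own deployable, independently scaled
# function. The handler signature and event shape mimic the AWS Lambda /
# API Gateway style, but the field names here are illustrative assumptions.

def get_user(event, context=None):
    """GET /users/{id} as a standalone function."""
    user_id = event.get("pathParameters", {}).get("id")
    return {"statusCode": 200, "body": json.dumps({"id": user_id})}

def create_user(event, context=None):
    """POST /users as a separate function, deployed and scaled on its own."""
    payload = json.loads(event.get("body") or "{}")
    return {"statusCode": 201, "body": json.dumps(payload)}

resp = get_user({"pathParameters": {"id": "42"}})
print(resp["statusCode"], resp["body"])
```

Because each handler is self-contained, ops can deploy, version, and scale `get_user` without touching `create_user` — the operational benefit the slide points to.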
  70. From Microservices to Functions • The idea behind microservices is

    to break down your monolithic application into small services so that you can develop, manage, and scale them independently. • FaaS takes that a step further by breaking things down even smaller. • The trend is pretty clear, the unit of work is getting smaller and smaller. We’ve gone from monoliths to microservices to functions. Copyright © William El Kaim 2016 87 Source: Travis Reeder
  71. Serverless Ecosystem • Cloud Platforms • AWS Lambda • Google

    Cloud Functions • Hook.io • IBM Bluemix OpenWhisk • Iron.io • Microsoft Functions • WebTask • Frameworks • Serverless: open-source command line tool and standard syntax to easily build serverless architectures on AWS Lambda, Azure Functions, Google Cloud Functions, etc. • APEX: Apex lets you build, deploy, and manage AWS Lambda functions with ease • SPARTA: A Go framework for AWS Lambda microservices • Others: Deployd, and IOpipe Copyright © William El Kaim 2016 88
  72. Serverless Ecosystem • Tools like LeverOS, Stamplay and Syncano show

    a lot of promise • New products providing a serverless microservices orchestration layer, bringing in functionalities like authentication and user permissions, backend scripting, third party API integration, messaging and work queue management, data management, and channel interfaces to create an application. Copyright © William El Kaim 2016 89
  73. Serverless Architecture Benefit • Serverless architecture eliminates the management of

    the server stack and any concerns/planning that have to go into the potential scaling up or down of the stack. • All of these processes are automated and you simply pay for the compute time you use. • Financial benefits • You should theoretically reduce costs by eliminating the need to hire staff to support servers and other infrastructure • You can also save money if your application traffic is bursty: instead of having to pay for idleness, a serverless architecture lets you pay only for the CPU cycles you actually consume, and code runs only when needed. • Popular programming languages supported • AWS supports JavaScript, Java, Python and JRuby. • Reduces development times • DevOps resources can drop the “ops” and simply focus on the “dev.” Copyright © William El Kaim 2016 93 Source: Andrew Froehlich
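The pay-per-use pricing described above can be made concrete with a back-of-envelope sketch. The rates and workload below are illustrative assumptions, not any vendor's actual prices:

```python
# Hedged pay-per-use cost sketch: charge per GB-second of compute plus a
# per-invocation fee. Both rates are illustrative assumptions.

PRICE_PER_GB_SECOND = 0.0000166667   # assumed $/GB-s
PRICE_PER_MILLION_REQUESTS = 0.20    # assumed $/1M invocations

def monthly_cost(invocations, avg_duration_s, memory_gb):
    """Total monthly bill: compute time consumed plus request fees."""
    compute = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return compute + requests

# 3M invocations/month, 200 ms each, 128 MB of memory: under two dollars,
# versus paying for an always-on server that idles between bursts.
print(round(monthly_cost(3_000_000, 0.2, 0.125), 2))  # → 1.85
```

The point of the arithmetic: with bursty traffic, cost scales with actual work done rather than with provisioned idle capacity.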
  74. Serverless Issues • Serverless architecture is often thought of as

    the next evolution of Platform-as-a-Service • Enabling organizations to focus on application code while the service provider manages everything else, including the back-end stack components. • But while there are many sound benefits with serverless architectures, there are some serious drawbacks that also must be considered. • Simply put, serverless isn’t for everyone. Copyright © William El Kaim 2016 94 Source: Andrew Froehlich
  75. Serverless Issues • Vendor lock-in • Be ready to use

    different components and implementation methods depending on which serverless architecture you use. • No easy migration of legacy applications • Applications that run on serverless platforms need to be redesigned and most of the time recoded. • No clear blueprint or roadmap • Despite the enthusiastic support and rapid growth of serverless architectures • No clear technical standard and limited options • Despite the ability to program with many of the most popular languages, serverless computing is still limited, considering all the available languages that can be coded, compiled and run in traditional environments. Copyright © William El Kaim 2016 95
  76. Serverless • Articles • Serverless is the new multitenancy •

    The Serverless Start-Up - Down With Servers! • Making Sense of Serverless Computing • AWS Lambda Makes Serverless Applications A Reality • The Road to NoOps: Serverless Computing is Quickly Gaining Momentum • Market scan: API Serverless Architecture • Five Serverless Computing Frameworks To Watch Out For • Examples • A Serverless REST API in Minutes With Serverless Framework • Serverless Reference Architectures with AWS Lambda • How to build a serverless NodeJS microservice on AWS Lambda • Creating A Serverless Etl Nirvana Using Google BigQuery Copyright © William El Kaim 2016 96
  77. Plan • What is Virtualization? • From Virtual Machine to

    Container • What is Docker? • Docker Tools Ecosystem • The Open Container Initiative • From Container to Unikernel • What about Microcontainer? • The Rise of Serverless Computing Conclusion Copyright © William El Kaim 2016 97
  78. Conclusion • Container-based virtualization is a disruptive technology that is

    being adopted at a remarkable pace! • Virtual machines are still considered a more mature technology with a higher level of security, and many teams are more used to working with them. • Virtual machines are generally more suitable for monolithic applications and for scenarios where security concerns outweigh the need for a lightweight solution. • Container-based virtualization is a much better fit for the microservices architectural style, where features of the application are divided into small, well-defined, distinct services. • Containers and VMs do not exclude each other; they can be viewed as complementary solutions. • An excellent example of this is the Netflix cloud, where containers run inside virtual machines. • Serverless seems to be the next step … Copyright © William El Kaim 2016 98
  79. Evolution of Development • From a more universal perspective, this

    progression represents a continuum that will lead to serverless architectures and other more efficient means of managing complexity, namely unikernel technology. • We are witnessing the birth of Transient Microservices • Lifetimes possibly measured in fractions of a second • Populations in the thousands per host Copyright © William El Kaim 2016 99