Slide 1

Slide 1 text

Serverless Containers with Knative & Cloud Run Mete Atamel Developer Advocate at Google @meteatamel speakerdeck.com/meteatamel

Slide 2

Slide 2 text

Introduction

Slide 3

Slide 3 text

Serverless
Operational Model: No Infra Management, Managed Security, Pay only for usage
Programming Model: Service-based, Event-driven, Stateless

Slide 4

Slide 4 text

Containers: Any language, Any library, Ecosystem around containers (.js .rb .go .py .sh …)

Slide 5

Slide 5 text

Containers: Flexibility. Serverless: Velocity.

Slide 6

Slide 6 text

Serverless containers with Knative and Cloud Run
Cloud Run: Fully managed, deploy your workloads and don't see the cluster.
Cloud Run on Anthos: Deploy into Anthos, run serverless side-by-side with your existing workloads.
Knative Everywhere: Use the same APIs and tooling anywhere you run Kubernetes with Knative.

Slide 7

Slide 7 text

Knative

Slide 8

Slide 8 text

What is Knative? Kubernetes-based open source building blocks for serverless. github.com/knative

Slide 9

Slide 9 text

Knative Stack
Products: Cloud Run, Cloud Run on Anthos
Components: Serving, Eventing
Gateway: Kourier, Istio
Platform: Kubernetes

Slide 10

Slide 10 text

Knative Serving
What is it?
Rapid deployment of serverless containers
Automatic (0-n) scaling
Configuration and revision management
Traffic splitting between revisions
Pluggable
Connect to your own logging and monitoring platform, or use the built-in system
Auto-scaler can be tuned or swapped out for custom code

Slide 11

Slide 11 text

Knative Serving
Knative Service: High-level abstraction for the application
Configuration: Current/desired state of an application; code & configuration separated (à la 12-factor)
Revision: Point-in-time snapshots for your code and configuration
Route: Maps traffic to revisions
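(Not from the slides) A minimal Knative Service sketch tying these concepts together; the image and revision name are hypothetical, and the traffic block routes 90% of requests to a pinned revision and 10% to the latest revision:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
spec:
  template:                          # Configuration: desired state of the app
    metadata:
      name: helloworld-v1            # each change to the template stamps out a new Revision
    spec:
      containers:
        - image: gcr.io/my-project/helloworld   # hypothetical container image
          env:
            - name: TARGET
              value: "Knative"
  traffic:                           # Route: maps traffic to revisions
    - revisionName: helloworld-v1
      percent: 90
    - latestRevision: true
      percent: 10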

Slide 12

Slide 12 text

Knative Eventing
What is it?
For loosely coupled, event-driven services with on/off-cluster event sources
Declaratively bind event sources, triggers and services
Scales from just a few events to live streams
Uses standard CloudEvents
(Diagram: Event source → Event types → Flow → Event consumer(s))

Slide 13

Slide 13 text

Knative Eventing Delivery Models
Simple Delivery: Event Source → Service, 1:1
Complex Delivery with optional reply: Event Source → Channels → Subscription → Services, 1:N
Broker Trigger Delivery: Event Source → Broker → Trigger → Services, 1:N

Slide 14

Slide 14 text

Simple Delivery

Slide 15

Slide 15 text

Complex Delivery

Slide 16

Slide 16 text

Complex Delivery with reply
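(Not from the slides) A sketch of the channel/subscription primitives behind complex delivery with a reply; the channel, service and reply channel names are hypothetical:

apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
  name: incoming-channel
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: process-subscription
spec:
  channel:                           # channel this subscription consumes from
    apiVersion: messaging.knative.dev/v1
    kind: Channel
    name: incoming-channel
  subscriber:                        # service that processes each event
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-processor
  reply:                             # replies from the service are forwarded here
    ref:
      apiVersion: messaging.knative.dev/v1
      kind: Channel
      name: reply-channel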

Slide 17

Slide 17 text

Broker Trigger Delivery

Slide 18

Slide 18 text

Knative Eventing
(Diagram: within a namespace, Sources publish events to the Broker ingress; Triggers with filters subscribe Services (Callables) to matching events)
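(Not from the slides) A sketch of a Trigger for the broker/trigger model above, assuming the namespace has a Broker named "default" and a hypothetical Knative Service called "pull-request-handler"; the filter matches the CloudEvent type shown on the next slide:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: github-pull-trigger
spec:
  broker: default                    # the Broker this Trigger subscribes to
  filter:
    attributes:
      type: com.github.pull.create   # only deliver CloudEvents with this type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: pull-request-handler     # hypothetical event consumer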

Slide 19

Slide 19 text

Knative Event Sources
Apache Camel: Allows using Apache Camel components to push events into Knative
Apache Kafka: Brings Apache Kafka messages into Knative
AWS SQS: Brings AWS Simple Queue Service messages into Knative
Cron Job: Uses an in-memory timer to produce events on the specified Cron schedule
GCP PubSub: Brings GCP PubSub messages into Knative
GitHub: Brings GitHub organization/repository events into Knative
GitLab: Brings GitLab repository events into Knative
Google Cloud Scheduler: Brings Google Cloud Scheduler events into Knative when jobs are triggered
Google Cloud Storage: Brings Google Cloud Storage bucket/object events into Knative
Kubernetes: Brings Kubernetes cluster/infrastructure events into Knative
https://github.com/knative/docs/tree/master/docs/eventing/sources
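(Not from the slides) The Cron Job source listed above was later renamed PingSource; a sketch using the current sources.knative.dev/v1 API, with a hypothetical "event-display" service as the sink:

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-every-minute
spec:
  schedule: "*/1 * * * *"            # standard cron syntax
  contentType: "application/json"
  data: '{"message": "Hello from Knative"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display            # hypothetical consumer service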

Slide 20

Slide 20 text

Knative Events
A CloudEvent:
{
  "specversion": "0.2",
  "type": "com.github.pull.create",
  "source": "https://github.com/cloudevents/spec/pull/123",
  "id": "A234-1234-1234",
  "time": "2019-04-08T17:31:00Z",
  "datacontenttype": "application/json",
  "data": "{ GitHub Payload... }"
}
(Diagram: FTP, GitHub and GCS events pass through their respective Receive Adapters, which publish CloudEvents to the Broker)

Slide 21

Slide 21 text

Cloud Storage Events to Vision API
Flow: (1) Cloud Storage Bucket → (2) Cloud PubSub Topic → (3) Knative Eventing → (4) Knative Service → (5) Cloud Vision API → (6) Labels

Slide 22

Slide 22 text

Cloud Run

Slide 23

Slide 23 text

Cloud Run: Bringing serverless to containers
Container to production in seconds
Natively serverless
One experience, where you want it

Slide 24

Slide 24 text

Container to production in seconds
Just 'deploy'
Any stateless container
Any language, any library
URL in seconds
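(Not from the slides) A Cloud Run service is described by the same Knative Service manifest; assuming a hypothetical image already pushed to gcr.io/my-project/hello, it can be deployed with gcloud run deploy hello --image gcr.io/my-project/hello, or by applying YAML like this with gcloud run services replace service.yaml:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello   # hypothetical image, any language or library inside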

Slide 25

Slide 25 text

Natively serverless
Focus on writing code
Scale up fast, scale down to zero
Pay for exact usage
No servers to manage

Slide 26

Slide 26 text

HTTPS Endpoint
Public: Website, API endpoint
Private: Internal services, Async tasks, Mobile backend, Webhook

Slide 27

Slide 27 text

Container contract
Listen on 0.0.0.0 on port $PORT (default 8080)
HTTP server must start < 4 min (timeout → 504)
Request time < 15 min (default: 5 min)
Stateless (in-memory file system, doesn't persist)
Computation only within request (no background activity)

Slide 28

Slide 28 text

Container resources
1 vCPU per container instance (configurable to 2 vCPU)
256 MiB of memory, up to a max of 2 GiB (configurable)
80 concurrent requests per container (configurable 1-80)
1000 max containers by default (configurable 1-1000)
Access to a Metadata Server
Sandboxed by gVisor
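(Not from the slides) A sketch of how these knobs appear in a Cloud Run service manifest; the values and image are illustrative, and the same settings map to the gcloud flags --cpu, --memory, --concurrency and --max-instances:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "100"   # cap on container instances
    spec:
      containerConcurrency: 40                    # concurrent requests per instance (1-80)
      containers:
        - image: gcr.io/my-project/my-service     # hypothetical image
          resources:
            limits:
              cpu: "2"                            # 2 vCPU
              memory: 512Mi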

Slide 29

Slide 29 text

Pay per use: CPU / Memory / Requests, in 100ms increments

Slide 30

Slide 30 text

Billable time
(Diagram: an instance is billable while it is handling requests, from the start of Request 1 to the end of the last overlapping request; instance time outside active requests is non-billable)

Slide 31

Slide 31 text

Concurrency: up to 80 concurrent requests per container instance (compare concurrency = 1 vs. concurrency = 80)

Slide 32

Slide 32 text

Impact of Concurrency
Fewer Cold Starts: More requests per instance means fewer instances for the same QPS
Faster Scale Up: Fewer new instances (and cold starts) means faster response to traffic spikes
Better Utilization: Instances spend less time with idle resources, which is a more efficient use of machine resources
Code may need to change! Global scope and race condition cautions are back

Slide 33

Slide 33 text

Pub/Sub triggered internal services
(Diagram: Cloud Pub/Sub queues push messages to Cloud Run services)

Slide 34

Slide 34 text

Storage triggered internal services
(Diagram: Cloud Storage events flow through Cloud Pub/Sub queues to Cloud Run services)

Slide 35

Slide 35 text

Scheduled services
(Diagram: Cloud Scheduler jobs, created via the CLI, UI or the Scheduler API, invoke Cloud Run services)

Slide 36

Slide 36 text

Services as part of async tasks
(Diagram: Cloud Tasks queues such as user_registration, user_levelcompleted, user_inapppurchase and user_statechange dispatch to the User profile, Daily activity metrics, Payment processing and Game state services)

Slide 37

Slide 37 text

Cloud Storage to Cloud Run via Cloud PubSub
(Diagram: Cloud Storage Bucket → Cloud PubSub Topic → Cloud Run)

Slide 38

Slide 38 text

Build

Slide 39

Slide 39 text

Knative Build (pre 0.8) → Tekton Pipelines (post 0.8)

Slide 40

Slide 40 text

Tekton Pipelines
What is it?
Kubernetes-style resources for declaring CI/CD-style pipelines
Go from source code in repositories to container images
Build pipelines can have multiple steps and can push to different registries
Builds run in containers in the cluster; no need for Docker locally
Primitives
Task: Represents the work to be executed, with 1 or more steps
TaskRun: Runs the Task with supplied parameters
Pipeline: A list of Tasks to execute in order
ServiceAccount: For authentication with DockerHub etc.
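(Not from the slides) A minimal Task and TaskRun sketch using the later tekton.dev/v1beta1 API; the step just echoes a message instead of building an image:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    - name: echo
      image: ubuntu                  # each step runs as a container in the cluster
      script: |
        #!/bin/sh
        echo "Hello from Tekton"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: echo-hello-run
spec:
  taskRef:
    name: echo-hello                 # executes the Task above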

Slide 41

Slide 41 text

@meteatamel speakerdeck.com/meteatamel github.com/meteatamel/knative-tutorial knative.dev github.com/meteatamel/cloudrun-tutorial cloud.google.com/run Thank you!