
Getting started with Kubernetes on AWS (WebSummit 2017)

Abby Fuller
April 27, 2018

Transcript

  1. Kubernetes
     • Container orchestration platform that manages containers across your infrastructure in logical groups
     • Rich API to integrate 3rd parties
     • Open source
  2. What are orchestration tools and why should I care?
     Containers are lots of work (and moving pieces)! Orchestration tools help you manage, scale, and deploy your containers.
  3. What platform is right for me?
     Bottom line: use the tool that’s right for you. That means that you should choose whatever makes the most sense for you and your architecture, that you’re comfortable with, and that you can scale, maintain, and manage.
  4. Bottom line: we want to be the best place to run your containers, however you want to do it.
  5. Initial set up
     I’m using a CloudFormation stack provided by AWS and Heptio for my initial cluster setup. To see the stack in full, you can look here:
     https://s3.amazonaws.com/quickstart-reference/heptio/latest/templates/kubernetes-cluster-with-new-vpc.template
     This will download the full template.
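     Before launching anything, you can sanity-check the template with the AWS CLI. validate-template is a stock CloudFormation command, and nothing here creates resources:

     $ aws cloudformation validate-template \
         --template-url https://s3.amazonaws.com/quickstart-reference/heptio/latest/templates/kubernetes-cluster-with-new-vpc.template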
  6. Choosing my parameters
     The setup template takes a few parameters:

     STACK=k8s-demo
     TEMPLATEPATH=https://s3.amazonaws.com/quickstart-reference/heptio/latest/templates/kubernetes-cluster-with-new-vpc.template
     AZ=us-east-1b
     INGRESS=0.0.0.0/0
     KEYNAME=demo
  7. Running the stack

     abbyfull$ aws cloudformation create-stack --stack-name $STACK \
     > --template-url $TEMPLATEPATH \
     > --capabilities CAPABILITY_NAMED_IAM \
     > --parameters ParameterKey=AvailabilityZone,ParameterValue=$AZ \
     > ParameterKey=AdminIngressLocation,ParameterValue=$INGRESS \
     > ParameterKey=KeyName,ParameterValue=$KEYNAME

     (Note: since $TEMPLATEPATH is an S3 URL, the flag is --template-url; --template-body expects inline template text.)
  8. This should return the ARN

     {
         "StackId": "arn:aws:cloudformation:us-east-1:<accountID>:stack/k8s-demo/a8ec95d0-c47e-11e7-b1fb-50a686e4bb1e"
     }

     ARN is the Amazon Resource Name. This is a unique identifier that can be used within AWS.
  9. Checking the values for my cluster
     To see more information about my cluster, I can look at the CloudFormation stack like this:

     abbyfull$ aws cloudformation describe-stacks --stack-name $STACK

     This will return the values the stack was created with, and some current information.
  10. You can ssh to your instance like this:

     Run:
     $ aws cloudformation describe-stacks --query 'Stacks[*].Outputs[?OutputKey == `GetKubeConfigCommand`].OutputValue' --output text --stack-name $STACK

     And use that output to copy the kubeconfig down through the bastion (the hosts were redacted in the deck; substitute the values from the output above):
     $ SSH_KEY="demo.pem"; scp -i $SSH_KEY -o ProxyCommand="ssh -i \"${SSH_KEY}\" ubuntu@<bastion-host> nc %h %p" ubuntu@<master-node>:~/kubeconfig ./kubeconfig
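     Once the kubeconfig is local, point kubectl at it. The KUBECONFIG environment variable is standard kubectl behavior; the path is just where the scp above dropped the file:

     $ export KUBECONFIG=$(pwd)/kubeconfig   # tell kubectl which cluster to talk to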
  11. There are some tools available to help manage your K8s infrastructure
     In this demo, we’re using kubectl: https://kubernetes.io/docs/user-guide/kubectl/
     There are some other good options out there, like:
     kubicorn: https://github.com/kris-nova/kubicorn
     kubeadm: https://kubernetes.io/docs/setup/independent/install-kubeadm/
     Or you can find a list of tools here: https://kubernetes.io/docs/tools/
  12. Download and test kubectl
     I installed kubectl with homebrew:
     $ brew install kubectl

     Next, test it against your cluster:
     $ kubectl get nodes
     NAME                   STATUS  ROLES   AGE  VERSION
     ip-blah.ec2.internal   Ready   <none>  3h   v1.8.2
  13. I probably don’t want a cluster with just one node
     This gets our cluster token:
     $ aws cloudformation describe-stacks --stack-name $STACK | grep -A 2 -B 2 JoinNodes

     This returns a token:
     $ kubeadm join --token=<token>
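     Note that kubeadm join also needs the master’s address; a sketch, assuming kubeadm’s default API server port (the full command, with real values, appears in the JoinNodes output above):

     $ kubeadm join --token=<token> <master-ip>:6443   # run on the new node; 6443 is kubeadm's default API server port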
  14. You don’t have to update your nodes manually, though
     You can add capacity through the autoscaling group in AWS:
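     The deck showed this in the console; a rough CLI equivalent, assuming you look up the worker autoscaling group’s name first (the name below is a placeholder):

     # find the autoscaling groups the stack created
     $ aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[*].AutoScalingGroupName'
     # then grow the worker group
     $ aws autoscaling set-desired-capacity --auto-scaling-group-name <worker-asg-name> --desired-capacity 3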
  15. How about some content?
     We probably want to actually install things. You can run applications on Kubernetes clusters a couple of different ways. You can install from helm (helm.sh), which is a package manager for Kubernetes, like this:
     $ brew install kubernetes-helm
     ==> Downloading https://homebrew.bintray.com/bottles/kubernetes-helm-2.7.0.el_capitan.bottle.tar.gz
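     With Helm 2 (the version shown above), you initialize its in-cluster component and then install charts; the chart below is just an example from the stable repo of the time:

     $ helm init                    # installs Tiller, Helm 2's server-side component
     $ helm install stable/mysql    # install an example chart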
  16. Or, use a YAML file
     Here’s a YAML file for an Nginx deployment:

     apiVersion: apps/v1beta2
     kind: Deployment
     metadata:
       name: nginx-deployment
     spec:
       selector:
         matchLabels:
           app: nginx
       replicas: 2 # tells deployment to run 2 pods matching the template
       template: # create pods using pod definition in this template
         metadata:
           # unlike pod-nginx.yaml, the name is not included in the metadata as a
           # unique name is generated from the deployment name
           labels:
             app: nginx
         spec:
           containers:
           - name: nginx
             image: nginx:1.7.9
             ports:
             - containerPort: 80
  17. I can run my deployment like this:
     $ kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml
     deployment "nginx-deployment" created

     I can get more information by running:
     $ kubectl describe deployment nginx-deployment
  18. Check for running pods from my deployment
     A pod is a group of containers (like an ECS task) with shared network/storage. I can check for pods related to a deployment like this:
     $ kubectl get pods -l app=nginx

     For my Nginx example, it returns this (names abbreviated):
     NAME                   READY  STATUS   RESTARTS  AGE
     nginx-deployment-568   1/1    Running  0         13m
     nginx-deployment-569   1/1    Running  0         13m
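     To dig into a single pod, the same kubectl verbs work at the pod level (pod name copied from the abbreviated listing above; yours will differ):

     $ kubectl describe pod nginx-deployment-568   # events, container state, node placement
     $ kubectl logs nginx-deployment-568           # stdout from the nginx container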
  19. Scaling up and down
     Earlier, we covered how to scale our underlying infrastructure with nodes or autoscaling groups. We can also scale our deployments! Remember our YAML file? I can update the value of replicas to scale my deployment up or down. Then, I just reapply the deployment.

     replicas: 2

     $ kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml
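     kubectl can also scale a deployment in place, without touching the file; this is stock kubectl rather than something from the deck:

     $ kubectl scale deployment nginx-deployment --replicas=4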
  20. Updating my content
     I can update my content the same way: by changing the YAML file, and re-running my apply command:
     $ kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml
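     For image changes specifically, kubectl can patch the deployment directly and then watch the rollout; again standard kubectl, with the tag below just an example:

     $ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1   # trigger a rolling update
     $ kubectl rollout status deployment/nginx-deployment                # watch it complete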
  21. You’ll also need a Load Balancer
     We can run the kubectl command for this:
     $ kubectl expose --namespace=nginx deployment echoheaders --type=LoadBalancer --port=80 --target-port=8080 --name=echoheaders-public

     Just like with non-containerized apps, Load Balancers help distribute traffic. In a containerized app, the Load Balancer distributes traffic across the pods backing a service.
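     Exposing a deployment with --type=LoadBalancer makes Kubernetes provision an ELB; once it’s ready, you can read the ELB hostname off the service (same names as the command above):

     $ kubectl get service echoheaders-public --namespace=nginx   # EXTERNAL-IP column shows the ELB DNS name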
  22. High Availability in Kubernetes
     • Generally in AWS, the best practice is to run highly available apps. This means that your application is designed to work in the event of an Availability Zone or Region failure. If one AZ went down, your application would still function.
     • This is not quite the same in Kubernetes: rather than run one cluster that spans multiple AZs, you run one cluster per AZ.
     • You can learn more about high availability in Kubernetes here.
     • You can manage multiple clusters in Kubernetes with something called “federation”.
  23. Kubernetes and the master node
     • An important difference between Kubernetes and ECS is the master node: a Kubernetes cluster has a master node, which hosts the control plane. The control plane is responsible for deployments, updates, and rescheduling work if a node is lost.
  24. In case of master node emergency
     • So what happens if the master node goes down?
     • For AWS, you can use EC2 Auto Recovery.
     • In a lot of cases, it’s not necessary to have a highly available master: as long as Auto Recovery can replace the node fast enough, the only impact on your cluster is that you can’t deploy new versions or update the cluster until the master is back online.
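     Under the hood, EC2 Auto Recovery is a CloudWatch alarm on the instance’s system status check; a sketch of wiring it up by hand (the instance ID is a placeholder):

     $ aws cloudwatch put-metric-alarm \
         --alarm-name recover-k8s-master \
         --namespace AWS/EC2 --metric-name StatusCheckFailed_System \
         --dimensions Name=InstanceId,Value=<master-instance-id> \
         --statistic Minimum --period 60 --evaluation-periods 2 \
         --comparison-operator GreaterThanThreshold --threshold 0 \
         --alarm-actions arn:aws:automate:us-east-1:ec2:recover   # the EC2 recover action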
  25. Kubernetes setup with kops
     In real life, it’s probably best to stick with tools. A popular one is kops, which is maintained by the Kubernetes community and is used in production by companies like Ticketmaster. Kops will help you out with things like service discovery, high availability, and provisioning.
  26. Some kops specific setup
     Kops is built on DNS, so we need some specific setup in AWS before we get rolling. First, you’ll need a hosted zone in Route 53. This is something like kops.abby.com. You can do this with the CLI (assuming you own the domain!):
     $ aws route53 create-hosted-zone --name kops.abby.com --caller-reference 1
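     Because kops discovers the cluster through DNS, the parent domain has to delegate to the new zone; a quick sanity check with dig (a standard DNS tool, not part of the deck):

     $ dig ns kops.abby.com   # should return the NS records Route 53 assigned to the zone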
  27. Next, I’ll need an s3 bucket to store cluster info
     Create the bucket like this:
     $ aws s3 mb s3://config.kops.abby.com

     And then:
     $ export KOPS_STATE_STORE=s3://config.kops.abby.com
  28. Create a cluster configuration with kops
     To create the config:
     $ kops create cluster --zones=us-east-1c useast1.kops.abby.com

     To create the cluster resources:
     $ kops update cluster useast1.kops.abby.com --yes
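     Once the resources are up, kops can verify the cluster end to end (validate is a stock kops subcommand, run against the same state store):

     $ kops validate cluster useast1.kops.abby.com   # waits for nodes to register and report Ready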
  29. So let’s recap.
     • VPC with nodes in private subnets (only the ELB in public)
     • Limit ports, access, and security groups
     • For production workloads, run multiple clusters in different AZs for fault tolerance and high availability
     • Kubernetes clusters can involve a fair amount of setup and maintenance: I highly recommend taking advantage of tools for both setup (CloudFormation or Terraform) and updates/deployments (like kubectl, kubicorn, or kops)
     • Kubernetes has a rich community: take advantage of it!