
EKSにシュッと入門 〜GKEからの移行〜 (A quick introduction to EKS: migrating from GKE)

JAWS-UG コンテナ支部 (Container Branch) #12, 2018/6/21

sakajunquality

June 21, 2018

Transcript

  1. - SRE at eureka, Inc.
     - Infrastructure in general + MLOps?
     - ErgoDox / Arch Linux
     @sakajunquality
  2. k8s in eureka
     - 5 Clusters
       - 4 GKE Clusters
       - 1 EKS Cluster (under active evaluation)
     - Applications
       - ML Recommend Model
       - Moderation Microservice
       - Slackbot
       - Redash
       - Spinnaker
  3. Managed k8s Services
     - Google Kubernetes Engine
     - Amazon EKS
     - Azure Kubernetes Service
     - Oracle Container Service
     - IBM Cloud Kubernetes Service
     - etc.
  4. terraform
     - (A slight digression:) at eureka, AWS and GCP resources are managed with Terraform.
     // Infrastructure git repository
     .
     └── terraform
         ├── README.md
         ├── aws
         ├── datadog
         └── gcp
  5. Creating EKS Cluster
     - resource: aws_eks_cluster
     - https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html
     - You have to build the worker nodes yourself,
     - so there are quite a few resources: launch configurations, Auto Scaling groups, and so on.
     - Following the guide's steps in order gets the cluster built (see the sketch below).
     - If you use an existing VPC, substitute your existing resources where appropriate.
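     To make this step concrete, here is a minimal Terraform sketch of just the control-plane resource; the IAM role, security group, and subnet names are placeholders I am assuming, not values from the talk, and all worker-node resources are omitted:

     # Hedged sketch of the EKS control plane only; role, SG, and subnets are placeholders
     # assumed to be defined elsewhere in the configuration.
     resource "aws_eks_cluster" "example" {
       name     = "my-eks-cluster"
       role_arn = "${aws_iam_role.eks_cluster.arn}"

       vpc_config {
         security_group_ids = ["${aws_security_group.eks_cluster.id}"]
         subnet_ids         = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"]
       }
     }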
  6. Connecting GKE Cluster
     Use the gcloud CLI to fetch the cluster credentials.
     // login
     $ gcloud auth login
     // set project
     $ gcloud config set project <my project id>
     // set credential
     $ gcloud container clusters get-credentials <my cluster name> --zone=asia-northeast1-a
  7. Connecting GKE Cluster 2
     First, let's check that we are actually connected.
     $ kubectl get nodes
     NAME                                            STATUS  ROLES   AGE  VERSION
     gke-prod-my-cluster-default-pool-55afabbf-5483  Ready   <none>  3d   v1.10.4-gke.0
     gke-prod-my-cluster-default-pool-55afabbf-75pl  Ready   <none>  3d   v1.10.4-gke.0
     gke-prod-my-cluster-default-pool-55afabbf-z6nm  Ready   <none>  3d   v1.10.4-gke.0
  8. Connect to EKS Cluster
     For EKS you need heptio-authenticator-aws in addition to kubectl.
     If authentication does not go through, a mistake in this step is a likely cause.
     $ curl -o heptio-authenticator-aws https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/darwin/amd64/heptio-authenticator-aws
     $ chmod +x heptio-authenticator-aws
     // put it somewhere on your PATH
     https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
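     One way to "put it on your PATH", as a sketch (the destination directory is just an example):

     $ mv heptio-authenticator-aws /usr/local/bin/
     $ which heptio-authenticator-aws   # sanity check that kubectl will be able to find it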
  9. Connect to EKS Cluster 2
     https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
     // ~/.kube/my-eks-config
     apiVersion: v1
     clusters:
     - cluster:
         server: <endpoint-url>
         certificate-authority-data: <base64-encoded-ca-cert>
       name: kubernetes
     contexts:
     - context:
         cluster: kubernetes
         user: aws
       name: aws
     current-context: aws
     kind: Config
     preferences: {}
     users:
     - name: aws
       user:
         exec:
           apiVersion: client.authentication.k8s.io/v1alpha1
           command: heptio-authenticator-aws
           args:
             - "token"
             - "-i"
             - "<cluster-name>"
  10. Connect to EKS Cluster 3
      Try using the cluster config.
      $ export KUBECONFIG=$KUBECONFIG:~/.kube/my-eks-config
      $ kubectl get nodes
      No resources found
      // no workers are registered yet
      https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
  11. Add Worker to EKS 2
      https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
      // aws-auth-cm.yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: aws-auth
        namespace: kube-system
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
  12. Add Worker to EKS 4
      Once the ConfigMap is applied, the workers are recognized.
      $ export KUBECONFIG=$KUBECONFIG:~/.kube/my-eks-config
      $ kubectl apply -f aws-auth-cm.yaml
      $ kubectl get nodes
      NAME                           STATUS  ROLES   AGE  VERSION
      ip-10-251-13-223.ec2.internal  Ready   <none>  1d   v1.10.3
      ip-10-251-14-141.ec2.internal  Ready   <none>  1d   v1.10.3
      // they showed up
      https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
  13. Add Worker to EKS 5
      Scaling up the ASG adds more workers.
      https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
      $ kubectl get nodes
      NAME                           STATUS  ROLES   AGE  VERSION
      ip-10-251-13-14.ec2.internal   Ready   <none>  1m   v1.10.3
      ip-10-251-13-223.ec2.internal  Ready   <none>  1d   v1.10.3
      ip-10-251-14-141.ec2.internal  Ready   <none>  1d   v1.10.3
      ip-10-251-14-62.ec2.internal   Ready   <none>  2m   v1.10.3
      // more nodes appeared
  14. SampleApp: App
      package main

      import (
          "fmt"
          "net/http"
      )

      func handler(w http.ResponseWriter, r *http.Request) {
          fmt.Fprintf(w, "Hell World")
      }

      func main() {
          http.HandleFunc("/", handler)
          http.ListenAndServe(":8888", nil)
      }
  15. SampleApp: Dockerfile
      FROM golang:1.10.3-alpine as build
      WORKDIR /go/src/github.com/sakajunquality/hello-go-docker
      COPY main.go main.go
      RUN go install -v ./...

      FROM alpine
      RUN apk add --no-cache ca-certificates
      COPY --from=build /go/bin/hello-go-docker /usr/local/bin/hello-go-docker
      CMD ["hello-go-docker"]
      ※ For production use, do things properly, e.g. run as a non-root user (see the sketch below).
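      As a hedged illustration of the non-root advice, the runtime stage could be adjusted like this; the user name, UID, and extra lines are illustrative additions, not from the deck:

      FROM alpine
      RUN apk add --no-cache ca-certificates \
          && adduser -D -u 10001 app              # create an unprivileged user (name/uid are arbitrary)
      COPY --from=build /go/bin/hello-go-docker /usr/local/bin/hello-go-docker
      USER app                                    # run the server as the non-root user
      CMD ["hello-go-docker"]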
  16. Build && Run Locally
      // build
      $ docker build -t hello-go-docker:v1 --rm --no-cache .
      // run
      $ docker run -d -p 8888:8888 hello-go-docker:v1
      // test
      $ curl localhost:8888
      > Hell World
      // clean up
      $ docker kill ...
  17. Push Image GCR
      // Push to GCR
      $ docker tag hello-go-docker:v1 gcr.io/<project id>/hello-go-docker:v1
      $ docker push gcr.io/pairs-dev/hello-go-docker:v1
      // gcloud docker -- push is deprecated
  18. Push Image ECR
      // Push to ECR
      $ docker tag hello-go-docker:v1 <account number>.dkr.ecr.us-east-1.amazonaws.com/hello-docker-go:v1
      $ $(aws ecr get-login --no-include-email --region us-east-1)
      $ docker push <account number>.dkr.ecr.us-east-1.amazonaws.com/hello-docker-go:v1
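      One difference from GCR worth noting: the ECR repository has to exist before the first push. A minimal sketch, reusing the repository name and region from the commands above:

      $ aws ecr create-repository --repository-name hello-docker-go --region us-east-1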
  19. Manifest: deployment.yaml
      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        labels:
          name: hello-docker-go
        name: hello-docker-go
        namespace: my-space
      spec:
        replicas: 3
        template:
          metadata:
            labels:
              name: hello-docker-go
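      The slide cuts the manifest off at the pod template labels; a hedged sketch of the remaining containers section for the GKE case (image path follows the GCR push above, port matches the sample app, container name matches slide 22) would be roughly:

          spec:
            containers:
              - name: web
                image: gcr.io/<project id>/hello-go-docker:v1
                ports:
                  - name: web
                    containerPort: 8888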
  20. Deploy to GKE
      $ kubectl apply -f deployment.yaml
      $ kubectl get deployment -n my-space
      NAME             DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
      hello-docker-go  3        3        3           3          2m
      $ kubectl get pods -n my-space
      NAME                              READY  STATUS   RESTARTS  AGE
      hello-docker-go-74b879d74d-2rvfs  1/1    Running  0         2m
      hello-docker-go-74b879d74d-gdff4  1/1    Running  0         2m
      hello-docker-go-74b879d74d-hdzh9  1/1    Running  0         2m
  21. Deploy to GKE
      $ kubectl apply -f deployment.yaml
      $ kubectl get deployment -n my-space
      NAME             DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
      hello-docker-go  3        3        3           3          2m
      $ kubectl get pods -n my-space
      NAME                              READY  STATUS   RESTARTS  AGE
      hello-docker-go-74b879d74d-2rvfs  1/1    Running  0         2m
      hello-docker-go-74b879d74d-gdff4  1/1    Running  0         2m
      hello-docker-go-74b879d74d-hdzh9  1/1    Running  0         2m
      _人人人人人_
      > これだけ <   ("that's all it takes")
       ̄YYYYY ̄
      Obvious, but still...
  22. Deploy to EKS 1
      - Change the manifest to use ECR.
      spec:
        containers:
          - name: web
            image: <account num>.dkr.ecr.us-east-1.amazonaws.com/hello-docker-go:v1  // the only thing you change is here!
            ports:
              - name: web
                containerPort: 8888
  23. Deploy to EKS 2
      $ kubectl apply -f deployment.yaml
      $ kubectl get deployment -n my-space
      NAME             DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
      hello-docker-go  3        3        3           3          16s
      $ kubectl get pods -n my-space
      NAME                              READY  STATUS   RESTARTS  AGE
      hello-docker-go-78b99465fd-68ghq  1/1    Running  0         19s
      hello-docker-go-78b99465fd-ksk4z  1/1    Running  0         19s
      hello-docker-go-78b99465fd-tqh2c  1/1    Running  0         19s
  24. Deploy to EKS 2
      $ kubectl apply -f deployment.yaml
      $ kubectl get deployment -n my-space
      NAME             DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
      hello-docker-go  3        3        3           3          16s
      $ kubectl get pods -n my-space
      NAME                              READY  STATUS   RESTARTS  AGE
      hello-docker-go-78b99465fd-68ghq  1/1    Running  0         19s
      hello-docker-go-78b99465fd-ksk4z  1/1    Running  0         19s
      hello-docker-go-78b99465fd-tqh2c  1/1    Running  0         19s
      _人人人人人_
      > 全く同じ <   ("exactly the same")
       ̄YYYYY ̄
      Obvious, but still...
  25. k8s Resource for LoadBalancer
      - GCP
        - L4 TCP LoadBalancer: Service/LoadBalancer
        - L7 HTTP LoadBalancer: Service/NodePort + Ingress
      - AWS
        - Classic Load Balancer (CLB): Service/LoadBalancer
        - Application Load Balancer (ALB): Service/NodePort + Ingress (hopefully)
          - => apparently this is possible after all! sorry (see the sketch below)
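      For the ALB + Ingress case mentioned above, the usual route at the time was the third-party aws-alb-ingress-controller; a hedged sketch of an Ingress for it, with annotation names taken from that controller rather than from the talk, looks roughly like this:

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-docker-go
        namespace: my-space
        annotations:
          kubernetes.io/ingress.class: alb
          alb.ingress.kubernetes.io/scheme: internet-facing
      spec:
        backend:
          serviceName: hello-docker-go
          servicePort: 80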
  26. GKE service.yaml
      apiVersion: v1
      kind: Service
      metadata:
        labels:
          name: hello-docker-go
        name: hello-docker-go
        namespace: my-space
      spec:
        type: NodePort
        ports:
          - port: 80
            targetPort: 8888
        selector:
          name: hello-docker-go
  27. GKE ingress.yaml
      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-docker-go
        namespace: my-space
        annotations:
          kubernetes.io/tls-acme: "true"
          kubernetes.io/ingress.class: "gce"
          kubernetes.io/ingress.global-static-ip-name: "<my static ip>"  // name of a static IP reserved on GCP
          kubernetes.io/ingress.allow-http: "true"
          ingress.gcp.kubernetes.io/pre-shared-cert: "<my-ssl-cert>"  // SSL certificate on GCP
      spec:
        backend:
          serviceName: hello-docker-go
          servicePort: 80
  28. GKE: Check
      $ kubectl get svc -n my-space
      NAME             TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
      hello-docker-go  NodePort  10.59.248.225  <none>       80:32337/TCP  10m
      $ kubectl get ing -n my-space
      NAME             HOSTS  ADDRESS         PORTS  AGE
      hello-docker-go  *      35.xxx.yyy.zzz  80     10m
  29. EKS service.yaml
      apiVersion: v1
      kind: Service
      metadata:
        labels:
          name: hello-docker-go
        name: hello-docker-go
        namespace: my-space
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: <elb sg ID>
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <SSL Certificate ARN>
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
          service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
          // adjust these annotations flexibly to your setup
  30. EKS service.yaml (continued)
      ...
      spec:
        type: LoadBalancer
        ports:
          - port: 443
            targetPort: 8888
        selector:
          name: hello-docker-go
  31. Hmm, it's not getting created...
      $ kubectl describe svc hello-docker-go -n my-space
      Events:
        Type     Reason                      Age               From                Message
        ----     ------                      ----              ----                -------
        Normal   EnsuringLoadBalancer        6s (x2 over 11s)  service-controller  Ensuring load balancer
        Warning  CreatingLoadBalancerFailed  6s (x2 over 11s)  service-controller  Error creating load balancer (will retry): failed to ensure load balancer for service my-space/hello-docker-go: could not find any suitable subnets for creating the ELB
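      The deck jumps from this error straight to the working state on the next slide. The usual cause of "could not find any suitable subnets" is that the VPC subnets lack the cluster tag the AWS cloud provider looks for; a hedged sketch of the fix (subnet IDs and cluster name are placeholders):

      $ aws ec2 create-tags --resources subnet-xxxxxxxx subnet-yyyyyyyy \
          --tags Key=kubernetes.io/cluster/<cluster-name>,Value=shared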
  32. ...Now it's created
      $ kubectl describe svc hello-docker-go -n my-space
      ...
      Events:
        Type    Reason                Age  From                Message
        ----    ------                ---- ----                -------
        Normal  EnsuringLoadBalancer  6m   service-controller  Ensuring load balancer
        Normal  EnsuredLoadBalancer   6m   service-controller  Ensured load balancer
      // no errors this time
      $ kubectl get svc -n my-space -o wide
      NAME             TYPE          CLUSTER-IP     EXTERNAL-IP                             PORT(S)       AGE  SELECTOR
      hello-docker-go  LoadBalancer  172.20.249.16  xxxxxxxxxx.us-east-1.elb.amazonaws.com  80:31378/TCP  6m   name=hello-docker-go
  33. Build/Deploy
      - GCP
        - Container Builder
      - AWS
        - CodeBuild / CodePipeline
      - CircleCI / TravisCI etc.
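      As a hedged illustration of the CodeBuild option (not from the talk; the image name and region follow the earlier ECR example), a minimal buildspec.yml that builds and pushes the image could look like:

      // buildspec.yml
      version: 0.2
      phases:
        pre_build:
          commands:
            - $(aws ecr get-login --no-include-email --region us-east-1)
        build:
          commands:
            - docker build -t <account number>.dkr.ecr.us-east-1.amazonaws.com/hello-docker-go:v1 .
        post_build:
          commands:
            - docker push <account number>.dkr.ecr.us-east-1.amazonaws.com/hello-docker-go:v1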