
AnsibleFest SFO 2014

Covers Google Cloud Platform and a live demo of Ansible's Google Cloud (gc*) modules. Also a demo of how to wire up other Google Cloud services to create a Continuous Deployment architecture using Ansible locally.

Eric Johnson

October 15, 2014

Transcript

  1. Ansible and Google Cloud Platform
     Eric Johnson, Technical Program Manager
     Kenneth Rimey, Senior Software Engineer
     AnsibleFest SFO 2014
  2. Agenda: October 14th, 2014
     1. Whirlwind Tour of Google Cloud Platform
     2. Google Compute Engine
     3. Using Ansible and Compute Engine (demo)
  3. Agenda: October 14th, 2014 (section divider; same three items as the previous slide)
  4. Google Cloud Platform
     • Compute: Compute Engine, App Engine
     • Storage: Cloud Storage, Cloud SQL, Cloud Datastore
     • App Services: BigQuery, Cloud Endpoints, Cloud DNS
  5. Why Google Cloud Platform? For the past 15 years, Google has been building out the world's fastest, most powerful, highest-quality cloud infrastructure. (Images by Connie Zhou)
  6. Agenda: October 14th, 2014 (section divider; same three items as slide 2)
  7. Google Compute Engine
     • Virtual Machines: Linux / Windows / Custom; very small <-> VERY LARGE
     • Persistent Disks: SSD and "standard"; up to 10TB per VM; shared read-only across many VMs; global snapshots
     • Advanced Networking: global private network; load balancing (L3 and L7); custom routes / firewall rules
  8. Google Compute Engine
     • Enterprise Ready: 24x7 support; 99.95% monthly SLA; ISO 27001, SSAE-16 SOC 1, 2, 3
     • Accessible Through: the web console at https://cloud.google.com/console; the Cloud SDK command-line utility; the REST API; partners (commercial and FOSS)
     • Simplified Pricing: per-minute billing (10-minute minimum); sustained-use discounts (up to 30% reduction)
  9. Agenda: October 14th, 2014 (section divider; same three items as slide 2)
  10. Demo: an Ansible playbook that
      1. creates 4 Compute Engine instances (2 per zone)
      2. creates a firewall rule and a load balancer
      3. sets up a DNS record for the load balancer's IP
      4. deploys the "app" and manages software (in a non-typical way...)
      Diagram: region us-central1; target pool lb-tp containing app1 and app3 in us-central1-a, app2 and app4 in us-central1-b; forwarding rule tcp:80 ➔ lb-tp; all driven through the GCP APIs.
      $ ansible-playbook gce-demo.yml
  11. GCE Authentication (group_vars/all)
      ---
      # file: group_vars/all
      pid: graphite-demos
      email: [email protected]
      pem: /home/erjohnso/pkey.pem
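      These group_vars carry the project ID and service-account credentials used by the gc* modules. A minimal sketch of how they would typically be consumed, assuming the service_account_email, pem_file, and project_id parameters from the gce module documentation of that era (the demo playbook on the following slides passes only project_id explicitly; the instance name below is made up for illustration):

      - name: Hypothetical task showing how the credentials are wired in
        gce:
          instance_names: example-instance     # made-up name, illustration only
          zone: us-central1-a
          service_account_email: "{{ email }}"
          pem_file: "{{ pem }}"
          project_id: "{{ pid }}"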
  12. The "gce-demo" playbook gce-demo.yml 1 - name: Create Compute Engine

    instances 2 hosts: local 3 gather_facts: False 4 vars: 5 zones: 6 - { zname: "us-central1-a", inames: ["app1", "app3"] } 7 - { zname: "us-central1-b", inames: ["app2", "app4"] } 8 mdff: {"startup-script": "/home/erjohnso/fest/files/startup-script.sh", 9 "agent": "/home/erjohnso/fest/files/ptc_agent.py"} 10 tasks: 11 - name: Bring up the instances, two per zone 12 gce: 13 instance_names: "{{ item.inames }}" 14 zone: "{{ item.zname }}" 15 machine_type: "n1-standard-1" 16 image: "debian-7" 17 tags: ["http-server", "apache", "prod"] 18 metadata_from_file: "{{ mdff }}" 19 scopes: ["projecthosting", "pubsub"] 20 project_id: "{{ pid }}" 21 with_items: zones 22
  13. The "gce-demo" playbook gce-demo.yml 23 - name: Set up networking

    and DNS 24 hosts: local 25 gather_facts: False 26 tasks: 27 - name: Allow HTTP traffic 28 gce_net: 29 fwname: target-http-server 30 name: default 31 allowed: tcp:80 32 target_tags: http-server 33 project_id: "{{ pid }}"
  14. The "gce-demo" playbook gce-demo.yml 34 - name: Create the load-balancer,

    healthcheck, and add members 35 gce_lb: 36 name: lb 37 httphealthcheck_name: lb-hc 38 region: "us-central1" 39 members: 40 - "us-central1-a/app1" 41 - "us-central1-a/app3" 42 - "us-central1-b/app2" 43 - "us-central1-b/app4" 44 httphealthcheck_name: lb-hc 45 httphealthcheck_port: 80 46 httphealthcheck_path: / 47 project_id: "{{ pid }}" 48 register: gcelb
  15. The "gce-demo" playbook gce-demo.yml 49 - name: Create an A

    record for www.erjohn.so 50 gc_dns: 51 command: create 52 name: erjohnso 53 record: www.erjohn.so. 54 ttl: 600 55 record_type: A 56 value: "{{ gcelb.external_ip }}" 57 project_id: "{{ pid }}"
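      The configuration play on the next slide targets hosts: gce_instances, a group the transcript does not show being populated. A common pattern at the time, similar to the one in Ansible's GCE guide, was to register the gce module's output and feed the new instances into an in-memory group before configuring them. The sketch below assumes a provisioning task registered its result as gce_result (a made-up name) whose instance_data field lists the new instances:

          - name: Add the new instances to an in-memory group
            add_host:
              hostname: "{{ item.public_ip }}"
              groupname: gce_instances
            with_items: gce_result.instance_data
          - name: Wait for SSH before configuring the instances
            wait_for:
              host: "{{ item.public_ip }}"
              port: 22
              timeout: 60
            with_items: gce_result.instance_data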
  16. The "gce-demo" playbook gce-demo.yml 82 - name: Deploy apache, mod_headers,

    and custom web page 83 hosts: gce_instances 84 sudo: yes 85 tasks: 86 - name: Install python-apt 87 command: apt-get install python-apt -y 88 - name: Install apache on instances 89 apt: pkg=apache2 state=present 90 - name: Create custom index.html 91 template: src=index.html.j2 dest=/var/www/index.html 92 - name: Set file stats on index.html 93 file: path=/var/www/index.html owner=root group=root mode=0644 94 - name: Deploy modified apache conf 95 copy: src=apache2.conf dest=/etc/apache2/apache2.conf 96 owner=root group=root mode=0644 97 - name: Enable mod_headers 98 file: path=/etc/apache2/mods-enabled/headers.load 99 src=/etc/apache2/mods-available/headers.load state=link 100 - name: Re-load apache 101 service: name=apache2 state=reloaded
  17. Sub-Demo: Continuous Deployment with Ansible and Google Cloud Platform
      1. PTC Agent deployed via metadata during instance creation
      2. "App" development with git hosted in Google Cloud Repositories
      3. Ansible configures the VM and "app" locally
      Diagram: app1 and app3 in us-central1-a, app2 and app4 in us-central1-b, connected to the GCP APIs via pub/sub and instance tags.
  18. Recall... (gce-demo.yml) The instance-creation play from slide 12, shown again: note the metadata_from_file entries pointing at startup-script.sh and ptc_agent.py, and the projecthosting and pubsub scopes granted to the instances.
  19. Sub-Demo (diagram): on app1, the PTC Agent subscribes to a Google Cloud Pub/Sub topic and reads the GCE metadata service, with the "app" code hosted in Google Cloud Repositories. A "$ git push" or a "$ gcloud compute instances add-tags ..." triggers the agent, moving the deployed application from v1 to v2.
  20. Details
      • PTC Agent
        • Listens for code-push events and instance-tag-change events
        • On an event, a "start" script at the top level of the configuration tree executes ansible-playbook
      • Configuration - pure Ansible
        • A role deploys Apache, a custom web page (our "app"), and a debugging script
        • Application "versions" deployed are based on instance tags (prod or canary)
        • Uses a local dynamic inventory script for classification by instance tags
      • Debugging
        • /cgi-bin/ptcd
  21. Demo Flow
      1. Start the demo: all machines have the prod instance tag and v1 deployed
         a. site.yml initially specifies only 'v1.yml'
         b. v1.yml targets machines with the prod tag (e.g. tag_apache:&tag_prod; see the sketch after this slide)
      2. We start the "canary" rollout verification using production traffic
         a. Select an instance and change its instance tags (prod -> canary)
         b. Enable both v1.yml and v2.yml (v2.yml targets tag_apache:&tag_canary)
         c. Commit and push
         d. Verify that the canary instance is running v2 and the others are still running v1
            i. Revert? Yes, flip the canary instance's tags back to prod (no need to commit/push!)
      3. Assuming we like the result, proceed with the remainder of the rollout
         a. Comment out v1.yml in site.yml
         b. Edit v2.yml and set the target to tag_apache:&tag_prod
         c. Commit and push
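      For illustration, the tag-targeted playbooks described above might look roughly like the sketch below. The file layout, role name, and app_version variable are assumptions rather than contents of the demo repository; the group names tag_apache, tag_prod, and tag_canary come from the dynamic inventory script's classification of instances by their GCE tags, and connection: local reflects that the PTC Agent runs ansible-playbook on the instance itself.

      # site.yml (sketch)
      - include: v1.yml
      # - include: v2.yml            # enabled when the canary rollout starts

      # v1.yml (sketch)
      - name: Deploy v1 to production-tagged instances
        hosts: tag_apache:&tag_prod
        connection: local
        sudo: yes
        roles:
          - { role: webapp, app_version: v1 }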
  22. Umm... Why?
      • Pros
        • New machines come up fully configured automatically (auto-scaling, PoG, etc.)
        • The demo used instance metadata, but project metadata could be used instead
        • Scalable to very large sets of machines
        • A good prototype for adapting to your needs (git branches vs. versions, environments, etc.)
      • Cons
        • No all-up view of the state of managed machines
        • Needs thought about ACLs around the update mechanism (who can git push / change tags)
        • Can be difficult to debug; no immediate feedback
        • Scopes can be challenging
      • Alternative: Ansible Tower!
        • All-up views
        • ACLs, audit history
        • Hands-free scheduled updates
        • Visualization
  23. Available now and more coming!
      Available Now!!
      • Existing modules: gce, gce_net, gce_pd, gce_lb, gc_storage, plus the inventory plugin
        • Create, destroy instances + dynamic inventory
        • Create, destroy networks, firewall rules
        • Create, destroy Persistent Disks
        • Create, destroy load balancers, healthchecks
        • Create, destroy buckets, objects
      Coming Soon!!
      • The gc_dns module [PR being reviewed]
        • Create, destroy managed zones
        • Create, update DNS records
      • New metadata_from_file for the gce module
      • More things to come...
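      As a taste of one listed module the demo did not exercise, creating and attaching a Persistent Disk with gce_pd might look like the sketch below; the disk and instance names are made up, and the parameter names are assumed from the module documentation of that era.

      - name: Create a 50 GB persistent disk and attach it to app1
        gce_pd:
          name: app1-data              # made-up disk name
          size_gb: 50
          zone: us-central1-a
          instance_name: app1
          mode: READ_WRITE
          project_id: "{{ pid }}"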
  24. Starter Pack Offer: $500 in Cloud Platform credit to launch your idea! Build. Store. Analyze. On the same infrastructure that powers Google.
      1. Go to g.co/cloudstarterpack
      2. Click 'Apply Now' and complete the application with promo code: ansible-con
      3. Start building
  25. cloud.google.com - Thank you!
      Read more at:
      • Compute Engine: https://cloud.google.com/products/compute-engine
      • Ansible + Compute Engine: http://docs.ansible.com/guide_gce.html
      • ... and look for the new gc_dns module soon! [pull-request #110]
      Get started on Google Cloud Platform: $500 credit at http://g.co/cloudstarterpack (use promo code: ansible-con)
      Questions? {'Freenode': 'erjohnso', 'GitHub': 'erjohnso', 'Twitter': 'no!'}