
Ansible all the things!


Introduction to Ansible

Xabier Larrakoetxea

July 15, 2014
Transcript

  1. http://docs.ansible.com/intro_getting_started.html 192.168.100.46 | success >> { "changed": false, "ping": "pong"

    } $ ansible all -u vagrant -m ping -i '192.168.100.46,' --private- key=~/.vagrant.d/insecure_private_key
  2. Repo (Chef server) Prod servers (Chef nodes) Devops (Chef Knife)

    Pull mode Test servers (Chef nodes) TOO COMPLICATED
  3. Group vars doge0.dogebox.org ansible_ssh_host=192.168.100.40 doge1.dogebox.org ansible_ssh_host=192.168.100.41 doge2.dogebox.org ansible_ssh_host=192.168.100.42 doge3.dogebox.org ansible_ssh_host=192.168.100.43

    [web] doge[0:1].dogebox.org [db] doge2.dogebox.org [lb] doge3.dogebox.org [local:children] web db lb [local:vars] remote_user: "vagrant" ansible_ssh_user: "vagrant" ansible_ssh_private_key_file: ~/.vagrant.d/insecure_private_key hosts
  4. Group vars++ remote_user: "vagrant" #ansible_ssh_user: "vagrant" ansible_ssh_private_key_file: ~/.vagrant.d/insecure_private_key group_vars/local remote_user:

    "ubuntu" ansible_ssh_private_key_file: ~/keys/aws_prod_key my_prod_custom_variable: "such variable, very prod, much scalability" group_vars/prod working_user_auth_ssh_keys: - "~/.ssh/id_rsa_dogekey.pub" - "~/.ssh/id_rsa_dogekey2.pub" group_vars/all
  5. doge0.dogebox.org | success | rc=0 >> Linux dogebox0 3.13.0-24-generic #47-Ubuntu

    SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux doge1.dogebox.org | success | rc=0 >> Linux dogebox1 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux doge2.dogebox.org | success | rc=0 >> Linux dogebox2 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux doge3.dogebox.org | success | rc=0 >> Linux dogebox3 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux ansible local -m shell -a 'uname -a' -i ./hosts
  6. Commands Simple tasks Get information http://docs.ansible.com/intro_adhoc.html Power off N machines

    Copy files Simple deploy Create users and groups Install a package
  7. Commands Simple tasks Get information http://docs.ansible.com/intro_adhoc.html Power off N machines

    Copy files Simple deploy Create users and groups Install a package QUICK & SIMPLE
  8. Single actions For making tasks Out of the box Custom

    http://docs.ansible.com/modules_by_category.html
  9. Hide system info file: /usr/share/landscape/landscape-sysinfo.wrapper mode: rw-r-r Copy message data

    src: files/doge.txt dest: /home/vagrant/doge.txt Create the new motd src: files/30-doge dest: /etc/update-motd.d/30-doge mode: rx-r-r Update motd cmd: run-parts /etc/update-motd.d/
  10. Ansible-playbook PLAY [all] ******************************************************************** GATHERING FACTS *************************************************************** ok: [doge0.dogebox.org] TASK:

    [Deactivate ubuntu information] ***************************************** ok: [doge0.dogebox.org] TASK: [Copy the resource] ***************************************************** ok: [doge0.dogebox.org] TASK: [Create the new motd] *************************************************** ok: [doge0.dogebox.org] PLAY RECAP ******************************************************************** doge0.dogebox.org : ok=4 changed=0 unreachable=0 failed=0 $ ansible-playbook ./main.yml -i./hosts -l doge0* use -l to limit the target Run your playbooks with ansible-playbook cmd
  11. Structure - hosts: devserver tasks: - name: Update repos apt:

    update_cache=yes main.yml devserver.example.org ansible_ssh_host=192.168.100.40 prod.example.org ansible_ssh_host=192.168.100.41 [devserver] devserver.example.org [web] prod.example.org hosts The minimum files are a hosts inventory and a starting playbook
  12. Tasks - hosts: devserver tasks: - name: Update repos apt:

    update_cache=yes - name: Install python apt: pkg: python state: present update_cache: yes main.yml The tasks are the unit actions for Ansible
  13. Variables - hosts: devserver vars: # settings for ansible to

    connect to the vagrant machine ansible_ssh_user: "vagrant" ansible_ssh_private_key_file: ~/.vagrant.d/insecure_private_key docker_user: vagrant remote_user: vagrant sudo: true main.yml Variables can be used everywhere
  14. Assign variables - name: test play hosts: all tasks: -

    name: Cat motd shell: cat /etc/motd register: motd_contents main.yml Use register to assign the task output to the variable
  15. Facts doge0.dogebox.org | success >> { "ansible_facts": { "ansible_all_ipv4_addresses": [

    "10.0.2.15", "192.168.100.40" ], "ansible_all_ipv6_addresses": [ "fe80::a00:27ff:fe2e:7e37", "fe80::a00:27ff:fe61:c9a1" ], "ansible_architecture": "x86_64", "ansible_bios_date": "12/01/2006", "ansible_bios_version": "VirtualBox", ... "ansible_distribution": "Ubuntu", "ansible_distribution_release": "trusty", "ansible_distribution_version": "14.04", ... "changed": false } $ ansible doge0* -m setup -i ./hosts Get remote system info into variables with the setup module
  16. Conditionals - name: Check curl is present apt: pkg=curl when:

    ansible_os_family == "Debian" sudo: True main.yml http://docs.ansible.com/playbooks_conditionals.html To use if/else style you need two tasks, each with its own when
  17. Conditionals - name: Check curl is present apt: pkg=curl when:

    ansible_os_family == "Debian" sudo: True main.yml http://docs.ansible.com/playbooks_conditionals.html To use if/else style you need two tasks, each with its own when. That's a fact!
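The two-task if/else pattern can be sketched like this (the package names are illustrative, not from the deck):

```yaml
# Hedged sketch: emulate if/else with two tasks guarded by
# complementary `when` conditions; only one runs per host.
- name: Install Apache on Debian-family hosts
  apt: pkg=apache2 state=present
  when: ansible_os_family == "Debian"
  sudo: True

- name: Install Apache on RedHat-family hosts
  yum: name=httpd state=present
  when: ansible_os_family == "RedHat"
  sudo: True
```

Whichever condition is false is simply reported as skipped on that host.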
  18. Loops The task iterates over the with_items list, assigning each element to the

    item var http://docs.ansible.com/playbooks_loops.html - name: add several users user: name={{ item }} state=present groups=wheel with_items: - testuser1 - testuser2 - testuser3 main.yml
  19. Loops ++ You can use Python types, like lists or

    dicts vars: users: doge: name: Doge Shibe telephone: 123-456-7890 grumpy cat: name: Tard Sauce telephone: 987-654-3210 tasks: - name: Print phone records debug: msg="User {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})" with_dict: users main.yml You can register a dict or list result
  20. Loops ++ You can use Python types, like lists or

    dicts vars: users: doge: name: Doge Shibe telephone: 123-456-7890 grumpy cat: name: Tard Sauce telephone: 987-654-3210 tasks: - name: Print phone records debug: msg="User {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})" with_dict: users main.yml You can register a dict or list result It's a hash of hashes! Variables in strings use {{ }}; variables outside strings don't use notation
  21. Notifications tasks: - name: template configuration file template: src=template.j2 dest=/etc/foo.conf

    notify: - restart memcached - restart apache handlers: - name: restart memcached service: name=memcached state=restarted - name: restart apache service: name=apache state=restarted main.yml Trigger tasks with notify to exec the handlers section tasks Only notified when the task has changed something in the remote host
  22. Includes - name: placeholder foo command: /bin/foo - name: placeholder

    bar command: /bin/bar tasks/foo.yml You can pass variables in include statements http://docs.ansible.com/playbooks_roles.html tasks: - include: tasks/foo.yml main.yml
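Passing variables into an include could look like this sketch (the wp_user variable and the tasks/wordpress.yml file are made-up examples, not from the deck):

```yaml
tasks:
  # Inline key=value form
  - include: tasks/wordpress.yml wp_user=timmy
  # Equivalent explicit vars form
  - include: tasks/wordpress.yml
    vars:
      wp_user: timmy
```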
  23. ansible.cfg Change Ansible settings as you wish http://docs.ansible.com/intro_configuration.html [defaults] jinja2_extensions=jinja2.ext.do

    host_key_checking = False roles_path=../../ansible-roles/roles:../../ansible-private/roles:../../ansible-roles/ ansible.cfg
  24. Port our commands to an Ansible playbook main.yml Run in

    all the hosts ansible-playbook ./main.yml -i./hosts https://github.com/slok/devops-course/tree/master/03
  25. Selection by OS - name: Install the nginx packages yum:

    name={{ item }} state=present with_items: redhat_pkg when: ansible_os_family == "RedHat" - name: Install the nginx packages apt: name={{ item }} state=present update_cache=yes with_items: ubuntu_pkg when: ansible_os_family == "Debian" main.yml Good practice is to group tasks by OS and use include
  26. Service restart tasks: - name: template configuration file template: src=template.j2

    dest=/etc/foo.conf notify: - restart memcached - restart apache handlers: - name: restart memcached service: name=memcached state=restarted - name: restart apache service: name=apache state=restarted main.yml Good practice is to use handlers for final tasks (if makes sense)
  27. Bulk pkg install - name: Install dependencies apt: pkg={{ item

    }} state=latest with_items: - linux-image-extra-{{ ansible_kernel }} - python-pycurl - git - python - python-dev main.yml
  28. Change true/false - name: Create virtualenvs shell: if test -d

    {{ virtualenv_prefix }}/{{ item }}; then echo "false"; else virtualenv {{ virtualenv_prefix }}/{{ item }}; fi register: virtualenv_out changed_when: "virtualenv_out.stdout.find('false') == -1" with_items: virtualenvs_to_create main.yml Modules report change status out of the box; shell and command do not. Use register, bash scripting and changed_when for custom cmds
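For plain "create it if it is missing" cases, the command module's creates argument gives correct change reporting without any changed_when gymnastics (the virtualenv name below is illustrative):

```yaml
# Hedged sketch: `creates` makes Ansible skip the command
# (reporting "ok", not "changed") when the path already exists.
- name: Create virtualenv
  command: virtualenv {{ virtualenv_prefix }}/myenv creates={{ virtualenv_prefix }}/myenv
```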
  29. Tags - name: Install virtualenv pip: name=virtualenv version={{ virtualenv_version }}

    sudo: true tags: - python - virtualenv - install main.yml Use --tags with ansible-playbook to execute specific tasks http://docs.ansible.com/playbooks_tags.html
  30. Debugging - debug: msg="System {{ inventory_hostname }} has uuid {{

    ansible_product_uuid }}" - name: Display all variables/facts known for a host debug: var=hostvars[inventory_hostname] main.yml Use debug and register to watch results Use -vvvv with ansible-playbook to watch results http://docs.ansible.com/debug_module.html
  31. Exec task with user - name: Create virtualenvs shell: if

    test -d {{ virtualenv_prefix }}/{{ item }}; then echo "false"; else virtualenv {{ virtualenv_prefix }}/{{ item }}; fi register: virtualenv_out changed_when: "virtualenv_out.stdout.find('false') == -1" with_items: virtualenvs_to_create # this is needed to execute the command as different user sudo: true sudo_user: "{{ virtualenv_user }}" main.yml Use sudo and sudo_user at task level to run with a specific user
  32. Exec task with env var - name: Collect static files

    django_manage: command: collectstatic app_path: "{{ item.value.app_dir }}" virtualenv: "{{ item.value.virtualenv }}" environment: SYSTEM_ENV: "{{ system_env_vars['SYSTEM_ENV'] }}" with_dict: deployment_apps main.yml The environment keyword only applies vars per task; they are ephemeral
  33. delegate_to http://docs.ansible.com/playbooks_delegation.html - name: Check DNS sudo: no delegate_to: "{{

    checkerserver }}" action: shell dig +short "{{ ansible_hostname |lower }}".company.tld register: in_dns main.yml Used for deployments. For local actions use local_action; don't use 127.0.0.1 as the host
  34. wait_for http://docs.ansible.com/wait_for_module.html - name: Wait for webserver to come up

    wait_for: host=127.0.0.1 port=80 state=started timeout=80 main.yml Used for deployments
  35. serial - name: test play hosts: webservers serial: 3 main.yml

    Used for deployments http://docs.ansible.com/playbooks_delegation.html
  36. max_fail_percentage - hosts: webservers max_fail_percentage: 30 serial: 10 main.yml Used

    for deployments http://docs.ansible.com/playbooks_delegation.html By default, Ansible will continue executing until all hosts fail
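Combined, serial and max_fail_percentage give a rolling deployment; a hedged sketch (the group name, deploy script and port are assumptions):

```yaml
- hosts: webservers
  serial: 10                 # update 10 hosts per batch
  max_fail_percentage: 30    # fail the play if >30% of a batch fails
  tasks:
    - name: Deploy the new release
      command: /opt/deploy/release.sh
    - name: Wait for the webserver to come back up
      wait_for: host=127.0.0.1 port=80 state=started timeout=80
```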
  37. http://jinja.pocoo. org/docs/ # TYPE DATABASE USER ADDRESS METHOD # Default:

    {% for connection in postgresql_pg_hba_default %} # {{connection.comment}} {{connection.type}} {{connection.database}} {{connection.user}} {{connection.address}} {{connection.method}} {% endfor %} # Password hosts {% for host in postgresql_pg_hba_passwd_hosts %} host all all {{host}} password {% endfor %} # Trusted hosts {% for host in postgresql_pg_hba_trust_hosts %} host all all {{host}} trust {% endfor %} # User custom {% for connection in postgresql_pg_hba_custom %} # {{connection.comment}} {{connection.type}} {{connection.database}} {{connection.user}} {{connection.address}} {{connection.method}} {% endfor %} templates/pg_hba.conf.j2
  38. Vault password: Confirm Vault password: $ ansible-vault create test.yml $ANSIBLE_VAULT;1.1;AES256

    36643432343437396632343561616463653433313165653965306538303264663761353238316331 3364373564653735633966336137643262393036356637640a646365653839333762373361663365 66343061386137396138656166303238333730323133343730636531633831663534613533373465 3135323962376634630a333030613461643761653132653462666466306161363438313633653638 3865 $ cat ./test.yml Vault password: Decryption successful $ ansible-vault decrypt test.yml Python rules! $ cat ./test.yml
  39. / /inventory /roles group_vars common tasks files vars templates defaults

    handlers host_vars meta http://docs.ansible.com/playbooks_best_practices.html
  40. ├── creation.yml ├── inventory │ ├── digital_ocean.ini │ ├── digital_ocean.py

    │ ├── group_vars │ │ ├── all │ │ └── local │ ├── local │ ├── production │ └── stage ├── main.yml ├── provision.yml ├── roles │ ├── common │ │ └── tasks │ │ └── main.yml │ ├── creation │ │ ├── defaults │ │ │ └── main.yml │ │ ├── tasks │ │ │ └── main.yml │ │ └── vars │ │ ├── instances.yml │ │ └── sshkeys.yml │ ├── nginx │ │ ├── handlers │ │ ├── tasks │ │ │ └── main.yml │ │ └── templates │ │ └── nginx.conf.j2 │ ├── uwsgi │ │ └── tasks │ │ └── main.yml │ └── virtualenv │ ├── defaults │ │ └── main.yml │ ├── tasks │ │ └── main.yml │ └── vars │ └── main.yml └── Vagrantfile $ tree ./sharestack-orchestration
  41. ├── creation.yml ├── inventory │ ├── digital_ocean.ini │ ├── digital_ocean.py

    │ ├── group_vars │ │ ├── all │ │ └── local │ ├── local │ ├── production │ └── stage ├── main.yml ├── provision.yml ├── roles │ ├── common │ │ └── tasks │ │ └── main.yml │ ├── creation │ │ ├── defaults │ │ │ └── main.yml │ │ ├── tasks │ │ │ └── main.yml │ │ └── vars │ │ ├── instances.yml │ │ └── sshkeys.yml │ ├── nginx │ │ ├── handlers │ │ ├── tasks │ │ │ └── main.yml │ │ └── templates │ │ └── nginx.conf.j2 │ ├── uwsgi │ │ └── tasks │ │ └── main.yml │ └── virtualenv │ ├── defaults │ │ └── main.yml │ ├── tasks │ │ └── main.yml │ └── vars │ └── main.yml └── Vagrantfile $ tree ./sharestack-orchestration Inventory Roles
  42. Call roles - name: Apply common configuration hosts: main roles:

    - common tags: - configuration - name: Install common tools hosts: main roles: - tools tags: - common tools main.yml
  43. Ansible Galaxy downloading role 'mysql', owned by Ansibles no version

    specified, installing v1.0.2 - downloading role from https://github.com/Ansibles/mysql/archive/v1.0.2.tar.gz - extracting Ansibles.mysql to /home/slok/. virtualenvs/ansible/playbooks/roles/Ansibles.mysql Ansibles.mysql was installed successfully $ ansible-galaxy install Ansibles.mysql Find, share and reuse roles with ansible-galaxy Use the downloaded role as a regular one passing vars https://galaxy.ansible.com
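Calling a downloaded Galaxy role with custom vars might look like this sketch (mysql_port is a hypothetical role variable; check the role's own docs for the real names):

```yaml
- hosts: db
  sudo: True
  roles:
    # Parameterized role call; variable names depend on the role
    - { role: Ansibles.mysql, mysql_port: 3306 }
```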
  44. config.vm.provision "ansible" do |ansible| ansible.playbook = "main.yml" ansible.inventory_path = "hosts-dev"

    ansible.verbose = "v" ansible.host_key_checking = false #ansible.extra_vars = { # private_key: "/my/ssh/key" # Default ~/.ssh/id_rsa #} end Vagrantfile extra_vars is where we pass our custom variables
  45. Structure . ├── ansible.cfg ├── inventory │ ├── group_vars │

    │ ├── db │ │ ├── local │ │ └── web │ └── hosts-dev ├── main.yml └── roles ├── common │ └── tasks │ └── main.yml └── virtualenvwrapper ├── defaults │ └── main.yml └── tasks └── main.yml $ tree .
  46. Ansible config [defaults] # Don't add to known_hosts (will be

    changing all the time) host_key_checking = False ansible.cfg Don't use "host_key_checking = False" outside devboxes. Not checking and storing the server keys is faster
  47. Hosts # We know this from the Vagrantfile, but we

    could do dynamically devbox.favorshare.org ansible_ssh_host=192.168.100.50 [web] devbox.favorshare.org [db] devbox.favorshare.org # Our env group is local [local:children] web db inventory/hosts-dev The groups will be used for limiting and variable grouping
  48. Group vars remote_user: "vagrant" ansible_ssh_user: "vagrant" ansible_ssh_private_key_file: ~/.vagrant.d/insecure_private_key inventory/group_vars/local #

    Virtualenv role settings (https://galaxy.ansible.com/list#/roles/347) virtualenv_version: 1.11.6 inventory/group_vars/web # Postgresql role settings (https://galaxy.ansible.com/list#/roles/512) postgresql_databases: - {'name': "favorshare"} inventory/group_vars/db Grouping the variables in independent files is a good practice
  49. main - name: Common tasks in all servers hosts: all

    sudo: true roles: - common tags: - pre - common - name: Install virtualenv hosts: web sudo: true roles: - brad.virtualenv tags: - python - virtualenv - name: Install virtualenvwrapper hosts: web sudo: true roles: - virtualenvwrapper tags: - python - virtualenvwrapper - name: Install Postgres hosts: db sudo: true roles: - Ansibles.postgresql tags: - db - postgresql main.yml
  50. main - name: Common tasks in all servers hosts: all

    sudo: true roles: - common tags: - pre - common - name: Install virtualenv hosts: web sudo: true roles: - brad.virtualenv tags: - python - virtualenv - name: Install virtualenvwrapper hosts: web sudo: true roles: - virtualenvwrapper tags: - python - virtualenvwrapper - name: Install Postgres hosts: db sudo: true roles: - Ansibles.postgresql tags: - db - postgresql main.yml Groups
  51. Common - name: Update repos apt: update_cache=yes when: ansible_os_family ==

    "Debian" - name: Bulk dependency installation apt: pkg: "{{ item }}" state: present when: ansible_os_family == "Debian" with_items: - build-essential - python-dev - python - python-pip roles/common/tasks/main.yml It's good to have a common role as the first role of the playbook
  52. Virtualenv downloading role 'virtualenv', owned by brad no version specified,

    installing master - downloading role from https://github.com/brad/ansible-virtualenv/archive/master.tar.gz - extracting brad.virtualenv to /home/slok/.virtualenvs/ansible/playbooks/roles/brad. virtualenv brad.virtualenv was installed successfully $ ansible-galaxy install brad.virtualenv https://galaxy.ansible.com/list#/roles/347 # Virtualenv role settings (https://galaxy.ansible.com/list#/roles/347) virtualenv_version: 1.11.6 inventory/group_vars/web
  53. Virtualenvwrapper - name: Install virtualenvwrapper pip: name=virtualenvwrapper - name: Add

    virtualenvwrapper to bashrc lineinfile: dest: "{{ item.0.location }}" line: "{{ item.1 }}" with_subelements: - virtualenvwrapper_configuration - config roles/virtualenvwrapper/tasks/main.yml virtualenvwrapper_configuration: - location: "/home/{{ remote_user }}/.bashrc" config: - "export WORKON_HOME=$HOME/.virtualenvs" - "export PROJECT_HOME=$HOME/projects" - "source /usr/local/bin/virtualenvwrapper.sh" roles/virtualenvwrapper/defaults/main.yml with_subelements loop style is very powerful
  54. Postgresql downloading role 'postgresql', owned by Ansibles no version specified,

    installing v1.0.3 - downloading role from https://github.com/Ansibles/postgresql/archive/v1.0.3.tar.gz - extracting Ansibles.postgresql to /home/slok/.virtualenvs/ansible/playbooks/roles/Ansibles.postgresql Ansibles.postgresql was installed successfully $ ansible-galaxy install Ansibles.postgresql https://galaxy.ansible.com/list#/roles/512 # Postgresql role settings (https://galaxy.ansible.com/list#/roles/512) postgresql_databases: - {'name': "favorshare"} inventory/group_vars/db
  55. Vagrant + Ansible # Ansible all the things! config.vm.provision "ansible"

    do |ansible| ansible.playbook = "provisioning/main.yml" ansible.inventory_path = "provisioning/inventory" ansible.verbose = "v" # Check this https://github.com/mitchellh/vagrant/issues/3096 ansible.limit = 'local' end Vagrantfile Pay attention to the "limit" param in Vagrant >= 1.5
  56. Inventory ./inventories/ ├── production ├── sandbox │ ├── group_vars │

    │ ├── all │ │ └── webservers │ ├── hosts │ └── host_vars │ ├── 1.db.master.sandbox.favorshare.org │ │ ├── db.yml │ │ ├── main.yml │ │ └── sensible_data.yml │ ├── 1.lb.sandbox.favorshare.org │ │ ├── main.yml │ │ └── sensible_data.yml │ ├── 1.web.sandbox.favorshare.org │ │ ├── main.yml │ │ └── sensible_data.yml │ ├── 2.web.sandbox.favorshare.org │ │ ├── main.yml │ │ └── sensible_data.yml │ └── readme.txt └── staging $ tree ./inventories One inventory per environment
  57. Inventory ./inventories/ ├── production ├── sandbox │ ├── group_vars │

    │ ├── all │ │ └── webservers │ ├── hosts │ └── host_vars │ ├── 1.db.master.sandbox.favorshare.org │ │ ├── db.yml │ │ ├── main.yml │ │ └── sensible_data.yml │ ├── 1.lb.sandbox.favorshare.org │ │ ├── main.yml │ │ └── sensible_data.yml │ ├── 1.web.sandbox.favorshare.org │ │ ├── main.yml │ │ └── sensible_data.yml │ ├── 2.web.sandbox.favorshare.org │ │ ├── main.yml │ │ └── sensible_data.yml │ └── readme.txt └── staging $ tree ./inventories One inventory per environment Environments
  58. Vars(env level) ./inventories/sandbox/ ├── group_vars │ ├── all │ └──

    webservers ├── hosts └── host_vars ├── 1.db.master.sandbox.favorshare.org │ ├── db.yml │ ├── main.yml │ └── sensible_data.yml ├── 1.lb.sandbox.favorshare.org │ ├── main.yml │ └── sensible_data.yml ├── 1.web.sandbox.favorshare.org │ ├── main.yml │ └── sensible_data.yml ├── 2.web.sandbox.favorshare.org │ ├── main.yml │ └── sensible_data.yml └── readme.txt $ tree ./inventories/sandbox Each environment will have different vars, for example: db pass
  59. Vars(global level) ./group_vars/ ├── all ├── dbservers ├── lbservers ├──

    readme.txt └── webservers $ tree ./group_vars All the envs will share the same vars, for example: user public keys
  60. Steps Create and authorize main user Create and authorize system

    users Create service runner user Set bash stuff for users
  61. Play - name: Common authorization for all servers hosts: all

    sudo: True gather_facts: False roles: - authorization tags: - all - authorization ./provision.yml This play will call the authorization roles for all the hosts
  62. Role - include: create_main_user.yml vars: remote_user: "{{ first_remote_user }}" ansible_ssh_user:

    "{{ first_ansible_ssh_user }}" ansible_ssh_private_key_file: "{{ first_ansible_ssh_private_key_file }}" - include: give_access_to_users.yml - include: create_service_runner_user.yml - include: set_bash_stuff.yml ./roles/authorization/tasks.yml Check vars in group_vars/all Check vars in inventories/*/group_vars/all
  63. Role - include: create_main_user.yml vars: remote_user: "{{ first_remote_user }}" ansible_ssh_user:

    "{{ first_ansible_ssh_user }}" ansible_ssh_private_key_file: "{{ first_ansible_ssh_private_key_file }}" - include: give_access_to_users.yml - include: create_service_runner_user.yml - include: set_bash_stuff.yml ./roles/authorization/tasks.yml Check vars in group_vars/all Check vars in inventories/*/group_vars/all Only the first time to create and authorize the main user
  64. task (main user) - name: Create main user user: name:

    "{{ main_user }}" shell: "/bin/bash" - name: Add main user to sudoers template: src: sudoer.j2 dest: /etc/sudoers.d/{{ item.name }} owner: root group: root mode: 0440 with_items: - {'name': "{{ main_user }}" } - name: Copy main user public keys authorized_key: user: "{{ main_user }}" key: "{{ item }}" with_file: ansible_ssh_public_key_files ./roles/authorization/create_main_user.yml
  65. task (main user) - name: Create main user user: name:

    "{{ main_user }}" shell: "/bin/bash" - name: Add main user to sudoers template: src: sudoer.j2 dest: /etc/sudoers.d/{{ item.name }} owner: root group: root mode: 0440 with_items: - {'name': "{{ main_user }}" } - name: Copy main user public keys authorized_key: user: "{{ main_user }}" key: "{{ item }}" with_file: ansible_ssh_public_key_files ./roles/authorization/create_main_user.yml Small hack to reuse the template in future tasks
  66. task (main user) {{ item.name }} ALL=(ALL) NOPASSWD:ALL ./roles/authorization/templates/sudoer.j2 #

    This user will be the main user main_user: favorshare remote_user: "{{ main_user }}" ansible_ssh_user: "{{ main_user }}" ansible_ssh_private_key_file: "~/.ssh/favorshare" # These are local, based in roles/authorization/files/public_keys ansible_ssh_public_key_files: - "public_keys/favorshare.pub" #- "public_keys/id_rsa2.pub" #- "public_keys/id_rsa3.pub" ./group_vars/all
  67. task (main user) {{ item.name }} ALL=(ALL) NOPASSWD:ALL ./roles/authorization/templates/sudoer.j2 #

    This user will be the main user main_user: favorshare remote_user: "{{ main_user }}" ansible_ssh_user: "{{ main_user }}" ansible_ssh_private_key_file: "~/.ssh/favorshare" # These are local, based in roles/authorization/files/public_keys ansible_ssh_public_key_files: - "public_keys/favorshare.pub" #- "public_keys/id_rsa2.pub" #- "public_keys/id_rsa3.pub" ./group_vars/all This will be shared across all environments
  68. task (system users) - name: Create the users in the

    machines user: name: "{{ item.name }}" state: "{{ 'present' if item.name in authorized_users else 'absent' }}" shell: "/bin/bash" with_items: system_users - name: Activate granted sudo users template: src: sudoer.j2 dest: /etc/sudoers.d/{{ item.name }} owner: root group: root mode: 0440 when: item.name in sudo_users with_items: system_users - name: Deactivate denied sudo users file: path: /etc/sudoers.d/{{ item.name }} state: "{{ 'file' if item.name in sudo_users else 'absent' }}" with_items: system_users - name: Copy system users public keys authorized_key: user: "{{ item.0.name }}" key: "{{ lookup('file', item.1) }}" when: item.0.name in authorized_users with_subelements: - system_users - public_keys ./roles/authorization/give_access_to_users.yml
  69. task (system users) - name: Create the users in the

    machines user: name: "{{ item.name }}" state: "{{ 'present' if item.name in authorized_users else 'absent' }}" shell: "/bin/bash" with_items: system_users - name: Activate granted sudo users template: src: sudoer.j2 dest: /etc/sudoers.d/{{ item.name }} owner: root group: root mode: 0440 when: item.name in sudo_users with_items: system_users - name: Deactivate denied sudo users file: path: /etc/sudoers.d/{{ item.name }} state: "{{ 'file' if item.name in sudo_users else 'absent' }}" with_items: system_users - name: Copy system users public keys authorized_key: user: "{{ item.0.name }}" key: "{{ lookup('file', item.1) }}" when: item.0.name in authorized_users with_subelements: - system_users - public_keys ./roles/authorization/give_access_to_users.yml Check who is sudo Check who is authorized Each host has sudo_users and authorized_users
  70. task (system users) system_users: - name: "user1" public_keys: - "public_keys/user1.pub"

    - name: "user2" public_keys: - "public_keys/user2.pub" - name: "user3" public_keys: - "public_keys/user3.pub" ./group_vars/all.yml # The authorized users in this machine authorized_users: - user1 - user2 - user3 # The sudoer users sudo_users: - user1 - user2 ./inventories/sandbox/host_vars/*/main.yml
  71. task (system users) system_users: - name: "user1" public_keys: - "public_keys/user1.pub"

    - name: "user2" public_keys: - "public_keys/user2.pub" - name: "user3" public_keys: - "public_keys/user3.pub" ./group_vars/all.yml # The authorized users in this machine authorized_users: - user1 - user2 - user3 # The sudoer users sudo_users: - user1 - user2 ./inventories/sandbox/host_vars/*/main.yml Each host will have different vars (host level vars)
  72. task (service user) - name: Create service runner group group:

    name: "{{ service_runner_group }}" state: present system: True - name: Create service runner user user: name: "{{ service_runner_user }}" state: "present" shell: "/usr/sbin/nologin" system: True group: "{{ service_runner_group }}" createhome: False ./roles/authorization/tasks/create_service_runner_user.yml # The user that will run the services in the machines like supervisor service_runner_user: "nobody" service_runner_group: "nogroup" ./group_vars/all
  73. task (bash stuff) - name: Set bash_profile to the users

    copy: src: bash_profile dest: "/home/{{ item.name }}/.bash_profile" owner: "{{ item.name }}" group: "{{ item.name }}" mode: 0644 when: item.name in authorized_users with_items: system_users - name: Set bashrc to the users template: src: bashrc.j2 dest: "/home/{{ item.name }}/.bashrc" owner: "{{ item.name }}" group: "{{ item.name }}" mode: 0644 when: item.name in authorized_users with_items: system_users ./roles/authorization/tasks/set_bash_stuff.yml
  74. task (bash stuff) # If not running interactively, don't do

    anything [[ $- != *i* ]] && return ###################################################################################### #User prompt with colors in user and host PS1="\u@${{ bashrc_promp_color }}\h:\W$NC\$ " #Alias alias ls='ls --color' alias grep='grep -i --color' ./roles/authorization/templates/bashrc.j2 [[ -f ~/.bashrc ]] && . ~/.bashrc ./roles/authorization/files/bash_profile
  75. Steps Set the motd (Not doge xD) Set the hostname

    Install common dependencies Set environment vars
  76. Play - name: Common stuff for all the server hosts:

    all sudo: True gather_facts: False roles: - common tags: - all - common ./provision.yml This play will call the common role for all the hosts
  77. task (motd) - name: Set motd template: src: 60-warning.j2 dest:

    /etc/update-motd.d/60-warning owner: root group: root mode: 0755 ./roles/common/tasks/main.yml #!/bin/bash echo -e "{{ motd_warning_color }}{{ motd_character * motd_length }}{{ motd_warning_reset }}" echo -e "{{ motd_warning_color }}{{ motd_character }}{{ ' ' * (motd_length|int - 2)|int }}{{ motd_character }}{{ motd_warning_reset }}" echo -e "{{ motd_warning_color }}{{ motd_character }}{{ ' ' * (motd_length|int - 2)|int }}{{ motd_character }}{{ motd_warning_reset }}" echo -e "{{ motd_warning_color }}{{ motd_character }}{{ motd_message|center(motd_length|int - 2) }}{{ motd_character }}{{ motd_warning_reset }}" echo -e "{{ motd_warning_color }}{{ motd_character }}{{ ' ' * (motd_length|int - 2)|int }}{{ motd_character }}{{ motd_warning_reset }}" echo -e "{{ motd_warning_color }}{{ motd_character }}{{ ' ' * (motd_length|int - 2)|int }}{{ motd_character }}{{ motd_warning_reset }}" echo -e "{{ motd_warning_color }}{{ motd_warning_color }}{{ motd_character * motd_length }}{{ motd_warning_reset }}" ./roles/common/templates/60-warning.j2
  78. task (hostname) - name: Set hostname template: src: hostname.j2 dest:

    /etc/hostname owner: root group: root mode: 0644 - name: Set hostname to hosts lineinfile: dest: /etc/hosts line: "127.0.0.1 {{ machine_hostname }}" owner: root group: root mode: 0644 notify: Save the hostname ./roles/common/tasks/main.yml {{ machine_hostname }} ./roles/common/templates/hostname.j2 - name: Save the hostname hostname: name: "{{ machine_hostname }}" ./roles/common/handlers/main.yml
  79. task (dependencies) - name: Update package repositories apt: update_cache: True

    - name: Install common dependencies apt: name: "{{ item.name }}={{ item.version }}" state: present with_items: common_dependencies ./roles/common/tasks/main.yml common_dependencies: - name: "python" version: "2.7.5-5ubuntu3" - name: "python-dev" version: "2.7.5-5ubuntu3" - name: "python-pip" version: "1.5.4-1" - name: "git" version: "1:1.9.1-1" - name: "python-virtualenv" version: "1.11.4-1" ./group_vars/all
  80. task (env vars) - name: Set environment variables lineinfile: dest:

    /etc/environment line: "{{ item.key }}=\"{{ item.value }}\"" notify: Export environment variables with_dict: system_env_vars ./roles/common/tasks/main.yml - name: Export environment variables shell: "export {{ item.key }}=\"{{ item.value }}\"" with_dict: system_env_vars ./roles/common/handlers/main.yml system_env_vars: SYSTEM_ENV: "sandbox" ./inventories/*/group_vars/all
  81. Play - name: Install NTP hosts: all sudo: True gather_facts:

    False roles: - ntp tags: - ntp - install ./provision.yml This play will call the NTP role for all the hosts
  82. task (install) - name: Install NTP package apt: name: "ntp"

    - name: Configure NTP template: src: ntp.conf.j2 dest: "/etc/ntp.conf" owner: "root" group: "root" mode: 0644 notify: Restart ntp service ./roles/ntp/tasks/main.yml http://en.wikipedia.org/wiki/Network_Time_Protocol
  83. driftfile /var/lib/ntp/ntp.drift statistics loopstats peerstats clockstats filegen loopstats file loopstats

    type day enable filegen peerstats file peerstats type day enable filegen clockstats file clockstats type day enable {% for server in ntp_servers %} server {{ server }} {% endfor %} server ntp.ubuntu.com restrict -4 default kod notrap nomodify nopeer noquery restrict -6 default kod notrap nomodify nopeer noquery restrict 127.0.0.1 restrict ::1 ./roles/ntp/templates/ntp.conf.j2 task (install) ntp_servers: - 0.pool.ntp.org - 1.pool.ntp.org - 2.pool.ntp.org - 3.pool.ntp.org ./inventories/*/group_vars/all
  84. tasks (time zone) - name: Change timezone setting template: src:

    timezone.j2 dest: "/etc/timezone" owner: "root" group: "root" mode: 0644 notify: Save timezone ./roles/ntp/tasks/main.yml {{ servers_timezone }} ./roles/ntp/templates/timezone.j2 servers_timezone: "Europe/Madrid" ./group_vars/all
  85. Play - name: supervisord hosts: all sudo: True gather_facts: False

    roles: - supervisord tags: - supervisord - install - conf ./provision.yml This play will call the supervisor role for all the hosts http://supervisord.org/
  86. task (install) - name: Install supervisord pip: name: "supervisor" version:

    "{{ supervisord_version }}" tags: - install - supervisord ./roles/supervisord/tasks/main.yml # Supervisord configuration supervisord_version: "3.0" supervisord_conf_path: "/etc" supervisord_confs_path: "/etc/supervisord.d" supervisord_logs_path: "/var/log/supervisord" supervisord_pids_path: "/var/run/supervisord" supervisord_binary: "/usr/local/bin/supervisord" ./group_vars/all
  87. task (configure) - name: Create directories for supervisord file: path:

    "{{ item }}" state: directory owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0755 with_items: - "{{ supervisord_conf_path }}" - "{{ supervisord_logs_path }}" - "{{ supervisord_pids_path }}" - "{{ supervisord_confs_path }}" - name: Set basic supervisord configuration (not per service) template: src: supervisord.conf.j2 dest: "{{ supervisord_conf_path }}/supervisord.conf" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0222 notify: Start supervisord # Restart only if changes ./roles/supervisord/tasks/main.yml
  88. task (configure) - name: Create directories for supervisord file: path:

    "{{ item }}" state: directory owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0755 with_items: - "{{ supervisord_conf_path }}" - "{{ supervisord_logs_path }}" - "{{ supervisord_pids_path }}" - "{{ supervisord_confs_path }}" - name: Set basic supervisord configuration (not per service) template: src: supervisord.conf.j2 dest: "{{ supervisord_conf_path }}/supervisord.conf" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0222 notify: Start supervisord # Restart only if changes ./roles/supervisord/tasks/main.yml Start using the service user
  89. task (configure) [unix_http_server] file=/tmp/supervisor.sock chmod=0700 chown=nobody:nogroup [inet_http_server] port=*:9001 username={{ supervisord_http_user

    }} password={{ supervisord_http_pass }} [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisord] logfile={{ supervisord_logs_path }}/supervisord.log pidfile={{ supervisord_pids_path }}/supervisord.pid user={{ service_runner_user }} [supervisorctl] serverurl=unix:///tmp/supervisor.sock chown={{ service_runner_user }}:{{ service_runner_group }} [include] files = {{ supervisord_confs_path }}/*.conf ./roles/supervisord/templates/supervisord.conf.j2
  90. task (configure) [unix_http_server] file=/tmp/supervisor.sock chmod=0700 chown=nobody:nogroup [inet_http_server] port=*:9001 username={{ supervisord_http_user

    }} password={{ supervisord_http_pass }} [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisord] logfile={{ supervisord_logs_path }}/supervisord.log pidfile={{ supervisord_pids_path }}/supervisord.pid user={{ service_runner_user }} [supervisorctl] serverurl=unix:///tmp/supervisor.sock chown={{ service_runner_user }}:{{ service_runner_group }} [include] files = {{ supervisord_confs_path }}/*.conf ./roles/supervisord/templates/supervisord.conf.j2 OMG!! Where the fuck are these variables?!
  91. task (configure) ./inventories/sandbox/host_vars/*/sensible_data.yml $ANSIBLE_VAULT;1.1;AES256 36313839656264343164316265326630303564326135396562616430353636303232393535333635 6535633163393436373637306234353937343665323339300a346663363239393066653036373463 62636431343637326565336231383239633534643336393239396662666530623330643566363231 3365366164343133610a326337343935363930643638333464316165366438616535613738656333 33366563653433336265326566616232306137633135343962356634333232626432623133393961 32353530646236313332346262333766303533383230393165643239303761326664393161633631

    62373465396461656663343231636361323161373763633065326632303430326263613738666137 65623163346266316339623633633432306338323766303730323866346566646236666339643931 6165 Vault password: Decryption successful $ ansible-vault decrypt ./inventories/sandbox/host_vars/*/sensible_data.yml ./inventories/sandbox/host_vars/*/sensible_data.yml supervisord_http_user: "admin" supervisord_http_pass: "6aa1ac38e4d14e5e94d7f30e34252ac9 "
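Because the encrypted file lives under `host_vars/`, Ansible loads and decrypts it like any other variables file once the vault password is supplied (e.g. `ansible-playbook provision.yml --ask-vault-pass`). A minimal sketch — the playbook and `debug` task here are only illustrative:

```yaml
# Sketch: vaulted vars resolve like any other variable at runtime.
# Run with: ansible-playbook vault-demo.yml -i ./hosts --ask-vault-pass
- name: Use vaulted credentials
  hosts: all
  tasks:
    - name: Show that the vaulted variable is available
      debug:
        msg: "supervisord HTTP user is {{ supervisord_http_user }}"
```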
  92. task (init) - name: Set upstart script template: src: supervisord.init.j2

    dest: "/etc/init.d/supervisord" owner: "root" group: "root" mode: 0755 - name: Activate supervisord init.d shell: "update-rc.d -f supervisord defaults" - name: Restart supervisorctl service: name: "supervisord" state: "restarted" ./roles/supervisord/tasks/main.yml - name: Start supervisord service: name: "supervisord" state: "restarted" ./roles/supervisord/handlers/main.yml
  93. Play - name: Install postgresql hosts: dbservers sudo: True roles:

    - postgresql - name: Configure postgresql hosts: dbservers sudo: True gather_facts: False roles: - postgresql-postinstall ./provision.yml This play will call the postgres roles for the dbservers hosts
  94. Role(s) - include: install.yml - include: configuration.yml ./roles/postgresql/tasks/main.yml - include:

    configuration.yml - include: databases.yml - include: users.yml ./roles/postgresql-postinstall/tasks/main.yml
  95. task (install) - name: Install dependencies apt: name: "{{ item

    }}" state: present with_items: - "python-psycopg2" - "python-pycurl" - name: Install Postgresql apt: name: "{{ item }}" state: present with_items: - "postgresql-{{postgresql_version}}" - "postgresql-client-{{postgresql_version}}" - name: Stop Postgresql service: name: postgresql state: "stopped" ./roles/postgresql/tasks/install.yml postgresql_version: 9.3 ./group_vars/dbservers.yml
  96. - name: PostgreSQL | Make sure the postgres data directory

    exists file: path: "{{postgresql_data_directory}}" owner: "{{ postgresql_service_user }}" group: "{{ postgresql_service_group }}" state: directory mode: 0700 - name: PostgreSQL | Update configuration - pt. 1 (pg_hba.conf) template: src: pg_hba.conf.j2 dest: "{{postgresql_conf_directory}}/pg_hba.conf" owner: "{{ postgresql_service_user }}" group: "{{ postgresql_service_group }}" mode: 0640 register: postgresql_configuration_pt1 ./roles/postgresql/tasks/configuration.yml task (config)
  97. - name: PostgreSQL | Update configuration - pt. 2 (postgresql.conf)

    template: src: postgresql.conf.j2 dest: "{{postgresql_conf_directory}}/postgresql.conf" owner: "{{ postgresql_service_user }}" group: "{{ postgresql_service_group }}" mode: 0640 register: postgresql_configuration_pt2 - name: PostgreSQL | Create folder for additional configuration files file: name: "{{postgresql_conf_directory}}/conf.d" state: directory owner: "{{ postgresql_service_user }}" group: "{{ postgresql_service_group }}" mode: 0755 ./roles/postgresql/tasks/configuration.yml task (config)
  98. postgresql_version: 9.3 postgresql_encoding: 'UTF-8' postgresql_locale: 'en_US.UTF-8' postgresql_default_auth_method: "md5" postgresql_port: 5432

    postgresql_admin_user: "postgres" postgresql_service_user: "{{ service_runner_user }}" postgresql_service_group: "{{ service_runner_group }}" postgresql_listen_addresses: - "*" postgresql_databases: - name: "favorshare" - name: "favorshare-dev" # Trust local machine, don't trust external connections postgresql_pg_hba_default: - { type: local, database: all, user: '{{ postgresql_admin_user }}', address: '', method: 'trust', comment: '' } - { type: local, database: all, user: all, address: '', method: 'trust', comment: '"local" is for Unix domain socket connections only' } - { type: host, database: all, user: all, address: '127.0.0.1/32', method: 'trust', comment: 'IPv4 local connections:' } - { type: host, database: all, user: all, address: '::1/128', method: 'trust', comment: 'IPv6 local connections:' } - { type: host, database: all, user: all, address: '0.0.0.0/0', method: '{{ postgresql_default_auth_method }}', comment: 'Not local connections' } # Supervisord run script for postgres supervisord_postgresql_run_script: "/usr/local/bin/run_postgresql.sh" ./roles/postgresql/tasks/configuration.yml task (config)
  99. - name: Stop postgres (run by the system) # This will

    stop it only if it was started by the system, not by supervisord service: name: "postgresql" state: "stopped" - name: Change user owner to all postgres files file: path: "{{ item }}" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" recurse: true with_items: - "/run/postgresql" - "/etc/postgresql" - "/var/log/postgresql" - "/var/lib/postgresql" - name: Deactivate postgres init.d shell: "update-rc.d postgresql disable" ./roles/postgresql-postinstall/tasks/configuration.yml task (config2)
  100. - name: Deactivate auto start of postgresql lineinfile: dest: "/etc/postgresql/{{

    postgresql_version }}/main/start.conf" regexp: '^(auto|disabled)' line: 'manual' backrefs: True - name: Deactivate postgres background init lineinfile: dest: "/etc/postgresql/{{ postgresql_version }}/main/postgresql.conf" regexp: '^(silent_mode *= *)on' line: '\1off' backrefs: True - name: Create pid path on start up lineinfile: dest: "/etc/rc.local" insertbefore: '^exit +0' line: "install -d -m 2775 -o {{ service_runner_user }} -g {{ service_runner_group }} {{ postgresql_run_path }}" ./roles/postgresql-postinstall/tasks/configuration.yml task (config2)
  101. - name: Set supervisord postgres configuration template: src: postgresql.conf.j2 dest: "{{

    supervisord_confs_path}}/postgresql.conf" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0222 - name: Set supervisord postgres init script template: src: run_postgresql.sh.j2 dest: "{{ supervisord_postgresql_run_script }}" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0755 - name: Restart supervisorctl service: name: "supervisord" state: "restarted" - name: Start postgres (run by supervisord) supervisorctl: name: "postgresql" state: "restarted" ./roles/postgresql-postinstall/tasks/configuration.yml task (config2)
  102. [program:postgres] ;user={{ service_runner_user }} ;group={{ service_runner_group }} command={{ supervisord_postgresql_run_script }}

    autostart=true autorestart=true stderr_logfile={{ supervisord_logs_path }}/postgres_err.log stdout_logfile={{ supervisord_logs_path }}/postgres_out.log redirect_stderr=true stopsignal=QUIT ./roles/postgresql-postinstall/templates/postgresql.conf.j2 task (config2) #!/bin/sh # This script is run by Supervisor to start PostgreSQL 9.3 in foreground mode if [ -d {{ postgresql_run_path }} ]; then chmod 2775 {{ postgresql_run_path }} else install -d -m 2775 -o {{ service_runner_user }} -g {{ service_runner_group }} {{ postgresql_run_path }} #mkdir {{ postgresql_run_path }} #chown {{ service_runner_user }}:{{ service_runner_group }} {{ postgresql_run_path }} fi #exec sudo -u {{ service_runner_user }} "/usr/lib/postgresql/{{ postgresql_version }}/bin/postgres -D /var/lib/postgresql/{{ postgresql_version }}/main -c config_file=/etc/postgresql/{{ postgresql_version }}/main/postgresql.conf" exec /usr/lib/postgresql/{{ postgresql_version }}/bin/postgres -D /var/lib/postgresql/{{ postgresql_version }}/main -c config_file=/etc/postgresql/{{ postgresql_version }}/main/postgresql.conf ./roles/postgresql-postinstall/templates/run_postgresql.sh.j2
  103. - name: Make sure the PostgreSQL databases are present postgresql_db:

    name: "{{item.name}}" encoding: "{{postgresql_encoding}}" lc_collate: "{{postgresql_locale}}" lc_ctype: "{{postgresql_locale}}" template: "template0" state: present with_items: postgresql_databases when: postgresql_databases|length > 0 - name: Add hstore to the databases with the requirement sudo: yes sudo_user: "{{ postgresql_service_user }}" shell: "psql {{item.name}} -c 'CREATE EXTENSION IF NOT EXISTS hstore;'" with_items: postgresql_databases when: item.hstore is defined and item.hstore - name: Add uuid-ossp to the database with the requirement sudo: yes sudo_user: "{{ postgresql_service_user }}" shell: psql {{item.name}} -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";' with_items: postgresql_databases when: item.uuid_ossp is defined and item.uuid_ossp ./roles/postgresql-postinstall/tasks/databases.yml task (dbs)
  104. - name: Make sure the PostgreSQL users are present postgresql_user:

    name: "{{item.name}}" password: "{{item.pass | default('pass')}}" state: present login_host: "{{item.host | default('localhost')}}" with_items: postgresql_users when: postgresql_users|length > 0 - name: Update the user privileges postgresql_user: name: "{{item.name}}" db: "{{item.db}}" priv: "{{item.priv | default('ALL')}}" state: present login_host: "{{item.host | default('localhost')}}" with_items: postgresql_user_privileges when: postgresql_users|length > 0 ./roles/postgresql-postinstall/tasks/users.yml task (users)
  105. Play - name: Install uwsgi hosts: webservers sudo: True roles: -

    gdamjan.uwsgi tags: - install - uwsgi - name: Post install uwsgi hosts: webservers sudo: True gather_facts: False roles: - uwsgi-postinstall tags: - postinstall - configuration - uwsgi ./provision.yml This play will call the uwsgi roles for webservers hosts
  106. task (install) downloading role 'uwsgi', owned by gdamjan no version

    specified, installing master - downloading role from https://github.com/gdamjan/ansible-uwsgi/archive/master.tar.gz - extracting gdamjan.uwsgi to ./roles/gdamjan.uwsgi gdamjan.uwsgi was installed successfully ansible-galaxy install gdamjan.uwsgi -p ./roles/ # uwsgi settings uwsgi_version: "2.0.5.1" ./group_vars/webservers https://galaxy.ansible.com/list#/roles/90
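To make role downloads reproducible, later Ansible releases let `ansible-galaxy` read a requirements file instead of installing `master` ad hoc — a hedged sketch (the file name and flag availability are assumptions; check your Ansible version):

```yaml
# requirements.yml (hypothetical) — install with:
#   ansible-galaxy install -r requirements.yml -p ./roles/
- src: gdamjan.uwsgi
  version: master   # could be pinned to a tag or commit instead
```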
  107. task (conf) - name: Set supervisord apps uwsgi configuration template:

    src: uwsgi-app.conf.j2 dest: "{{ supervisord_confs_path}}/uwsgi-{{ item.key }}.conf" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0222 with_dict: deployment_apps ./roles/uwsgi-postinstall/tasks/main.yml [program:uwsgi-{{ item.key }}] ;user={{ service_runner_user }} ;group={{ service_runner_group }} command=uwsgi --ini {{ item.value.app_dir }}/{{ item.key }}/uwsgi.ini autostart=true autorestart=true stderr_logfile={{ supervisord_logs_path }}/{{ item.key }}_err.log stdout_logfile={{ supervisord_logs_path }}/{{ item.key }}_out.log redirect_stderr=true stopsignal=QUIT ./roles/uwsgi-postinstall/templates/uwsgi-app.conf.j2
  108. task (conf) deployment_apps: favorshare: name: "favorshare" version: "master" # Could

    be a hash, branch or tag name repo: "https://github.com/slok/favorshare" location: "{{ apps_path }}/favorshare" static_files_path: "{{ apps_path }}/favorshare/static" app_dir: "{{ apps_path }}/favorshare/favorshare" virtualenv: "{{ virtualenvs_path }}/favorshare" socket: "/var/run/favorshare/favorshare.sock" uwsgi_http_host: "" #"0.0.0.0" uwsgi_http_port: "" #8000 pidfile: "/var/run/favorshare/favorshare.pid" module: "favorshare.wsgi" processes: 5 ./group_vars/webservers These settings will be used a lot from now on
  109. Play - name: Prepare system for app deployments hosts: webservers

    sudo: True gather_facts: False roles: - app-preparation tags: - predeploy - configuration - apps ./provision.yml This play will call the app-preparation role for webservers hosts
  110. task (install) - name: Create necessary paths on start up

    lineinfile: dest: "/etc/rc.local" insertbefore: '^exit +0' line: "install -d -m 2775 -o {{ service_runner_user }} -g {{ service_runner_group }} {{ item }}" with_items: - "{{ virtualenvs_path }}" - "/var/run/favorshare" - "{{ apps_path }}" - name: Create necessary paths file: path: "{{ item }}" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" state: directory mode: 0775 with_items: - "{{ virtualenvs_path }}" - "/var/run/favorshare" - "{{ apps_path }}" ./roles/common/tasks/main.yml
  111. task (dependencies) - name: Install App dependencies apt: name: "{{

    item.name }}={{ item.version }}" state: present with_items: app_dependencies tags: - apt - dependencies - app ./roles/app-preparation/tasks/main.yml # Some common settings virtualenvs_path: "/opt/virtualenvs" apps_path: "/srv/www" ./group_vars/webservers
  112. Play - name: Install and configure Nginx hosts: webservers sudo:

    True gather_facts: False roles: - nginx tags: - install - configuration - nginx ./provision.yml This play will call the Nginx role for webservers hosts
  113. task (install) - name: Install Nginx apt: name: "nginx={{ nginx_version

    }}" state: "present" - name: Stop Nginx service: name: "nginx" state: "stopped" - name: Deactivate nginx init.d shell: "update-rc.d nginx disable" ./roles/nginx/tasks/main.yml # Nginx settings nginx_version: "1.4.6-1ubuntu3" nginx_workers: 5 nginx_worker_connections: 1024 nginx_conf_path: "/etc/nginx" nginx_confs_path: "{{ nginx_conf_path }}/conf.d" nginx_sites_enabled_path: "{{ nginx_conf_path }}/sites-enabled" ./group_vars/webservers
  114. task (config) - name: Change user owner to all nginx

    files file: path: "{{ item }}" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" recurse: true with_items: - "{{ nginx_conf_path }}" - "/var/log/nginx" - "/var/lib/nginx" - name: Create nginx necessary paths on start up lineinfile: dest: "/etc/rc.local" insertbefore: '^exit +0' line: "install -d -m 2775 -o {{ service_runner_user }} -g {{ service_runner_group }} {{ item }}" with_items: - "{{ nginx_conf_path }}" - "/var/log/nginx" ./roles/nginx/tasks/main.yml
  115. task (config) - name: Set supervisord nginx configuration template: src:

    nginx-supervisor.conf.j2 dest: "{{ supervisord_confs_path }}/nginx.conf" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0222 notify: Restart nginx # restart will start if is stopped ./roles/nginx/tasks/main.yml [program:nginx] ;user={{ service_runner_user }} ;group={{ service_runner_group }} command=/usr/sbin/nginx autostart=true autorestart=true stderr_logfile={{ supervisord_logs_path }}/nginx_err.log stdout_logfile={{ supervisord_logs_path }}/nginx_out.log redirect_stderr=true stopsignal=QUIT ./roles/nginx/templates/nginx-supervisor.conf.j2
  116. task (apps) - name: Clean nginx examples file: path: "{{

    nginx_sites_enabled_path }}/default" state: "absent" - name: Set nginx configuration template: src: nginx.conf.j2 dest: "{{ nginx_conf_path }}/nginx.conf" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0744 - name: Set nginx apps site template: src: nginx-app.conf.j2 dest: "{{ nginx_sites_enabled_path }}/{{ item.key }}.conf" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0744 with_dict: deployment_apps notify: Restart nginx ./roles/nginx/tasks/main.yml
  117. task (apps) - name: Restart nginx supervisorctl: name: 'nginx' state:

    "restarted" ./roles/nginx/handlers/main.yml
  118. task (apps) daemon off; # IMPORTANT IF WE RUN THIS

    WITH SUPERVISORD #user {{ service_runner_user }}; # Only used if nginx started as root worker_processes {{ nginx_workers }}; pid {{ supervisord_pids_path }}/nginx.pid; events { worker_connections {{ nginx_worker_connections }}; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include {{ nginx_conf_path }}/mime.types; default_type application/octet-stream; gzip on; gzip_disable "msie6"; include {{ nginx_confs_path }}/*.conf; include {{ nginx_sites_enabled_path }}/*; } ./roles/nginx/templates/nginx.conf.j2
  119. task (apps) upstream {{ item.key }} { {% if item.value.uwsgi_http_host

    and item.value.uwsgi_http_port %} server 127.0.0.1:{{ item.value.uwsgi_http_port }}; {% else %} server unix:{{ item.value.socket }}; {% endif %} } server { listen {{ nginx_listen_port }}; server_name {{ ansible_ssh_host }}; #error_page 404 /404.html; #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # an HTTP header important enough to have its own Wikipedia entry: # http://en.wikipedia.org/wiki/X-Forwarded-For proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; location {{ django_settings[item.key]['static_url'] }} { alias {{ item.value.static_files_path }}; } location / { include uwsgi_params; uwsgi_pass {{ item.key }}; } } ./roles/nginx/templates/nginx-app.conf.j2
  120. Play - name: Install and configure HAproxy hosts: lbservers sudo:

    True gather_facts: False roles: - haproxy tags: - install - configuration - haproxy ./provision.yml This play will call the haproxy role for the lbservers hosts
  121. task (install) - name: Install HAproxy apt: name: "haproxy={{ haproxy_version

    }}" state: "present" - name: Install Socat apt: name: "socat" state: "present" ./roles/haproxy/tasks/main.yml # HAproxy settings haproxy_version: "1.4.24-2" haproxy_listen_port: 80 haproxy_conf_path: "/etc/haproxy" haproxy_log_file: "/var/log/haproxy_1.log" haproxy_listen_addres: "*" haproxy_balance_algorithm: "roundrobin" haproxy_admin_socket: "/var/lib/haproxy/stats" ./group_vars/lbservers
  122. task (init) # HAproxy should run as root so we

    will not use supervisord; instead we use # the system's init scripts - name: Activate haproxy init.d (1) shell: "update-rc.d haproxy enable" - name: Activate haproxy init.d (2) lineinfile: dest: "/etc/default/haproxy" regexp: "^(ENABLED=).*" line: '\g<1>1' backrefs: True ./roles/haproxy/tasks/main.yml
  123. task (config) - name: Set haproxy configuration template: src: haproxy.cfg.j2

    dest: "{{ haproxy_conf_path }}/haproxy.cfg" owner: "root" group: "root" mode: 0744 notify: Restart haproxy - name: Set haproxy rsyslog template: src: 49-haproxy.conf.j2 dest: "/etc/rsyslog.d/49-haproxy.conf" owner: "root" group: "root" mode: 0744 notify: Restart rsyslog ./roles/haproxy/tasks/main.yml
  124. - name: Restart haproxy service: name: "haproxy" state: "restarted" -

    name: Restart rsyslog service: name: "rsyslog" state: "restarted" ./roles/haproxy/handlers/main.yml task (config)
  125. global log 127.0.0.1 local1 notice #log loghost local0 info maxconn

    4096 #chroot /usr/share/haproxy daemon user haproxy group haproxy # turn on stats unix socket stats socket {{ haproxy_admin_socket }} level admin defaults log global mode http option httplog option dontlognull retries 3 maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 listen favorshare {{ haproxy_listen_addres }}:{{ haproxy_listen_port }} mode http balance {{ haproxy_balance_algorithm }} {% for host in groups['webservers'] %} server {{ host }} {{ hostvars[host]['ansible_ssh_host']}}:{{ nginx_listen_port }} check {% endfor %} ./roles/haproxy/templates/haproxy.cfg.j2 task (config)
  126. # .. otherwise consider putting these two in /etc/rsyslog.conf instead:

    $ModLoad imudp $UDPServerAddress 127.0.0.1 $UDPServerRun 514 # ..and in any case, put these two in /etc/rsyslog.d/49-haproxy.conf: local1.* -{{ haproxy_log_file }} & ~ # & ~ means not to put what matched in the above line anywhere else for the rest of the rules # http://serverfault.com/questions/214312/how-to-keep-haproxy-log-messages-out-of-var-log- syslog ./roles/haproxy/templates/49-haproxy.conf.j2 task (config)
  127. Phased Deployment 0 Downtime One small change at a time Easier rollback

    http://en.wikipedia.org/wiki/Phased_implementation
  128. Steps Disable server in the LB Update app Migrate database

    Enable server in the LB Update app dependencies Collect statics
  129. Steps Disable server in the LB Update app Migrate database

    Enable server in the LB Update app dependencies Collect statics Pre task Task Post task
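The three phases above map directly onto Ansible's playbook anatomy: `pre_tasks` run before the roles, `post_tasks` after them. A minimal skeleton of the deploy play (task bodies elided; the concrete version lives in `./deploy.yml`) might look like:

```yaml
# Sketch of the phased deploy: serial: 1 walks the webservers one at a time,
# so only one host is ever out of the load balancer.
- name: Ship it!
  hosts: webservers
  sudo: True
  serial: 1
  pre_tasks:
    - name: Disable the server in the load balancer
      # ...
  roles:
    - deployment
  post_tasks:
    - name: Enable the server in the load balancer
      # ...
```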
  130. Play # Deploy stuff - name: Ship it! hosts: webservers

    sudo: True serial: 1 ./deploy.yml
  131. Play # Deploy stuff - name: Ship it! hosts: webservers

    sudo: True serial: 1 ./deploy.yml Deploy on webservers one by one
  132. Pre_tasks # Deploy pre tasks pre_tasks: - name: Disable the

    favorshare app in haproxy shell: echo "disable server favorshare/{{ inventory_hostname }}" | socat stdio "{{ hostvars[item]['haproxy_admin_socket'] }}" delegate_to: "{{ item }}" with_items: groups.lbservers ./deploy.yml
  133. Pre_tasks # Deploy pre tasks pre_tasks: - name: Disable the

    favorshare app in haproxy shell: echo "disable server favorshare/{{ inventory_hostname }}" | socat stdio "{{ hostvars[item]['haproxy_admin_socket'] }}" delegate_to: "{{ item }}" with_items: groups.lbservers ./deploy.yml This will execute the action in all the load balancers
  134. Deploy task # Deploy roles roles: - deployment ./deploy.yml -

    include: apps.yml - include: virtualenvs.yml - include: migrations.yml - include: statics.yml ./roles/deployment/tasks/main.yml
  135. Update apps # We could do this per individual -

    name: Update apps git: repo: "{{ item.value.repo }}" dest: "{{ item.value.location }}" version: "{{ item.value.version }}" accept_hostkey: True with_dict: deployment_apps - name: Check apps owner file: path: "{{ item.value.location }}" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" recurse: true with_dict: deployment_apps ./roles/deployment/tasks/apps.yml
  136. Update apps - name: Set apps uwsgi files template: src:

    uwsgi.ini.j2 dest: "{{ item.value.app_dir }}/{{ item.key }}/uwsgi.ini" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0422 with_dict: deployment_apps - name: Set apps project settings template: src: settings.py.j2 dest: "{{ deployment_apps[item.key]['app_dir'] }}/{{ item.value.settings_path}}" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" mode: 0644 with_dict: django_settings ./roles/deployment/tasks/apps.yml
  137. Update apps django_settings: favorshare: settings_path: "favorshare/settings/staging.py" requirements_path: "{{ deployment_apps['favorshare']['location'] }}/requirements/staging.txt"

    static_url: "/static" secret_key: "6h5f1mskc6lj5f44qvv8b3x7=_nfm99ymjkku_60ew)2c4vkx2" debug: "False" template_debug: "False" allowed_hosts: - "*" db_name: "favorshare" db_user: "{{ hostvars['1.db.master.sandbox.favorshare.org']['postgresql_users'][0]['name'] }}" db_password: "{{ hostvars['1.db.master.sandbox.favorshare.org']['postgresql_users'][0]['pass'] }}" db_host: "{{ hostvars['1.db.master.sandbox.favorshare.org']['ansible_ssh_host'] }}" db_port: "{{ hostvars['1.db.master.sandbox.favorshare.org']['postgresql_port'] }}" internal_ips: [] ./inventories/sandbox/group_vars/webservers
  138. Update apps from .base import * SECRET_KEY = '{{ item.value.secret_key

    }}' DEBUG = {{ item.value.debug }} TEMPLATE_DEBUG = {{ item.value.template_debug }} ALLOWED_HOSTS = [ {% for host in item.value.allowed_hosts %} "{{ host }}", {% endfor %} ] DATABASES = { "default": { "ENGINE": "django.db.backends.postgresql_psycopg2", "NAME": "{{ item.value.db_name }}", "USER": "{{ item.value.db_user }}", "PASSWORD": "{{ item.value.db_password }}", "HOST": "{{ item.value.db_host }}", "PORT": "{{ item.value.db_port }}", } } INTERNAL_IPS = ( {% for ips in item.value.internal_ips %} "{{ ips }}", {% endfor %} ) STATIC_ROOT = '{{ deployment_apps[item.key]['static_files_path'] }}' ./roles/deployment/templates/settings.py.j2
  139. Update apps [uwsgi] chdir={{ item.value.app_dir }} plugins=python virtualenv={{ item.value.virtualenv }}

    #env=DJANGO_SETTINGS_MODULE=favorshare.settings.local env=SYSTEM_ENV={{ system_env_vars['SYSTEM_ENV'] }} module={{ item.value.module }}:application master=True pidfile={{ item.value.pidfile }} pythonpath=%(chdir) {% if item.value.uwsgi_http_host and item.value.uwsgi_http_port %} http={{ item.value.uwsgi_http_host }}:{{ item.value.uwsgi_http_port }} {% else %} socket={{ item.value.socket }} {% endif %} vacuum=True processes = {{ item.value.processes }} uid={{ service_runner_user }} gid={{ service_runner_group }} ./roles/deployment/templates/uwsgi.ini.j2
  140. Virtualenvs - name: Check virtualenv created stat: path: "{{ item.value.virtualenv

    }}" register: virtualenvs_exist with_dict: deployment_apps - name: Create virtualenvs shell: "virtualenv {{ item.item.value.virtualenv }}" when: not item.stat.exists with_items: virtualenvs_exist.results - name: Install app dependencies pip: requirements: "{{ django_settings[item.key]['requirements_path'] }}" virtualenv: "{{ item.value.virtualenv }}" with_dict: deployment_apps sudo: True sudo_user: "{{ service_runner_user }}" ./roles/deployment/tasks/virtualenvs.yml
  141. Virtualenvs - name: Check virtualenvs owner file: path: "{{ item.value.virtualenv

    }}" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" recurse: true with_dict: deployment_apps - name: Restart applications supervisorctl: name: 'uwsgi-{{ item.key }}' state: "restarted" with_dict: deployment_apps ./roles/deployment/tasks/virtualenvs.yml
  142. DB migrations - name: Sync database django_manage: command: syncdb app_path:

    "{{ item.value.app_dir }}" virtualenv: "{{ item.value.virtualenv }}" environment: SYSTEM_ENV: "{{ system_env_vars['SYSTEM_ENV'] }}" with_dict: deployment_apps - name: Migrate database django_manage: command: migrate app_path: "{{ item.value.app_dir }}" virtualenv: "{{ item.value.virtualenv }}" environment: SYSTEM_ENV: "{{ system_env_vars['SYSTEM_ENV'] }}" with_dict: deployment_apps ./roles/deployment/tasks/migrations.yml
  143. Statics - name: Delete static files file: path: "{{ item.value.static_files_path

    }}" state: "absent" with_dict: deployment_apps - name: Collect static files django_manage: command: collectstatic app_path: "{{ item.value.app_dir }}" #settings: "{{ item.value.app_dir }}/django_settings[item.key]['settings_path'] }}" virtualenv: "{{ item.value.virtualenv }}" environment: SYSTEM_ENV: "{{ system_env_vars['SYSTEM_ENV'] }}" with_dict: deployment_apps - name: Check static files owner file: path: "{{ item.value.static_files_path }}" owner: "{{ service_runner_user }}" group: "{{ service_runner_group }}" recurse: true with_dict: deployment_apps ./roles/deployment/tasks/statics.yml
  144. Post_tasks post_tasks: - name: Wait for webserver to come up

    wait_for: host={{ hostvars[inventory_hostname]['ansible_ssh_host'] }} port={{ nginx_listen_port }} state=started timeout=80 - name: Enable the server in haproxy shell: echo "enable server favorshare/{{ inventory_hostname }}" | socat stdio "{{ hostvars[item]['haproxy_admin_socket'] }}" delegate_to: "{{ item }}" with_items: groups.lbservers ./deploy.yml
  145. Post_tasks post_tasks: - name: Wait for webserver to come up

    wait_for: host={{ hostvars[inventory_hostname]['ansible_ssh_host'] }} port={{ nginx_listen_port }} state=started timeout=80 - name: Enable the server in haproxy shell: echo "enable server favorshare/{{ inventory_hostname }}" | socat stdio "{{ hostvars[item]['haproxy_admin_socket'] }}" delegate_to: "{{ item }}" with_items: groups.lbservers ./deploy.yml Wait for Nginx to be up and running
  146. Icons: Entypo, Flaticons, Octicons Typography: Google web fonts Logos: Official

    logos Github: https://github.com Google docs: https://docs.google.com Ansible: http://www.ansible.com Syntax highlighter: http://markup.su/highlighter