Slide 1

Puppet for Dummies - IPC Mainz, Germany, 14-17 October 2012 (workshop)

Slide 2

whoami
Joshua Thijssen / Netherlands
Freelance consultant and trainer @ NoxLogic & TechAdemy
Development in PHP, Python, C, Java
Lead developer of Saffire
Blog: http://adayinthelifeof.nl
Email: [email protected]
Twitter: @jaytaph

Slide 3

.plan
➡ A bit of history / intro on why to use Puppet and Vagrant
➡ Installing / using Vagrant
➡ Intro to writing Puppet manifests
➡ Actually writing Puppet manifests
➡ Misc (monitoring, MCollective, ENC, etc.)

Slide 4

Fingers crossed

Slide 5

prerequisites
vagrantup.com
ruby-lang.org
virtualbox.org

Slide 6

A bit of history

Slide 7

Installing by hand

Slide 8

More servers - preinstalls

Slide 9

Virtualization - even more servers!

Slide 10

“The cloud”(tm)

Slide 11

How are we going to manage our projects, servers, clouds, customers properly?

Slide 12

PUPPET

Slide 13

System admins vs developers

Slide 14

Puppet for sysadmins

Slide 15

Puppet for sysadmins
➡ Control your infrastructure from a single point (of failure).
➡ Documented upgrades through version control.
➡ Easy upgrades.
➡ Acceptance infrastructure environments.

Slide 16

Puppet for developers

Slide 17

Puppet for developers
LAMP

Slide 18

Puppet for developers
PENELOPE

Slide 19

Puppet for developers
PeNeLoPe = PHP, Nginx, Linux, PostgreSQL

Slide 20

Puppet for developers
LAMPGMVNMCSTRAH = Linux, Apache, MySQL, PHP, Gearman, MongoDB, CouchDB, Solr, Tika, Redis, ActiveMQ, Hadoop, Varnish, Nginx, Memcache

Slide 21

Puppet for developers
➡ How do you make sure all developers are using the same versions of your components?
➡ The same configuration?
➡ Even the same components!
➡ New developers? New development install.
➡ Keep development, acceptance, production in sync?

Slide 22

Puppet for developers
previous troubles * projects * developers =

Slide 23


Slide 24

Vagrant

Slide 25

My first vagrant

$ git clone git://gist.github.com/3863869.git vagrant001
$ cd vagrant001
$ vagrant up

Slide 26

My first vagrant

[default] Importing base box 'lucid32'...
[default] The guest additions on this VM do not match the install version of
VirtualBox! This may cause things such as forwarded ports, shared folders,
and more to not work properly. If any of those things fail on this machine,
please update the guest additions and repackage the box.
Guest Additions Version: 4.1.14
VirtualBox Version: 4.2.0
[default] Matching MAC address for NAT networking...
[default] Clearing any previously set forwarded ports...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
[default] Booting VM...
[default] Waiting for VM to boot. This can take a few minutes.
[default] VM booted and ready for use!
[default] Mounting shared folders...
[default] -- v-root: /vagrant

Slide 27

My first vagrant

Slide 28

Vagrant is a tool for building and distributing virtualized development environments.

Slide 29

AWESOMESAUCE

Slide 30

Actually, it’s a simple tool that speaks to the VirtualBox API (but it’s still awesome).

Slide 31

➡ Downloads (optionally) the requested base box.
➡ Deploys and boots up a new VM.
➡ Runs an optional provisioner (Puppet / Chef / shell).

Slide 32

Multi VM’s

Vagrantfile:

Vagrant::Config.run do |config|
  config.vm.box     = 'centos-62-64-puppet'
  config.vm.box_url = 'http://../centos-6.2-64bit-puppet-vbox.4.1.12.box'

  config.vm.define :web do |web_config|
    web_config.vm.host_name = 'web.example.org'
    web_config.vm.forward_port 80, 8080
    ...
  end

  config.vm.define :database do |db_config|
    db_config.vm.host_name = 'db.example.org'
    db_config.vm.forward_port 3306, 3306
    ...
  end
end

Slide 33

Joind.in example
https://github.com/joindin/joind.in

Vagrant::Config.run do |config|
  # We define one box (joindin), but
  config.vm.define :joindin do |ji_config|
    ji_config.vm.box       = 'centos-62-64-puppet'
    ji_config.vm.box_url   = 'http://.../centos-6.2-64bit-puppet-vbox.4.1.12.box'
    ji_config.vm.host_name = "joind.in"
    ji_config.vm.forward_port 80, 8080
    # config.vm.share_folder "v-data", "/vagrant_data", "../data"
    ji_config.vm.boot_mode = :gui

    ji_config.vm.provision :puppet do |puppet|
      puppet.manifests_path = "puppet/manifests"
      puppet.module_path    = "puppet/modules"
      puppet.manifest_file  = "joindin.pp"
      puppet.options = [
        '--verbose',
      ]
    end
  end
end

Slide 34

Quick tips
➡ Use 32bit boxes. Only use 64bit when you need to, or when you are sure all developers can run them.
➡ Use NFS mounts on Linux / OSX (not possible on Windows):

config.vm.share_folder("v-root", "/vagrant", ".",
  :nfs => (RUBY_PLATFORM =~ /linux/ or RUBY_PLATFORM =~ /darwin/))

Slide 35

Base boxes
➡ Package from current images
➡ Download them (http://vagrantbox.es)
➡ Minimal install (netinstall)
➡ vagrant user + “public” private key
➡ SSH server

Slide 36

Base boxes

$ vagrant box list
lucid32
centos-63-32bit-puppet

$ vagrant box add lucid32 lucid32.box
$ vagrant box add centos-63-32bit-puppet centos63.box

$ vagrant package
$ vagrant package --vagrantfile Vagrantfile.pkg --include README.txt

Slide 37

Shared directories
➡ Work from local directory (IDE)
➡ Run remote (33.33.33.10)
➡ /vagrant is shared by default
➡ NFS, vboxfs
➡ Watch out with file permissions!

Slide 38

PUPPET

Slide 39

➡ Open source configuration management tool.
➡ Puppet Labs (formerly Reductive Labs)
➡ Written in Ruby
➡ Open source: https://github.com/puppetlabs
➡ Commercial version available (Puppet Enterprise)

Slide 40

➡ Don’t tell HOW to do stuff.
➡ Tell WHAT to do.

HOW: “yum install httpd”, “apt-get install apache2”
WHAT: “install and run the apache webserver”
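
To make the contrast concrete, here is a minimal Puppet sketch of the two styles (not taken from the slides; "httpd" is just the CentOS package name used as an example):

# HOW (imperative): brittle and distro-specific.
exec { 'install-apache':
  command => '/usr/bin/yum install -y httpd',
}

# WHAT (declarative): Puppet picks the right package provider per platform.
package { 'httpd':
  ensure => installed,
}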

Slide 41

Schematic representation of a puppet infrastructure

Slide 42

Puppet

Slide 43

Diagram: Puppet CA and Puppet Master serving multiple Puppet Agents over HTTPS.

Slide 44

➡ puppet master (puppetmasterd)
➡ puppet cert (puppetca)
➡ puppet agent (puppetd)
➡ facter

Slide 45

Puppet master
➡ Central server
➡ File & configuration server
➡ REST(ish) over HTTPS interface

Slide 46

Puppet CA
➡ Certificate signing
➡ Creates, signs, checks x509 certificates
➡ So you don’t have to worry about them

Slide 47

List all nodes

root@puppetmaster:~# puppet cert --list --all
+ puppetmaster.noxlogic.local (74:A7:C8:27:72:0D:C1:DD:B8:71:0D:4F:37:69:3D:0C)
  puppetnode1.noxlogic.local (09:9D:1E:01:D0:A7:BA:FB:8C:F4:2D:96:78:34:54:44)

Slide 48

Sign a node

root@puppetmaster:~# puppet cert --sign puppetnode1.noxlogic.local
....
root@puppetmaster:~# puppet cert --list --all
+ puppetmaster.noxlogic.local (74:A7:C8:27:72:0D:C1:DD:B8:71:0D:4F:37:69:3D:0C)
+ puppetnode1.noxlogic.local (CC:50:49:98:1D:F9:06:36:0E:6E:31:F5:27:D8:50:D8)

Slide 49

Puppet agent
➡ Runs on every node that will be managed by puppet, as a daemon (or crontab, or mcollective).
➡ Calls the puppet master every 30 minutes for updates.
➡ Receives and executes a “catalog”.

Slide 50

Facter
➡ Runs on nodes to gather system information.
➡ Returns $variables to be used on the puppet master in the manifest files.
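
As a small illustration of how facts surface in a manifest (a sketch, not from the slides), every fact is available as a top-scope variable:

# Sketch: facts such as $fqdn and $operatingsystem can be used directly
# in resources compiled on the puppet master.
file { '/etc/motd':
  ensure  => present,
  content => "Welcome to ${fqdn}, running ${operatingsystem} ${operatingsystemrelease}\n",
}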

Slide 51

Facter

[root@puppetnode1 ~]# facter --puppet
architecture => x86_64
fqdn => puppetnode1.noxlogic.local
interfaces => eth1,eth2,lo
ipaddress_eth1 => 192.168.1.114
ipaddress_eth2 => 192.168.56.200
kernel => Linux
kernelmajversion => 2.6
operatingsystem => CentOS
operatingsystemrelease => 6.0
processor0 => Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz
puppetversion => 2.6.9

Slide 52

Facts
➡ You can create your own facts:
➡ project names
➡ master / slave databases
➡ zend server
➡ directadmin / plesk

Slide 53

Facts

zendserver.rb:

Facter.add("Zendserver") do
  confine :kernel => :linux
  setcode do
    if FileTest.exists?("/usr/local/zend/bin")
      "true"
    else
      "false"
    end
  end
end
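
Once the fact is distributed to the nodes, a manifest can branch on it. A hedged sketch (the zendserver::config class is made up, only here for illustration):

# Sketch: the fact returns the *string* "true"/"false", so compare strings.
if $zendserver == 'true' {
  include zendserver::config   # hypothetical class name
}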

Slide 54

Multi VM vagrant

$ git clone git://gist.github.com/3887712.git vagrant002
$ cd vagrant002
$ vagrant up

Slide 55

Our first manifests

Slide 56

package { "strace":
  ensure => present,
}

file { "/home/jaytaph/secret-ingredient.txt":
  ensure  => present,
  mode    => 0600,
  owner   => 'jaytaph',
  group   => 'noxlogic',
  content => "beer",
}

Slide 57

package { "httpd":
  ensure => present,
}

service { "httpd":
  ensure  => running,
  enable  => true,
  require => Package["httpd"],
}

Slide 58

Centos / Redhat
service: httpd
package: httpd
config: /etc/httpd/conf/httpd.conf
vhosts: /etc/httpd/conf.d/*.conf

Debian / Ubuntu
service: apache2
package: apache2
config: /etc/apache2/httpd.conf
vhosts: /etc/apache2/sites-available

Slide 59

class webserver {
  case $operatingsystem {
    centos, redhat: { $packagename = "httpd" }
    debian, ubuntu: { $packagename = "apache2" }
    default:        { fail("I don't know this OS/distro") }
  }

  package { "apache":
    name   => $packagename,
    ensure => installed,
  }

  service { "apache":
    name    => $packagename,
    ensure  => running,
    enable  => true,
    require => Package["apache"],
  }
}

Slide 60

[root@puppetnode1 ~]# facter --puppet
architecture => x86_64
fqdn => puppetnode1.noxlogic.local
interfaces => eth1,eth2,lo
ipaddress_eth1 => 192.168.1.114
ipaddress_eth2 => 192.168.56.200
kernel => Linux
kernelmajversion => 2.6
operatingsystem => CentOS
operatingsystemrelease => 6.0
processor0 => Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz
puppetversion => 2.6.9

Slide 61

Puppet resources
augeas, computer, cron, exec, file, filebucket, group, host, interface, k5login, macauthorization, mailalias, maillist, mcx, mount, nagios_command, nagios_contact, nagios_contactgroup, nagios_host, nagios_hostdependency, nagios_hostescalation, nagios_hostextinfo, nagios_hostgroup, nagios_service, nagios_servicedependency, nagios_serviceescalation, nagios_serviceextinfo, nagios_servicegroup, nagios_timeperiod, notify, package, resources, router, schedule, scheduled_task, selboolean, selmodule, service, ssh_authorized_key, sshkey, stage, tidy, user, vlan, yumrepo, zfs, zone, zpool
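
A few of these resource types in action, as a small hedged sketch (the user, cron job, and key are made up for illustration):

user { 'deploy':
  ensure     => present,
  managehome => true,
}

cron { 'logrotate-app':
  command => '/usr/sbin/logrotate /etc/logrotate.d/app',
  user    => 'root',
  hour    => 2,
  minute  => 0,
}

ssh_authorized_key { 'deploy-key':
  ensure => present,
  user   => 'deploy',
  type   => 'ssh-rsa',
  key    => 'AAAA...example...',
}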

Slide 62

/etc/puppet/manifests/site.pp:

node "web01.example.org" {
  include webserver
}

node /^db\d+\.example\.org$/ {
  package { "mysql-server":
    ensure => installed,
  }
}

Slide 63

node "web01.example.local" {
  $webserver_name    = "web01.example.local"
  $webserver_alias   = "www.example.local"
  $webserver_docroot = "/var/www/web01"
  include webserver
}

node "web02.example.local" {
  $webserver_name    = "web02.example.local"
  $webserver_alias   = "crm.example.local"
  $webserver_docroot = "/var/www/web02"
  include webserver
}
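
The webserver class itself is not shown here; a sketch of how it might pick up those node-scope variables under Puppet 2.x dynamic scoping (the template path is hypothetical):

# Sketch: inside the webserver class, the node-scope variables above
# could drive a vhost definition.
file { "/etc/httpd/conf.d/${webserver_name}.conf":
  ensure  => present,
  content => template('webserver/vhost.conf.erb'),  # template would use @webserver_alias, @webserver_docroot
  require => Package['apache'],
}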

Slide 64

Multi VM vagrant with puppet installed

$ git clone git://gist.github.com/3887842.git vagrant003
$ cd vagrant003
$ vagrant up

Slide 65

Crude apache install
https://gist.github.com/3887955

Slide 66


Slide 67

Monitoring
Distributed, basically...

Slide 68

Distributed monitoring
➡ Export resources:
➡ @@nagios_host
➡ @@nagios_service
➡ Collect on the (monitoring) server:
➡ Nagios_host <<| |>>
➡ Nagios_service <<| |>>
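
A hedged sketch of what exporting and collecting looks like in a manifest (hostnames come from facts; exported resources need storeconfigs enabled on the master):

# On every monitored node: export a host definition (note the @@ prefix).
@@nagios_host { $fqdn:
  ensure  => present,
  address => $ipaddress,
  use     => 'generic-host',
}

# On the monitoring server: realize everything that was exported.
Nagios_host <<| |>>
Nagios_service <<| |>>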

Slide 69

Distributed monitoring
http://www.slideshare.net/PuppetLabs/distributed-monitoring-at-hyves-puppet

Slide 70

Frontends
The Foreman - Puppet Dashboard

Slide 71


Slide 72


Slide 73


Slide 74


Slide 75

ENC
External Node Classifier

Slide 76

puppet.conf:

[master]
node_terminus = exec
external_nodes = /path/to/your/app

Slide 77

Output YAML:

---
classes:
  common:
  puppet:
  ntp:
    ntpserver: 0.pool.ntp.org
  aptsetup:
    additional_apt_repos:
      - deb localrepo.example.com/ubuntu lucid production
      - deb localrepo.example.com/ubuntu lucid vendor
parameters:
  ntp_servers:
    - 0.pool.ntp.org
    - ntp.example.com
  mail_server: mail.example.com
  iburst: true
environment: production

Slide 78

➡ Input is the nodename.
➡ Output is a YAML structure.
➡ You *CAN* mix site.pp and an ENC, but I wouldn’t recommend it. (http://docs.puppetlabs.com/guides/external_nodes.html#how-merging-works)
➡ Possible to store nodes inside databases, LDAP, etc.

Slide 79

MCollective

Slide 80

➡ Marionette Collective
➡ Server orchestration / parallel job execution system

Slide 81

Diagram: clients talk to MCollective servers (the nodes in the collective) through middleware (ActiveMQ).

Slide 82

$ mco ping
node1.phpconference.org time=51.48 ms
node2.phpconference.org time=91.23 ms
puppetmaster.phpconference.org time=91.60 ms

---- ping statistics ----
3 replies max: 91.60 min: 51.48 avg: 78.10

Slide 83

$ mco facts kernel
Report for fact: kernel
Linux found 3 times
Finished processing 3 / 3 hosts in 47.99 ms

$ mco facts hostname
Report for fact: hostname
node1 found 1 times
node2 found 1 times
puppetmaster found 1 times
Finished processing 3 / 3 hosts in 50.65 ms

Slide 84

➡ mco rpc

Slide 85

➡ Find all (zombie) processes in your collective.
➡ Find servers with 80% of memory utilized and running MySQL.
➡ Restart all Apache webservers in the UK with less than 4GB of memory, except the ones running on Debian 6.0.

Slide 86

➡ Run or deploy software
➡ Restart services
➡ Start puppet agent
➡ Upgrade your system
➡ Write your own agents!