Slide 1

Proxmox
Monheim, 4 January 2016
Philipp Haußleiter

Slide 2

Agenda
Proxmox
Installation
Cluster Setup
Distributed FS

Slide 3

Proxmox

Slide 4

About Proxmox
Proxmox Virtual Environment is a powerful open-source server virtualization platform, based on KVM and LXC.
https://pve.proxmox.com/wiki/Main_Page

Slide 5

History
Proxmox VE 4.1, 4.0 (+ beta2, beta1)
Proxmox VE 3.4, 3.3, 3.2, 3.1, 3.0
Proxmox VE 2.3, 2.2, 2.1, 2.0
Proxmox VE 1.9, 1.8, 1.7
Proxmox VE 1.6 (updated), 1.6 - ISO installer with 2.6.32 kernel with OpenVZ, including KVM 0.12.5
Proxmox VE 1.5 (two releases) - new kernels 2.6.24 and 2.6.32, including KVM 0.12.4 and gPXE
Proxmox VE 1.4 (+ beta2, beta1), 1.3, 1.2, 1.1
Proxmox VE 1.0 - first stable release
Proxmox VE 0.9beta2, 0.9
15.4.2008: first public release
19.02.2015: ZFS support added
22.06.2015: OpenVZ support removed, LXC added (=> 3.x kernel)
11.12.2015: current release, based on Debian Jessie 8.2.0 with Linux kernel 4.2.6
http://pve.proxmox.com/wiki/Roadmap

Slide 6

Features
Lightweight (1-20 nodes)
Open source (commercial support is available)
Kernel-based Virtual Machine (KVM)
Container-based virtualization (LXC, formerly OpenVZ)
HA/cluster support (incl. live migration)
Multi-user
http://www.proxmox.com/en/proxmox-ve/features

Slide 7

Comparison (Proxmox VE | VMware vSphere | Windows Hyper-V | Citrix XenServer)
Guest operating system support: Windows and Linux (KVM); other operating systems are known to work and are community supported | Windows, Linux, UNIX | modern Windows OS, Linux support is limited | most Windows OS, Linux support is limited
Open source: yes | no | no | yes
Linux Containers (LXC, known as OS virtualization): yes | no | no | no
Single view for management (centralized control): yes | yes, but requires a dedicated management server (or VM) | yes, but requires a dedicated management server (or VM) | yes
Simple subscription structure: yes, one subscription price, all features enabled | no | no | no
High availability: yes | yes | requires Microsoft Failover Clustering, limited guest OS support | yes
Live VM snapshots (backup of a running VM): yes | yes | limited | yes
Bare-metal hypervisor: yes | yes | yes | yes
Virtual machine live migration: yes | yes | yes | yes
Max. RAM and CPU per host: 160 CPU / 2 TB RAM | 160 CPU / 2 TB RAM | 64 CPU / 1 TB RAM | ?

Slide 8

Services
Standard Linux tooling:
Debian based (8.2)
LXC
KVM
iptables
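Because it is plain Debian underneath, the whole stack can be inspected with standard tooling; a minimal sketch (output omitted):

pveversion -v                      # versions of the Proxmox VE packages and the kernel
uname -r                           # running kernel
dpkg -l | grep -E 'pve|lxc|qemu'   # underlying Debian packages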

Slide 9

Daemons
pve-cluster: this service is the heart of any Proxmox VE installation. It provides the Proxmox Cluster file system (pmxcfs), a database-driven file system for storing configuration files, replicated in real time on all nodes using corosync.
pvedaemon: the REST API server. All API calls which require root privileges are handled by this server.
pveproxy: the REST API proxy server, listening on port 8006 (used from PVE 3.0 onwards). This service runs as user 'www-data' and forwards requests to other nodes (or to pvedaemon) if required. API calls which do not require root privileges are answered directly by this server.
pvestatd: the PVE status daemon. It queries the status of all resources (VMs, containers and storage) and sends the results to all cluster members.
pve-manager: just a startup script (not a daemon), used to start/stop all VMs and containers.
pve-firewall: the Proxmox VE firewall manages the firewall (iptables), which works cluster-wide.
pvefw-logger: the Proxmox VE firewall logger logs the firewall events.
pve-ha-crm: the Proxmox VE High Availability Cluster Resource Manager. It manages the cluster; only one instance is active if an HA resource is configured, and that instance is the cluster master.
pve-ha-lrm: the Proxmox VE High Availability Local Resource Manager; every node has an active LRM if HA is enabled.
https://pve.proxmox.com/wiki/Service_daemons
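Since pveproxy serves the REST API on port 8006, the API can be exercised directly; a hedged sketch, using the node IP and root password from the installation slides (paths follow the public API documentation, not this deck):

# on a node, via the CLI wrapper around the API
pvesh get /version
pvesh get /nodes

# from a workstation: fetch an authentication ticket, then reuse it as a cookie
curl -k -d "username=root@pam" -d "password=toor1234" https://10.0.2.15:8006/api2/json/access/ticket
# curl -k -b "PVEAuthCookie=<ticket>" https://10.0.2.15:8006/api2/json/nodes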

Slide 10

Installation

Slide 11

Setup VM

Slide 12

Network
HOST-Only: 10.0.2.0/24
NAT: 10.0.3.0/24

Slide 13

Network Setup

Slide 14

Installing Proxmox
Create ZPool for storage
Network configuration
User/Pass: root/toor1234
E-Mail: [email protected]

Slide 15

Node Network
Login to node: ssh [email protected]
Network for Proxmox1 (pve1):
vmbr0  inet addr: 10.0.2.15
eth1   inet addr: 10.0.3.15
Create route to the Internet:
route add default gw 10.0.3.2

Slide 16

# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
    address 10.0.2.15
    netmask 255.255.255.0
    gateway 10.0.3.2
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto eth1
iface eth1 inet dhcp

# /etc/init.d/networking restart

Slide 17

Updates
rm /etc/apt/sources.list.d/pve-enterprise.list

cat /etc/apt/sources.list
deb http://ftp.debian.org/debian jessie main contrib

# PVE pve-no-subscription repository provided by proxmox.com
deb http://download.proxmox.com/debian jessie pve-no-subscription

# security updates
deb http://security.debian.org/ jessie/updates main contrib
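With the enterprise repository removed and the pve-no-subscription repository in place, the node can be brought up to date the usual Debian way:

apt-get update
apt-get dist-upgrade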

Slide 18

Disable nag screen (optional)
cd /usr/share/pve-manager/ext4/
Create a backup of it, just in case ;)
cp pvemanagerlib.js pvemanagerlib.js.bkup
Now open it up and search for "data.status" (use Ctrl+W to search):
nano pvemanagerlib.js
Change
if (data.status !== 'Active') {
to
if (false) {
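The same edit can be scripted; a hedged sketch, assuming the condition still reads exactly as above (the file location and string change between pve-manager versions, so check first):

sed -i.bkup "s/data.status !== 'Active'/false/g" /usr/share/pve-manager/ext4/pvemanagerlib.js
# then force-reload the web UI in the browser (the file is served statically)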

Slide 19

Create first Container
Formatting '/var/lib/vz/images/100/vm-100-disk-1.raw', fmt=raw size=858993
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: 4096/2097152 done
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: e2a0c4c2-680d-42c0-bf06-461f1c857b4a
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: 0/64 done
Writing inode tables: 0/64 done
Creating journal (32768 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: 0/64
Warning, had trouble writing out superblocks.
TASK ERROR: command 'mkfs.ext

Slide 20

ZFS
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME     STATE     READ WRITE CKSUM
    rpool    ONLINE       0     0     0
      sda2   ONLINE       0     0     0

errors: No known data errors

Slide 21

Create a partition
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.90G  25.9G    96K  /rpool
rpool/ROOT         802M  25.9G    96K  /rpool/ROOT
rpool/ROOT/pve-1   802M  25.9G   802M  /
rpool/swap        4.12G  30.0G    64K  -

# zfs create rpool/vz

# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.90G  25.9G    96K  /rpool
rpool/ROOT         802M  25.9G    96K  /rpool/ROOT
rpool/ROOT/pve-1   802M  25.9G   802M  /
rpool/swap        4.12G  30.0G    64K  -
rpool/vz            96K  25.9G    96K  /rpool/vz

Slide 22

Dashboard

Slide 23

Storage

Slide 24

Add Storage and create Container
Now a container creation should work…
Cleanup the old try:
# rm -R /var/lib/vz/images/100
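In the deck the storage is added via the GUI. The same can be done on the shell with pvesm; a sketch under the assumption that the new rpool/vz dataset is used as a ZFS-type storage (the subvol-100-disk-1 dataset seen later in the deck points in that direction), with the storage ID 'vz' as an example name:

pvesm add zfspool vz --pool rpool/vz --content rootdir,images
pvesm status    # the new storage should now be listed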

Slide 25

Create first VM
cd /var/lib/vz/template/iso/
Download FreeBSD 10.2 boot ISO:
wget http://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/10.2/FreeBSD-10.2-RELEASE-amd64-bootonly.iso
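The VM itself is created in the GUI; for reference, an equivalent qm invocation could look roughly like this (VM ID, disk size and storage names are assumptions, not taken from the deck):

qm create 101 --name freebsd10 --memory 1024 --ostype other \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/FreeBSD-10.2-RELEASE-amd64-bootonly.iso \
  --virtio0 local:8
qm start 101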

Slide 26

Fixing Network Setup

Slide 27

Updating Container

Slide 28

Cluster

Slide 29

Cluster Network

Slide 30

Cluster Setup
root@pve2:~# ping 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.418 ms

ping 10.0.2.16
PING 10.0.2.16 (10.0.2.16) 56(84) bytes of data.
64 bytes from 10.0.2.16: icmp_seq=1 ttl=64 time=0.276 ms

Slide 31

Hosts file
cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.0.2.15 pve1.example.com pve1 pvelocalhost
10.0.2.16 pve2.example.com pve2

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

(pvelocalhost aliases the local node, so on pve2 it goes on the 10.0.2.16 line instead.)

Slide 32

pvecm
root@pve1:~# pvecm create test
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.

root@pve1:~# pvecm status
Quorum information
------------------
Date:             Sat Jan  2 21:11:49 2016
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          4
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.0.2.15 (local)

Slide 33

Add node to a cluster
root@pve2:~# pvecm add pve1
The authenticity of host 'pve1 (10.0.2.15)' can't be established.
ECDSA key fingerprint is c5:94:9b:f0:09:db:f9:85:f4:b3:34:73:48:c4:5e:d7.
Are you sure you want to continue connecting (yes/no)? yes
root@pve1's password:
copy corosync auth key
stopping pve-cluster service
backup old database
waiting for quorum...OK
generating node certificates
merge known_hosts file
restart services
successfully added node 'pve2' to cluster.

root@pve1:~# pvecm status
Quorum information
------------------
Date:             Sat Jan  2 21:13:29 2016
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          8
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.0.2.15 (local)
0x00000002          1 10.0.2.16

Slide 34

Migration
Container: currently only possible while offline
VM/KVM: live migration possible while online (video)
Demo Video
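On the CLI the two cases look like this; a sketch assuming the container from earlier has ID 100, the FreeBSD VM has ID 101, and both should move from pve1 to pve2:

# container: offline migration only (stopped, copied, started again on the target)
pct migrate 100 pve2

# KVM guest: live migration while the VM keeps running
qm migrate 101 pve2 --online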

Slide 35

Distributed Filesystem

Slide 36

GlusterFS Setup
root@pve1:/# apt-cache search glusterfs
glusterfs-client - clustered file-system (client package)
glusterfs-common - GlusterFS common libraries and translator modules
glusterfs-dbg - GlusterFS debugging symbols
glusterfs-server - clustered file-system (server package)
tgt - Linux SCSI target user-space daemon and tools
tgt-dbg - Linux SCSI target user-space daemon and tools - debug symbols
tgt-glusterfs - Linux SCSI target user-space daemon and tools - GlusterFS support
tgt-rbd - Linux SCSI target user-space daemon and tools - RBD support

Slide 37

Installation
apt-get install glusterfs-server

root@pve1:/# gluster peer probe pve1
peer probe: success. Probe on localhost not needed
root@pve1:/# gluster peer probe pve2
peer probe: success.

Slide 38

Status Check
root@pve1:/# gluster peer status
Number of Peers: 1

Hostname: pve2
Uuid: a116b6cd-43e8-416f-8356-ba7ffa8d5d56
State: Peer in Cluster (Connected)

Slide 39

Prepare local FS
root@pve1:~# zfs create rpool/data
root@pve2:~# zfs create rpool/data

root@pve2:~# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       5.28G  25.5G    96K  /rpool
rpool/ROOT                   806M  25.5G    96K  /rpool/ROOT
rpool/ROOT/pve-1             806M  25.5G   806M  /
rpool/data                    96K  25.5G    96K  /rpool/data
rpool/swap                  4.12G  29.6G    64K  -
rpool/vz                     380M  25.5G    96K  /rpool/vz
rpool/vz/subvol-100-disk-1   380M  7.63G   380M  /rpool/vz/subvol-100-disk

Slide 40

Create GlusterFS Storage
root@pve1:~# zfs set mountpoint=/data rpool/data

root@pve1:~# gluster volume create data transport tcp pve1:/data pve2:/data

root@pve1:/# gluster volume info
Volume Name: data
Type: Distribute
Volume ID: 63c54a01-32cc-4772-b6c9-1f0afa5656f8
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: pve1:/data
Brick2: pve2:/data

The mountpoint has to be set on pve2 as well.
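Note that the volume above is of type Distribute, so each file ends up on exactly one brick and a node failure makes part of the data unavailable. If redundancy across the two nodes is wanted, a replicated volume would be created instead; a sketch, not taken from the deck:

gluster volume create data replica 2 transport tcp pve1:/data pve2:/data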

Slide 41

Start GlusterFS Volume
root@pve1:~# gluster volume start data
volume start: data: success

Slide 42

Cleanup
root@pve1:~# mv /var/lib/vz/template/iso/* /mnt/pve/data/template/iso/
root@pve1:~# mv /var/lib/vz/template/cache/* /mnt/pve/data/template/cache/
root@pve2:~# mv /var/lib/vz/template/iso/* /mnt/pve/data/template/iso/
root@pve2:~# mv /var/lib/vz/template/cache/* /mnt/pve/data/template/cache/

Slide 43

Add GlusterFS as Proxmox Storage
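In the deck this happens in the GUI. The resulting entry in /etc/pve/storage.cfg looks roughly like the sketch below; the storage ID and content types are assumptions. Proxmox mounts the volume under /mnt/pve/data, which is where the Cleanup slide moves the templates.

glusterfs: data
        server pve1
        server2 pve2
        volume data
        content images,iso,vztmpl,backup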

Slide 44

Done
before / after

Slide 45

Backup and Restore

Slide 46

Firewall

Slide 47

Firewall Scopes
Cluster
Host
VM
Uses iptables.
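Each scope has its own configuration file below /etc/pve; a hedged sketch of a cluster-wide rule set (the concrete rules are assumptions, only the file locations follow the Proxmox firewall documentation):

# /etc/pve/firewall/cluster.fw          (cluster scope)
[OPTIONS]
enable: 1

[RULES]
IN SSH(ACCEPT) -source 10.0.2.0/24      # management network used in this setup
IN ACCEPT -p tcp -dport 8006            # Proxmox web UI / API

# host scope: /etc/pve/nodes/<node>/host.fw
# VM scope:   /etc/pve/firewall/<vmid>.fw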

Slide 48

Settings via Dashboard

Slide 49

Some Tests
root@pve1:/# ncat -4 -l 2000 --keep-open --exec "/bin/cat"

philipp$ telnet 10.0.2.15 2000
Trying 10.0.2.15...
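The test starts a small echo service on port 2000 on pve1 and connects to it from the workstation; whether the connection completes depends on the firewall rules on the host. A hedged sketch of a matching rule (the rule itself is an assumption, the file location follows the firewall slides above):

# /etc/pve/nodes/pve1/host.fw
[RULES]
IN ACCEPT -p tcp -dport 2000 -source 10.0.2.0/24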

Slide 50

Links:
http://myatus.com/p/poor-mans-proxmox-cluster
http://www.admin-magazin.de/Das-Heft/2012/02/Das-verteilte-Dateisystem-GlusterFS-aufsetzen-und-verwalten
https://github.com/GlusterFS/Notes
https://pve.proxmox.com/wiki/Proxmox_VE_Firewall
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes
https://michael.lustfield.net/misc/ground-up-infrastructure