
Proxmox

Proxmox VE is a complete open source server virtualization management solution. It is based on KVM and container-based virtualization and manages virtual machines, containers, storage, virtualized networks, and HA clustering.

Philipp Haussleiter

January 04, 2016


Transcript

  1. About Proxmox
     Proxmox Virtual Environment is a powerful Open Source Server Virtualization Platform, based on KVM and LXC.
     https://pve.proxmox.com/wiki/Main_Page

  2. History
     Releases (newest first): Proxmox VE 4.1, 4.0, 4.0 beta2, 4.0 beta1, 3.4, 3.3, 3.2, 3.1, 3.0,
     2.3, 2.2, 2.1, 2.0, 1.9, 1.8, 1.7,
     1.6 (updated): ISO installer with 2.6.32 kernel with OpenVZ, including KVM 0.12.5,
     1.6: ISO installer with 2.6.32 kernel with OpenVZ, including KVM 0.12.5,
     1.5: new kernels 2.6.24 and 2.6.32, including KVM 0.12.4 and gPXE,
     1.5, 1.4, 1.4 beta2, 1.4 beta1, 1.3, 1.2, 1.1,
     1.0: first stable release,
     0.9beta2, 0.9
     Milestones:
     15.04.2008: first public release
     19.02.2015: ZFS support added
     22.06.2015: OpenVZ support removed, LXC added (=> 3.x kernel)
     11.12.2015: current release, based on Debian Jessie 8.2.0, Linux kernel 4.2.6
     http://pve.proxmox.com/wiki/Roadmap

  3. Features
     Lightweight (1-20 nodes)
     Open Source (commercial support is available)
     Kernel-based Virtual Machine (KVM)
     Container-based virtualization (LXC, formerly OpenVZ)
     HA/cluster support (incl. live migration)
     Multi-user
     http://www.proxmox.com/en/proxmox-ve/features

  4. Comparison (columns: Proxmox VE | VMware vSphere | Windows Hyper-V | Citrix XenServer)
     Guest operating system support: Windows and Linux (KVM), other operating systems are known to work and are community supported | Windows, Linux, UNIX | modern Windows OS, Linux support is limited | most Windows OS, support is limited
     Open Source: yes | no | no | yes
     Linux Containers (LXC, known as OS virtualization): yes | no | no | no
     Single view for management (centralized control): yes | yes, but requires a dedicated management server (or VM) | yes, but requires a dedicated management server (or VM) | yes
     Simple subscription structure: yes, one subscription pricing, all features enabled | no | no | no
     High availability: yes | yes | requires Microsoft Failover Clustering, limited guest OS support | yes
     Live VM snapshots (backup of a running VM): yes | yes | limited | yes
     Bare-metal hypervisor: yes | yes | yes | yes
     Virtual machine live migration: yes | yes | yes | yes
     Max. RAM and CPU per host: 160 CPU / 2 TB RAM | 160 CPU / 2 TB RAM | 64 CPU / 1 TB RAM | ?

  5. Daemons
     pve-cluster: the heart of any Proxmox VE installation. It provides the Proxmox Cluster file system (pmxcfs), a database-driven file system for storing configuration files, replicated in real time on all nodes using corosync.
     pvedaemon: the REST API server. All API calls that require root privileges are handled by this server.
     pveproxy: the REST API proxy server, listening on port 8006 (used from PVE 3.0 onwards). It runs as user 'www-data' and forwards requests to other nodes (or to pvedaemon) if required. API calls that do not require root privileges are answered directly by this server.
     pvestatd: the PVE status daemon. It queries the status of all resources (VMs, containers and storage) and sends the result to all cluster members.
     pve-manager: just a startup script (not a daemon), used to start/stop all VMs and containers.
     pve-firewall: manages the cluster-wide firewall (iptables).
     pvefw-logger: logs the firewall events.
     pve-ha-crm: the High Availability Cluster Resource Manager. It manages the cluster; there is only one active instance if an HA resource is configured, and that node is the cluster master.
     pve-ha-lrm: the High Availability Local Resource Manager; every node runs an active LRM if HA is enabled.
     https://pve.proxmox.com/wiki/Service_daemons

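     As a quick sanity check, these services can be inspected with systemd on a PVE 4.x node (a minimal sketch, assuming the unit names match the daemon names above):
     # check that the core PVE services are running
     systemctl status pve-cluster pvedaemon pveproxy pvestatd --no-pager
     # the firewall and HA services can be checked the same way
     systemctl status pve-firewall pve-ha-crm pve-ha-lrm --no-pager
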
  6. Node Network
     Log in to the node: ssh [email protected]
     Network for Proxmox1 (pve1):
     vmbr0  inet addr: 10.0.2.15
     eth1   inet addr: 10.0.3.15
     Create a route to the Internet: route add default gw 10.0.3.2

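     A quick way to confirm that the default route took effect (generic iproute2 and ping commands, not taken from the slides):
     # show the routing table and test outbound connectivity
     ip route show
     ping -c 3 download.proxmox.com
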
  7. # cat /etc/network/interfaces
     auto lo
     iface lo inet loopback

     auto vmbr0
     iface vmbr0 inet static
         address 10.0.2.15
         netmask 255.255.255.0
         gateway 10.0.3.2
         bridge_ports eth0
         bridge_stp off
         bridge_fd 0

     auto eth1
     iface eth1 inet dhcp

     # /etc/init.d/networking restart

  8. Updates
     rm /etc/apt/sources.list.d/pve-enterprise.list
     cat /etc/apt/sources.list
     deb http://ftp.debian.org/debian jessie main contrib
     # PVE pve-no-subscription repository provided by proxmox.com
     deb http://download.proxmox.com/debian jessie pve-no-subscription
     # security updates
     deb http://security.debian.org/ jessie/updates main contrib

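     With the enterprise repository removed and the no-subscription repository in place, the usual Debian update cycle applies (standard apt commands, not shown on the slide):
     # refresh the package lists and upgrade the node
     apt-get update
     apt-get dist-upgrade
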
  9. Disable Nag Screen (optional)
     cd /usr/share/pve-manager/ext4/
     Create a backup of the file, just in case ;)
     cp pvemanagerlib.js pvemanagerlib.js.bkup
     Now open it and search for "data.status" (use Ctrl+W to search):
     nano pvemanagerlib.js
     Change
     if (data.status !== 'Active') {
     to
     if (false) {

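     The same edit can be scripted; a rough sketch, assuming the condition appears in the file exactly as quoted above (the .bkup suffix keeps a pre-edit copy):
     # replace the subscription status check with a constant false
     sed -i.bkup 's/data.status !== .Active./false/' pvemanagerlib.js
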
  10. Create first Container
      Formatting '/var/lib/vz/images/100/vm-100-disk-1.raw', fmt=raw size=858993
      mke2fs 1.42.12 (29-Aug-2014)
      Discarding device blocks: 4096/2097152 done
      Creating filesystem with 2097152 4k blocks and 524288 inodes
      Filesystem UUID: e2a0c4c2-680d-42c0-bf06-461f1c857b4a
      Superblock backups stored on blocks:
          32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
      Allocating group tables: 0/64 done
      Writing inode tables: 0/64 done
      Creating journal (32768 blocks): done
      Multiple mount protection is enabled with update interval 5 seconds.
      Writing superblocks and filesystem accounting information: 0/64
      Warning, had trouble writing out superblocks.
      TASK ERROR: command 'mkfs.ext

  11. ZFS
      # zpool status
        pool: rpool
       state: ONLINE
        scan: none requested
      config:
              NAME      STATE     READ WRITE CKSUM
              rpool     ONLINE       0     0     0
                sda2    ONLINE       0     0     0
      errors: No known data errors

  12. create a Partition
      # zfs list
      NAME               USED  AVAIL  REFER  MOUNTPOINT
      rpool             4.90G  25.9G    96K  /rpool
      rpool/ROOT         802M  25.9G    96K  /rpool/ROOT
      rpool/ROOT/pve-1   802M  25.9G   802M  /
      rpool/swap        4.12G  30.0G    64K  -
      # zfs create rpool/vz
      # zfs list
      NAME               USED  AVAIL  REFER  MOUNTPOINT
      rpool             4.90G  25.9G    96K  /rpool
      rpool/ROOT         802M  25.9G    96K  /rpool/ROOT
      rpool/ROOT/pve-1   802M  25.9G   802M  /
      rpool/swap        4.12G  30.0G    64K  -
      rpool/vz            96K  25.9G    96K  /rpool/vz

  13. add Storage and create Container
      Now a container creation should work…
      Clean up the old try:
      # rm -R /var/lib/vz/images/100

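      The storage itself is added via the GUI on the slides; a command-line equivalent would be something like the following sketch (the storage name 'vz' and the content types are assumptions):
      # register the new ZFS dataset as directory storage for container disks
      pvesm add dir vz --path /rpool/vz --content images,rootdir
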
  14. create first VM
      cd /var/lib/vz/template/iso/
      Download the FreeBSD 10.2 boot ISO:
      wget http://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/10.2/FreeBSD-10.2-RELEASE-amd64-bootonly.iso

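      The VM itself is created via the GUI on the following slides; a rough qm-based sketch of the same step (VM ID 101, the 'local' storage and all sizing values are assumptions):
      # create a small FreeBSD guest, attach the downloaded ISO and start it
      qm create 101 --name freebsd10 --memory 1024 --net0 e1000,bridge=vmbr0 \
        --cdrom local:iso/FreeBSD-10.2-RELEASE-amd64-bootonly.iso --virtio0 local:8 --ostype other
      qm start 101
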
  15. Cluster Setup
      root@pve2:~# ping 10.0.2.15
      PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
      64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.418 ms

      ping 10.0.2.16
      PING 10.0.2.16 (10.0.2.16) 56(84) bytes of data.
      64 bytes from 10.0.2.16: icmp_seq=1 ttl=64 time=0.276 ms

  16. Hosts file
      cat /etc/hosts
      127.0.0.1 localhost.localdomain localhost
      10.0.2.15 pve1.example.com pve1 pvelocalhost
      10.0.2.16 pve2.example.com pve2 pvelocalhost
      # The following lines are desirable for IPv6 capable hosts
      ::1     ip6-localhost ip6-loopback
      fe00::0 ip6-localnet
      ff00::0 ip6-mcastprefix
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters
      ff02::3 ip6-allhosts

  17. pvecm
      root@pve1:~# pvecm create test
      Corosync Cluster Engine Authentication key generator.
      Gathering 1024 bits for key from /dev/urandom.
      Writing corosync key to /etc/corosync/authkey.

      root@pve1:~# pvecm status
      Quorum information
      ------------------
      Date:             Sat Jan  2 21:11:49 2016
      Quorum provider:  corosync_votequorum
      Nodes:            1
      Node ID:          0x00000001
      Ring ID:          4
      Quorate:          Yes

      Votequorum information
      ----------------------
      Expected votes:   1
      Highest expected: 1
      Total votes:      1
      Quorum:           1
      Flags:            Quorate

      Membership information
      ----------------------
          Nodeid      Votes Name
      0x00000001          1 10.0.2.15 (local)

  18. add Node to a Cluster
      root@pve2:~# pvecm add pve1
      The authenticity of host 'pve1 (10.0.2.15)' can't be established.
      ECDSA key fingerprint is c5:94:9b:f0:09:db:f9:85:f4:b3:34:73:48:c4:5e:d7.
      Are you sure you want to continue connecting (yes/no)? yes
      root@pve1's password:
      copy corosync auth key
      stopping pve-cluster service
      backup old database
      waiting for quorum...OK
      generating node certificates
      merge known_hosts file
      restart services
      successfully added node 'pve2' to cluster.

      root@pve1:~# pvecm status
      Quorum information
      ------------------
      Date:             Sat Jan  2 21:13:29 2016
      Quorum provider:  corosync_votequorum
      Nodes:            2
      Node ID:          0x00000001
      Ring ID:          8
      Quorate:          Yes

      Votequorum information
      ----------------------
      Expected votes:   2
      Highest expected: 2
      Total votes:      2
      Quorum:           2
      Flags:            Quorate

      Membership information
      ----------------------
          Nodeid      Votes Name
      0x00000001          1 10.0.2.15 (local)
      0x00000002          1 10.0.2.16

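      Membership can also be listed per node with a standard pvecm subcommand (output omitted here):
      # list all cluster members as seen from this node
      pvecm nodes
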
  19. GlusterFS Setup
      root@pve1:/# apt-cache search glusterfs
      glusterfs-client - clustered file-system (client package)
      glusterfs-common - GlusterFS common libraries and translator modules
      glusterfs-dbg - GlusterFS debugging symbols
      glusterfs-server - clustered file-system (server package)
      tgt - Linux SCSI target user-space daemon and tools
      tgt-dbg - Linux SCSI target user-space daemon and tools - debug symbols
      tgt-glusterfs - Linux SCSI target user-space daemon and tools - GlusterFS
      tgt-rbd - Linux SCSI target user-space daemon and tools - RBD support

  20. Installation
      apt-get install glusterfs-server
      root@pve1:/# gluster peer probe pve1
      peer probe: success. Probe on localhost not needed
      root@pve1:/# gluster peer probe pve2
      peer probe: success.

  21. Status Check
      root@pve1:/# gluster peer status
      Number of Peers: 1

      Hostname: pve2
      Uuid: a116b6cd-43e8-416f-8356-ba7ffa8d5d56
      State: Peer in Cluster (Connected)

  22. Prepare local FS
      root@pve1:~# zfs create rpool/data
      root@pve2:~# zfs create rpool/data
      root@pve2:~# zfs list
      NAME                         USED  AVAIL  REFER  MOUNTPOINT
      rpool                       5.28G  25.5G    96K  /rpool
      rpool/ROOT                   806M  25.5G    96K  /rpool/ROOT
      rpool/ROOT/pve-1             806M  25.5G   806M  /
      rpool/data                    96K  25.5G    96K  /rpool/data
      rpool/swap                  4.12G  29.6G    64K  -
      rpool/vz                     380M  25.5G    96K  /rpool/vz
      rpool/vz/subvol-100-disk-1   380M  7.63G   380M  /rpool/vz/subvol-100-disk

  23. Create GlusterFS Storage
      root@pve1:~# zfs set mountpoint=/data rpool/data
      (also on pve2)
      root@pve1:~# gluster volume create data transport tcp pve1:/data pve2:/data
      root@pve1:/# gluster volume info

      Volume Name: data
      Type: Distribute
      Volume ID: 63c54a01-32cc-4772-b6c9-1f0afa5656f8
      Status: Created
      Number of Bricks: 2
      Transport-type: tcp
      Bricks:
      Brick1: pve1:/data
      Brick2: pve2:/data

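      Before the volume can hold data it has to be started; registering it as PVE storage also works from the CLI (a sketch, the storage name and content types are assumptions, the slides use the GUI):
      # start the GlusterFS volume and add it as a storage in Proxmox VE
      gluster volume start data
      pvesm add glusterfs data --server pve1 --server2 pve2 --volume data --content images,iso
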
  24. Cleanup
      root@pve1:~# mv /var/lib/vz/template/iso/* /mnt/pve/data/template/iso/
      root@pve1:~# mv /var/lib/vz/template/cache/* /mnt/pve/data/template/cache/
      root@pve2:~# mv /var/lib/vz/template/iso/* /mnt/pve/data/template/iso/
      root@pve2:~# mv /var/lib/vz/template/cache/* /mnt/pve/data/template/cache/

  25. some Tests
      root@pve1:/# ncat -4 -l 2000 --keep-open --exec "/bin/cat"
      philipp$ telnet 10.0.2.15 2000
      Trying 10.0.2.15...